
INFORMATICA INTERVIEW QUESTIONS - Extracted from GeekInterview.com by Deepak Babu
http://prdeepakbabu.wordpress.com

1.Informatica - Why do we use lookup transformations?

DISCLAIMER: The questions and answers available here are from geekinterview.com. They have been compiled into a single document for ease of browsing through the Informatica-relevant questions. For any details, please refer to www.geekinterview.com. We are not responsible for any data inaccuracy.

QUESTION #1 Lookup transformations can access data from relational tables that are not sources in the mapping. With a Lookup transformation, we can accomplish the following tasks:
Get a related value - get the Employee Name from the Employee table based on the Employee ID. Perform a calculation. Update slowly changing dimension tables - we can use an unconnected Lookup transformation to determine whether a record already exists in the target or not.
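To make the first task concrete: for a cached lookup, the server effectively selects the lookup ports once, caches them, and then matches the lookup condition in memory for each input row. A minimal sketch of the kind of query involved, assuming the EMPLOYEE table with EMP_ID and EMP_NAME columns from the example above (the exact generated SQL varies):

    -- Sketch of the query a cached lookup issues to build its cache,
    -- so that EMP_NAME can be returned for each incoming EMP_ID.
    SELECT EMP_NAME, EMP_ID
    FROM EMPLOYEE
    ORDER BY EMP_NAME, EMP_ID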


======================================= Nice question. If we didn't have lookups, our data warehouse would have more unwanted duplicates. Use a Lookup transformation in your mapping to look up data in a relational table, view, or synonym. Import a lookup definition from any relational database to which both the Informatica Client and Server can connect. You can use multiple Lookup transformations in a mapping. Cheers Sithu


======================================= Lookup transformations are used to search data from relational tables or flat files that are not used in the mapping. Types of lookup: 1. Connected lookup 2. Unconnected lookup ======================================= The main use of a lookup is to get a related value, either from relational sources or flat files. ======================================= The following are reasons for using lookups:

1) We use Lookup transformations that query the largest amounts of data to improve overall performance; by doing so we can reduce the number of lookups on the same table.

2) If a mapping contains Lookup transformations, we will enable lookup caching if this option is not enabled, and use a persistent cache to improve the performance of the lookup whenever possible. We will explore the possibility of using concurrent caches to improve session performance. We will use the Lookup SQL Override option to add a WHERE clause to the default SQL statement if one is not defined, and add an ORDER BY clause to the lookup SQL statement if no ORDER BY is defined; we can also use the SQL override to suppress the default ORDER BY statement and enter an override ORDER BY with fewer columns (see the sketch after this list). Indexing the lookup table can improve performance for the following types of lookups: for cached lookups, index the lookup table using the columns in the lookup ORDER BY statement; for uncached lookups, index the lookup table using the columns in the lookup WHERE condition.

3) In some cases we use a lookup instead of a Joiner, as a lookup is faster than a Joiner in some cases, such as when the lookup contains only the master data.

4) Lookups also help with performance tuning of the mappings.
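As a sketch of the override points in item 2 above (the EMPLOYEE table, its columns, and the STATUS filter are illustrative assumptions, not from the original answer), a Lookup SQL Override might look like this; the trailing two dashes are the usual way to comment out the ORDER BY clause the server appends automatically:

    SELECT EMP_NAME, EMP_ID
    FROM EMPLOYEE
    -- added WHERE clause so fewer rows are cached
    WHERE STATUS = 'ACTIVE'
    -- override ORDER BY with fewer columns; the trailing dashes
    -- suppress the generated ORDER BY
    ORDER BY EMP_ID --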

======================================= A Lookup transformation is like a set of reference data for the target table. For example, suppose you are travelling by auto-rickshaw: in the morning you notice the driver showing you a card saying that from today onwards there is a hike in petrol, so you have to pay more. The card he shows is a set of reference data for his customers. The Lookup transformation works the same way. Lookups are of two types: a) connected lookup b) unconnected lookup. A connected lookup is connected in a single pipeline from a source to a target, whereas an unconnected lookup is isolated within the mapping and is called with the help of an Expression transformation.
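A small sketch of that call mechanism (the lookup and port names are invented for illustration): an output port in an Expression transformation calls the unconnected lookup with the :LKP syntax, and the value of the lookup's designated return port comes back, much like a function call:

    -- Expression output port: look up the employee name for this row's EMP_ID
    -- via the unconnected lookup lkp_GetEmpName.
    :LKP.lkp_GetEmpName(EMP_ID)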

======================================= Lookup transformations are used to get a related value, update slowly changing dimensions, and calculate expressions. =======================================

2.Informatica - While importing the relational source definition from the database, what is the metadata of the source you import?
QUESTION #2 Source name, database location, column names, datatypes, key constraints.



======================================= Source name, data types, key constraints, database location. ======================================= Relational sources are tables, views, and synonyms. You import the source name, database location, column names, datatypes, and key constraints. For synonyms you will have to manually create the constraints. =======================================

3.Informatica - How many ways can you update a relational source definition, and what are they?
QUESTION #3 Two ways: 1. Edit the definition 2. Reimport the definition



======================================= We can do it in two ways: 1) by reimporting the source definition 2) by editing the source definition

=======================================

4.Informatica - Where should you place the flat file to import the flat file definition into the Designer?
QUESTION #4 Place it in a local folder.

======================================= There is no such restriction on where to place the source file. From a performance point of view it is better to place the file in the server's local src folder; if you need the path, check the server properties available in the Workflow Manager. This doesn't mean we should not place it in any other folder, but if we place it in the server's src folder, it will be selected by default at session creation time.

======================================= The file must be in a directory local to the client machine. ======================================= Basically the flat file should be stored in the src folder in the Informatica server folder. Logically it should pick the file up from any location, but it gives an error of invalid identifier or fails to read the first row, so it is better to keep the file in the src folder, which is already created when Informatica is installed. ======================================= We can place the source file anywhere on the network, but it will consume more time to fetch data from the source file; if the source file is present in the server's srcfile folder, it will fetch data from the source up to 25 times faster. =======================================

5.Informatica - To provide support for mainframe source data, which files are used as source definitions?
QUESTION #5 COBOL files


======================================= COBOL copybook files. ======================================= The mainframe files are used as VSAM files in Informatica by using the Normalizer transformation. =======================================

6.Informatica - Which transformation do you need when using COBOL sources as source definitions?

QUESTION #6 The Normalizer transformation, which is used to normalize the data, since COBOL sources often consist of denormalized data.

======================================= Normalizer transformation. Cheers, Sithu



======================================= The Normalizer transformation, which is used to normalize the data. =======================================

7.Informatica - How can you create or import a flat file definition into the Warehouse Designer?

QUESTION #7 You cannot create or import a flat file definition into the Warehouse Designer directly. Instead you must analyze the file in the Source Analyzer, then drag it into the Warehouse Designer. When you drag the flat file source definition into the Warehouse Designer workspace, the Warehouse Designer creates a relational target definition, not a file definition. If you want to load to a file, configure the session to write to a flat file. When the Informatica Server runs the session, it creates and loads the flat file.

======================================= You can create a flat file definition in the Warehouse Designer: in the Warehouse Designer you can create a new target and select the type as flat file; save it, and you can enter various columns for that created target by editing its properties. Once the target is created, save it; you can then import it from the Mapping Designer. ======================================= Yes, you can import a flat file directly into the Warehouse Designer. This way it will import the field definitions directly. ======================================= 1) Manually create the flat file target definition in the Warehouse Designer. 2) Create a target definition from a source definition; this is done by dropping a source definition in the Warehouse Designer. 3) Import a flat file definition using the flat file wizard (the file must be local to the client machine). ======================================= While creating flat files manually, we drag and drop the structure from the Source Qualifier if the structure we need is the same as that of the source; for this we need to check in the source and then drag and drop it into the flat file; if not, all the columns in the source will be changed to primary keys.


8.Informatica - What is the maplet?

QUESTION #8 A maplet is a set of transformations that you build in the Mapplet Designer and can use in multiple mappings.

======================================= For example, suppose we have several fact tables that require a series of dimension keys. We can create a mapplet that contains a series of Lookup transformations to find each dimension key, and use it in each fact table mapping instead of creating the same lookup logic in each mapping. When the mapplet is displayed within the mapping, only input and output ports are displayed, so that the internal logic is hidden from the end user's point of view. ======================================= Part (a subset) of the mapping is known as a mapplet. Cheers Sithu ======================================= A set of transformations where the logic can be reusable. ======================================= A mapplet should have a mapplet Input transformation, which receives input values, and an Output transformation, which passes the final modified data back to the mapping.

======================================= A mapplet is built in the Mapplet Designer, which is used to create mapplets. ======================================= A reusable mapping is known as a mapplet, and a reusable transformation can be used within a mapplet. ======================================= A maplet is reusable business logic which can be used in mappings. ======================================= A maplet is a reusable object which contains one or more transformations, used to populate data from source to target based on the business logic; we can use the same logic in different mappings without creating the mapping again. ======================================= A mapplet is a reusable object that represents a set of transformations. A mapplet can be designed using the Mapplet Designer in Informatica PowerCenter. ======================================= Basically a mapplet is a subset of a mapping in which we can have the logic for each dimension key by keeping the different mappings created individually; if we want a series of dimension keys in the final fact table, we will use the Mapping Designer. =======================================

9.Informatica - What is a transformation?

QUESTION #9 It is a repository object that generates, modifies, or passes data.

======================================= A transformation is a repository object that passes data to the next stage (i.e. to the next transformation or target) with or without modifying the data. ======================================= It is the process of converting a given input to the desired output.

======================================= A set of operations. Cheers Sithu ======================================= A transformation is a repository object for converting a given input to the desired output; it can generate, modify, and pass data. ======================================= A transformation is a repository object that generates, modifies, or passes data. The Designer provides a set of transformations that perform specific functions. For example, an Aggregator transformation performs calculations on groups of data. =======================================

10.Informatica - What are the designer tools for creating transformations?

QUESTION #10 Mapping Designer, Transformation Developer, Mapplet Designer.

======================================= Two types of tools are used for creating transformations: the Mapping Designer and the Mapplet Designer. ======================================= Mapping Designer, Mapplet Designer, and Transformation Developer - for reusable transformations. =======================================

11.Informatica - What are the active and passive transformations?

QUESTION #11 An active transformation can change the number of rows that pass through it. A passive transformation does not change the number of rows that pass through it.

======================================= Transformations can be active or passive. An active transformation can change the number of rows that pass through it, such as a Filter transformation that removes rows that do not meet the filter condition. A passive transformation does not change the number of rows that pass through it, such as an Expression transformation that performs a calculation on data and passes all rows through the transformation. Cheers Sithu ======================================= Active transformation: a transformation which changes the number of rows when data is flowing from source to target. Passive transformation: a transformation which does not change the number of rows when the data is flowing from source to target. =======================================

12.Informatica - What are the connected or unconnected transformations?

QUESTION #12 An unconnected transformation is not connected to other transformations in the mapping. A connected transformation is connected to other transformations in the mapping.

======================================= An unconnected transformation can't be connected to another transformation, but it can be called inside another transformation. ======================================= Here is the deal: a connected transformation is a part of your data flow in the pipeline, while an unconnected transformation is not, much like calling a program by name and by reference. Use unconnected transforms when you want to call the same transform many times in a single mapping; if you are using a transformation several times, you get better performance. ======================================= In addition to the first answer, connected transformations are directly connected and can be used in as many other transformations as needed. ======================================= Connected transformation: a transformation which participates in the mapping data flow; a connected transformation can receive multiple inputs and provide multiple outputs. Unconnected: an unconnected transformation does not participate in the mapping data flow; it can receive multiple inputs but provides a single output.

Thanks, Rekha =======================================

13.Informatica - How many ways can you create ports?

QUESTION #13 Two ways: 1. Drag the port from another transformation 2. Click the Add button on the Ports tab.

======================================= Two ways: 1. Drag the port from another transformation 2. Click the Add button on the Ports tab. ======================================= We can also copy and paste the ports in the Ports tab. =======================================

14.Informatica - What are the reusable transformations?

QUESTION #14 Reusable transformations can be used in multiple mappings. When you need to incorporate this transformation into a mapping, you add an instance of it to the mapping. Later, if you change the definition of the transformation, all instances of it inherit the changes: since the instance of a reusable transformation is a pointer to that transformation, you can change the transformation in the Transformation Developer and its instances automatically reflect these changes. This feature can save you a great deal of work.

======================================= A transformation that can be reused is known as a reusable transformation. You can design one using two methods: 1. using the Transformation Developer 2. creating a normal one and promoting it to reusable. Cheers Sithu ======================================= Hi to all friends out there: the transformation that can be reused is called a reusable transformation. As the property suggests, it has to be reused, and for reusing we can do it in two different ways: 1) by creating a normal transformation and making it reusable by checking the check box in the properties of the edit transformation; 2) by using the Transformation Developer, where whatever transformation is developed is reusable, and it can be used in the Mapping Designer, where we can further change its properties as per our requirement.

=======================================

15.Informatica - What are the methods for creating reusable transformations?

QUESTION #15 Two methods: 1. Design it in the Transformation Developer. 2. Promote a standard transformation from the Mapping Designer. After you add a transformation to the mapping, you can promote it to the status of a reusable transformation. Once you promote a standard transformation to reusable status, you can demote it to a standard transformation at any time. If you change the properties of a reusable transformation in a mapping, you can revert to the original reusable transformation properties by clicking the revert button.

======================================= 1. A reusable transformation can be used in multiple transformations. 2. The Designer stores each reusable transformation as metadata, separate from any mappings that use the transformation. 3. One can only create an External Procedure transformation as a reusable transformation. 4. Every reusable transformation falls within a category of transformations available in the Designer.

======================================= PLEASE THINK TWICE BEFORE YOU POST AN ANSWER. Answer: two methods: 1. Design it in the Transformation Developer. 2. Promote a standard transformation from the Mapping Designer. After you add a transformation to the mapping, you can promote it to the status of a reusable transformation. Once you promote a standard transformation to reusable status, you CANNOT demote it to a standard transformation at any time. If you change the properties of a reusable transformation in a mapping, you can revert to the original reusable transformation properties by clicking the revert button. ======================================= You can design using two methods: 1. using the Transformation Developer 2. creating a normal one and promoting it to reusable (by default it is then a reusable transform). Cheers Sithu =======================================

16.Informatica - What are the unsupported repository objects for a mapplet?

QUESTION #16 COBOL source definitions, Joiner transformations, Normalizer transformations, non-reusable Sequence Generator transformations, pre- or post-session stored procedures, target definitions, PowerMart 3.5-style LOOKUP functions, XML source definitions, IBM MQ source definitions.

======================================= Source definitions: definitions of database objects (tables, views, synonyms) or files that provide source data. Target definitions: definitions of database objects or files that contain the target data. Multi-dimensional metadata: target definitions that are configured as cubes and dimensions. Mappings: a set of source and target definitions along with transformations containing business logic that you build into the transformation; these are the instructions that the Informatica Server uses to transform and move data. Reusable transformations: transformations that you can use in multiple mappings. Mapplets: a set of transformations that you can use in multiple mappings. Sessions and workflows: sessions and workflows store information about how and when the Informatica Server moves data. A workflow is a set of instructions that describes how and when to run tasks related to extracting, transforming, and loading data. A session is a type of task that you can put in a workflow; each session corresponds to a single mapping. Cheers Sithu ======================================= Hi, the following answer is from the Informatica help documentation. You cannot include the following objects in a mapplet: Normalizer transformations, COBOL sources, XML Source Qualifier transformations, XML sources,

target definitions, pre- and post-session stored procedures, and other mapplets. Shivaji Thaneru ======================================= Normalizer, XML Source Qualifier, and COBOL sources cannot be used. ======================================= Normalizer transformations, COBOL sources, XML Source Qualifier transformations, XML sources, target definitions, pre- and post-session stored procedures, other mapplets, PowerMart 3.5-style LOOKUP functions, and non-reusable Sequence Generators. =======================================

17.Informatica - What are the mapping parameters and mapping variables?

QUESTION #17 A mapping parameter represents a constant value that you can define before running a session. A mapping parameter retains the same value throughout the entire session. When you use a mapping parameter, you declare and use the parameter in a mapping or mapplet, then define the value of the parameter in a parameter file for the session. Unlike a mapping parameter, a mapping variable represents a value that can change throughout the session. The Informatica Server saves the value of a mapping variable to the repository at the end of a session run and uses that value the next time you run the session.

======================================= Please refer to the documentation for more understanding. ======================================= You can use mapping parameters and variables in the SQL query, user-defined join, and source filter of a Source Qualifier transformation. You can also use the system variable $$$SessStartTime. The Informatica Server first generates an SQL query and scans the query to replace each mapping parameter or variable with its start value; then it executes the query on the source database. Cheers Sithu ======================================= A mapping parameter represents a constant value defined before the mapping run; a mapping variable represents a value that can be changed during the mapping run. Mapping variables have two identities: start value and current value. Start value = current value when the session starts the execution of the underlying mapping; start value <> current value while the session is in progress and the variable value changes on one or more occasions. The current value at the end of the session is nothing but the start value for the subsequent run of the same session. After you create a parameter you can use it in the Expression Editor of any transformation in a mapping or mapplet; you can also use it in Source Qualifier transformations and reusable transformations. Mapping reusability can be achieved by using mapping parameters, and a mapping variable can be used in an incremental loading process.
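To make the parameter file step from Question #17 concrete, here is a minimal sketch of a session section in a parameter file (the folder, workflow, session, and parameter names are invented for illustration; the $$ prefix marks mapping parameters and variables, while $ marks session parameters):

    [MyFolder.WF:wf_daily_load.ST:s_m_load_sales]
    $$LoadDate=04/01/2009
    $$Region=NORTH
    $InputFile1=/data/src/sales_extract.dat

The session reads this file at run time and substitutes the values wherever the parameters are used in the mapping.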

=======================================

18.Informatica - Can you use the mapping parameters or variables created in one mapping in another mapping?

QUESTION #18 No. We can use mapping parameters or variables in any transformation of the same mapping or mapplet in which we have created the mapping parameters or variables.

======================================= No. You might want to use a workflow parameter/variable if you want it to be visible to other mappings/sessions. ======================================= Hi. The following sentences are extracted from the Informatica help as is: you can use mapping parameters or variables in any transformation of the same mapping or mapplet in which you have created the mapping parameters or variables. Shivaji Thaneru ======================================= I differ on this: we can use global variables in sessions as well as in mappings. Please check this in the properties. This provision is provided in Informatica 7.1.x versions; I have used it, and it did support the above two answers. Regards -Vaibhav

======================================= Hi. Thanks Shivaji, but the statement does not completely answer the question. The scope of a mapping variable is the mapping in which it is defined. A variable Var1 defined in mapping Map1 can only be used in Map1; you cannot use it in another mapping, say Map2. =======================================

19.Informatica - Can you use the mapping parameters or variables created in one mapping in any other reusable transformation?

QUESTION #19 Yes, because a reusable transformation is not contained within any mapplet or mapping.

======================================= A mapping parameter can be used in a reusable transformation, but does that mean you can use the mapping parameter wherever the instances of the reusable transformation are used? ======================================= But when one can't use mapping parameters and variables of one mapping in another mapping, then how can they be used in a reusable transformation, when reusable transformations themselves can be used among multiple mappings? So I think one can't use mapping parameters and variables in reusable transformations. Please correct me if I am wrong. ======================================= Hi, you can use the mapping parameters or variables in a reusable transformation. When you use the transformation in a mapping, during execution of the session it validates whether the mapping parameter that is used in the transformation is defined for this mapping or not; if not, the session fails. =======================================

20.Informatica - How can you improve session performance in the Aggregator transformation?

QUESTION #20 Use sorted input.

======================================= Use sorted input: 1. use a Sorter before the Aggregator; 2. do not forget to check the option on the Aggregator that tells the Aggregator that the input is sorted on the same keys as the group by. The key order is also very important. ======================================= Hi, you can use the following guidelines to optimize the performance of an Aggregator transformation. Use sorted input to decrease the use of aggregate caches: sorted input reduces the amount of data cached during the session and improves session performance; use this option with the Sorter transformation to pass sorted data to the Aggregator transformation. Limit connected input/output or output ports: limit the number of connected input/output or output ports to reduce the amount of data the Aggregator transformation stores in the data cache. Filter before aggregating: if you use a Filter transformation in the mapping, place the transformation before the Aggregator transformation to reduce unnecessary aggregation. Shivaji T
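A sketch of the sorted-input guideline: if the Aggregator groups by DEPT_ID and JOB_ID (illustrative names), the rows feeding it must be ordered on those same ports, in the same order, before the Sorted Input option is checked, for example via a Source Qualifier override:

    -- Pre-sort on the Aggregator's group-by keys so Sorted Input can be enabled.
    SELECT DEPT_ID, JOB_ID, SALARY
    FROM EMP
    ORDER BY DEPT_ID, JOB_ID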

======================================= Following are the three ways with which we can improve the session performance: a) use sorted input to decrease the use of aggregate caches; b) limit connected input/output or output ports; c) filter before aggregating (if you are using any filter condition). ======================================= By using incremental aggregation also we can improve performance, because it passes the new data to the mapping and uses historical data to perform the aggregation. ======================================= To improve session performance in the Aggregator transformation, enable the session option Incremental Aggregation. ======================================= Use sorted input to decrease the use of aggregate caches; limit the number of connected input/output or output ports to reduce the amount of data the Aggregator transformation stores in the data cache; and filter the data before aggregating it. =======================================

21.Informatica - What is the aggregate cache in the Aggregator transformation?

QUESTION #21 The Aggregator stores data in the aggregate cache until it completes the aggregate calculations. When you run a session that uses an Aggregator transformation, the Informatica Server creates index and data caches in memory to process the transformation. If the Informatica Server requires more space, it stores overflow values in cache files.

======================================= When the server runs a session with an Aggregator transformation, it stores data in memory until it completes the aggregation. When you partition a source, the server creates one memory cache and one disk cache for each partition, and routes the data from one partition to another based on the group key values of the transformation. ======================================= The index cache contains group values and the data cache consists of row values. Cheers Sithu ======================================= The aggregate cache contains data values while aggregate calculations are being performed; it is made up of the index cache and the data cache. =======================================

22.Informatica - What are the differences between the Joiner transformation and the Source Qualifier transformation?

QUESTION #22 You can join heterogeneous data sources in a Joiner transformation, which we cannot achieve in a Source Qualifier transformation. You need matching keys to join two relational sources in a Source Qualifier transformation, whereas you do not need matching keys to join two sources in a Joiner. Two relational sources should come from the same data source in a Source Qualifier; with a Joiner you can also join relational sources which are coming from different sources.

======================================= Source Qualifier: homogeneous sources. Joiner: heterogeneous sources. Cheers Sithu ======================================= Hi, the Source Qualifier transformation also provides an alternate way to filter rows. Rather than filtering rows from within a mapping, the Source Qualifier transformation filters rows when read from a source. The main difference is that the Source Qualifier limits the row set extracted from a source, while the Filter transformation limits the row set sent to a target. Since a Source Qualifier reduces the number of rows used throughout the mapping, it provides better performance. However, the Source Qualifier transformation only lets you filter rows from relational sources, while the Filter transformation filters rows from any type of source. Also note that since it runs in the database, you must make sure that the filter condition in the Source Qualifier transformation only uses standard SQL. Shivaji Thaneru ======================================= Hi, as per my knowledge, you need matching keys to join two relational sources both in the Source Qualifier as well as in the Joiner transformation. But the difference is that in the Source Qualifier both the keys must have a primary key / foreign key relation, whereas in the Joiner transformation it is not needed.
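As a sketch of the homogeneous case (table and column names are illustrative): a Source Qualifier user-defined join plus source filter amounts to one query that the source database executes, whereas a Joiner reads both sources into the session and joins them there:

    -- Join and filter pushed down to the source database by the Source Qualifier.
    SELECT ORDERS.ORDER_ID, ORDERS.AMOUNT, CUSTOMERS.CUST_NAME
    FROM ORDERS, CUSTOMERS
    WHERE ORDERS.CUST_ID = CUSTOMERS.CUST_ID
      AND ORDERS.AMOUNT > 0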

======================================= The Source Qualifier is used for reading data from the database, whereas the Joiner transformation is used for joining two data tables. The Source Qualifier can also be used to join two tables, but the condition is that both tables should be from a relational database and should have primary keys with the same data structure. Using a Joiner we can join data from two heterogeneous sources, like two flat files, or one relational source and one flat file. =======================================

23.Informatica - In which conditions can we not use the Joiner transformation (limitations of the Joiner transformation)?

QUESTION #23 Both pipelines begin with the same original data source. Both input pipelines originate from the same Source Qualifier transformation. Both input pipelines originate from the same Normalizer transformation. Both input pipelines originate from the same Joiner transformation. Either input pipeline contains an Update Strategy transformation. Either input pipeline contains a connected or unconnected Sequence Generator transformation.

======================================= This is no longer valid in version 7.2; now we can use a Joiner even if the data is coming from the same source. SK ======================================= You cannot use a Joiner transformation in the following situations (according to Infa 7.1): either input pipeline contains an Update Strategy transformation, or you connect a Sequence Generator transformation directly before the Joiner transformation.

======================================= No; in a Joiner transformation you can only use an equal to ( = ) as a join condition. Any other sort of comparison operator is not allowed: > < ! <> etc. are not allowed as a join condition. Utsav ======================================= Yes, the Joiner only supports an equality condition. The Joiner transformation does not match null values. For example, if both EMP_ID1 and EMP_ID2 from the example above contain a row with a null value, the PowerCenter Server does not consider them a match and does not join the two rows. To join rows with null values, you can replace null input with default values and then join on the default values. For more information check out the Informatica 7.1 manual. ======================================= What about join conditions: can we have a != condition in a Joiner? ======================================= I don't understand the second one, which says we have a Sequence Generator? Please can you explain that one? ======================================= Can you please let me know the correct and clear answer for the limitations of the Joiner transformation? Swapna ======================================= We cannot use a Joiner transformation in the following two conditions: 1. when our data comes through an Update Strategy transformation; in other words, after an Update Strategy we cannot add a Joiner transformation; 2. we cannot connect a Sequence Generator transformation directly before the Joiner transformation.
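A sketch of the null workaround mentioned above, reusing the EMP_ID1/EMP_ID2 example (the -1 default is an assumption for illustration): in an Expression transformation before the Joiner, replace NULL keys with the same default on both pipelines so the defaults match:

    -- Master pipeline: output port feeding the join key.
    IIF(ISNULL(EMP_ID1), -1, EMP_ID1)
    -- Detail pipeline: same treatment, so rows that were NULL now join on -1.
    IIF(ISNULL(EMP_ID2), -1, EMP_ID2)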

=======================================

24.Informatica - What are the settings that you use to configure the Joiner transformation?

QUESTION #24 Master and detail source, type of join, and condition of the join.

======================================= Master and detail source, type of join, and condition of the join. The Joiner transformation supports the following join types, which you set in the Properties tab: Normal (default), Master Outer, Detail Outer, Full Outer. Cheers, Sithu ======================================= There are a number of properties that you use to configure a Joiner transformation: 1) CASE SENSITIVE STRING COMPARISON: to join strings on a case-sensitive basis. 2) WORKING DIRECTORY: where to create the caches. 3) JOIN CONDITION: like join on A.col = B.col. 4) JOIN TYPE: Normal, Master Outer, Detail Outer, or Full Outer. 5) NULL ORDERING IN MASTER. 6) NULL ORDERING IN DETAIL. 7) TRACING LEVEL: level of detail about the operations. 8) INDEX CACHE: stores the group values of the input, if any. 9) DATA CACHE: stores the value of each row of data. 10) SORTED INPUT: a check box, which you have to check if the input to the Joiner is sorted.

11) TRANSFORMATION SCOPE: the data taken into consideration (Transaction or All Input). Transaction applies if the result depends only on the rows being processed; All Input applies if it depends on other data when it processes a row. For example, a Joiner using the same source in the pipeline keeps the data within the scope, so Transaction; a lookup depends on other data, or if dynamic cache is enabled it has to process the other incoming data, so it will have to go for All Input. =======================================

25.Informatica - What are the join types in the Joiner transformation?

QUESTION #25 Normal (default), Master Outer, Detail Outer, Full Outer.

======================================= Normal (default) -- only matching rows from both master and detail. Master outer -- all detail rows and only matching rows from the master. Detail outer -- all master rows and only matching rows from the detail. Full outer -- all rows from both master and detail (matching or non-matching).
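As a rough SQL analogy for the four join types (the Joiner itself is configured in the Designer rather than written as SQL, so this is only an illustration):

    -- Normal       ~ master INNER JOIN detail        (matching rows only)
    -- Master outer ~ detail LEFT OUTER JOIN master   (keeps all detail rows)
    -- Detail outer ~ master LEFT OUTER JOIN detail   (keeps all master rows)
    -- Full outer   ~ master FULL OUTER JOIN detail   (keeps all rows from both)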

======================================= Follow this: 1. In the Mapping Designer choose Transformation-Create. Select the Joiner transformation. Enter a name, click OK. The naming convention for Joiner transformations is JNR_TransformationName. Enter a description for the transformation; this description appears in the Repository Manager, making it easier for you or others to understand or remember what the transformation does. The Designer creates the Joiner transformation. Keep in mind that you cannot use a Sequence Generator or Update Strategy transformation as a source to a Joiner transformation. 2. Drag all the desired input/output ports from the first source into the Joiner transformation. The Designer creates input/output ports for the source fields in the Joiner as detail fields by default; you can edit this property later. 3. Select and drag all the desired input/output ports from the second source into the Joiner transformation. The Designer configures the second set of source fields as master fields by default. 4. Double-click the title bar of the Joiner transformation to open the Edit Transformations dialog box. 5. Select the Ports tab. 6. Click any box in the M column to switch the master/detail relationship for the sources; change the master/detail relationship if necessary by selecting the master source in the M column. Tip: designating the source with fewer unique records as master increases performance during a join. 7. Add default values for specific ports as necessary. Certain ports are likely to contain NULL values, since the fields in one of the sources may be empty; you can specify a default value if the target database does not handle NULLs. 8. Select the Condition tab and set the condition. Click the Add button to add a condition; you can add multiple conditions. The master and detail ports must have matching datatypes. The Joiner transformation only supports equivalent ( = ) joins. 9. Select the Properties tab and enter any additional settings for the transformation. 10. Click OK. 11. Choose Repository-Save to save changes to the mapping. Cheers

Sithu =======================================

26.Informatica - What are the joiner caches?

QUESTION #26 When a Joiner transformation occurs in a session, the Informatica Server reads all the records from the master source and builds index and data caches based on the master rows. After building the caches, the Joiner transformation reads records from the detail source and performs the joins.

======================================= For version 7.x and above: when the PowerCenter Server processes a Joiner transformation, it reads rows from both sources concurrently and builds the index and data cache based on the master rows. The PowerCenter Server then performs the join based on the detail source data and the cache data. To improve performance for an unsorted Joiner transformation, use the source with fewer rows as the master source. To improve performance for a sorted Joiner transformation, use the source with fewer duplicate key values as the master. ======================================= From a performance perspective, always make the smaller of the two joining tables the master. ======================================= The cache directory specifies the directory used to cache master records and the index to these records. By default the cache files are created in a directory specified by the server variable $PMCacheDir. If you override the directory, make sure the directory exists and contains enough disk space for the cache files; the directory can be a mapped or mounted drive.

=======================================

27.Informatica - What is the Lookup transformation?

QUESTION #27 Use a Lookup transformation in your mapping to look up data in a relational table, view, or synonym. The Informatica Server queries the lookup table based on the lookup ports in the transformation. It compares Lookup transformation port values to lookup table column values based on the lookup condition.

======================================= Using it we can access data from a relational table which is not a source in the mapping. For example, suppose the source contains only Empno but we want Empname also in the mapping; then instead of adding another table which contains Empname as a source, we can look up the table and get the Empname in the target. ======================================= A lookup is a simple single-level reference structure with no parent/child relationships. Use a lookup when you have a set of reference members that you do not need to organize hierarchically. ======================================= In DecisionStream, a lookup is a simple single-level reference structure with no parent/child relationships. Use a lookup when you have a set of reference members that you do not need to organize hierarchically. HTH ======================================= Use a Lookup transformation in your mapping to look up data in a relational table, view, or synonym. Import a lookup definition from any relational database to which both the Informatica Client and Server can connect. You can use multiple Lookup transformations in a mapping.

Cheers Sithu ======================================= A Lookup transformation in a mapping is used to look up data in a flat file or a relational table, view, or synonym. You can import a lookup definition from any flat file or relational database to which both the PowerCenter Client and Server can connect. You can use multiple Lookup transformations in a mapping. I hope this would be helpful for you. Cheers Sridhar =======================================

28.Informatica - Why use the Lookup transformation?

QUESTION #28 To perform the following tasks. Get a related value: for example, if your source table includes employee ID but you want to include the employee name in your target table to make your summary data easier to read. Perform a calculation: many normalized tables include values used in a calculation, such as gross sales per invoice or sales tax, but not the calculated value (such as net sales). Update slowly changing dimension tables: you can use a Lookup transformation to determine whether records already exist in the target.

======================================= A lookup table is nothing but a lookup on a table, view, synonym, or flat file. By using a lookup we can get a related value with a join condition and perform calculations. Two types of lookups are there: 1) connected 2) unconnected. A connected lookup is within the pipeline only, but an unconnected lookup is not connected to the pipeline; an unconnected lookup returns a single column value only. Let me know if you want any additional information. Cheers Samba ======================================= Hey, with regard to lookup: is there a dynamic lookup and a static lookup? If so, how do you set it? And is there a combination of dynamic connected lookups and static unconnected lookups? ======================================= A lookup has two types: connected and unconnected. It has an Input port, Output port, Lookup port, and Return port, where the Lookup port looks up the corresponding column for the value and the Return port returns the value. Usually we use a lookup to get the related value from a table, usually when there are no columns in common. ======================================= For maintaining the slowly changing dimensions. ======================================= Hi, the answer to your question is yes. There are two types of lookups: dynamic and normal (which you have termed static). To configure, just double-click on the Lookup transformation and go to the Properties tab:

there will be an option - dynamic lookup cache; select that. If you don't select this option then the lookup is merely a normal lookup. Please let me know if there are any questions. Thanks. =======================================

29.Informatica - What are the types of lookup?

QUESTION #29 Connected and unconnected.

======================================= i) connected ii) unconnected iii) cached iv) uncached ======================================= 1. Connected lookup 2. Unconnected lookup =======================================

1. Persistent cache 2. Re-cache from database 3. Static cache 4. Dynamic cache 5. Shared cache Cheers Sithu ======================================= Hello boss/madam, only two types of lookup are there; they are: 1) connected lookup 2) unconnected lookup. I don't understand why people are specifying the cache types; I want to know whether nowadays caches are also taken into this category of lookup. If yes, do specify in the answer list. Thank you. =======================================

30.Informatica - Differences between connected and unconnected lookup?

QUESTION #30 A connected lookup receives input values directly from the pipeline; an unconnected lookup receives input values from the result of a :LKP expression in another transformation. A connected lookup can use a dynamic or static cache; an unconnected lookup can use a static cache. For a connected lookup, the cache includes all lookup columns used in the mapping; for an unconnected lookup, the cache includes all lookup output ports in the lookup condition and the lookup/return port. A connected lookup supports user-defined default values; an unconnected lookup does not support user-defined default values.

======================================= In addition: a connected lookup can return/pass multiple rows/groups of data, whereas an unconnected lookup can return only one port. ======================================= In addition to this: in a connected lookup, if the condition is not satisfied it returns 0; in an unconnected lookup, if the condition is not satisfied it returns NULL. ======================================= Hi, differences between connected and unconnected lookups. A connected lookup receives input values directly from the pipeline; an unconnected lookup receives input values from the result of a :LKP expression in another transformation. A connected lookup can use a dynamic or static cache; an unconnected lookup can use a static cache. For a connected lookup the cache includes all lookup columns used in the mapping (that is, lookup source columns included in the lookup condition and lookup source columns linked as output ports to other transformations); for an unconnected lookup the cache includes all lookup/output ports in the lookup condition and the lookup/return port. A connected lookup can return multiple columns from the same row or insert into the dynamic lookup cache; an unconnected lookup returns one column from each row, through one designated return port (R). If there is no match for the lookup condition, a connected lookup returns the default value for all output ports (if you configure dynamic caching, the PowerCenter Server inserts rows into the cache or leaves it unchanged), while an unconnected lookup returns NULL. If there is a match for the lookup condition, a connected lookup returns the result of the lookup condition for all lookup/output ports (if you configure dynamic caching, the PowerCenter Server either updates the row in the cache or leaves the row unchanged), while an unconnected lookup returns the result of the lookup condition into the return port. A connected lookup passes multiple output values to another transformation (link lookup/output ports to another transformation); an unconnected lookup passes one output value to another transformation (the lookup/output/return port passes the value to the transformation calling the :LKP expression). A connected lookup supports user-defined default values; an unconnected lookup does not. Shivaji Thaneru

what is meant by lookup caches? ======================================= QUESTION #31 The informatica server builds a cache in memory when it processes the first row af a data in a cached look up transformation. Please pick the good answer available or submit your answer. If there is a match for the lookup condition the PowerCenter Server returns the result of the lookup condition for all lookup/output ports.html If there is no match for the lookup condition the PowerCenter Server returns the default value for all output ports. Pass one output value to another transformation.The informatica server stores condition values in the index cache and output values in the data cache. Click Here to view complete document No best answer available. 31. The lookup/output/return port passes the value to the transformation calling : LKP expression. Does not support user-defined default values. Link lookup/output ports to another transformation. If you configure dynamic caching the PowerCenter Server inserts rows into the cache or leaves it unchanged.Informatica .file:///C|/Perl/bin/result. Pass multiple output values to another transformation. If you configure dynamic caching the PowerCenter Server either updates the row the in the cache or leaves the row unchanged. Shivaji Thaneru If there is no match for the lookup condition the PowerCenter Server returns NULL.It allocates memory for the cache based on the amount u configure in the transformation or session properties.html (39 of 363)4/1/2009 7:50:58 PM . If there is a match for the lookup condition the PowerCenter Server returns the result of the lookup condition into the return port. Supports user-defined default values. September 28. 2006 06:34:33 srinivas vadlakonda RE: what is meant by lookup caches? #1 file:///C|/Perl/bin/result.
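As a worked illustration of the :LKP syntax mentioned above, here is a minimal sketch of calling an unconnected lookup from an Expression transformation; the lookup name lkp_emp_name and the port names are hypothetical, not taken from the answers above:

    -- output port ENAME_OUT in an Expression transformation:
    -- calls the unconnected lookup and substitutes a default on no match
    IIF(ISNULL(:LKP.lkp_emp_name(EMPNO)), 'UNKNOWN', :LKP.lkp_emp_name(EMPNO))

Because an unconnected lookup returns NULL when the condition finds no match, and does not support user-defined default values, the IIF supplies the default in the calling expression instead.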

31.Informatica - What is meant by lookup caches?

QUESTION #31: The informatica server builds a cache in memory when it processes the first row of data in a cached Lookup transformation. It allocates memory for the cache based on the amount you configure in the transformation or session properties. The informatica server stores condition values in the index cache and output values in the data cache.

srinivas vadlakonda (September 28, 2006):
A lookup cache is the temporary memory that is created by the informatica server to hold the lookup data and to perform the lookup conditions.
=======================================
A lookup cache is a temporary memory area created by the Informatica server, which stores the lookup data based on certain conditions. It caches the lookup table and lookup values in the cache for each row that comes into the transformation. The caches are of four types: 1) Persistent 2) Dynamic 3) Static 4) Shared.
=======================================

32.Informatica - What are the types of lookup caches?

QUESTION #32:
Persistent cache: You can save the lookup cache files and reuse them the next time the informatica server processes a Lookup transformation configured to use the cache.
Re-cache from database: If the persistent cache is not synchronized with the lookup table, you can configure the Lookup transformation to rebuild the lookup cache.
Static cache: You can configure a static or read-only cache for any lookup table. By default the informatica server creates a static cache. It caches the lookup table and lookup values for each row that comes into the transformation. When the lookup condition is true, the informatica server returns a value from the lookup table or cache. The informatica server does not update the cache while it processes the Lookup transformation.
Dynamic cache: If you want to cache the target table and insert new rows into the cache and the target, you can create a Lookup transformation that uses a dynamic cache. The informatica server dynamically inserts data into the target table.
Shared cache: You can share the lookup cache between multiple transformations. You can share an unnamed cache between transformations in the same mapping.

Sithu (December 13, 2005):
Cache: 1. Static cache 2. Dynamic cache 3. Persistent cache. Sithu
=======================================
Caches are three types, namely dynamic cache, static cache and persistent cache. Cheers, Sithu
=======================================
Dynamic cache, persistent cache, re-cache, shared cache.
=======================================
Hi, could anyone get me information on where you would use these caches for lookups and how you set them? Thanks, infoseeker
=======================================
There are 4 types of lookup cache: persistent, re-cache, static and dynamic. Bye, Stephen
=======================================
Types of caches are: 1) dynamic cache 2) static cache 3) persistent cache 4) shared cache 5) unshared cache.
=======================================
There are five types of caches, such as static cache, dynamic cache, persistent cache, shared cache, etc.
=======================================

33.Informatica - Difference between static cache and dynamic cache

QUESTION #33:
Static cache: You cannot insert or update the cache. The informatica server returns a value from the lookup table or cache when the condition is true. When the condition is not true, the informatica server returns the default value for connected transformations and NULL for unconnected transformations.
Dynamic cache: You can insert rows into the cache as you pass rows to the target. The informatica server inserts rows into the cache when the condition is false; this indicates that the row is not in the cache or target table. You can pass these rows to the target table.

Submitted by: vp

Let's say, for example, your lookup table is your target table. When you create the lookup selecting the dynamic cache, what it does is look up values, and if there is no match it inserts the row into both the target and the lookup cache (hence the words "dynamic cache": it builds up as you go along), or if there is a match it updates the row in the target. On the other hand, static caches don't get updated when you do a lookup. ananthece

Above answer was rated as good by the following members: ssangi, ananthece
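A common companion to the dynamic-cache behaviour just described is the NewLookupRow port that the Designer adds to a dynamic lookup: 0 means the cache was unchanged, 1 means the row was inserted into the cache, and 2 means the row was updated. A minimal sketch of the flagging expression that typically sits downstream of such a lookup (the surrounding port names are illustrative, not from the answers above):

    -- Update Strategy expression after a dynamic-cache lookup
    IIF(NewLookupRow = 1, DD_INSERT,
        IIF(NewLookupRow = 2, DD_UPDATE, DD_REJECT))

Rows the cache already contained unchanged (NewLookupRow = 0) are rejected here so they are not written to the target a second time.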

34.Informatica - Which transformation should we use to normalize the COBOL and relational sources?

QUESTION #34: Normalizer Transformation. When you drag a COBOL source into the Mapping Designer workspace, the Normalizer transformation automatically appears, creating input and output ports for every column in the source.

sithusithu (January 19, 2006):
The Normalizer transformation normalizes records from COBOL and relational sources, allowing you to organize the data according to your own needs. Use a Normalizer transformation instead of the Source Qualifier transformation when you normalize a COBOL source. A Normalizer transformation can appear anywhere in a data flow when you normalize a relational source. When you drag a COBOL source into the Mapping Designer workspace, the Normalizer transformation automatically appears, creating input and output ports for every column in the source. Cheers, Sithu
=======================================

35.Informatica - How does the informatica server sort the string values in Rank transformation?

QUESTION #35: When the informatica server runs in the ASCII data movement mode, it sorts session data using a binary sort order. If you configure the session to use a binary sort order, the informatica server calculates the binary value of each string and returns the specified number of rows with the highest binary values for the string.

phani (December 09, 2005):
When the Informatica Server runs in UNICODE data movement mode, it uses the sort order configured in session properties.
=======================================

36.Informatica - What is the Rankindex in Rank transformation?

QUESTION #36: The Designer automatically creates a RANKINDEX port for each Rank transformation. The Informatica Server uses the Rank Index port to store the ranking position for each record in a group. For example, if you create a Rank transformation that ranks the top 5 salespersons for each quarter, the rank index numbers the salespeople from 1 to 5.

sithusithu (January 12, 2006):
The port on which you want to generate the rank is known as the rank port; the generated values are known as the rank index. Cheers, Sithu
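To make the RANKINDEX behaviour concrete, a small worked example (the names and figures are invented for illustration): a Rank transformation configured for the top 3 rows on the SALES port, grouped by quarter, would emit

    SALESPERSON   SALES    RANKINDEX
    Gupta         90000    1
    Lee           75000    2
    Moroz         68000    3

The remaining rows in the group are dropped, and RANKINDEX restarts at 1 for each new group.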

37.Informatica - What is the Router transformation?

QUESTION #37: A Router transformation is similar to a Filter transformation because both transformations allow you to use a condition to test data. A Filter transformation tests data for one condition and drops the rows of data that do not meet the condition. However, a Router transformation tests data for one or more conditions and gives you the option to route rows of data that do not meet any of the conditions to a default output group. If you need to test the same input data based on multiple conditions, use a Router transformation in a mapping instead of creating multiple Filter transformations to perform the same task.

sithusithu (January 19, 2006):
A Router transformation is similar to a Filter transformation because both transformations allow you to use a condition to test data. A Filter transformation tests data for one condition and drops the rows of data that do not meet the condition. However, a Router transformation tests data for one or more conditions and gives you the option to route rows of data that do not meet any of the conditions to a default output group. Cheers, Sithu
=======================================
Note: I think the definition and purpose of the Router transformation given by sithusithu/sithu is not clear and not fully correct, as they have mentioned "A Router transformation tests data for one or more conditions". Sorry sithu and sithusithu, but I want to clarify that in a Filter transformation also we can give many conditions together, e.g. empno = 1234 and sal > 25000 (2 conditions). The actual purposes of a Router transformation are:
1. To load data into different target tables from the same source, but with a different condition for each target table. E.g., from the EMP table we want to load data into three different target tables: T1 (where deptno = 10), T2 (where deptno = 20) and T3 (where deptno = 30). If we used Filter transformations for this we would need three of them, so instead of three Filter transformations we use only one Router transformation (see the group sketch after this answer).
Advantages:
1. Better performance, because in the mapping the informatica server processes the input data only once with a Router, instead of three times as with Filters.
2. Less complexity, because we use only one Router transformation instead of multiple Filter transformations.
The Router transformation is: active and connected.
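Following the EMP example above, the Router would carry one user-defined group per target; a minimal sketch (the group names are illustrative):

    Group name   Group filter condition
    DEPT10       DEPTNO = 10
    DEPT20       DEPTNO = 20
    DEPT30       DEPTNO = 30
    DEFAULT      (default group: receives rows that match none of the conditions)

Each group's output ports are then linked to the matching target (T1, T2, T3); rows falling into the default group can be routed to an error table or left unconnected.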

38.Informatica - What are the types of groups in Router transformation?

QUESTION #38: Input group and output group. The Designer copies property information from the input ports of the input group to create a set of output ports for each output group. There are two types of output groups: user-defined groups and the default group. You cannot modify or delete default groups.

phani (December 09, 2005):
A Router transformation has the following types of groups:
- Input
- Output
Input group: The Designer copies property information from the input ports of the input group to create a set of output ports for each output group.
Output groups: There are two types of output groups, user-defined groups and the default group. You cannot modify or delete output ports or their properties.
=======================================
The input group contains the data which is coming from the source. We can create as many user-defined groups as required, one for each condition we want to specify. The default group contains all the rows of data that don't satisfy the condition of any group. Cheers, Sithu
=======================================

39.Informatica - Why we use stored procedure transformation?

QUESTION #39: For populating and maintaining databases.

sithusithu (January 19, 2006):
A Stored Procedure transformation is an important tool for populating and maintaining databases. Database administrators create stored procedures to automate time-consuming tasks that are too complicated for standard SQL statements. Cheers, Sithu
=======================================
You might use stored procedures to do the following tasks:
- Check the status of a target database before loading data into it.
- Determine if enough space exists in a database.
- Perform a specialized calculation.
- Drop and recreate indexes.
Shivaji Thaneru
=======================================
We use a Stored Procedure transformation to execute a stored procedure, which in turn might do the above things in a database, and more.
=======================================
Can you give me a real-time scenario, please?
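As a sketch of how such a maintenance procedure is typically wired in (the procedure name SP_REBUILD_INDEXES and its argument are hypothetical): an unconnected Stored Procedure transformation can be called from an expression with the :SP reference qualifier, with the system variable PROC_RESULT capturing the return value:

    -- expression calling an unconnected Stored Procedure transformation
    :SP.SP_REBUILD_INDEXES(TARGET_TABLE, PROC_RESULT)

Alternatively, for the drop/recreate-indexes case no expression is needed at all: setting the transformation's Stored Procedure Type property to Target Pre-load or Target Post-load runs the procedure automatically before or after the session writes the target.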

40.Informatica - What is source qualifier transformation?

QUESTION #40: When you add a relational or a flat file source definition to a mapping, you need to connect it to a Source Qualifier transformation. The Source Qualifier transformation represents the records that the informatica server reads when it runs a session.

Submitted by: Rama Rao B.

The Source Qualifier represents the rows that the Informatica Server reads when it executes a session. With it you can:
- Join data originating from the same source database. You can join two or more tables with primary-foreign key relationships by linking the sources to one Source Qualifier.
- Filter records when the Informatica Server reads source data. If you include a filter condition, the Informatica Server adds a WHERE clause to the default query.
- Specify an outer join rather than the default inner join. If you include a user-defined join, the Informatica Server replaces the join information specified by the metadata in the SQL query.
- Specify sorted ports. If you specify a number for sorted ports, the Informatica Server adds an ORDER BY clause to the default SQL query.
- Select only distinct values from the source. If you choose Select Distinct, the Informatica Server adds a SELECT DISTINCT statement to the default SQL query.
- Create a custom query to issue a special SELECT statement for the Informatica Server to read source data. For example, you might use a custom query to perform aggregate calculations or execute a stored procedure.

Above answer was rated as good by the following members: him.life
=======================================
The source qualifier is also a table; it acts as an intermediary between the source and target metadata, and it also generates the SQL that creates the mapping between the source and target metadata. So it works as an intermediary between the source and the informatica server. Thanks, Rama Rao
=======================================
Def: the transformation which converts the source (relational or flat file) datatypes to Informatica datatypes. Cheers, Sithu
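A sketch of what those options do to the default query; the table and columns are invented for illustration. With a filter condition, Select Distinct and two sorted ports set, the default

    SELECT EMP.EMPNO, EMP.ENAME, EMP.DEPTNO FROM EMP

is generated roughly as

    SELECT DISTINCT EMP.EMPNO, EMP.ENAME, EMP.DEPTNO
    FROM EMP
    WHERE EMP.SAL > 25000
    ORDER BY EMP.EMPNO, EMP.ENAME

The generated statement can be inspected, and overridden entirely, in the Source Qualifier's SQL Query property.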

41.Informatica - What are the tasks that source qualifier performs?

QUESTION #41: Join data originating from the same source database; filter records when the informatica server reads source data; specify an outer join rather than the default inner join; specify sorted records; select only distinct values from the source; create a custom query to issue a special SELECT statement for the informatica server to read source data.

sithusithu (January 24, 2006):
- Join data originating from the same source database. You can join two or more tables with primary-foreign key relationships by linking the sources to one Source Qualifier.
- Filter records when the Informatica Server reads source data. If you include a filter condition, the Informatica Server adds a WHERE clause to the default query.
- Specify an outer join rather than the default inner join. If you include a user-defined join, the Informatica Server replaces the join information specified by the metadata in the SQL query.
- Specify sorted ports. If you specify a number for sorted ports, the Informatica Server adds an ORDER BY clause to the default SQL query.
- Select only distinct values from the source. If you choose Select Distinct, the Informatica Server adds a SELECT DISTINCT statement to the default SQL query.
- Create a custom query to issue a special SELECT statement for the Informatica Server to read source data. For example, you might use a custom query to perform aggregate calculations or execute a stored procedure.
Cheers, Sithu
=======================================
The Source Qualifier transformation is the beginning of the pipeline of any transformation; its main purpose is to read the data from the relational or flat file source and pass the data into the mapping, so that it can be passed on to the other transformations.
=======================================
The Source Qualifier is a transformation that comes with every source definition if the source is a relational database. With every source definition you get a source qualifier; without a source qualifier your mapping will be invalid and you cannot define the pipeline to the other instances. If the source is COBOL, then for that source definition you get a Normalizer transformation, not a Source Qualifier. The Source Qualifier fires a SELECT statement on the source database.
=======================================

42.Informatica - What is the target load order?

QUESTION #42: You specify the target load order based on the source qualifiers in a mapping. If you have multiple source qualifiers connected to multiple targets, you can designate the order in which the informatica server loads data into the targets.

saritha (March 01, 2006):
A target load order group is the collection of source qualifiers, transformations and targets linked together in a mapping.
=======================================

43.Informatica - What is the default join that source qualifier provides?

QUESTION #43: Inner equi join.

sithusithu (January 24, 2006):
The Joiner transformation supports the following join types, which you set in the Properties tab:
- Normal (Default)
- Master Outer
- Detail Outer
- Full Outer
Cheers, Sithu
=======================================
Equijoin on a key common to the sources drawn by the Source Qualifier.

44.Informatica - What are the basic needs to join two sources in a source qualifier?

QUESTION #44: The two sources should have a primary and foreign key relationship. The two sources should have matching data types.

rishi (December 14, 2005):
Both tables should have a common field with the same datatype. It is not necessary that both follow a primary-foreign key relationship, but if any relationship exists it will help you from a performance point of view. Regards, SK
=======================================
Both the sources must be from the same database.
=======================================
Also, if you are using a lookup in your mapping and the lookup table is small, then try to do that join in the Source Qualifier instead, to improve performance.
=======================================

45.Informatica - What is update strategy transformation?

QUESTION #45: This transformation is used to maintain the history data, or just the most recent changes, in the target table.

sithusithu (January 19, 2006):
The model you choose constitutes your update strategy: how to handle changes to existing rows. In PowerCenter and PowerMart you set your update strategy at two different levels:
- Within a session. When you configure a session, you can instruct the Informatica Server to either treat all rows in the same way (for example, treat all rows as inserts) or use instructions coded into the session mapping to flag rows for different database operations.
- Within a mapping. Within a mapping you use the Update Strategy transformation to flag rows for insert, delete, update or reject.
Cheers, Sithu
=======================================
The Update Strategy transformation is used for flagging records for insert, update, delete and reject. In Informatica PowerCenter you can develop the update strategy at two levels: use an Update Strategy transformation in the mapping design, or use the target table options in the session. The target table options are: Insert, Update, Delete, Update as Insert, Update else Insert. Thanks, Rekha
=======================================

46.Informatica - What is the default source option for update strategy transformation?

QUESTION #46: Data driven.

Gyaneshwar (March 28, 2006):
DATA DRIVEN
=======================================

47.Informatica - What is Datadriven?

QUESTION #47: The informatica server follows instructions coded into Update Strategy transformations within the session mapping to determine how to flag records for insert, update, delete or reject. If you do not choose the data driven option setting, the informatica server ignores all Update Strategy transformations in the mapping.

sithusithu (January 19, 2006):
The Informatica Server follows instructions coded into Update Strategy transformations within the session mapping to determine how to flag rows for insert, delete, update or reject. If the mapping for the session contains an Update Strategy transformation, this field is marked Data Driven by default. Cheers, Sithu
=======================================
When the Data Driven option is selected in session properties, the server will apply the update strategy codes (DD_UPDATE, DD_INSERT, DD_DELETE, DD_REJECT) used in the mapping, and not the options selected in the session properties.
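A minimal sketch of the flagging expression inside an Update Strategy transformation; the port LKP_EMPNO is hypothetical, standing for any indicator (typically a lookup return value) that the row already exists in the target:

    -- flag unmatched rows for insert, matched rows for update
    IIF(ISNULL(LKP_EMPNO), DD_INSERT, DD_UPDATE)

With the session's Treat Source Rows As property set to Data Driven, each row is then inserted or updated in the target according to this flag.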

48.Informatica - What are the options in the target session of update strategy transformation?

QUESTION #48: Insert, Delete, Update, Update as update, Update as insert, Update else insert, Truncate table.

Prasanna (February 03, 2006):
Update as Insert: this option specifies that all the update records from the source be flagged as inserts in the target. In other words, instead of updating the records in the target, they are inserted as new records.
Update else Insert: this option enables informatica to flag the records either for update, if they are old, or for insert, if they are new records coming from the source.
=======================================

49.Informatica - What are the types of mapping wizards that are provided in Informatica?

QUESTION #49: The Designer provides two mapping wizards to help you create mappings quickly and easily. Both wizards are designed to create mappings for loading and maintaining star schemas, a series of dimensions related to a central fact table.
Getting Started Wizard: creates mappings to load static fact and dimension tables, as well as slowly growing dimension tables.
Slowly Changing Dimensions Wizard: creates mappings to load slowly changing dimension tables based on the amount of historical dimension data you want to keep and the method you choose to handle historical dimension data.

sithusithu (January 09, 2006):
Simple Pass Through; Slowly Growing Target; Slowly Changing Dimension Type 1 (most recent values); Type 2 (full history, via version, flag or date); Type 3 (current and one previous value).
=======================================
Informatica Designer: Mapping -> Wizards -->
1) Getting Started --> Simple pass-through mapping; Slowly growing target.
2) Slowly Changing Dimensions --> SCD 1 (only recent values); SCD 2 (history, using flag or version or time); SCD 3 (just recent values).
One important point: dimensions are of 2 types, 1) slowly growing targets and 2) slowly changing dimensions.
=======================================

50.Informatica - What are the types of mapping in Getting Started Wizard?

QUESTION #50:
Simple Pass Through mapping: loads a static fact or dimension table by inserting all rows. Use this mapping when you want to drop all existing data from your table before loading new data.
Slowly Growing Target: loads a slowly growing fact or dimension table by inserting new rows. Use this mapping to load new data when existing data does not require updates.

sithusithu (January 09, 2006):
1. Simple Pass Through 2. Slowly Growing Target. Cheers, Sithu

51.Informatica - What are the mappings that we use for slowly changing dimension tables?

QUESTION #51:
Type 1: Rows containing changes to existing dimensions are updated in the target by overwriting the existing dimension. In the Type 1 Dimension mapping, all rows contain current dimension data. Use the Type 1 Dimension mapping to update a slowly changing dimension table when you do not need to keep any previous versions of dimensions in the table.
Type 2: The Type 2 Dimension Data mapping inserts both new and changed dimensions into the target. Changes are tracked in the target table by versioning the primary key and creating a version number for each dimension in the table. Use the Type 2 Dimension/Version Data mapping to update a slowly changing dimension table when you want to keep a full history of dimension data in the table. Version numbers and versioned primary keys track the order of changes to each dimension.
Type 3: The Type 3 Dimension mapping filters source rows based on user-defined comparisons and inserts only those found to be new dimensions into the target. Rows containing changes to existing dimensions are updated in the target. When updating an existing dimension, the Informatica Server saves the existing data in different columns of the same row and replaces the existing data with the updates.

mamatha (June 03, 2006):
Hello sir, I want the whole information on slowly changing dimensions, and I also want a project on slowly changing dimensions in informatica. Thanking you sir, mamatha.
=======================================
1. Update Strategy transformation 2. Lookup transformation
=======================================
SCD: Source to SQ (step 1), SQ to LKP (step 2), SQ/LKP to EXP (step 3), EXP to FTR (step 4), FTR to UPD (step 5), UPD to TGT (step 6), SeqGen to TGT (step 7). I think these are the 7 mapping steps used for an SCD in general. For Type 1 the mapping is doubled, one branch for insert and another for update, 14 steps in total. For Type 2 the mapping is tripled: one branch for insert, a second for update and a third to keep the old record (this is where the history is stored). For Type 3 it is doubled: one branch to insert the new row and one to fill the extra column that keeps the previous data. Cheers, Prasath
=======================================

52.Informatica - What are the different types of Type 2 dimension mapping?

QUESTION #52:
Type 2 Dimension/Version Data mapping: In this mapping, an updated dimension in the source gets inserted into the target along with a new version number, and a newly added dimension in the source is inserted into the target with a primary key.
Type 2 Dimension/Flag Current mapping: This mapping is also used for slowly changing dimensions. In addition, it creates a flag value for a changed or new dimension. The flag indicates whether the dimension is new or newly updated. Recent dimensions are saved with a current flag value of 1, and updated (superseded) dimensions are saved with the value 0.
Type 2 Dimension/Effective Date Range mapping: This is another flavour of the Type 2 mapping used for slowly changing dimensions. It also inserts both new and changed dimensions into the target, and changes are tracked by the effective date range of each version of each dimension.

sithusithu (January 04, 2006):
Type 2: 1. Version number 2. Flag 3. Date. Cheers, Sithu

53.Informatica - How can you recognise whether or not newly added rows in the source get inserted in the target?

QUESTION #53: In the Type 2 mapping we have three options to recognise the newly added rows: version number, flag value, and effective date range.

rishi (December 14, 2005):
If it is a Type 2 dimension, the above answer is fine; but if you want the info on all the insert statements and updates, you need to use the session log file, configured to verbose. You will get the complete set of data: which record was inserted and which was not.
=======================================
Just use the debugger to see how the data moves from source to target; it will show how many new rows get inserted and which rows are updated.
=======================================

54.Informatica - What are the two types of processes that informatica runs the session with?

QUESTION #54:
Load Manager process: starts the session, creates the DTM process, and sends post-session email when the session completes.
The DTM process: creates threads to initialize the session, read, write and transform data, and handle pre- and post-session operations.

rasmi (September 17, 2007):
When the workflow starts to run, the informatica server process starts two processes: the Load Manager process and the DTM process.
The Load Manager process has the following tasks:
1. Lock the workflow and read the properties of the workflow.
2. Create the workflow log file.
3. Start all tasks in the workflow except sessions and worklets.
4. Start the DTM process.
5. Send the post-session email if the DTM terminates abnormally.
The DTM process is involved in the following tasks:
1. Read the session properties.
2. Create the session log file.
3. Create threads, such as the master thread and the reader, writer and transformation threads.
4. Run the pre- and post-session stored procedures.
5. Run the pre- and post-session shell commands.
6. Send the post-session email.
=======================================

55.Informatica - Can you generate reports in Informatica?

QUESTION #55: Yes. By using the Metadata Reporter we can generate reports in informatica.

sithusithu (January 19, 2006):
It is an ETL tool; you cannot make reports from it, but you can generate a metadata report, which is not going to be used for business analysis. Cheers, Sithu
=======================================
Can you please tell me how to generate metadata reports?
=======================================

56.Informatica - Define mapping and sessions?

QUESTION #56:
Mapping: a set of source and target definitions linked by transformation objects that define the rules for transformation.
Session: a set of instructions that describe how and when to move data from sources to targets.

Pavani (December 04, 2006):
Mapping: a set of source and target definitions linked by the different transformations that define the rules for data transformation.
Session: a session identifies the mapping created within the mapping designer; identification of the mapping by the informatica server is done with the help of the session. -Pavani
=======================================

57.Informatica - Which tool do you use to create and manage sessions and batches, and to monitor and stop the informatica server?

QUESTION #57: Informatica Server Manager.

Leninformatica (May 16, 2006):
Informatica Workflow Manager and Informatica Workflow Monitor.
=======================================

58.Informatica - Why do we use partitioning of the session in informatica?

QUESTION #58: Partitioning improves session performance by reducing the time taken to read the source and load the data into the target.

khadarbasha (September 30, 2005):
Performance can be improved by processing data in parallel in a single session by creating multiple partitions of the pipeline. The informatica server can achieve high performance by partitioning the pipeline and performing the extract, transformation and load for each partition in parallel.
=======================================

59.Informatica - How does the informatica server increase session performance through partitioning the source?

QUESTION #59: For relational sources, the informatica server creates multiple connections, one for each partition of a single source, and extracts a separate range of data through each connection; it reads multiple partitions of a single source concurrently. Similarly, for loading, the informatica server creates multiple connections to the target and loads partitions of data concurrently. For XML and file sources, the informatica server reads multiple files concurrently; for loading, it creates a separate file for each partition of a source file. You can choose to merge the target files.

durga (February 13, 2006):
Fine explanation.
=======================================

60.Informatica - What are the tasks that the Load Manager process will do?

QUESTION #60:
Manages the session and batch scheduling: when you start the informatica server, the Load Manager launches and queries the repository for a list of sessions configured to run on the informatica server. When you configure the session, the Load Manager maintains the list of sessions and session start times.
Locking and reading the session: when the informatica server starts a session, the Load Manager locks the session in the repository. Locking prevents you from starting the same session again while it runs.
Reading the parameter file: if the session uses a parameter file, the Load Manager reads the parameter file and verifies that the session-level parameters are declared in the file.
Verifies permissions and privileges: when the session starts, the Load Manager checks whether or not the user has the privileges to run the session.
Creating log files: the Load Manager creates a log file containing the status of the session.

AnjiReddy (2005 02:08:34):
How can you determine whether the informatica server is running or not, without using the event viewer, by using a shell command? I would appreciate a solution for this one (a pmcmd sketch follows the next question). Feel free to mail me at puli.reddy@gmail.com
=======================================

61.Informatica - What are the different threads in the DTM process?

QUESTION #61:
Master thread: creates and manages all other threads.
Mapping thread: one mapping thread is created for each session; it fetches session and mapping information.
Pre- and post-session threads: created to perform pre- and post-session operations.
Reader thread: one thread is created for each partition of a source; it reads data from the source.
Writer thread: created to load data to the target.
Transformation thread: created to transform data.

Killer (2006 00:56:46):
Yupz, this makes sense! :)
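On the shell-command question in the thread above, a hedged sketch, since the exact pmcmd flags vary by PowerCenter version: the pmcmd utility (the same tool shown with startworkflow under QUESTION #72 later in this document) can ping the server from a shell, and its exit status tells you whether the server is up:

    pmcmd pingserver -s SALES:6258

    # a wrapper script could branch on the result
    if pmcmd pingserver -s SALES:6258 ; then echo "server up" ; else echo "server down" ; fi

Treat the host:port value SALES:6258 as a placeholder; it is copied from the startworkflow example under QUESTION #72.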

62.Informatica - Can you copy the session to a different folder or repository?

QUESTION #62: Yes. By using the copy session wizard you can copy a session into a different folder or repository, but that target folder or repository should contain the mapping of that session. If the target folder or repository does not have the mapping of the session being copied, you have to copy that mapping first, before you copy the session.

Prasanna (February 03, 2006):
In addition, you can copy the workflow from the Repository Manager. This will automatically copy the mapping, associated sources, targets and session to the target folder.
=======================================

63.Informatica - What is batch and describe about types of batches?

QUESTION #63: A grouping of sessions is known as a batch. Batches are of two types. Sequential: runs sessions one after the other. Concurrent: runs sessions at the same time. If you have sessions with source-target dependencies, you have to go for a sequential batch to start the sessions one after another. If you have several independent sessions, you can use a concurrent batch, which runs all the sessions at the same time.

sangroover (February 15, 2006):
Batch: a group of any thing. Different batches: different groups of different things.
=======================================

64.Informatica - Can you copy the batches?

QUESTION #64: No.

dl_mstr (December 16, 2007):
Yes, I think workflows can be copied from one folder/repository to another.
=======================================
It should be definitely yes. Without that:
1. The migration of the workflows from dev to test and to production would not make any sense.
2. For similar logic we would have to do the same cumbersome job again.
There might be some limitations while copying the batches; for example, we might not be able to copy the overridden properties of the workflow.
=======================================
There is a slight correction in the above answer: we might not be able to copy the overridden (wrote "overwritten" in the above answer) properties.
=======================================

65.Informatica - When does the informatica server mark that a batch has failed?

QUESTION #65: If one of the sessions is configured to "run if previous completes" and that previous session fails.

Vani_AT (April 01, 2008):
A batch fails when the sessions in the workflow are checked with the property "fail parent if this task fails" and any of the sessions in the sequential batch fails.
=======================================

66.Informatica - What are the different options used to configure the sequential batches?

QUESTION #66: Two options: run the session only if the previous session completes successfully, or always run the session.

rasmi (August 16, 2007):

Hi, where do we have to specify these options?
=======================================
You have to specify those options in the Workflow Designer. You can double-click the link which connects two sessions, and there you can define the condition prevtaskstatus = succeeded; then only does the next session run.
=======================================
I would like to make a small correction to the above answer. Even if a session fails with the above property set, all the following sessions of the workflow still run and succeed, depending on the validity/correctness of the individual sessions. The only difference this property makes is that it marks the workflow as failed. You can also edit the session and check "fail parent if this task fails", which means it will mark the workflow as failed. If the workflow is failed, it won't run the remaining sessions.
=======================================

67.Informatica - In a sequential batch, can you run the session if the previous session fails?

QUESTION #67: Yes, by setting the option "always runs the session".

prade (June 26, 2008):
Yes, you can. Start ---L1---> S1 ---L2---> S2. Suppose S1 fails and we still want to run S2: then set the L2 link condition to $S1.status = FAILED or $S1.status = SUCCEEDED.
=======================================

68.Informatica - Can you start a batch within a batch?

QUESTION #68: You cannot. If you want to start a batch that resides in a batch, create a new independent batch and copy the necessary sessions into the new batch.

sangroover (February 15, 2006):
Logically yes.
=======================================
Logically yes; we can create worklets and call the batch.
=======================================
Logically yes. I have not worked with worklets, but as we can start a single session within a workflow, similarly we should be able to start a worklet within a workflow.
=======================================

69.Informatica - Can you start a session inside a batch individually?

QUESTION #69: We can start our required session only in the case of a sequential batch; in the case of a concurrent batch we can't do that.

Vani_AT (April 01, 2008):
Yes, we can do this in any case; sequential or concurrent doesn't matter. Every workflow starts with a "start" task, and hence every workflow is a hybrid; there is no absolutely concurrent workflow.
=======================================
Yes, we can do it.
=======================================

70.Informatica - How can you stop a batch?

QUESTION #70: By using the server manager or pmcmd.

VEMBURAJ.P (May 28, 2007):
By using a menu command or pmcmd.
=======================================
In the Workflow Monitor: 1. click on the workflow name; 2. click on stop.
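A sketch of the pmcmd route, modelled on the startworkflow example under QUESTION #72 below (the connection flags and the workflow name wSalesAvg are copied from that example as placeholders, not a prescription):

    pmcmd stopworkflow -uv USERNAME -pv PASSWORD -s SALES:6258 -f east -w wSalesAvg

stopworkflow asks the server to stop the named workflow (batch); the menu equivalent is the Workflow Monitor stop action described above.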

71.Informatica - What are the session parameters?

QUESTION #71: Session parameters are like mapping parameters: they represent values you might want to change between sessions, such as database connections or source files. The Server Manager also allows you to create user-defined session parameters. The following are user-defined session parameters:
Database connections.
Source file names: use this parameter when you want to change the name or location of the session source file between session runs.
Target file name: use this parameter when you want to change the name or location of the session target file between session runs.
Reject file name: use this parameter when you want to change the name or location of the session reject file between session runs.

Vani_AT (April 01, 2008):
In addition to this, we provide the lookup file name. There is a predefined format for specifying session parameters; for example, if we want to use a parameter for the source file, then it must be prefixed with $$SrcFile_<any string, optional>. The values for these variables are provided in the parameter file, and the parameters start with a $$ (double dollar symbol).
=======================================
A small correction: session parameters start with a single dollar symbol; the double dollar symbol is used for mapping parameters.
=======================================

72.Informatica - What is parameter file?

QUESTION #72: A parameter file is used to define the values for parameters and variables used in a session. A parameter file is a file created with a text editor such as WordPad or Notepad. You can define the following values in a parameter file:
Mapping parameters
Mapping variables
Session parameters

sithusithu (January 19, 2006):
When you start a workflow, you can optionally enter the directory and name of a parameter file. The Informatica Server runs the workflow using the parameters in the file you specify. For UNIX shell users, enclose the parameter file name in single quotes:

    pmcmd startworkflow -uv USERNAME -pv PASSWORD -s SALES:6258 -f east -w wSalesAvg -paramfile '$PMRootDir/myfile.txt'

For Windows command prompt users, the parameter file name cannot have beginning or trailing spaces; if the name includes spaces, enclose the file name in double quotes:

    -paramfile "$PMRootDir\my file.txt"

Note: when you write a pmcmd command that includes a parameter file located on another machine, use the backslash (\) with the dollar sign ($), as in -paramfile '\$PMRootDir/myfile.txt'. This ensures that the machine where the variable is defined expands the server variable. Cheers, Sithu

file:///C|/Perl/bin/result.U can configure session properties to merge these target files.. Informatica supports Nway partitioning.U can just specify the name of the target file and create the partitions rest will be taken care by informatica session.If u partition a session with a file target the informatica server creates one target file for each partition..RoundRobin 3.For flat file only database partitioning is not applicable.html (80 of 363)4/1/2009 7:50:58 PM .Database partitioning 2. file:///C|/Perl/bin/result.Hash-Key partitioning 5. June 13. Informatica supports following partitions 1.Pass-through 4. 2006 19:10:59 UmaBojja RE: What is difference between partioning of relatonal. Click Here to view complete document No best answer available. Please pick the good answer available or submit your answer.html QUESTION #73 If u parttion a session with a relational target informatica server creates multiple connections to the target database to write target data concurently.Key Range partitioning All these are applicable for relational targets. #1 ======================================= Partition's can be done on both relational and flat files.

=======================================
Could you please tell me how we can partition the session and target?
=======================================
74.Informatica - Performance tuning in Informatica?

QUESTION #74 The goal of performance tuning is to optimize session performance so sessions run during the available load window for the Informatica Server. Increase the session performance as follows:
The performance of the Informatica Server is related to network connections. Data generally moves across a network at less than 1 MB per second, whereas a local disk moves data five to twenty times faster; thus network connections often affect session performance, so avoid network connections where possible.
Flat files: if your flat files are stored on a machine other than the Informatica Server, move those files to the machine on which the Informatica Server resides.
Relational data sources: minimize the connections to sources and targets to improve session performance.
Staging areas: if you use staging areas, you force the Informatica Server to perform multiple data passes; removing staging areas may improve session performance.
Distributing the session load across multiple Informatica Servers may improve session performance; you can run multiple Informatica Servers against the same repository.
Run the Informatica Server in ASCII data movement mode to improve session performance: ASCII mode stores a character value in one byte, whereas Unicode mode takes 2 bytes to store a character.
If a session joins multiple source tables in one Source Qualifier, optimizing the query may improve performance. Also, single-table SELECT statements with an ORDER BY or GROUP BY clause may benefit from optimization such as adding indexes.
Moving the target database onto the server machine may also improve session performance.

We can improve session performance by configuring the network packet size, which controls how much data crosses the network at one time; to do this, go to the Server Manager, choose the server, and configure the database connections.
If your target has key constraints and indexes, they slow the loading of data; to improve session performance in this case, drop the constraints and indexes before you run the session and rebuild them after the session completes.
Running parallel sessions by using concurrent batches will also reduce data loading time, so concurrent batches may also increase session performance.
Partitioning the session improves session performance by creating multiple connections to sources and targets and loading data in parallel pipelines.
In some cases, if a session contains an Aggregator transformation, you can use incremental aggregation to improve session performance.
Aggregator, Rank, and Joiner transformations often decrease session performance because they must group data before processing it; to improve session performance in this case, use the sorted ports option.
If your session contains a Filter transformation, create that filter transformation near the sources, or use a filter condition in the Source Qualifier.
If the session contains a Lookup transformation, you can improve session performance by enabling the lookup cache.
Avoid transformation errors to improve session performance.
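A hedged sketch of the drop-and-rebuild-index advice above, written as pre- and post-session SQL; the table and index names are hypothetical:

-- Pre-session SQL: drop the index so the load is not slowed by index maintenance
DROP INDEX idx_sales_cust_id;

-- Post-session SQL: rebuild the index after the load completes
CREATE INDEX idx_sales_cust_id ON sales_fact (cust_id);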

=======================================
Thanks for your above answer. I would like to know how we can partition a big flat datafile (around 1 GB) and what options to set for the same, in PowerCenter v7.x.
=======================================
Hey, thank you so much for the information. That was one of the best answers I have read on this website: descriptive, yet to the point, and highly useful in the real world. I appreciate your effort.
=======================================
75.Informatica - What is the difference between a maplet and a reusable transformation?

QUESTION #75 A maplet consists of a set of transformations that is reusable; a reusable transformation is a single transformation that can be reused.
If you create variables or parameters in a maplet, they cannot be used in another mapping or maplet, unlike the variables created in a reusable transformation, which can be used in any other mapping or maplet.
We cannot use COBOL source qualifier, Joiner, or Normalizer transformations in a maplet, whereas we can make them reusable transformations.
We cannot include source definitions in reusable transformations, but we can add sources to a maplet.
The whole transformation logic is hidden in the case of a maplet, but it is transparent in the case of a reusable transformation.

A maplet can be used any number of times.
=======================================
Maplet: one or more transformations. Reusable transformation: only one transformation.
Cheers Sithu
=======================================
A mapplet is a group of reusable transformations. It works like a function in the C language. The main purpose of using a mapplet is to hide the logic from the end user's point of view. It is a reusable object, and we can use it N number of times.
=======================================
76.Informatica - Define the Informatica repository?

QUESTION #76 The Informatica repository is a relational database that stores information, or metadata, used by the Informatica Server and Client tools. Metadata can include information such as mappings describing how to transform source data, sessions indicating when you want the Informatica Server to perform the transformations, and connect strings for sources and targets. The repository also stores administrative information such as usernames and passwords, permissions and privileges, and product version. Use the Repository Manager to create the repository: the Repository Manager connects to the repository database and runs the code needed to create the repository tables. These tables store metadata in a specific format that the Informatica Server and Client tools use.

Informatica . Cheers Sithu ======================================= 77. 2006 06:49:01 sithusithu RE: Define informatica repository? Member Since: December 2005 Contribution: 161 #1 ======================================= Infromatica Repository:The informatica repository is at the center of the informatica suite. The informatica client and server access the repository to save and retrieve metadata.What r the types of metadata that stores in repository? QUESTION #77 Following r the types of metadata that stores in the repository Database connections Global objects Mappings Mapplets Multidimensional metadata Reusable transformations Sessions and batches Short cuts Source definitions Target defintions Transformations Click Here to view complete document No best answer available.html January 10.html (85 of 363)4/1/2009 7:50:58 PM . You create a set of metadata tables within the repository database that the informatica application and tools access. file:///C|/Perl/bin/result.file:///C|/Perl/bin/result. Please pick the good answer available or submit your answer.

=======================================
- Source definitions: definitions of database objects (tables, views, synonyms) or files that provide source data.
- Target definitions: definitions of database objects or files that contain the target data.
- Multi-dimensional metadata: target definitions that are configured as cubes and dimensions.
- Mappings: a set of source and target definitions along with transformations containing the business logic that you build into the transformation. These are the instructions that the Informatica Server uses to transform and move data.
- Reusable transformations: transformations that you can use in multiple mappings.
- Mapplets: a set of transformations that you can use in multiple mappings.
- Sessions and workflows: these store information about how and when the Informatica Server moves data. A workflow is a set of instructions that describes how and when to run tasks related to extracting, transforming, and loading data. A session is a type of task that you can put in a workflow; each session corresponds to a single mapping.
Cheers Sithu
=======================================
78.Informatica - What is the PowerCenter repository?

QUESTION #78 The PowerCenter repository allows you to share metadata across repositories to create a data mart domain. In a data mart domain, you can create a single global repository to store metadata used across an enterprise, and a number of local repositories to share the global metadata as needed.

=======================================
- Standalone repository: a repository that functions individually, unrelated and unconnected to other repositories.
- Global repository: (PowerCenter only) the centralized repository in a domain, a group of connected repositories. Each domain can contain one global repository. The global repository can contain common objects to be shared throughout the domain through global shortcuts.
- Local repository: (PowerCenter only) a repository within a domain that is not the global repository. Each local repository in the domain can connect to the global repository and use objects in its shared folders.
Cheers Sithu
=======================================
79.Informatica - How can you work with a remote database in Informatica? Did you work directly by using remote connections?

QUESTION #79 To work with a remote data source, you need to connect to it with remote connections. However, it is not preferable to work with that remote source directly by using remote connections: if you work directly with a remote source, session performance decreases because less data passes across the network in a given time. Instead, bring that source onto your local machine, where the Informatica Server resides.

=======================================
You can work with a remote source, but you have to configure the FTP connection details: IP address and user authentication.
Cheers Sithu
=======================================
80.Informatica - What is tracing level and what are the types of tracing level?

QUESTION #80 Tracing level represents the amount of information that the Informatica Server writes in a log file. Types of tracing level: Normal, Verbose, Verbose Init, and Verbose Data.
=======================================
They are: 1) Terse 2) Normal 3) Verbose Init 4) Verbose Data.
=======================================
81.Informatica - If a session fails after loading 10,000 records into the target, how can you load the records starting from the 10,001st record when you run the session next time?

QUESTION #81 As explained above, the Informatica Server has three methods of recovering sessions: use "perform recovery" to load the records from where the session failed.
=======================================
Hi, we can restart the session using the session recovery option in the Workflow Manager and Workflow Monitor; the loading then starts from the 10,001st row. If you start the session normally, it starts from the 1st row. If you define the target load type as "Bulk", session recovery is not possible.
=======================================
82.Informatica - If I make any modifications to my table in the back end, does it reflect in the Informatica warehouse or mapping?

QUESTION #82 NO. Informatica is not at all concerned with the back-end database; it displays the information that is stored in the repository.

If you want back-end changes reflected on the Informatica screens, you have to import the table again from the back end into Informatica over a valid connection, and replace the existing definitions with the imported ones.
=======================================
Yes, it will be reflected once you refresh the mapping again.
=======================================
It does matter if you have a SQL override, say in the SQ or in a Lookup where you override the default SQL. If you then make a change to the underlying table in the database that makes the override SQL incorrect for the modified table, the session will fail. If you change a table, say rename a column that is in the SQL override statement, the session will fail. But if you add a column to the underlying table after the last column, the SQL statement in the override will still be valid. If you change the size of columns, the SQL will still be valid, although you may get truncation of data if the database column has a larger size (more characters) than the SQ or a subsequent transformation.
=======================================
83.Informatica - After dragging the ports of three sources (SQL Server, Oracle, Informix) to a single Source Qualifier, can you map these three ports directly to the target?

QUESTION #83 NO. Unless and until you join those three ports in the Source Qualifier, you cannot map them directly.

=======================================
The first part of the question itself is not possible: sources from heterogeneous databases cannot be pulled into a single Source Qualifier. They can only be joined using a Joiner; for a heterogeneous join, a Joiner transformation has to be used.
=======================================
I don't think dragging three heterogeneous sources into a single Source Qualifier is valid.
=======================================
If you drag three heterogeneous sources and populate the target without any join, you are producing a Cartesian product. If you don't use a join, then not only different sources but even homogeneous sources show the same error.
=======================================
Yes, it is possible. Whenever we drag multiple sources into the same Source Qualifier: 1. there must be a joining key between the tables, and 2. the SQL needs to be executed in the database to join the three tables. To use a single Source Qualifier for multiple sources, the data source for all the sources should be the same. If you do not want to use joins at the Source Qualifier level, you can add some joins separately, and the result can then be written to the target.
=======================================
84.Informatica - What is Data cleansing..?

QUESTION #84 The process of finding and removing or correcting data that is incorrect, out-of-date, redundant, incomplete, or formatted incorrectly.

=======================================
Data cleansing is a two-step process including DETECTION and then CORRECTION of errors in a data set.
=======================================
This is nothing but polishing of data. For example, one subsystem may store the gender as M and F while another stores it as MALE and FEMALE, and the subsystems maintaining the customer address can all differ; addresses are another typical example, and we might need an address cleansing tool to get the customers' addresses into a clean and consistent form. So we need to polish this data and clean it before it is added to the data warehouse.
=======================================
Data cleansing means removing inconsistent data and transferring the data in the correct way and correct manner.
=======================================
It means the process of removing data inconsistencies and reducing data inaccuracies.
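A short hedged SQL sketch of the gender-standardization example above; the table and column names are hypothetical, and the same logic could equally be written in an Informatica Expression transformation with DECODE or IIF:

SELECT cust_id,
       CASE UPPER(gender)
           WHEN 'M' THEN 'MALE'
           WHEN 'F' THEN 'FEMALE'
           ELSE UPPER(gender)
       END AS gender_std
FROM   src_customer;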

=======================================
85.Informatica - How can we partition a session in Informatica?

QUESTION #85 The Informatica PowerCenter Partitioning option optimizes parallel processing on multiprocessor hardware by providing a thread-based architecture and built-in data partitioning. GUI-based tools reduce the development effort necessary to create data partitions, and streamline ongoing troubleshooting and performance tuning tasks while ensuring data integrity throughout the execution process. As the amount of data within an organization expands and real-time demand for information grows, the PowerCenter Partitioning option enables hardware and applications to provide outstanding performance and to jointly scale to handle large volumes of data and users.
=======================================
86.Informatica - What is a time dimension? Give an example.

QUESTION #86 The time dimension is one of the important dimensions in a data warehouse. This dimension helps to find the sales done on a daily, weekly, monthly, and yearly basis, and we can do trend analysis by comparing this year's sales with the previous year's, or this week's sales with the previous week's. Whenever you generate a report, you access the data through the time dimension.
e.g. an employee time dimension. Fields: date key, full date, day of week, day, month, quarter, fiscal year.
=======================================
In a relational data model, for normalization purposes, the year lookup, quarter lookup, month lookup, and week lookup are not merged into a single table. In a dimensional data model (star schema), these tables would be merged into a single table called the TIME DIMENSION, for performance and for slicing data.
=======================================
A TIME DIMENSION is a table that contains the detailed information about the time at which a particular 'transaction' or 'sale' (event) has taken place. The TIME DIMENSION has the details of DAY, WEEK, MONTH, QUARTER, and YEAR.
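A hedged DDL sketch of such a table, following the fields listed above; the table name, column names, and Oracle-style types are illustrative:

CREATE TABLE time_dim (
    date_key     INTEGER PRIMARY KEY,   -- surrogate key, e.g. 20050825
    full_date    DATE,
    day_of_week  VARCHAR2(9),
    day_of_month INTEGER,
    week_of_year INTEGER,
    month_name   VARCHAR2(9),
    quarter      INTEGER,
    fiscal_year  INTEGER
);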

Informatica .html (94 of 363)4/1/2009 7:50:58 PM . 2005 02:05:13 Nagi R Anumandla RE: Diff between informatica repositry server & informatica server Click Here to view complete document Informatica Repository Server:It's manages connections to the repository from client application.Diff between informatica repositry server & informatica server QUESTION #87 No best answer available.Informatica . 2006 07:47:31 chiranth Member Since: December 2005 Contribution: 1 #1 RE: Explain the informatica Architecture in detail. Informatica Server:It's extracts the source data performs the data transformation and loads the transformed data into the target ======================================= #1 88. The TIME DIMENSION has the details of DAY WEEK MONTH QUARTER YEAR ======================================= 87... August 11.Explain the informatica Architecture in detail QUESTION #88 No best answer available. file:///C|/Perl/bin/result. Please pick the good answer available or submit your answer.html 'transaction' or 'sale' (event) has taken place.file:///C|/Perl/bin/result. January 13. Please pick the good answer available or submit your answer.

=======================================
The Informatica Server connects to the source data and target data using native ODBC drivers; it also connects to the repository for running sessions and retrieving metadata information.

source ------> informatica server ---------> target
                        |
                   REPOSITORY
=======================================
Client tools (Designer, Workflow Manager, Workflow Monitor, Repository Manager)
                        |
            Repository Server <-----> Repository
                        |
source ------> Informatica Server ------> target
=======================================
89.Informatica - Discuss the advantages & disadvantages of star & snowflake schema?

QUESTION #89
=======================================
In a star schema, every dimension will have a primary key, and a dimension table will not have any parent table; whereas in a snowflake schema, a dimension table will have one or more parent tables. Hierarchies for the dimensions are stored in the dimensional table itself in a star schema, whereas hierarchies are broken into separate tables in a snowflake schema; these hierarchies help to drill down the data from the topmost hierarchy to the lowermost hierarchy.
=======================================
A star schema consists of a single fact table surrounded by dimension tables; in a snowflake schema, the dimension tables are connected to further sub-dimension tables. In a star schema the dimension tables are denormalized; in a snowflake schema the dimension tables are normalized. A star schema is used for report generation; a snowflake schema is used for cubes. The advantage of the snowflake schema is that the normalized tables are easier to maintain, and it also saves storage space. The disadvantage of the snowflake schema is that it reduces the effectiveness of navigation across the tables, due to the large number of joins between them.
=======================================
It depends upon the clients and which they are following, whether snowflake or star schema.
=======================================
In a STAR schema there is no relation between any two dimension tables, whereas in a SNOWFLAKE schema there is a possible relation between the dimension tables.
=======================================
Snowflakes are an addition to the Kimball dimensional system to enable that system to handle hierarchical data. When Kimball proposed the dimensional data warehouse, it was not at first recognized that hierarchical data could not be stored. Snowflake tables are often called "outlyers" by data modelers because they must share a key with a dimension that directly connects to a fact table. SD2 can have "outlyers", but these are very difficult to instantiate. Commonly, every attempt is made not to use snowflakes, by flattening hierarchies; but when this is not possible or practical, the snowflake design solves the problem.
=======================================
90.Informatica - What are the main advantages and purpose of using the Normalizer Transformation in Informatica?

QUESTION #90
=======================================
The Normalizer Transformation is used mainly with COBOL sources, where most of the time the data is stored in denormalized format. The Normalizer transformation can also be used to create multiple rows from a single row of data.
=======================================
Hi, by using the Normalizer transformation we can convert rows into columns and columns into rows, and we can also generate multiple rows from one row.
=======================================
vamshidhar: How do you convert rows to columns in a Normalizer? Could you explain? Normally it is used to convert columns to rows; for converting rows to columns we need an Aggregator and an Expression, and a little coding effort is needed. Denormalization is not possible with a Normalizer transformation.

=======================================
91.Informatica - How to read rejected data or bad data from the bad file and reload it to the target?

QUESTION #91 Correct the rejected data and send it to the target relational tables using the load-order utility. Find out the rejected data by using the column indicator and row indicator.
=======================================
Design a trap to a file or table by using a Filter transformation or a Router transformation; a Router works well for this.
=======================================
92.Informatica - How do you transfer the data from the data warehouse to a flat file?

QUESTION #92
=======================================
You can write a mapping with the flat file as a target using a DUMMY_CONNECTION. A flat file target is built by pulling a source into target space using the Warehouse Designer tool.
=======================================
93.Informatica - At the max, how many transformations can be used in a mapping?

QUESTION #93
=======================================
n number of transformations.
=======================================
22 transformations: Expression, Joiner, Aggregator, Router, Stored Procedure, etc. You can find them on the Informatica transformation toolbar.
=======================================
In a mapping we can use any number of transformations, depending on the project and the transformations included.
=======================================
There is no such limitation on the number of transformations, but from a performance point of view, using too many transformations will reduce session performance. My idea is that if more transformations are needed in a mapping, it is better to go for a stored procedure.
=======================================
Always remember when designing a mapping: less is more. Design with the least number of transformations that can do the most work.
=======================================
94.Informatica - What is the difference between Normal load and Bulk load?

QUESTION #94

=======================================
Bulk load is faster than normal load; bulk load is also called direct loading. In the case of bulk load, the Informatica Server bypasses the database log file, so we cannot roll back the transactions.
=======================================
Normal load: a normal load writes information to the database log file, so it helps if any recovery is needed. When the source file is a text file and you are loading data into a table, in such cases you should use normal load only, or else the session will fail.
Bulk mode: a bulk load does not write information to the database log file, so if any recovery is needed, we can't do anything in such cases. Comparatively, bulk load is considerably faster than normal load.
=======================================
Rule of thumb: for a small number of rows, use normal load; for large volumes of data, use bulk load.

=======================================
Also remember that not all databases support bulk loading, and bulk loading fails a session if your mapping has primary keys.
=======================================
95.Informatica - What is a junk dimension?

QUESTION #95 A junk dimension is a collection of random transactional codes, flags, and/or text attributes that are unrelated to any particular dimension. The junk dimension is simply a structure that provides a convenient place to store the junk attributes. A good example would be a trade fact in a company that brokers equity trades.
=======================================
Junk dimensions are particularly useful in a snowflake schema, and are one of the reasons why snowflake is sometimes preferred over the star schema. There are dimensions that are frequently updated; so, from the base set of existing dimensions, we pull out the ones that are frequently updated and put them into a separate table. This dimension table is called the junk dimension.
=======================================
A junk dimension is used for constraining queries based on text and flag values. Sometimes a few attributes are discarded from the major dimensions; we then keep all the discarded attributes in one place, and that is called the junk dimension.
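A hedged DDL sketch of a junk dimension for the equity-trade example above; the table name, columns, and codes are illustrative:

CREATE TABLE trade_junk_dim (
    junk_key       INTEGER PRIMARY KEY,
    buy_sell_flag  CHAR(1),        -- 'B' or 'S'
    order_type     VARCHAR2(10),   -- e.g. 'LIMIT', 'MARKET'
    solicited_flag CHAR(1)         -- 'Y' or 'N'
);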

=======================================
96.Informatica - Can we look up a table from a Source Qualifier transformation (unconnected lookup)?

QUESTION #96
=======================================
No, we can't look up data.
=======================================
No, we can't. I will explain why: 1) unless you assign the output of the Source Qualifier to another transformation or to a target, there is no way it will include the field in the query; 2) the Source Qualifier doesn't have any variable fields to use in an expression.
=======================================
No, it's not possible: the Source Qualifier doesn't have any variable fields to use in expressions.
=======================================
97.Informatica - How to get the first 100 rows from the flat file into the target?

QUESTION #97

=======================================
By using a shell script.
=======================================
Please check this one: task ----->(link) session (Workflow Manager); double click on the link and type $$source success rows (a parameter in the session variables) = 100, and it should automatically stop the session.
=======================================
1. Use the test download option if you want to use it for testing. 2. Put a counter/sequence generator in the mapping and use it. Hope it helps.
=======================================
Just add this Unix command in the session properties --> Components --> Pre-session Command: head -100 <source file path> > <new file name>. Mention the new file name and path in the Session --> Source properties.
=======================================
I'd copy the first 100 records to a new file and load that.
=======================================
Use a Sequence Generator with its reset property set, and then use a Filter with the condition NEXTVAL <= 100.
=======================================
98.Informatica - Can we modify the data in a flat file?

QUESTION #98

=======================================
Can't.
=======================================
Manually, yes.
=======================================
Yes.
=======================================
Yes.
=======================================
Can you please explain how to do this without modifying the flat file manually?
=======================================
Just open the text file with Notepad and change whatever you want (but the datatype should stay the same).
Cheers Sithu
=======================================
Yes, by opening the text file and editing it.
=======================================
You can generate a flat file with a program; I mean, not manually.
=======================================
Let's not discuss manually modifying the data of a flat file. Let's assume that the target is a flat file: I want to update the data in the flat file target based on the input source rows. Like we use the update strategy / target properties in the case of relational targets for updates, do we have any options in the session or mapping to perform a similar task for a flat file target? I have heard about the append option in INFA 8.x, which may be helpful for incremental loads into a flat file, but this is not a workaround for updating the rows. Please post your views.
=======================================
You can modify the flat file using shell scripting in Unix (awk, grep, sed). Hope this helps.
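A hedged shell sketch of the awk/sed approach mentioned above; the file name and field layout are hypothetical, and a comma-delimited file is assumed:

# Standardize the gender code in the 3rd field of every row
awk -F',' 'BEGIN { OFS = "," } $3 == "F" { $3 = "FEMALE" } { print }' cust.dat > cust_new.dat
mv cust_new.dat cust.dat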

=======================================
99.Informatica - Difference between summary filter and detail filter?

QUESTION #99
Summary filter: we can apply it to groups of records that contain common values.
Detail filter: we can apply it to each and every record in a database.
=======================================
Will it be correct to say that:
Summary filter ----> HAVING clause in SQL
Detail filter ------> WHERE clause in SQL
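A brief hedged SQL illustration of that WHERE/HAVING analogy; the table and column names are hypothetical:

-- Detail filter: applied to every record before grouping
SELECT deptno, SUM(sal) FROM emp WHERE sal > 1000 GROUP BY deptno;

-- Summary filter: applied to the groups themselves
SELECT deptno, SUM(sal) FROM emp GROUP BY deptno HAVING SUM(sal) > 50000;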

=======================================
100.Informatica - What are the differences between a view and a materialized view?

QUESTION #100
=======================================
Materialized views are schema objects that can be used to summarize, precompute, replicate, and distribute data, e.g. to construct a data warehouse. A materialized view provides indirect access to table data by storing the results of a query in a separate schema object, unlike an ordinary view, which does not take up any storage space or contain any data.
=======================================
A view is a tailored representation of data: it accesses data from existing tables, it has a logical structure, and it occupies no storage space. A materialized view stores precalculated data: it has a physical structure and occupies space.
=======================================
Difference between a view and a materialized view: if you update or insert through a view, the corresponding base table is affected, but changes to the base table do not show in a materialized view until it is refreshed.
=======================================
Materialized views store copies of data or aggregations. They can be used to replicate all or part of a single table, or to replicate the result of a query against multiple tables; refreshes of the replicated data can be done automatically by the database at set time intervals.
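A hedged Oracle-style SQL sketch contrasting the two objects described above; the object names and refresh schedule are illustrative:

-- Ordinary view: just a stored query, resolved each time it is selected
CREATE VIEW v_dept_sal AS
SELECT deptno, SUM(sal) AS total_sal FROM emp GROUP BY deptno;

-- Materialized view: stores the result set and can refresh on a schedule
CREATE MATERIALIZED VIEW mv_dept_sal
    BUILD IMMEDIATE
    REFRESH COMPLETE START WITH SYSDATE NEXT SYSDATE + 1
AS
SELECT deptno, SUM(sal) AS total_sal FROM emp GROUP BY deptno;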

=======================================
A view always reflects changes made to its master table, because the stored query runs each time the view is selected; a materialized view carries the data as of its last refresh, even if the master table has changed since the materialized view was created.
=======================================
In a materialized view we can't perform DML operations, but the reverse is true in the case of a simple view.
=======================================
In the case of a materialized view we can perform DML, but the reverse is not true in the case of a simple view.
=======================================
A view is just a stored query and has no physical part: the SELECT query is stored in the DB, and whenever you select from the view, the stored query is executed; effectively you are calling the stored query. When you want to reuse a query repeatedly, or the query is complex, you store it in the DB as a view. A materialized view, on the other hand, stores the data as well and, like a table, requires storage parameters: it has a physical table associated with it, so it does not have to resolve the query each time it is queried. Depending on how large the result set is and how complex the query, a materialized view should perform better; once a view is instantiated, performance can be quite good until it is aged out of the cache.
=======================================
101.Informatica - What is the difference between summary filter and detail filter?

QUESTION #101
=======================================
A summary filter can be applied to a group of rows that contain a common value, whereas detail filters can be applied to each and every record of the database.
=======================================
102.Informatica - Compare the Data Warehousing Top-Down approach with the Bottom-Up approach.

QUESTION #102
=======================================
The bottom approach is the best, because in a 3-tier architecture the data tier is the bottom one.
=======================================
The top-down approach is better in data warehousing.
=======================================
Rajesh: The bottom-up approach is better.
=======================================
At software integration time bottom-up is good, but at implementation time top-down is good.
=======================================
Top-down: ODS --> ETL --> Data warehouse --> Data mart --> OLAP.
Bottom-up: ODS --> ETL --> Data mart --> Data warehouse --> OLAP.
Cheers Sithu

=======================================
In the top-down approach, we first build the data warehouse and then the data marts; this needs more cross-functional skills, is a time-taking process, and is also costly. In the bottom-up approach, we first build the data marts and then the data warehouse; the data mart that is built first remains as a proof of concept for the others, and it takes less time and less cost compared to the above.
=======================================
Nothing is wrong with any of these approaches; it all depends on your business requirements and on what is already in place at your company. Lots of folks have a hybrid approach. For more info, read Kimball vs Inmon.
=======================================
103.Informatica - Discuss which is better among incremental load, Normal load and Bulk load.

QUESTION #103
=======================================
Normal load is the best.
=======================================
Normal is always preferred over bulk.
=======================================
Normal loading is better.
=======================================
Rajesh: Normal load is better.
=======================================
It depends on the requirement.
=======================================
If the database supports the bulk load option from Informatica, then using BULK LOAD for initially loading the tables is recommended. Otherwise, an incremental load can be better, as it takes only the data that is not already present on the target. Depending upon the requirement, we should choose between normal and incremental loading strategies.
=======================================
If supported by the database, bulk load can do the loading faster than normal load. (The incremental load concept is different; don't merge it with bulk load and normal load.)
=======================================
104.Informatica - What is the difference between connected and unconnected stored procedures?

QUESTION #104
=======================================
Unconnected: the unconnected Stored Procedure transformation is not connected directly to the flow of the mapping. It either runs before or after the session, or is called by an expression in another transformation in the mapping.
Connected: the flow of data through a mapping in connected mode also passes through the Stored Procedure transformation. All data entering the transformation through the input ports affects the stored procedure. You should use a connected Stored Procedure transformation when you need data from an input port sent as an input parameter to the stored procedure, or the results of a stored procedure sent as an output parameter to another transformation.
=======================================
When to use each mode:
- Run a stored procedure before or after your session: Unconnected.
- Run a stored procedure once during your mapping, such as pre- or post-session: Unconnected.
- Run a stored procedure every time a row passes through the Stored Procedure transformation: Connected or Unconnected.
- Run a stored procedure based on data that passes through the mapping, such as when a specific port does not contain a null value: Unconnected.
- Pass parameters to the stored procedure and receive a single output parameter: Connected or Unconnected.
- Pass parameters to the stored procedure and receive multiple output parameters: Connected or Unconnected. (Note: to get multiple output parameters from an unconnected Stored Procedure transformation, you must create variables for each output parameter. For details, see Calling a Stored Procedure From an Expression.)
- Run nested stored procedures: Unconnected.
- Call multiple times within a mapping: Unconnected.
Cheers Sithu
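A hedged sketch of how an unconnected Stored Procedure transformation is called from an Expression transformation, using the :SP reference qualifier; the transformation and port names here are hypothetical, while PROC_RESULT is the reserved variable that captures the procedure's return value:

:SP.SP_GET_TAX_RATE(IN_STATE_CODE, PROC_RESULT)

Additional output parameters would be captured in local variable ports, as the note above describes.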

=======================================
105.Informatica - Differences between Informatica 6.2 and Informatica 7.0?

QUESTION #105
=======================================
Features in 7.1 are:
1. Union and Custom transformations
2. Lookup on flat files
3. Grid servers working on different operating systems can coexist on the same server
4. We can use pmcmdrep
5. We can export independent and dependent repository objects
6. We can move a mapping into any web application
7. Version controlling

8. Data profiling
=======================================
7.0 introduced the Custom transformation and the Union transformation, and also the flat file lookup condition. Yours sincerely, Rushi.
=======================================
Can someone guide me on 1) data profiling and 2) exporting independent and dependent objects? Thanks in advance. -Azhar
=======================================
106.Informatica - What is the difference between the Informatica PowerCenter Server, the Repository Server, and the repository?

QUESTION #106 The PowerCenter Server contains the scheduled runs at which data should load from source to target. The repository contains all the definitions of the mappings done in the Designer.

=======================================
The repository is a database in which all Informatica components are stored in the form of tables. All the metadata is retrieved from the DB through the Repository Server; all the client tools communicate with the DB through the Repository Server, and the Informatica Server also communicates with the DB through the Repository Server. The Repository Server controls the repository and maintains data integrity and consistency across the repository when multiple users use Informatica. The PowerCenter Server / Informatica Server is responsible for the execution of the components (sessions) stored in the repository.
=======================================
PowerCenter Server: does the extraction from the source and loads the data into the target.
Repository Server: takes care of the connection between the PowerCenter client and the repository.
Repository: the place where all the metadata information is stored. The Repository Server and the PowerCenter Server access the repository for managing the data.
=======================================
Hi, the repository is nothing but a set of tables created in a DB; it stores all the metadata of the Informatica objects. The Repository Server is the one that communicates with the repository, i.e. the DB. The Informatica Server is the one responsible for running the workflow tasks, etc.
=======================================
107.Informatica - How to create the staging area in your database?

QUESTION #107 The client has a database, and through that database you get all the sources.
=======================================
If you have defined all the staging tables as targets, use the option Targets --> Generate SQL in the Warehouse Designer.
=======================================
A staging area in a DW is used as a temporary space to hold all the records coming from the source system. So, more or less, it should be an exact replica of the source systems, except for the load strategy, where we use truncate-and-reload options. So create the tables using the same layout as in your source tables, or using the Generate SQL option in the Warehouse Designer tab.
=======================================
Creating staging tables/areas is the work of the data modeller/DBA, just like CREATE TABLE <tablename>. The tables will have some name that identifies them as staging, like dwc_tmp_asset_eval; tmp -----> indicates temporary tables, which are nothing but staging.
=======================================
108.Informatica - What do the Expression and Filter transformations do in the Informatica Slowly Growing Target wizard?

QUESTION #108

=======================================
Don't know.
=======================================
The Expression transformation detects and flags the rows coming from the source; the Filter transformation filters out the rows that are not flagged and passes the flagged rows to the Update Strategy transformation.
=======================================
The Expression finds whether the primary key exists or not and calculates a new flag; based on that new flag, the Filter transformation filters the data.
Cheers Sithu
=======================================
You can use the Expression transformation to calculate values in a single row before you write to the target. For example, you might need to adjust employee salaries, concatenate first and last names, or convert strings to numbers.
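A hedged sketch of the kind of flag logic described above, in Informatica expression syntax; the port names are hypothetical, and the lookup port is assumed to return NULL when the key is not found in the target:

NEW_FLAG: IIF(ISNULL(LKP_PRIMARY_KEY), TRUE, FALSE)

The downstream Filter transformation would then use the condition NEW_FLAG = TRUE, so that only new rows reach the target.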

=======================================
109.Informatica - Briefly explain the versioning concept in PowerCenter 7.1.

QUESTION #109
=======================================
PowerCenter 7.x has 22 transformations, compared with only 17 in 6.x; 5 transformations were added, for example additions around the Lookup.
=======================================
Hi Manoj, I appreciate your response, but can you be a bit clearer? Thanks, Sri.
=======================================
When you create a version of a folder referenced by shortcuts, all shortcuts continue to reference their original object in the original version; they do not automatically update to the current folder version. For example, if you have a shortcut to a source definition in the Marketing folder version 1.0.0 and you then create a new folder version 1.5.0, the shortcut continues to point to the source definition in version 1.0.0. Maintaining versions of shared folders can result in shortcuts pointing to different versions of the folder. Though shortcuts to different versions do not affect the server, they might prove more difficult to maintain. To avoid this, you can recreate shortcuts pointing to earlier versions, but this solution is not practical for much-used objects. Therefore, when possible, do not version folders referenced by shortcuts.
Cheers Sithu
=======================================
110.Informatica - How to join two tables without using the Joiner Transformation?

QUESTION #110

=======================================
It is possible to join two or more tables by using a single Source Qualifier, provided the tables have a relationship. When you drag and drop the tables, you get a Source Qualifier for each table: delete all the Source Qualifiers and add one common Source Qualifier for all of them. Right-click on the Source Qualifier and choose EDIT; on the Properties tab you will find the SQL query, where you can write your SQL.
Cheers Sithu
=======================================
The Joiner transformation requires two input transformations from two separate pipelines. An input transformation is any transformation connected to the input ports of the current transformation.
Cheers Sithu
=======================================
The Joiner transformation is used to join n (n > 1) tables from the same or different databases, but a Source Qualifier transformation can be used to join n tables only from the same database.
=======================================
It can be done using a Source Qualifier, but some limitations apply.
=======================================
Simple: in the session properties there is a user-defined option; by using this we can join without a Joiner.

=======================================
Use a Source Qualifier transformation to join tables on the SAME database; under its Properties tab you can specify the user-defined join. Any SELECT statement you can run on a database, you can also do in a Source Qualifier. Note: you can only join 2 tables with a Joiner transformation, but you can join two tables from different databases.
Cheers Ray Anthony
=======================================
Hi, you can join 2 RDBMS sources of the same database using a SQ by specifying user-defined joins. You can also join two tables of the same kind using a Lookup.
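A hedged illustration of the overrides described above; the EMP/DEPT tables and the join column are only examples. As a full override in the Source Qualifier's SQL Query property:

SELECT emp.empno, emp.ename, dept.dname
FROM   emp, dept
WHERE  emp.deptno = dept.deptno

Alternatively, the User Defined Join property would hold just the join condition, e.g. emp.deptno = dept.deptno.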

=======================================
Hai, the best way to find out bottlenecks is writing to a flat file and seeing where the bottleneck is. kalyan
=======================================
112.Informatica - How do you decide whether you need to do aggregations at database level or at Informatica level? QUESTION #112
=======================================
It depends upon the requirement only. If you have a database with good processing power you can create an aggregation table or view at the database level; otherwise it is better to use Informatica. Here is why we may need to use Informatica: whatever the case, Informatica is a third-party tool, so it will take more time to process an aggregation compared to the database, but Informatica has an option called Incremental Aggregation which updates the stored values with the current values plus the new values, so there is no need to process the entire source again and again. This holds as long as nobody has deleted the cache files; if that happens, the total aggregation has to be executed in Informatica again. In the database we don't have an incremental aggregation facility.

=======================================
If you have an EMS queue, a flat file, or any source other than an RDBMS, you need Informatica to do any kind of aggregate functions. If your source is an RDBMS you are usually not doing only the aggregation in Informatica; there will be business logic behind it, and you may need to do other things like looking up against some table or joining the aggregated result with the actual source. If you are asking whether to do it at the mapping level or at the DB level, then it is always better to aggregate at the DB level by using a SQL override in the SQ, if aggregation is the main purpose of your mapping; it definitely improves the performance. It all depends on the source you have and your requirement.
=======================================
113.Informatica - Source table has 1000 rows. In session configuration, which target load option do you choose? QUESTION #113
=======================================
If you use bulk mode then it will be fast to load.
=======================================
If your database supports the bulk option, then choose the bulk option; otherwise go for the normal option.
=======================================
114.Informatica - What is the procedure to write the query to list the highest salary of three employees? QUESTION #114

=======================================
select sal from (select sal from emp order by sal desc) where rownum <= 3;
=======================================
Hai, there is a MAX function in Informatica; use it. Check out the help file on how to use it.
=======================================
Since this is Informatica, you might as well use the Rank transformation. Cheers Ray Anthony
=======================================
SELECT sal FROM (SELECT sal FROM my_table ORDER BY sal DESC) WHERE ROWNUM < 4; kalyan
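On more recent Oracle releases the same requirement can also be written with an analytic function; this is only a suggested alternative (assuming the standard emp table), and unlike ROWNUM it handles ties:

select sal
from (select sal, dense_rank() over (order by sal desc) rnk from emp)
where rnk <= 3;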

=======================================
select max(sal) from emp;
=======================================
The following is the query to find out the top three salaries in ORACLE (take the emp table):
select * from emp e where 3 > (select count(*) from emp where sal > e.sal) order by sal desc;
In SQL Server (take the emp table):
select top 3 sal from emp order by sal desc;
=======================================
You can write the query as follows:
SQL> select * from (select ename, sal from emp order by sal desc) where rownum <= 3;
=======================================
115.Informatica - Which objects are required by the debugger to create a valid debug session? QUESTION #115

=======================================
Initially the session should be a valid session. Source, target, lookups and expressions should be available, and at least one breakpoint should be set for the debugger to debug your session.
=======================================
The Informatica server must be running.
=======================================
The Informatica Server object is a must.
=======================================
Hi, we can create a valid debug session even without a single breakpoint. But we have to give valid database connection details for the sources, targets and lookups used in the mapping, and it should contain valid mapplets (if there are any in the mapping). Cheers Sithu
=======================================
116.Informatica - What is the limit to the number of sources and targets you can have in a mapping? QUESTION #116

=======================================
As per my knowledge there is no such restriction on the number of sources or targets inside a mapping. The question is: if you make N tables participate in processing at a time, what is the position of your database? From an organization point of view it is never encouraged to use N tables at a time, since it reduces database and Informatica server performance.
=======================================
The restriction is only on the database side: how many concurrent threads are you allowed to run on the DB server?
=======================================
There is one formula: no. of blocks = 0.9 * (DTM buffer size / block size) * no. of partitions, where no. of blocks = (sources + targets) * 2.
=======================================
117.Informatica - What is the difference between the IIF and DECODE functions? QUESTION #117

=======================================
You can use nested IIF statements to test multiple conditions. The following example tests for various conditions and returns 0 if sales is zero or negative:
IIF( SALES > 0, IIF( SALES < 50, SALARY1, IIF( SALES < 100, SALARY2, IIF( SALES < 200, SALARY3, BONUS))), 0 )
You can use DECODE instead of IIF in many cases; DECODE may improve readability. The following shows how you can use DECODE instead of IIF:
DECODE( TRUE,
SALES > 0 AND SALES < 50, SALARY1,
SALES > 49 AND SALES < 100, SALARY2,
SALES > 99 AND SALES < 200, SALARY3,
SALES > 199, BONUS )
=======================================
You can use DECODE in condition columns as well, where we can't use IIF (though you can use CASE); also, by using DECODE, retrieving data is quicker.
=======================================
DECODE can be used in a SELECT statement, whereas IIF cannot be used there.
=======================================
118.Informatica - What are variable ports? List two situations when they can be used. QUESTION #118

=======================================
We have mainly three ports: Input, Output, and Variable. An input port represents data flowing into the transformation; an output port is used when data is mapped to the next transformation; a variable port is used when mathematical calculations are required.
=======================================
A variable port is used to break a complex expression into simpler pieces, and also to store intermediate values.
=======================================
Variable ports usually carry intermediate data (values) and can be used in the Expression transformation.
=======================================
You can also use one, for example, when you have price and quantity ports and want a total: keep total_amt as a variable and then take sum(total_amt).
=======================================
For example, if you are trying to calculate a bonus from the emp table: Bonus = sal * 0.2, and Totalsal = sal + comm + bonus.
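A minimal sketch of that bonus example as Expression transformation ports (the port names here are only illustrative):

V_BONUS    (variable) = SAL * 0.2
O_TOTALSAL (output)   = SAL + IIF(ISNULL(COMM), 0, COMM) + V_BONUS

Variable ports are evaluated in order, so V_BONUS can be referenced by the output port that follows it.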

=======================================
119.Informatica - How does the server recognise the source and target databases? QUESTION #119
=======================================
By using the connection: ODBC if it is relational, an FTP connection if it is a flat file.
=======================================
Ya, we can make sure with the connections in the properties of the session, for both sources and targets. Correct me if I am wrong, please.
=======================================
120.Informatica - How to retrieve the records from a rejected file? Explain with syntax or example. QUESTION #120
=======================================
There is one utility called Reject Loader with which we can find the rejected records, refine them, and reload them.
=======================================
Can you explain how to load rejected rows through Informatica?
=======================================
During the execution of a workflow all the rejected rows are stored in bad files (under the directory where your Informatica server is installed, e.g. C:\Program Files\Informatica PowerCenter 7.1\Server). These bad files can be imported as a flat-file source, and then through a direct mapping we can load the records in the desired format. Every time you run the session a reject file is created; you can modify the records, correct the problems, and load them into the target directly from the reject file using the Reject Loader.
=======================================
121.Informatica - How to lookup the data on multiple tables? QUESTION #121

=======================================
Just check with the import option.
=======================================
It is possible with a SQL override. When you create the Lookup transformation, Informatica asks for a table name, and you can choose Source, Target, Import, or Skip. Click Skip and then use the Lookup SQL Override property in the Properties tab to join two tables for the lookup.
=======================================
Hi, thanks for your response, but my question is: I have two source or target tables, and I want to look up both of those tables. How can I?
=======================================
Hi, you can do it; see the Properties tab.
=======================================
Why use a SQL override? We can look up the data on multiple tables: Informatica provides lookup on joined tables. Hats off to Informatica.
=======================================
If you want to look up data on multiple tables at a time, you can do one thing: join the tables which you want, then look up that joined table.

=======================================
Join the two sources by using a Joiner transformation, and then apply a lookup on the resulting table.
=======================================
Hi, whatever my friends have answered earlier is correct. To be more specific: if the two tables are relational, then you can use the Lookup SQL Override option to join the two tables in the lookup properties. You cannot join a flat file and a relational table. For example, the lookup default query will be: SELECT <lookup table column names> FROM lookup_table. Add the column names of the second table with the table qualifier and a WHERE clause, and you can then continue this query. If you want to use an ORDER BY, then put -- at the end of the ORDER BY. Hope this is clearer.
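A hedged sketch of such an override (all table and column names hypothetical); the trailing -- comments out the default ORDER BY that the server appends to the lookup query:

SELECT c.cust_id, c.cust_name, r.region_name
FROM customers c, regions r
WHERE c.region_id = r.region_id
ORDER BY c.cust_id --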

=======================================
122.Informatica - What is the procedure to load the fact table? Give it in detail. QUESTION #122
=======================================
Based on the requirements of your fact table, choose the sources and data and transform them based on your business needs. For the fact table you need a primary key, so use a Sequence Generator transformation to generate a unique key and pipe it to the target (fact) table along with the foreign keys from the source tables.
=======================================
Hi, usually source records are looked up against the records in the dimension table. DIM tables are called lookup or reference tables; all the possible values are stored in the DIM table. E.g. for product, all the existing prod_ids will be in the DIM table. When data from the source is looked up against the dim table, the corresponding keys are sent to the fact table. Sometimes only an existence check is done and the prod_id itself is sent to the fact. This is not a fixed rule to be followed; it may vary as per your requirements and the methods you follow.
=======================================
First the dimension tables need to be loaded, and then, according to the specifications, the fact tables should be loaded. The specifications play an important role in loading the fact. Don't think that fact tables are different in the case of loading; it is a general mapping as we do for other tables. Please correct me if I am wrong.
=======================================
We use the 2 wizards (i.e. the Getting Started wizard and the Slowly Changing Dimension wizard) to load the fact and dimension tables; using these 2 wizards we can create different types of mappings according to the business requirements and load into the star schema (fact and dimension tables).
=======================================
123.Informatica - What is the use of incremental aggregation? Explain in brief with an example. QUESTION #123

=======================================
It is a session option. When the Informatica server performs incremental aggregation, it passes new source data through the mapping and uses historical cache data to perform the new aggregation calculations incrementally. We use it for performance.
=======================================
Incremental aggregation is in the session properties. Say I have 500 records in my source, and later I get 300 more. If you are not using incremental aggregation, whatever calculation ran on the 500 records will be redone on all 500 + 300 records; if you are using incremental aggregation, the calculation will be done only on the new records (300), and because of this the performance will increase.
=======================================
124.Informatica - How to delete duplicate rows in a flat file source? Is there any option in Informatica? QUESTION #124 Submitted by: gazulas
Use a Sorter transformation; in it you will have a Distinct option, so make use of it.

=======================================
Hi, you can use a dynamic lookup or an aggregator or a sorter for doing this.
=======================================
Instead we can use a 'select distinct' query in the Source Qualifier of the source flat file.
=======================================
You cannot write a SQL override for a flat file.
=======================================
125.Informatica - How to use mapping parameters, and what is their use? QUESTION #125
=======================================
In the Designer you will find the mapping parameters and variables options; you can assign a value to them in the Designer. Coming to their uses: suppose you are doing incremental extractions daily and your source system contains a day column. Without parameters, every day you would have to go to that mapping and change the day so that the particular data will be extracted; if we do that it will be like a layman's work. There comes the concept of mapping parameters and variables: once you assign a value to a mapping variable, it can change between sessions.
=======================================
Mapping parameters and variables make the use of mappings more flexible, avoid creating multiple mappings, and help in adding incremental data. Mapping parameters and variables have to be created in the Mapping Designer by choosing the menu option Mappings -> Parameters and Variables, then entering the name for the variable or parameter (it has to be preceded by $$) and choosing the type (parameter/variable) and datatype. Once defined, the variable/parameter can be used in any expression, for example in the SQ transformation in the Source Filter property: just enter the filter condition. Finally, create a parameter file to assign the value for the parameter/variable and configure it in the session properties; however, this final step is optional. If the parameter is not present in the parameter file, the session uses the initial value that was assigned at the time of creating the variable.
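A minimal sketch of the incremental-extraction idea above (all names hypothetical): define a mapping parameter $$LastRunDate, reference it in the Source Qualifier's Source Filter, and supply its value from a parameter file named in the session properties:

Source Filter: orders.order_date > '$$LastRunDate'

Parameter file:
[MyFolder.WF:wf_daily_load.ST:s_m_load_orders]
$$LastRunDate=2006-01-28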

=======================================
126.Informatica - Can we use an aggregator/active transformation after an Update Strategy transformation? QUESTION #126
=======================================
We can use it, but the update flag will not remain; we can use a passive transformation instead.
=======================================
I guess no update can be placed just before the target, as per my knowledge.
=======================================
You can use an aggregator after an update strategy. The problem is that once you perform the update strategy, say you have flagged some rows to be deleted, and you then perform an aggregator transformation over all rows, say using the SUM function, the deleted rows will still be subtracted in that aggregation.
=======================================
127.Informatica - Why are dimension tables denormalized in nature? QUESTION #127

=======================================
Because in data warehousing historical data should be maintained. Maintaining historical data means, for example, keeping details of where an employee previously worked and where he is working now all in one table; if you maintain a primary key it won't allow duplicate records with the same employee id. So to maintain historical data we go for the data warehousing concept: by using surrogate keys we can achieve the historical data (using an Oracle sequence for the critical column). All the dimensions maintain historical data; they are denormalized because of duplicate entries, meaning not exactly a duplicate record with the same employee number, but another record for the same employee being maintained in the table.
=======================================
Dear Reham, thanks for your response. First of all, I want to tell one thing to all users of this site: please give answers only if you are confident about them. If we give wrong answers, many people who don't know the answer will take them as correct and may fail in an interview; the site must be helpful to others, please keep that in mind. I had discussed this question with my project manager, and what he told me is: the attributes in a dimension table are used over and over again in queries. For efficient query performance it is best if the query picks up an attribute from the dimension table and goes directly to the fact table, and not through intermediate tables. If we normalized the dimension table we would create such intermediate tables, and that would not be efficient. Apart from this, we maintain hierarchy in these tables. For example, if there is a child table and then a parent table, and the child and parent are kept in different tables, one has to join or query both tables every time to get the parent-child relation; if we have both child and parent in the same table we can always refer to them immediately. Maintaining hierarchy is pretty important in the DWH environment. Similarly, if we have a hierarchy something like county > city > state > territory > division > region > nation, and we had different tables for all of them, it would waste database space and we would need to query all of

these tables every time. That is why we maintain hierarchy in dimension tables, and based on the business we decide whether to maintain it in the same table or in different tables.
=======================================
Hello everyone, I don't know the answer to this question, but I have to ask: how can we say that the dimension table is denormalized, given that in a snowflake schema we normalize all the dimension tables? What would be your comment on this?
=======================================
I am a beginner in DW, but as far as I know, fact tables are normalized and dimension tables are denormalized. If I am wrong please correct me.
=======================================
De-normalization is basically the concept of keeping all the dimension hierarchies in single dimension tables. This causes a smaller number of joins while retrieving data from the dimensions and hence faster data retrieval. This is why dimensions in OLAP systems are de-normalized.
=======================================
128.Informatica - In a sequential batch how can we stop a single session? QUESTION #128 Submitted by: prasadns26
Hi, we can stop it using the PMCMD command, or in the Monitor right-click on that particular session and select Stop; this will stop the current session and the sessions next to it.

=======================================
We have a task called Event-Wait; using that we can stop, and we start again using Event-Raise.
=======================================
Hi, we can stop it using the PMCMD command, or in the Monitor right-click on that particular session and select Stop; this will stop the current session and the sessions next to it. This is as per my knowledge.
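For reference, a hedged sketch of stopping a single task from the command line (the service, user and task names are hypothetical, and the exact flags vary by PowerCenter version; these follow the later 8.x pmcmd client):

pmcmd stoptask -sv IntSvc_dev -d Domain_dev -u admin -p secret -f MyFolder -w wf_nightly_load s_load_customers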

=======================================
129.Informatica - How do you handle decimal places while importing a flat file into Informatica? QUESTION #129
=======================================
While getting the data from a flat file in Informatica, i.e. importing data from a flat file, it will ask for the precision; just enter it.
=======================================
While importing the flat file, the flat file wizard helps in configuring the properties of the file: select the numeric column and just enter the precision value and the scale. Precision includes the scale; for example, if the number is 98888.654, enter precision as 8, scale as 3, and width as 10 for a fixed-width flat file.
=======================================
Hi, while importing the flat file definition just specify the scale for a numeric data type. In the mapping, the flat file source supports only the number datatype (no decimal and integer). The SQ associated with that source will have a decimal datatype for that number port of the source: source -> number datatype port -> SQ -> decimal datatype. Integer is not supported; hence decimal takes care of it.
=======================================
You can handle that by simply using the Source Analyzer window: go to the ports of that flat file representation and change the precision and scale.
=======================================
130.Informatica - If you have four lookup tables in the workflow, how do you troubleshoot to improve performance? QUESTION #130

=======================================
When a workflow has multiple lookup tables, use a shared cache.
=======================================
There are many ways to improve a mapping which has multiple lookups.
1) We can create an index for the lookup table if we have permissions (staging area).
2) Divide the lookup mapping into two: (a) dedicate one for insert, meaning source - target; these are new rows, so only the new rows come into the mapping and the process will be fast; (b) dedicate the second one to update, source - target; these are existing rows, so only the rows which already exist come into the mapping.
3) We can increase the cache size of the lookup.
=======================================
131.Informatica - How do I import VSAM files from source to target? Do I need a special plugin? QUESTION #131
=======================================
As far as my knowledge goes, by using the PowerExchange tool you convert the VSAM file to Oracle tables, then do the mapping as usual to the target table.

=======================================
You will need a PowerExchange client on your platform and a PowerExchange listener on each of the mainframe platform(s) that you wish to work on. Thanks
=======================================
Yes, you will need PowerExchange. I have used it to read VSAM from one mainframe platform and write to a different platform. I have worked on KSDS and ESDS file types.
=======================================
PowerExchange does not need to copy your VSAM file to Oracle unless you want to do that; it can do a direct read/write to VSAM.
=======================================
Hi, in the Mapping Designer we have a direct option to import files from VSAM. Navigation: Sources > Import from File > File from COBOL. Thanks, Farid khan Pathan.
=======================================
132.Informatica - Differences between Normalizer and Normalization. QUESTION #132
=======================================
Normalizer: it is a transformation mainly used for COBOL sources; it changes rows into columns and columns into rows. Normalization: to remove redundancy and inconsistency.
=======================================
133.Informatica - What is an IQD file? QUESTION #133

=======================================
An IQD file is nothing but an Impromptu Query Definition. This file is mainly used in the Cognos Impromptu tool: after creating an IMR (report) we save the IMR as an IQD file, which is used while creating a cube in PowerPlay Transformer. In the data source type we select Impromptu Query Definition.
=======================================
IQD file is nothing but Impromptu Query Definition; this file is used for creating cubes in Cognos PowerPlay Transformer.
=======================================
134.Informatica - What is data merging, data cleansing, sampling? QUESTION #134
=======================================
Simply: Cleansing: to identify and remove redundancy and inconsistency. Sampling: just sample the data by sending a subset of the data from source to target.
=======================================
135.Informatica - How to import an Oracle sequence into Informatica? QUESTION #135

=======================================
CREATE ONE PROCEDURE AND DECLARE THE SEQUENCE INSIDE THE PROCEDURE; FINALLY CALL THE PROCEDURE IN INFORMATICA WITH THE HELP OF THE STORED PROCEDURE TRANSFORMATION. If there is still any problem, please contact me. Thanks. Good luck!
=======================================
Hi sunil, I have a problem with this. Can you just tell me a procedure to generate sequence numbers in SQL, so that if I give n number of employees it should generate the sequence? I want to use them in Informatica. Can you help me with this?
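A minimal sketch of that approach in Oracle (all names hypothetical): create the sequence, wrap NEXTVAL in a stored function, and call the function from a Stored Procedure transformation in the mapping:

CREATE SEQUENCE emp_key_seq START WITH 1 INCREMENT BY 1;

CREATE OR REPLACE FUNCTION get_emp_key RETURN NUMBER
IS
  v_key NUMBER;
BEGIN
  SELECT emp_key_seq.NEXTVAL INTO v_key FROM dual;
  RETURN v_key;
END;
/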

=======================================
136.Informatica - Without using the Update Strategy and session options, how can we update our target table? QUESTION #136
=======================================
You can do this by using the update override in the target properties.
=======================================
In the session properties there are options such as Insert, Update as Update, Insert as Update, and so on; by using these we can easily solve it.
=======================================
By default all the rows in the session are set with the insert flag; you can change it in the session general properties: set Treat Source Rows As to Update, so all the incoming rows will be set with the update flag. Now you can update the rows in the target table.
=======================================
Hi, if your database is Teradata we can do it with a TPump or MLoad external loader. The update override in the target properties is used basically for updating the target table based on a non-key column, e.g. update by ename, which is not a key column in the EMP table. But if you use an Update Strategy or session-level properties, the target necessarily should have a PK.
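A hedged sketch of such a target update override (assuming the standard emp table as target; :TU references the ports of the target update set):

UPDATE T_EMP
SET SAL = :TU.SAL, COMM = :TU.COMM
WHERE ENAME = :TU.ENAME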

=======================================
137.Informatica - Two relational tables are connected to a SQ transformation; what are the possible errors it will throw? QUESTION #137
=======================================
We can connect two relational tables in one SQ transformation; no errors will be thrown. Regards, R.Karthikeyan
=======================================
The only two requirements, as far as I know, are:
1. Both tables should have a primary key / foreign key relationship.
2. Both tables should be available in the same schema or same database.
=======================================
138.Informatica - What are partition points? QUESTION #138 Submitted by: saritha
Partition points mark the thread boundaries in a source pipeline and divide the pipeline into stages.

=======================================
139.Informatica - What are cost-based and rule-based approaches, and what is the difference? QUESTION #139
=======================================
Cost-based and rule-based approaches are optimization techniques used in databases where we need to optimize a SQL query. Whenever you process a SQL query in Oracle, what the Oracle engine internally does is read the query and decide the best possible way to execute it. Oracle provides two types of optimizers (indeed 3, but we use only these two techniques, because the third has some disadvantages):
1. Cost-based optimizer (CBO): if a SQL query can be executed in 2 different ways (say path 1 and path 2 for the same query), the CBO calculates the cost of each path, analyses which path has the lower execution cost, and executes that path, so that it can optimize the query execution.
2. Rule-based optimizer (RBO): this basically follows the rules which are needed for executing a query; depending on the number of rules which have to be applied, the optimizer runs the query.

Use: if the table you are trying to query is already analysed, then Oracle will go with the CBO; if the table is not analysed, Oracle follows the RBO. If the table has never been analysed, Oracle will go with a full table scan.
=======================================
140.Informatica - What is a mystery dimension? QUESTION #140
=======================================
Using a mystery dimension you are maintaining the mystery data in your project.
=======================================
Please explain to me clearly what is meant by a mystery dimension.
=======================================
Also known as junk dimensions: making sense of the rogue fields in your fact table. Please read the article http://www.intelligententerprise.com/000320/webhouse.jhtml
=======================================
141.Informatica - What is the difference between Informatica 7.1 and Ab Initio? QUESTION #141

=======================================
In Ab Initio there is the concept of the co-operating system, which makes the mapping run in a parallel fashion; this is not in Informatica.
=======================================
There is a lot of difference between Informatica and Ab Initio: in Ab Initio we use 3 kinds of parallelism, but Informatica uses 1; Ab Initio has no built-in scheduling option, so we schedule manually or with a PL/SQL script, but Informatica contains 4 scheduling options; Ab Initio contains the co-operating system but Informatica does not; ramp-up time is very quick in Ab Initio compared to Informatica; Ab Initio is more user-friendly than Informatica.
=======================================
142.Informatica - What is MicroStrategy? Why is it used? Can anyone explain it in detail? QUESTION #142

=======================================
MicroStrategy is again a BI tool which is HOLAP; you can create 2-dimensional reports and also cubes in it. It is basically a reporting tool, and it has a full range of reporting on the web as well as in Windows.
=======================================
143.Informatica - Can I start and stop a single session in a concurrent batch? QUESTION #143
=======================================
Ya, sure. Just right-click on the particular session and go to the recovery option, or use Event-Wait and Event-Raise.
=======================================
144.Informatica - What is gap analysis? QUESTION #144

=======================================
For a project there will be:
1. BRD (Business Requirement Document) - BA
2. SSSD (Source System Study Document) - BA
The BRD consists of the requirements of the client; the SSSD consists of the source system study. Whatever does not meet the requirements specified in the BRD using the source given in the SSSD is treated as the gap; in one word, the difference between 1 and 2 is called gap analysis.
=======================================
145.Informatica - What is the difference between stop and abort? QUESTION #145
=======================================
The PowerCenter Server handles the abort command for the Session task like the stop command, except that abort has a timeout period of 60 seconds. If the PowerCenter Server cannot finish processing and committing data within the timeout period, it kills the DTM process and terminates the session.

=======================================
Stop: if the session you want to stop is part of a batch, you must stop the batch; if the batch is part of a nested batch, stop the outermost batch.
Abort: you can issue the abort command; it is similar to the stop command except it has a 60-second timeout. If the server cannot finish processing and committing data within 60 seconds, it kills the DTM process and terminates the session.
=======================================
Stop: in this case the data query from the source databases is stopped immediately, but whatever data has already been loaded into the buffer continues through the transformations and loading. Abort: same as stop, but in this case the maximum time allowed for the buffered data is 60 seconds.
=======================================
146.Informatica - Can we run a group of sessions without using the Workflow Manager? QUESTION #146
=======================================
Ya, it's possible: using the pmcmd command we can run the group of sessions without using the Workflow Manager. As per my knowledge, I give the answer.
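A hedged sketch of what that looks like from the command line (the service, user and workflow names are hypothetical; the flags follow the later 8.x pmcmd client and vary by version):

pmcmd startworkflow -sv IntSvc_dev -d Domain_dev -u admin -p secret -f MyFolder wf_group_of_sessions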

=======================================
147.Informatica - What is meant by complex mapping? QUESTION #147
=======================================
Complex mapping means having more business rules.
=======================================
Complex mapping means more logic and more business rules are involved. Actually, in my project the complex mapping was: in my bank project I was involved in constructing a data warehouse. There are many customers in my bank project; after taking loans they relocated to other places, and at that time I found it difficult to maintain both the previous and current addresses. For that I used SCD2. This is a simple example of complex mapping.
=======================================
148.Informatica - Explain the use of the Update Strategy transformation. QUESTION #148

=======================================
To maintain the history data and to maintain the most recent changes to the data.
=======================================
To flag source records as INSERT, DELETE, UPDATE, or REJECT for the target database. The default flag is Insert. This is a must for incremental data loading.
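A small sketch of a typical Update Strategy expression (the lookup port name is hypothetical; DD_INSERT and DD_UPDATE are the built-in flag constants):

IIF( ISNULL(lkp_customer_key), DD_INSERT, DD_UPDATE )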

=======================================
149.Informatica - What are mapping parameters and variables? In which situations can we use them? QUESTION #149
=======================================
Mapping parameters have a constant value throughout the session, whereas mapping variables have values that change; the Informatica server saves the value in the repository and uses it the next time you run the session.
=======================================
How can you edit the parameter file? Once you set up a mapping variable, how can you define it in a parameter file?
=======================================
If we need to change certain attributes of a mapping every time the session is run, it would be very difficult to edit the mapping and then change the attribute. So we use mapping parameters and variables and define the values in a parameter file; then we can edit the parameter file to change the attribute values. This makes the process simple. Mapping parameter values remain constant: if we need to change the parameter value, then we need to edit the parameter file. But the value of a mapping variable can be changed by using a variable function: if we need to increment the attribute value by 1 after every session run, then we can use mapping variables. With a mapping parameter we would need to manually edit the attribute value in the parameter file after every session run.
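A minimal sketch of the variable-function idea (names hypothetical): in an Expression transformation, a port such as

SETMAXVARIABLE($$MAX_ORDER_ID, order_id)

keeps $$MAX_ORDER_ID at the highest order_id seen during the run, and the server saves that value in the repository for the next run; SETVARIABLE works the same way for a simple assignment.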

=======================================
150.Informatica - What is a worklet, what is the use of a worklet, and in which situations can we use it? QUESTION #150 Submitted by: SSekar
Worklet is a set of tasks. If a certain set of tasks has to be reused in many workflows, then we use worklets. To execute a worklet, it has to be placed inside a workflow. The use of a worklet in a workflow is similar to the use of a mapplet in a mapping.
=======================================
A set of workflow tasks is called a worklet. Workflow tasks means 1) Timer 2) Decision 3) Command 4) Event-Wait 5) Event-Raise 6) Mail etc. We use it in different situations.
=======================================
A worklet is a reusable workflow. It might contain more than one task in it; we can use these worklets in other workflows.
=======================================
Besides the reusability of a worklet as mentioned above, we can also use a worklet to group related sessions together in a very big workflow. Suppose we have to extract a file and then load a fact table in the workflow: we can use one worklet to load/update the dimensions.
=======================================
151.Informatica - How do you configure a mapping in Informatica? QUESTION #151
=======================================
You should configure the mapping with the least number of transformations and expressions to do the most amount of work possible. You should minimize the amount of data moved by deleting unnecessary links between transformations. For transformations that use a data cache (such as the Aggregator, Joiner, Rank, and Lookup transformations), limit the connected input/output or output ports; this reduces the amount of data the transformations store in the data cache. You can also perform the following tasks to optimize the mapping:
- Configure single-pass reading.
- Optimize datatype conversions.
- Eliminate transformation errors.
- Optimize transformations.
- Optimize expressions.

=======================================
152.Informatica - If I use the session bulk loading option, can I perform recovery on the session? QUESTION #152 Submitted by: SSekar
If the session is configured to use bulk mode, it will not write recovery information to the recovery tables, so bulk loading will not perform the recovery as required.
=======================================
It's not possible.
=======================================
No, because in a bulk load you won't create a redo log file; when you load normally, redo log files are created, but with a bulk load the session performance increases.
=======================================
153.Informatica - What is the difference between COM & DCOM? QUESTION #153

=======================================
Hai, COM is a technology developed by Microsoft which is based on OO design. COM components expose their interfaces at an interface pointer where the client accesses the component's interface. DCOM is the protocol which enables software components on different machines to communicate with each other through the network.
=======================================
154.Informatica - What are the enhancements made to the Informatica 7.1.1 version when compared to the 6.2 version? QUESTION #154
=======================================
I'm a newbie; correct me if I'm wrong. In 7+ versions:
- we can look up a flat file
- Union and Custom transformations
- there is a propagate option, i.e. if we change the datatype of a field, all the linked columns will reflect that change

- we can write to an XML target
- we can use up to 64 partitions
=======================================
1. Union & Custom transformations
2. Lookup on flat file
3. We can use the pmcmd command
4. We can export independent & dependent repository objects
5. Version controlling
6. Data profiling
7. Support for 64-bit architecture
8. LDAP authentication
=======================================
155.Informatica - How do you create a mapping using multiple Lookup transformations? QUESTION #155

=======================================
Use multiple Lookups in the mapping.
=======================================
Use an unconnected lookup if the same lookup repeats multiple times.
=======================================
Do you mean only lookup transformations and no other transformations? Be clear. If so, one scenario is that we can use multiple connected lookup transformations depending upon the target: if you have different wh_keys in your target table and your source table has only the columns but not the wh_keys, then in order to connect a column to a wh_key column (e.g. sales_branch -> wh_sales_branch) we use multiple lookups, depending upon the targets.
=======================================
156.Informatica - What is the exact meaning of domain? QUESTION #156
=======================================
A particular environment, or a name that identifies one or more IP addresses, for example: gov - government agencies, edu - educational institutions, org - organizations (nonprofit), mil - military, com - commercial business, net - network organizations, ca - Canada, th - Thailand, in - India.
=======================================
Domain in Informatica means a central global repository (GDR) along with the local repositories (LDR) registered to this GDR. This is possible only in PowerCenter and not PowerMart.
=======================================
In database parlance you can define a domain as the set of all possible permissible values for any attribute; for example, you can say the domain for the attribute Credit Card No# consists of all possible valid 16-digit numbers.
=======================================
Domain is nothing but complete information on a particular subject area, like a sales domain or telecom domain.
=======================================
157.Informatica - What are the hierarchies in DWH? QUESTION #157
=======================================
Data sources ---> Data acquisition ---> Warehouse ---> Front-end tools ---> Metadata management ---> Data warehouse operation management
=======================================
Hierarchy in DWH is nothing but an ordered series of related dimensions grouped together to perform multidimensional analysis.

=======================================
158.Informatica - Difference between Rank and Dense Rank? QUESTION #158 Submitted by: sm1506
Rank:
1
2 <-- 2nd position
2 <-- 3rd position
4
5
The same rank is assigned to the same totals/numbers, and the rank that follows a tie skips the tied positions. Golf games usually rank this way; this is usually a gold ranking.

Dense Rank:
1
2 <-- 2nd position
2 <-- 3rd position
3
4
The same ranks are assigned to the same totals/numbers/names, and the next rank follows the serial number.
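As a hedged illustration in SQL (assuming the standard emp table on an Oracle-style database), the two functions can be compared side by side:

SELECT ename, sal,
       RANK()       OVER (ORDER BY sal DESC) AS rnk,
       DENSE_RANK() OVER (ORDER BY sal DESC) AS drnk
FROM   emp;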

=======================================
159.Informatica - Can anyone explain incremental aggregation with an example? QUESTION #159
=======================================
When you use an Aggregator transformation to aggregate, it creates index and data caches to store the data: 1. of the group-by columns, 2. of the aggregate columns. Incremental aggregation is used when we have historical data in place which will be used in the aggregation. Incremental aggregation uses the cache which contains the historical data: for each group-by column value already present in the cache, it adds the data value to its corresponding data cache value and outputs the row; in the case of an incoming value having no match in the index cache, the new values for the group-by and output ports are inserted into the cache.
=======================================
Incremental aggregation is specially used to tune the performance of the aggregator. It captures the changes each time (incrementally) you run the transformation, and then performs the aggregation function on the changed rows and not on the entire set of rows. This improves performance because you are not reading the entire source each time you run the session.
=======================================
160.Informatica - What is meant by a junk attribute in Informatica? QUESTION #160

160. Informatica - What is meant by Junk Attribute in Informatica?
QUESTION #160
April 17, 2006, raghavendra: RE: What is meant by Junk Attribute in Informatica?
A dimension is called a junk dimension if it contains attributes which are rarely changed or modified. For example, in the banking domain we can fetch four attributes for a junk dimension from the Overall_Transaction_master table: tput flag, tcmp flag, del flag and advance flag. All these attributes can be part of a junk dimension. Thanks and regards, raghavendra
=======================================
Hi. In the requirement collection phase, all the attributes that are likely to be used in any dimension are gathered. While creating a dimension, we use all the related attributes of that dimension from the gathered list. At the end, a dimension is created with all the left-over attributes, which is usually called the JUNK dimension, and its attributes are called JUNK attributes.
=======================================
161. Informatica - Informatica Live Interview Questions
QUESTION #161 Here are some of the interview questions I could not answer; can anybody help by giving answers for the others also? Thanks in advance.
Explain grouped cross tab?
Explain reference cursor.
What are parallel queries and query hints?
What is metadata and system catalog?
What is factless fact schema?
What is confirmed dimension?
Which kind of index is preferred in DWH?
Why do we use a DSS database for OLAP tools?
April 17, 2006, binoy_pa: RE: Informatica Live Interview Questions

Confirmed dimension: a dimension that is shared by two or more fact tables. Factless means a fact table without measures that contains only foreign keys; there are two types of factless tables, one for event tracking and the other a coverage table. Bitmap indexes are preferred in data warehousing. Metadata is data about data: everything is stored there, for example mappings, sessions, privileges and other data; in Informatica we can see the metadata in the repository. The system catalog that we use in Cognos also contains data tables, privileges, predefined filters etc.; using this catalog we generate reports. A grouped cross tab is a type of report in Cognos where we have to assign 3 measures for getting the result.
=======================================
Hi Bin, I doubt your answer about the grouped cross tab, where you said 3 measures are to be specified, which I feel is wrong. I think a grouped cross tab has only one measure, but the side and row headers are grouped, like:

          India          China
          Mah   Goa      XYZ   PQR
2000      20K   30K      45K   55K
2001      39K   60K      34K   66K

Here the cross tab is grouped on Country and then State, and this is known as a grouped cross tab. Similarly we can go further and drill year down to quarters.

A grouped cross tab is a single report which contains a number of cross-tab reports based on the grouped items; here the countries are the grouped items:

INDIA            M1     M2
Banglore         578    5876
Chennai          556    542
Hyderabad        255    458

USA              M1     M2
LA               542    45
Chicago          254    4785
Washington DC    546    548

PAKISTAN         M1     M2
Lahore           457    875
Karachi          458    687
Islamabad        7894   64

Thanks
=======================================
A reference cursor is a cursor which is not tied to a fixed query in the declaration section; in the executable section we can give the table name dynamically, so that the cursor can fetch the data from that table.
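A minimal PL/SQL sketch of a reference cursor; the table and column names (EMP, ename) are assumed for illustration:

    DECLARE
        TYPE t_refcur IS REF CURSOR;          -- weakly typed: no query fixed at declaration
        rc     t_refcur;
        v_name VARCHAR2(100);
        v_tab  VARCHAR2(30) := 'EMP';         -- table name supplied at run time
    BEGIN
        OPEN rc FOR 'SELECT ename FROM ' || v_tab;
        LOOP
            FETCH rc INTO v_name;
            EXIT WHEN rc%NOTFOUND;
            DBMS_OUTPUT.PUT_LINE(v_name);
        END LOOP;
        CLOSE rc;
    END;
    /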

Hi, the rest of the answers were given by friends earlier. DSS -> Decision Support System. A DSS database is nothing but a DWH. OLAP tools use data from a DWH which is transformed to generate reports; these reports are used by users and analysts to extract strategic information which helps in decision making. The purpose of a DWH is to provide users data through which they can make their critical business decisions.
=======================================
The default index type for a DWH is the bitmap (non-unique) index.
=======================================
162. Informatica - how do we remove the staging area?
QUESTION #162
June 08, 2006, Hanu: RE: how do we remove the staging area

Are you talking about the DW or about Informatica? If you do not want any staging area, don't create a staging DB: load the data directly into the target. Hanu.
=======================================
Hi. 1) At DB level we can simply DROP the staging table. 2) Then create the target table at DB level. 3) Directly LOAD the records into the target table.
=======================================
Hi, this question is logically not correct. The staging area is just a set of intermediate tables; you can create or maintain these tables in the same database as your DWH or in a different DB. These tables are used to store data from the source, which is cleaned, transformed and subjected to some business logic. Once the source data is done with the above process, data from STG is populated to the final fact table through a simple one-to-one mapping. NOTE: it is recommended to use a staging area.
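That final one-to-one move from staging to the fact table can be pictured with a minimal SQL sketch; all table and column names here are assumed:

    -- cleaned and transformed rows in the staging table go to the fact table as-is
    INSERT INTO sales_fact (date_key, product_key, amount)
    SELECT date_key, product_key, amount
    FROM   stg_sales;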

163. Informatica - what is polling?
QUESTION #163
May 01, 2006, kalyan: RE: what is polling?
It displays updated information about the session in the monitor window. The monitor window displays the status of each session when you poll the Informatica server.
=======================================
164. Informatica - What is Transaction?
QUESTION #164
April 14, 2006, vishali: RE: What is Transaction?
A transaction can be defined as a DML operation, meaning it can be an insertion, modification or deletion of data performed by users, analysts or applications.
=======================================
A transaction is nothing but moving from one window to another window during a process.
=======================================
A transaction is a set of rows bound by a commit or rollback.
=======================================
Hi, a transaction is any event that indicates some action. In DB terms, any committed change that has occurred in the database is said to be a transaction.
=======================================
165. Informatica - what happens if you try to create a shortcut to a non-shared folder?
QUESTION #165

April 14, 2006, sunil_reddy: RE: what happens if you try to create a shortcut to a non-shared folder?
It only creates a copy of it.
=======================================
166. Informatica - In a joiner transformation, you should specify the source with fewer rows as the master source. Why?
QUESTION #166
April 14, 2006, vishali: RE: In a joiner trasformation, you should specify the source with fewer rows as the master source. Why?
In a Joiner transformation the Informatica server reads all the records from the master source and builds index and data caches based on the master table rows. After building the caches, the Joiner transformation reads records from the detail source and performs the joins.
=======================================
The Joiner transformation compares each row of the master source against the detail source. The fewer unique rows in the master, the fewer iterations of the join comparison occur, which speeds the join process.
=======================================
167. Informatica - Where is the cache stored in informatica?
QUESTION #167
April 15, 2006, shekar25_g: RE: Where is the cache stored in informatica?
The cache is stored on the Informatica server.
=======================================
Hi. The cache is stored in the Informatica server's memory, and overflow data is stored on disk in file format; these files are automatically deleted after the successful completion of the session run. If you want to keep that data you have to use a persistent cache.
=======================================
168. Informatica - can batches be copied/stopped from server manager?
QUESTION #168
May 08, 2006, MOOTATI RAGHAVENDROA REDDY: RE: can batches be copied/stopped from server manager?
Yes, we can stop the batches using the server manager or the pmcmd command.
=======================================
169. Informatica - what is rank transformation? where can we use this transformation?
QUESTION #169
April 18, 2006, madhan: RE: what is rank transformation? where can we use this transformation?

The Rank transformation is used to find top or bottom ranks. For example, if we have a sales table in which many employees sell the same product and we need to find the first 5 or 10 employees who sell the most products, we can go for the Rank transformation.
=======================================
To arrange records in hierarchical order and to select the TOP or BOTTOM records. (In spirit it is like a SQL top-N query; Oracle's START WITH and CONNECT BY PRIOR clauses, by contrast, serve hierarchical queries rather than ranking.)
=======================================
It is an active transformation which is used to identify the top and bottom values based on numerics. By default it creates a RANKINDEX port to calculate the rank.
=======================================
It is used to filter the data from the top or from the bottom according to the condition.
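In SQL terms the sales example above looks like the following sketch; the table and column names are assumed:

    SELECT employee_id, units_sold
    FROM  (SELECT employee_id,
                  units_sold,
                  RANK() OVER (ORDER BY units_sold DESC) AS rankindex
           FROM   sales)
    WHERE  rankindex <= 5;    -- the top 5 sellers, like a Rank transformation with Top = 5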

170. Informatica - Can Informatica load heterogeneous targets from heterogeneous sources?
QUESTION #170
April 26, 2006, Anant: RE: Can Informatica load heterogeneous targets from heterogeneous sources?
Yes it can. For example, Flat File and relational sources can be joined in the mapping, and later Flat File and relational targets are loaded.
=======================================
Yes, Informatica can load the data from heterogeneous sources to heterogeneous targets.
=======================================
171. Informatica - how do you load the time dimension?
QUESTION #171
April 25, 2006, Appadu Dora P: RE: how do you load the time dimension
Time dimensions are generally loaded manually, by using PL/SQL, shell scripts, proc C etc.
=======================================
Create a procedure to load data into the time dimension. The procedure needs to run only once to populate all the data. For example, the code below fills it up till 2015; you can modify the code to suit the fields in your table.

create or replace procedure QISODS.Insert_W_DAY_D_PR as
  LastSeqID number default 0;
  loaddate  Date   default to_date('12/31/1979', 'mm/dd/yyyy');
begin
  Loop
    LastSeqID := LastSeqID + 1;
    loaddate  := loaddate + 1;
    INSERT into QISODS.W_DAY_D values(
      LastSeqID,
      Trunc(loaddate),
      Decode(TO_CHAR(loaddate, 'Q'), '1', 1, decode(to_char(loaddate, 'Q'), '2', 1, 2)),
      TO_NUMBER(TO_CHAR(loaddate, 'MM')),
      TO_NUMBER(TO_CHAR(loaddate, 'Q')),
      trunc((ROUND(TO_NUMBER(to_char(loaddate, 'DDD'))) +
             ROUND(TO_NUMBER(to_char(trunc(loaddate, 'YYYY'), 'D'))) + 5) / 7),
      TO_NUMBER(TO_CHAR(loaddate, 'YYYY')),
      TO_NUMBER(TO_CHAR(loaddate, 'DD')),
      TO_NUMBER(TO_CHAR(loaddate, 'D')),
      TO_NUMBER(TO_CHAR(loaddate, 'DDD')),
      1, 1, 1, 1, 1,
      TO_NUMBER(TO_CHAR(loaddate, 'J')),
      ((TO_NUMBER(TO_CHAR(loaddate, 'YYYY')) + 4713) * 12) +
        TO_NUMBER(TO_CHAR(loaddate, 'MM')),
      ((TO_NUMBER(TO_CHAR(loaddate, 'YYYY')) + 4713) * 4) +
        TO_NUMBER(TO_CHAR(loaddate, 'Q')),
      TO_NUMBER(TO_CHAR(loaddate, 'J')) / 7,
      TO_NUMBER(TO_CHAR(loaddate, 'YYYY')) + 4713,
      TO_CHAR(loaddate, 'Day'),
      TO_CHAR(loaddate, 'Month'),
      Decode(To_Char(loaddate, 'D'), '7', 'weekend', '6', 'weekend', 'weekday'),
      Trunc(loaddate, 'DAY') + 1,
      Decode(Last_Day(loaddate), loaddate, 'y', 'n'),
      to_char(loaddate, 'YYYYMM'),
      to_char(loaddate, 'YYYY') || ' Half' ||
        Decode(TO_CHAR(loaddate, 'Q'), '1', 1, decode(to_char(loaddate, 'Q'), '2', 1, 2)),
      TO_CHAR(loaddate, 'YYYY / MM'),
      TO_CHAR(loaddate, 'YYYY') || ' Q ' || TRUNC(TO_NUMBER(TO_CHAR(loaddate, 'Q'))),
      TO_CHAR(loaddate, 'YYYY') || ' Week' || TRUNC(TO_NUMBER(TO_CHAR(loaddate, 'WW'))),
      TO_CHAR(loaddate, 'YYYY'));
    If loaddate >= to_Date('12/31/2015', 'mm/dd/yyyy') Then
      Exit;
    End If;
  End Loop;
  commit;
end Insert_W_DAY_D_PR;
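Assuming the target table QISODS.W_DAY_D exists with columns matching the VALUES list above, the procedure would be invoked once, for example from SQL*Plus:

    BEGIN
        QISODS.Insert_W_DAY_D_PR;   -- one run populates 1980 through 2015
    END;
    /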

172. Informatica - what is hash table Informatica?
QUESTION #172
May 03, 2006, uma bojja: RE: what is hash table informatica?
Hash partitioning is the type of partitioning supported by Informatica in which the hash user keys are specified. Can you please explain more on this?
=======================================
I do not know the exact answer, Uma, but I will tell what I know: hash partitions are somewhat similar to database partitions. This allows the user to partition the data by a range which is fetched from the source, and it is handy while handling partitioned tables. If you know more about this, please send it to me. --Kr
=======================================
In hash partitioning the Informatica Server uses a hash function to group rows of data among partitions. Use hash partitioning when you want the Informatica Server to distribute rows to the partitions by group: for example, you need to sort items by item ID, but you do not know how many items have a particular ID number. The Informatica Server groups the data based on a partition key. Cheers, karthik
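The database-side analogue mentioned above can be sketched in DDL (table and column names assumed); rows are spread across partitions by a hash of the partition key:

    CREATE TABLE item_sales (
        item_id NUMBER,
        amount  NUMBER
    )
    PARTITION BY HASH (item_id) PARTITIONS 4;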

173. Informatica - What is meant by EDW?
QUESTION #173
May 04, 2006, Uma Bojja: RE: What is meant by EDW?
EDW: a big data warehouse, i.e. centralized data warehousing, the old style of warehouse. It is a single enterprise data warehouse (EDW) with no associated data marts or operational data store (ODS) systems.
=======================================
If the warehouse is built across a particular vertical of the company it is called an enterprise data warehouse; it is limited to that particular vertical. For example, if the warehouse is built across the sales vertical then it is termed an EDW for the sales hierarchy.
=======================================
EDW is the Enterprise Data Warehouse, which means a centralised DW for the whole organization. This is the approach of Inmon, which relies on having a single centralised warehouse, whereas the Kimball approach says to have separate data marts for each vertical/department. Advantages of having an EDW: 1. a global view of the data; 2. the same point of source of data for all users across the organization; 3. the ability to perform consistent analysis on a single data warehouse. What has to be overcome is the time it takes to develop, and also the management required to build a centralised database. Thanks, Yugandhar
=======================================
174. Informatica - how to load the data from PeopleSoft HRM to PeopleSoft ERM using Informatica?
QUESTION #174
May 08, 2006, Uma Bojja: RE: how to load the data from people soft hrm to people soft erm using informatica?

The following are necessary: 1. a Power Connect license; 2. import the source and target from PeopleSoft using ODBC connections; 3. define a connection under the Application Connection Browser for the PeopleSoft source/target in the Workflow Manager. Select the proper connection (PeopleSoft with Oracle, Sybase, DB2 or Informix) and execute like a normal session.
=======================================
175. Informatica - what are the measure objects?
QUESTION #175
May 15, 2006, karthikeyan: RE: what are the measure objects
Aggregate calculations like sum, avg, max and min: these are the measure objects.
=======================================
176. Informatica - what is the diff b/w STOP & ABORT in INFORMATICA sess level?
QUESTION #176
May 18, 2006, rkarthikeyan: RE: what is the diff b/w STOP & ABORT in INFORMATICA sess level
Stop: we can restart the session. Abort: we can't restart the session; we should truncate the whole pipeline and after that start the session again.
=======================================
Stop: after issuing stop, the PCS processes all those records which it got from the source qualifier and writes them to the target. Abort: it works in the same way as stop, but there is a timeout period of 60 seconds.
=======================================
177. Informatica - what is surrogate key? In your project, in which situation did you use it? Explain with an example.
QUESTION #177
May 22, 2006, afzal: RE: what is surrogatekey? In ur project in which situation u has used? explain with example?

A surrogate key is a system-generated/artificial key or sequence number; it is a substitution for the natural primary key. It is just a unique identifier or number for each row that can be used as the primary key of the table. The only requirement for a surrogate primary key is that it is unique for each row in the table. It is useful because the natural primary key (i.e. Customer Number in the Customer table) can change, and this makes updates more difficult. In my project I felt that the primary reason for the surrogate keys was to record the changing context of the dimension attributes (particularly for SCDs). The reason for them being integers is that integer joins are faster.
=======================================
A surrogate key is a unique identifier for each row; it can be used as a primary key for the DWH. When a new record is inserted into the DWH, primary keys are automatically generated; such keys are called SURROGATE KEYs. Advantages: 1. a flexible mechanism for handling SCDs; 2. we can save substantial storage space with integer-valued surrogate keys. The DWH does not depend on primary keys generated by OLTP systems for internally identifying the records.
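In Oracle a surrogate key is typically fed from a sequence; a minimal sketch with assumed table, column and sequence names:

    CREATE SEQUENCE customer_key_seq START WITH 1 INCREMENT BY 1;

    INSERT INTO customer_dim (customer_key, customer_no, name, state)
    VALUES (customer_key_seq.NEXTVAL, 'C-1001', 'Christina', 'Illinois');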

178. Informatica - what is Partitioning? Where can we use Partition? What are the advantages? Is it necessary?
QUESTION #178
May 22, 2006, afzal: RE: what is Partitioning? where we can use Partition? wht is advantages? Is it nessisary?
The Partitioning Option increases PowerCenter's performance through parallel data processing. This option provides a thread-based architecture and automatic data partitioning that optimizes parallel processing on multiprocessor and grid-based hardware environments.
=======================================
Partitions are used to optimize session performance. We can select the partition type in session properties: the default is pass-through partitioning; the others are key range partitioning, round robin partitioning and hash partitioning.
=======================================
Hi. In Informatica we can tune performance at 5 different levels: source level, target level, mapping level, session level and network level. To tune performance at the session level we go for partitioning, and there we have 4 types of partitioning: pass-through, hash, round robin and key range. Pass-through is the default one. In hash we again have 2 types, user-defined and automatic. Round robin cannot be applied at the source level; it can be used at some transformation level. Key range can be applied at both source and target levels. If you want me to explain each partitioning type in detail, I can. Thanks, madhu.
=======================================
Hi Nimmi, please explain complete partitioning to me: which transformations it restricts, how it restricts them, and where we have to specify it. I need a clear picture.

179. Informatica - how can we eliminate duplicate rows from a flat file?
QUESTION #179
May 22, 2006, Karthikeya: RE: hwo can we eliminate duplicate rows from flat file?
Keep an Aggregator between the source qualifier and the target and choose the group-by field as key: it will eliminate the duplicate records.
=======================================
Use a Sorter transformation and select the distinct option; duplicate rows will be eliminated.
=======================================
Hi. Before loading to the target, use an Aggregator transformation and make use of the group-by function to eliminate the duplicates on columns. Nanda
=======================================
Use a Sorter transformation. When you configure the Sorter transformation to treat output rows as distinct, it configures all ports as part of the sort key; it therefore discards duplicate rows compared during the sort operation.
=======================================
If you want to delete the duplicate rows in flat files, we can also go for a Rank transformation or an Oracle external procedure transformation: select all group-by ports and select one field for the rank, and the duplicates are easily removed.

=======================================
Hi, using the Sorter transformation we can eliminate the duplicate rows from a flat file. Thanks, N. Sai
=======================================
To eliminate the duplicates in flat files we have the distinct property in the Sorter transformation; if we enable that property, it automatically removes duplicate rows in flat files.
=======================================
180. Informatica - How to Generate the Metadata Reports in Informatica?
QUESTION #180
June 01, 2006, balanagdara: RE: How to Generate the Metadata Reports in Informatica
Hi Venkatesan. You can generate PowerCenter Metadata Reporter from a browser on any workstation, even a workstation that does not have PowerCenter tools installed. Bala Dara
=======================================
Hey Bala, can you be more specific about how to generate the metadata report in Informatica?
=======================================
Yes, we can generate reports using Metadata Reporter. It is a web-based application used only for creating metadata reports in Informatica. Using Metadata Reporter we can connect to the repository and get the metadata without knowledge of SQL or other technical skills. I think this answers it; if yes, reply to me. kumar
=======================================
181. Informatica - Can you tell me how to go for SCDs and their types? Where do we use them mostly?
QUESTION #181
June 08, 2006, priyamayee: RE: Can u tell me how to go for SCD's and its types. Where do we use them mostly

Hii, it depends on the business requirement you have. We use SCDs to maintain history (changes/updates) in the dimensions. For example, a customer dimension is an SCD because the customer can change his/her address, contact name or anything. These types of changes in dimensions are known as SCDs, and we use the 3 different SCD TYPES to handle these changes and their history. Each SCD type has its own way of storing/updating/maintaining the history. Bye, Mayee
=======================================
The Slowly Changing Dimension problem is a common one particular to data warehousing. In a nutshell, this applies to cases where the attribute for a record varies over time. We give an example below: Christina is a customer with ABC Inc. She first lived in Chicago, Illinois, so the original entry in the customer lookup table has the following record:

Customer Key  Name       State
1001          Christina  Illinois

At a later date she moved to Los Angeles, California in January 2003. How should ABC Inc. now modify its customer table to reflect this change? This is the Slowly Changing Dimension problem. There are in general three ways to solve this type of problem, and they are categorized as follows.

In Type 1 Slowly Changing Dimension, the new information simply overwrites the original information; in other words, no history is kept. In our example, after Christina moved from Illinois to California, the new information replaces the old record and we have the following table:

Customer Key  Name       State
1001          Christina  California

Advantages: this is the easiest way to handle the Slowly Changing Dimension problem, since there is no need to keep track of the old information, and it does not increase the size of the table. Disadvantages: all history is lost; for example, in this case the company would not be able to know that Christina lived in Illinois before. Usage: about 50% of the time. When to use Type 1: when it is not necessary for the data warehouse to keep track of historical changes.

In Type 2 Slowly Changing Dimension, a new record is added to the table to represent the new information; therefore both the original and the new record will be present, and the new record gets its own primary key. In our example, after Christina moved we add the new information as a new row into the table:

Customer Key  Name       State
1001          Christina  Illinois
1005          Christina  California

Advantages: this allows us to accurately keep all historical information. Disadvantages: this causes the size of the table to grow fast, and in cases where the number of rows for the table is very high to start with, storage and performance can become a concern; it also necessarily complicates the ETL process. Usage: about 50% of the time. When to use Type 2: when it is necessary for the data warehouse to track historical changes.

In Type 3 Slowly Changing Dimension, there are two columns to indicate the particular attribute of interest: one indicating the original value and one indicating the current value, plus a column that indicates when the current value became active. To accommodate Type 3 we now have the following columns: Customer Key, Name, Original State, Current State, Effective Date. After Christina moved, the original information gets updated and we have the following table (assuming the effective date of the change is January 15, 2003):

Customer Key  Name       Original State  Current State  Effective Date
1001          Christina  Illinois        California     15-JAN-2003

Advantages: this does not increase the size of the table, since the existing information is updated, and it allows us to keep some part of history. Disadvantages: Type 3 will not be able to keep all history where an attribute is changed more than once; for example, if Christina later moves to Texas on December 15, 2003, the California information will be lost, so it is not possible to trace the full history back. Usage: Type 3 is rarely used in actual practice. When to use Type 3: only when it is necessary for the data warehouse to track historical changes, and when such changes will only occur a finite number of times.
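For Type 2 the ETL typically expires the current row and inserts a new version with a fresh surrogate key. A minimal SQL sketch; the flag and date columns are assumed, and customer_key_seq is the illustrative sequence from the surrogate key answer above:

    -- expire the current version of the row...
    UPDATE customer_dim
    SET    current_flag = 'N',
           end_date     = SYSDATE
    WHERE  customer_no  = 'C-1001'
    AND    current_flag = 'Y';

    -- ...then insert the new version with a new surrogate key
    INSERT INTO customer_dim
           (customer_key, customer_no, name, state, current_flag, start_date)
    VALUES (customer_key_seq.NEXTVAL, 'C-1001', 'Christina', 'California', 'Y', SYSDATE);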

182. Informatica - How to export mappings to the production environment?
QUESTION #182
June 13, 2006, UmaBojja: RE: How to export mappings to the production environment
In the Designer, go to the main menu and one can see the export/import options. Import the exported mapping into the production repository with the replace option. Thanks, Uma
=======================================
183. Informatica - how will you create header and footer in the target using Informatica?
QUESTION #183
June 13, 2006, UmaBojja: RE: how u will create header and footer in target using informatica

If your focus is on flat files, then one can set it in the file properties while creating a mapping, or at the session level in the session properties. Thanks, Uma
=======================================
Hi Uma, thanks for the answer. I want the complete explanation regarding this question: how to create header and footer in the target?
=======================================
You can always create a header and a trailer in the target file using an Aggregator transformation. Create three separate files in a single pipeline: one will be your header, another will be your trailer coming from the Aggregator (take the number of records as a count in the Aggregator transformation), and the third will be your main file. Concatenate the header and the main file in a post-session command using a shell script.
=======================================
184. Informatica - what is the difference between constraint based load ordering and target load plan?
QUESTION #184
June 16, 2006, Uma Bojja: RE: what is the difference between constraind base load ordering and target load plan
Constraint based load ordering example: Table 1 - master, Table 2 - detail. If the data in table1 is dependent on the data in table2, then table2 should be loaded first. In such cases, to control the load order of the tables we need some conditional loading, which is nothing but constraint based load. In Informatica this feature is implemented by just one check box at the session level. The target tables must possess primary/foreign key relationships, so that the server loads according to the key relation irrespective of the target load order plan. Thanks, Uma
=======================================
Target load order comes in the Designer properties: click the Mappings tab in the Designer and then Target Load Plan. It will show all the target load groups in the particular mapping (a target load group is a set of source, source qualifier, transformations and target). You specify the order there, and the server will load the targets accordingly. Here the multiple targets must be generated from one source qualifier.
=======================================
If you have only one source loading into multiple targets, you have to use constraint based loading, but the target tables should have key relationships between them. If you have multiple source qualifiers loading multiple targets, you have to use target load order.
=======================================
Constraint based loading: if your mapping contains a single pipeline (flow) with more than one target, and the target tables have a master-child relationship, you need to use constraint based load at the session level. Target load plan: if your mapping contains multiple pipelines (flows), you specify the execution order one by one (for example, pipeline 1 needs to execute first, then pipeline 2, then pipeline 3); this is purely based on pipeline dependency.
=======================================
185. Informatica - How do we analyse the data at database level?
QUESTION #185
June 16, 2006, Uma Bojja: RE: How do we analyse the data at database level?

One more thing is An active transformation can also behave like a passive ======================================= 187.file:///C|/Perl/bin/result. the port on which the sorting takes place is called as sortkeyport properties if u select distinct eliminate duplicates case sensitive valid for strings to sort the data null treated low null values are given least priority ======================================= If any transformation has the distinct option then it will be a active one bec active transformation is nothing but the transformation which will change the no. #1 file:///C|/Perl/bin/result.how is the union transformation active transformation? QUESTION #187 No best answer available. of o/p records.Informatica .. Please pick the good answer available or submit your answer.So distinct always filters the duplicate rows which inturn decrease the no of o/p records when compared to i/n records.html This is type of active transformation which is responsible for sorting the data either in the ascending order or descending order according to the key specifier.html (189 of 363)4/1/2009 7:50:58 PM . June 18.. 2006 09:53:25 zafar RE: how is the union transformation active transformat.

An active transformation is one which can change the number of rows, i.e. input rows and output rows might not match: Source (100 rows) ---> Active Transformation ---> Target (< or > 100 rows). A passive transformation does not change the number of rows in the target: Source (100 rows) ---> Passive Transformation ---> Target (100 rows). In the Union transformation we may combine the data from two (or) more sources. Assume Table-1 contains 10 rows and Table-2 contains 20 rows; if we combine the rows of Table-1 and Table-2 we will get a total of 30 rows in the target. So it is definitely an active transformation. Zafar
=======================================
This is a type of passive transformation which is responsible for merging data coming from different sources; the Union transformation functions very similarly to the UNION ALL statement in Oracle.
=======================================
Ya, since the Union transformation may lead to a change in the number of incoming rows, it is definitely an active type.
=======================================
Thank you very much Sai Venkatesh for your answer, but in that case the Lookup transformation should be an active transformation, yet it is a passive transformation.
=======================================
Active transformation: the number of records passing through the transformation and their rowids will be different; it depends on the rowid also.
=======================================
Hi Saivenkatesh, your answer is very nice, thanking you.
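The UNION ALL comparison made above can be made concrete with a minimal sketch (table names assumed): if table_1 has 10 rows and table_2 has 20, the query returns all 30.

    SELECT order_id, amount FROM table_1
    UNION ALL
    SELECT order_id, amount FROM table_2;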

=======================================
In the other case, the Lookup in no way can change the number of rows passing through it; the number of records increases or decreases only by the transformations that follow the Lookup transformation. The transformation just looks into the referenced table.
=======================================
Are you sure that Lookup is a passive transformation?
=======================================
Ya, surely Lookup is a passive one.
=======================================
Hi, you are saying the sources are 10+20 = 30 rows and the Union passes all 30 rows to the target; according to the active definition the number of rows passed to the next transformation should be different, yet it passes all 30 rows. I am getting confused here; can anyone clear this up in detail? Thanks in advance.
=======================================
188. Informatica - what is tracing level?
QUESTION #188
June 21, 2006, rajesh: RE: what is tracing level?
Tracing level determines the amount of information that the Informatica server writes in a session log.

=======================================
Ya, it is the level of information storage in the session log. The option comes in the properties tab of transformations. By default it remains Normal; it can be Verbose Initialisation, Verbose Data, Normal or Terse. Regards
=======================================
189. Informatica - How can we join 3 databases like Flat File, Oracle, DB2 in Informatica? Thanks in advance
QUESTION #189
June 24, 2006, sandeep: RE: How can we join 3 database like Flat File, Oracle, Db2 in Informatica
Hi, using the Joiner transformation.
=======================================
You have to use two Joiner transformations: the first one will join two tables, and the next one will join the third with the resultant of the first Joiner.
=======================================
190. Informatica - Is a fact table normalized or de-normalized?
QUESTION #190 Submitted by: lakshmi (rated as good by: Vamshidhar)
Hi, dimension tables can be normalized or de-normalized. Facts are always normalized.
=======================================
A flat table is always normalised, since there are no redundants!!
=======================================
Well!! A fact table is always a DENORMALISED table. It consists of data from the dimension tables (primary keys): the fact table has foreign keys and measures.
=======================================
The main funda of a DW is de-normalizing the data for faster access by the reporting tool; so if you are building a DW, 90% of it has to be de-normalized, and of course the fact table has to be de-normalized.
=======================================
Hi, I read the above comments; somebody answered it as normalized, and I was confused, so we should ask Kimball. Here is the comment:

Fable, August 3, 2005: dimensional models are fully denormalized. Kimball's response: dimensional models combine normalized and denormalized table structures. The dimension tables of descriptive information are highly denormalized, with detailed and hierarchical roll-up attributes in the same table. Meanwhile, the fact tables with performance metrics are typically normalized. While we advise against a fully normalized approach with snowflaked dimension attributes in separate tables (creating blizzard-like conditions for the business user), a single denormalized big wide table containing both metrics and descriptions in the same table is also ill-advised.

Ref: http://www.kimballgroup.com/html/commentarysub2.html
Hope that answers the question; if you don't know the answer, please don't post one. Thanks!!

Regards, Umesh BSIL (Mumbai)
=======================================
Hi, please see the following site:
http://72.14.253.104/search?q=cache:lkFjt6EmsxMJ:www.kimballgroup.com/html/commentarysub2.html+fact+table+normalized+or+denormalized&hl=en&gl=us&ct=clnk&cd=1
I am highlighting what Kimball says here: dimensional models combine normalized and denormalized table structures (see the passage quoted above). Regards, lakshmi
=======================================
Hi, dimension tables can be normalized or de-normalized; facts are always normalized. Regards
=======================================
Hi all, a dimension table may be normalized or denormalized according to your schema, but the fact table will always be denormalized.
=======================================
191. Informatica - What is the difference between PowerCenter 7 and PowerCenter 8?
QUESTION #191
August 03, 2006, satish: RE: What is the difference between PowerCenter 7 and PowerCenter 8
The major difference is that in version 8 some new transformations are added; apart from these, the rest remains the same.
=======================================
192. Informatica - What is the difference between PowerCenter 6 and PowerCenter 7?
QUESTION #192
June 30, 2006, Gopal. P: RE: What is the difference between PowerCenter 6 and powercenter 7?

1) We can look up flat files in Informatica 7.X, but we cannot look up flat files in Informatica 6.X. 2) The External Stored Procedure transformation is not available in Informatica 7.X, but this transformation was included in Informatica 6.X.
=======================================
Also, the Union transformation is not there in 6.x, whereas it is there in 7.x. Pradeep
=======================================
Also: custom transformation is not available in 6.x; session-level error handling is available in 7.x; XML enhancements for data integration are in 7.x; and the main difference is the version control available in 7.x.
=======================================
Hi, tell me some architectural differences between the two.
=======================================
7.x has more features compared to 6.x, like the Transaction Control transformation, Union transformation, midstream XML transformation, flat file lookup, and custom transformation functions like soundex and metaphone.
=======================================
193. Informatica - How to move the mapping from one database to another?
QUESTION #193
July 06, 2006, martin: RE: How to move the mapping from one database to another
Do you mean migration between repositories? There are 2 ways of doing this. 1. Open the mapping you want to migrate, go to the File menu, select 'Export Objects' and give a name: an XML file will be generated. Connect to the repository where you want to migrate, then select File menu - 'Import Objects' and select the XML file name. 2. Connect to both the repositories, go to the source folder, select the mapping name from the object navigator and select 'Copy' from the 'Edit' menu. Now go to the target folder and select 'Paste' from the 'Edit' menu. Be sure you open the target folder.
=======================================
You can also do it this way: connect to both the repositories, open the respective folders, and keep the destination repository as active. From the navigator panel just drag and drop the mapping to the work area; it will ask whether to copy the mapping, say YES, and it's done.
=======================================
If we go by the direct meaning of your question: there is no need for a new mapping for a new database. You just need to change the connections in the Workflow Manager to run the mapping on another database.
=======================================
194. Informatica - How do we do complex mapping by using flat files / relational database?
QUESTION #194
July 06, 2006, srinivas: RE: How do we do complex mapping by using flatfiles / relational database
If we are using more business rules or more transformations, then it is called a complex mapping. If we have flat files, then we use the flat file as a source, or we can take relational sources, depending on the availability.
=======================================
195. Informatica - How to define Informatica server?
QUESTION #195
September 28, 2006, deeprekha: RE: How to define Informatica server?

The Informatica server is the main server component in the Informatica product family. It is responsible for reading the data from various source systems, transforming the data according to business rules, and loading the data into the target table.
=======================================
196. Informatica - How to call a stored procedure from Workflow Monitor in Informatica 7.1 version?
QUESTION #196
July 12, 2006, Prasanth: RE: How to call stored Procedure from Workflow monitor
Call the stored procedure using a shell script, and invoke that shell script using a Command task with pmcmd.
=======================================
197. Informatica - how can we store previous session logs?
QUESTION #197
October 19, 2006, Hareesh: RE: how can we store previous session logs

Just run the session in timestamp mode; then the session log will automatically not overwrite the current session log. Hareesh
=======================================
Hi, we can do it this way also: using $PMSessionLogCount (specify the number of runs of the session log to save).
=======================================
Hi, go to the Session, right click and select Edit Task, then go to Config Object and set the property "Save Session Log By" to Runs, and "Save Session Log for These Runs" to the number of historical session logs you want.
=======================================
198. Informatica - how can we use the pmcmd command in a workflow or to run a session?
QUESTION #198
July 14, 2006, abc: RE: how can we use pmcmd command in a workflow or to run a session
By using a Command task.
=======================================
199. Informatica - What is change data capture?
QUESTION #199
July 14, 2006, Ajay: RE: What is change data capture?
Change data capture (CDC) is a set of software design patterns used to determine the data that has changed in a database, so that action can be taken using the changed data.
=======================================
Can you elaborate on how you do this, please?
=======================================
Changed Data Capture (CDC) helps identify the data in the source system that has changed since the last extraction. With CDC, data extraction takes place at the same time the insert, update or delete operations occur in the source tables, and the change data is stored inside the database in change tables. The change data thus captured is then made available to the target systems in a controlled manner.
=======================================
200. Informatica - What is QTP in Data Warehousing?
QUESTION #200
November 15, 2006, srinuv_11: RE: Wat is QTP in Data Warehousing?
I think this question belongs to Cognos.
=======================================
201. Informatica - How can I get distinct values while mapping in Informatica in insertion?
QUESTION #201
July 17, 2006, Manisha: RE: How can I get distinct values while mapping in Informatica
You can add an Aggregator before the insert and group by the fields that need to be distinct.
=======================================
In the source qualifier, write the query with DISTINCT on the key column.
=======================================
Well, there are two methods to get distinct values. If the sources are databases, then we can go for a SQL override in the source qualifier by changing the default SQL query, or select the check box called "select distinct". If the sources are heterogeneous, I mean from different file systems, then we can use a SORTER transformation and in the transformation properties select the check box called "select distinct", the same as in the source qualifier, and we can get distinct values.

202. Informatica - what transformation can you use in place of lookup?
QUESTION #202
July 17, 2006, venkat: RE: what transformation you can use inplace of lookup?
You can use the Joiner transformation by setting it as an outer join of either master or detail.
=======================================
Hi. With lookups we can use either the first or the last value (according to the logic). Suppose the lookup has more than one matching record and we need all matching records; in that situation we can use a master or detail outer join instead of a lookup.
=======================================
Hi, a Lookup transformation can serve in so many situations, so if you can be a bit more particular about the scenario you are talking about, it will be easy to interpret.
=======================================
You can use a Joiner in place of a lookup.
=======================================
You can join the table which you wanted to use in the lookup in the source qualifier using a SQL override, to avoid using a Lookup transformation.
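That source qualifier override is simply an outer join to the would-be lookup table; a minimal sketch with assumed names:

    SELECT o.order_id,
           o.cust_id,
           c.cust_name                -- the "looked-up" value
    FROM   orders o
    LEFT OUTER JOIN customers c
      ON   c.cust_id = o.cust_id;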

203. Informatica - Why and where are we using a factless fact table?
QUESTION #203
July 18, 2006, kumar: RE: Why and where we are using factless fact table?
Hi, I am not sure, but you can confirm with other people: factless fact is nothing but non-additive measures. EX: temperature in a fact table will be noted as moderate, low or high; these types of things are called non-additive measures. Cheers, Kumar.
=======================================
Factless fact tables are fact tables with no facts or measures (numerical data); they contain only the foreign keys of the corresponding dimensions.
=======================================
Such fact tables are required to avoid flaking of levels within a dimension, and to define them as a separate cube connected to the main cube.
=======================================
A transaction can occur without any measure; then it is recorded in a factless fact table, or coverage table: for example, product samples.
=======================================
A fact table contains metrics and FKs corresponding to the dimension tables, but a factless fact table contains only the FKs of the corresponding dimensions, without any metrics. regards, rma
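In DDL a factless fact table is just foreign keys and no measure columns; a minimal sketch with assumed table and column names:

    CREATE TABLE attendance_fact (
        date_key    NUMBER REFERENCES day_dim (date_key),
        student_key NUMBER REFERENCES student_dim (student_key),
        class_key   NUMBER REFERENCES class_dim (class_key)
        -- no measure columns: each row just records that the event occurred
    );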

=======================================

204.Informatica - tell me one complicated mapping

QUESTION #204
If we are using more business rules or more transformations, then it is a complex mapping, like SCD Type 2 (version number, effective date range, flag, current date).
======================================= A mapping is nothing but the flow of data from source to target; we are giving instructions to the PowerCenter server to move data from sources to targets according to our business rules. If more business rules are there in our mapping, then it is a complex mapping.
=======================================

205.Informatica - how do we do unit testing in informatica? how do we load data in informatica?

QUESTION #205

Unit testing is of two types: 1. Quantitative testing 2. Qualitative testing.

Steps:
1. First validate the mapping.
2. Create a session on the mapping and then run the workflow.
Once the session has succeeded, right-click on the session and go to the statistics tab. There you can see how many source rows were applied, how many rows were loaded into the targets, and how many rows were rejected. This is called quantitative testing.
If the rows are successfully loaded, then we go for qualitative testing. Steps: take the DATM (the DATM is where all business rules are mentioned against the corresponding source columns) and check whether the data is loaded into the target table according to the DATM. If any data is not loaded according to the DATM, then go and check the code and rectify it. This is called qualitative testing.
This is what a developer will do in unit testing.
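A common quantitative check is a simple row-count reconciliation between source and target. A minimal sketch, assuming hypothetical SRC_ORDERS and TGT_ORDERS tables (the names are only illustrative):

-- Compare source and target row counts after the load;
-- a non-zero difference flags rows lost or duplicated by the mapping.
SELECT (SELECT COUNT(*) FROM SRC_ORDERS)
     - (SELECT COUNT(*) FROM TGT_ORDERS) AS row_count_diff
FROM DUAL;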

=======================================

206.Informatica - how do we load data by using period dimension?

QUESTION #206
hi, it is very simple: through scheduling. thanks madhu
=======================================

207.Informatica - How many types of facts and what are they?

QUESTION #207
I know some: there are Additive facts, Semi-additive facts, Non-additive facts, Accumulating facts, Factless facts, Periodic fact tables and Transaction fact tables. Thanks Bala
======================================= There are Factless facts: facts without any measures.

======================================= Additive facts: fact data that can be added/aggregated. Non-additive facts: facts that are not the result of addition. Semi-additive facts: only some columns' data can be added. Periodic facts: store only one row per transaction that happened over a period of time. Accumulating facts: store a row for the entire lifetime of an event. There must be some more; if someone knows, please add.
======================================= hi, there are 3 types: additive, semi-additive and non-additive. thanks madhu
======================================= hai, there are three types of facts. Additive fact: a fact which can be used across all dimensions. Semi-additive fact: a fact which can be used with some dimensions and not with others. Non-additive fact: a fact which cannot be used with any of the dimensions.
======================================= 1. Regular fact - with numeric values. 2. Factless fact - without numeric values.
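The additive/semi-additive distinction shows up directly in which aggregations are legal. A minimal sketch, assuming a hypothetical ACCOUNT_BALANCE_FACT table with DAY_KEY, ACCOUNT_KEY and BALANCE columns (names are only illustrative):

-- BALANCE is semi-additive: summing across accounts for one day is meaningful...
SELECT DAY_KEY, SUM(BALANCE) AS total_balance
FROM ACCOUNT_BALANCE_FACT
WHERE DAY_KEY = 20060930
GROUP BY DAY_KEY;

-- ...but summing the same column across days double-counts the balance,
-- so across time you would use AVG, MIN/MAX or the period-end value instead.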

=======================================

208.Informatica - How can you say that Union Transformation is an Active transformation?

QUESTION #208
By using the Union transformation we can eliminate some rows, so we can say it is active.
======================================= Union is active because it eliminates duplicates from the sources.
======================================= Hello Msr, your answer is wrong: Union is not eliminating the duplicate rows. I want exact reasons for how the union eliminates some rows; don't just give the active-transformation definition. All the answers given above do not support the question. If anybody knows, please give me the reason.
======================================= By definition, an active transformation is a transformation that changes the number of rows that pass through it. In the Union transformation, the number of rows resulting from the union can be different from the actual number of rows.
======================================= Hi. The Union transformation is an active transformation because it changes the number of rows through the pipeline. We can merge the records of multiple source qualifier queries in a Union transformation at the same time; it is not like an Expression transformation (row by row). bye

======================================= The Union T/R is active or passive depending upon the property "Is Active", which is present on the Union T/R properties tab. It specifies whether the transformation is active or passive: when you enable this option, the transformation can generate 0, 1 or more output rows for each input row; otherwise it can generate only 0 or 1 output row for each input row. But this property is disabled in Informatica 7.1; I think it may be developed in the future. As of now it won't eliminate duplicates.
======================================= Hi all. Some people are saying that a Union T/R eliminates duplicates, because it eliminates duplicate rows when you select the distinct option in the Union properties tab. Before the Union transformation was implemented, the rule based on the number of rows was right, i.e. before 7.1.0; but now it is not the exact benchmark to determine an active transformation. Thanks Uma
======================================= Hi. Union is also an active transformation: it normally has multiple input groups to add, compared to other transformations. regards kiran
=======================================

209.Informatica - how many types of dimensions are available in informatica?

QUESTION #209

three types of dimensions are available
======================================= What are they? Please explain. And what are rapidly changing dimensions?
======================================= no
======================================= hi, there are 3 types of dimensions: 1. star schema 2. snowflake schema 3. galaxy schema
======================================= i think there are 3 types of dimension table: 1. stand-alone 2. local 3. global

======================================= I want each and every one of you who is answering: please don't make fun out of this. Someone gave the answer "no"; someone gave the answer "star schema, snowflake schema" etc. How can a schema come under a type of dimension? ANSWER: There are 3 types of dimensions available according to my knowledge. They are: 1. General dimensions 2. Confirmed dimensions 3. Junk dimensions
======================================= Using the Slowly Changing Dimensions Wizard. The Slowly Changing Dimensions Wizard creates mappings to load slowly changing dimension tables:
Type 1 Dimension mapping. Loads a slowly changing dimension table by inserting new dimensions and overwriting existing dimensions. Use this mapping when you do not want a history of previous dimension data.
Type 2 Dimension/Version Data mapping. Loads a slowly changing dimension table by inserting new and changed dimensions, using a version number and an incremented primary key to track changes. Use this mapping when you want to keep a full history of dimension data and to track the progression of changes.
Type 2 Dimension/Flag Current mapping. Loads a slowly changing dimension table by inserting new and changed dimensions, using a flag to mark current dimension data and an incremented primary key to track changes. Use this mapping when you want to keep a full history of dimension data, tracking the progression of changes while flagging only the current dimension.
Type 2 Dimension/Effective Date Range mapping. Loads a slowly changing dimension table by inserting new and changed dimensions, using a date range to define current dimension data. Use this mapping when you want to keep a full history of dimension data, tracking changes with an exact effective date range.
Type 3 Dimension mapping. Loads a slowly changing dimension table by inserting new dimensions and updating values in existing dimensions. Use this mapping when you want to keep the current and previous dimension values in your dimension table.

======================================= One major classification we use in our real-time modelling is Slowly Changing Dimensions. Type 1 SCD: if you want to load an updated row of a previously existing row, the previous data will be replaced, so we lose historical data. Type 2 SCD: here we add a new row for the updated data, so we have both current and past records, which agrees with the data warehousing concept of maintaining historical data. Type 3 SCD: here we add new columns. But the mostly used one is the Type 2 SCD.
We have one more type of dimension, the CONFORMED DIMENSION: the dimension which gives the same meaning across different star schemas is called a conformed dimension. Ex: a Time dimension, wherever it is used, gives the same meaning. thq hari krishna
======================================= what are those three dimensions that are available in informatica? here we get multiple answers; could anyone tell me the exact ones?
======================================= casual dimension, conformed dimension, degenerate dimension, junk dimension, ragged dimension, dirty dimension
=======================================

210.Informatica - How can you improve the performance of Aggregate transformation?

QUESTION #210
By using a Sorter transformation before the Aggregator transformation.
======================================= by using sorted input
======================================= hi, we can improve the aggregator performance in the following ways: 1. send sorted input; 2. increase the aggregator cache size, i.e. the index cache and data cache; 3. give only the input/output ports you need in the transformation, i.e. reduce the number of input and output ports. Create the group-by condition only on numeric columns.
======================================= In the Aggregator transformation, select the Sorted Input check box in the properties tab, and write the SQL query in the source qualifier.
======================================= Hi, we can improve the aggregation performance by doing like this: use a Sorter transformation before the Aggregator and give the sorted input to the Aggregator; increase the cache size of the Aggregator. It improves the performance.

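When the Sorted Input option is used, the rows must arrive ordered on the group-by ports, which the source qualifier can guarantee with an ORDER BY. A minimal sketch, assuming a hypothetical SALES source table aggregated by STORE_ID and PRODUCT_ID (names are only illustrative):

-- Source qualifier override: the ORDER BY columns should match the
-- Aggregator's group-by ports, in the same order, for Sorted Input to work.
SELECT STORE_ID, PRODUCT_ID, SALE_AMT
FROM SALES
ORDER BY STORE_ID, PRODUCT_ID;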
=======================================

211.Informatica - Why did you use stored procedure in your ETL Application?

QUESTION #211
hi, usage of a stored procedure has the following advantages: 1. checks the status of the target database; 2. drops and recreates indexes; 3. determines if enough space exists in the database; 4. performs a specialized calculation.
======================================= A stored procedure in Informatica will be useful to impose complex business rules.
======================================= to execute database procedures

======================================= Hi. Use of stored procedures plays an important role. Suppose you are using an Oracle database where you are doing some ETL changes; you may use Informatica, but then every row of the table should pass through Informatica and undergo the ETL changes specified in the transformations, and some jobs may take hours to run. In order to save time and database usage we can go for stored procedures. If you use a stored procedure, i.e. an Oracle PL/SQL package, it will run on the Oracle database (which is the database where we need to do the changes) and it will be faster compared to Informatica, because it is running on the Oracle database itself. Some things which we can't do using tools, we can do using packages.
=======================================

212.Informatica - why did you use update strategy in your application?

QUESTION #212
Update Strategy is used to drive the data to be inserted, updated or deleted depending upon some condition. You can do this at the session level too, but there you cannot define any condition. For example, if you want to do update and insert in one mapping, you will create two flows and make one insert and one update, depending upon some condition. Refer to "Update Strategy" in the Transformation Guide for more information.

======================================= Update Strategy is the most important transformation of all the Informatica transformations. The basic thing one should understand about it is that it is the essential transformation for performing DML operations on already-populated targets (i.e. targets which contain some records before this mapping loads data). It handles insertion, updation, deletion and rejection: when records come to this transformation, depending on our requirement we can decide whether to insert, update or reject the rows flowing through the mapping. We can also specify conditions from which we derive which update strategy to use. For example, take an input row: if it is already there in the target (we find this with a Lookup transformation), update it; otherwise insert it.
eg: IIF(condition, DD_INSERT, DD_UPDATE) - if the condition is satisfied, do DD_INSERT; otherwise do DD_UPDATE. DD_INSERT, DD_UPDATE, DD_DELETE and DD_REJECT are the decode options which perform the respective DML operations; they decode to the numeric values 0, 1, 2 and 3 respectively.
======================================= to perform dml operations thanks madhu
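A typical pattern wires a lookup's return value into the strategy expression, as described above. A minimal sketch of such an expression, assuming a hypothetical port lkp_EMP_ID returned from looking up the target (the port name is only illustrative):

-- Update Strategy expression: rows not found in the target are inserted,
-- rows already present are updated.
IIF(ISNULL(lkp_EMP_ID), DD_INSERT, DD_UPDATE)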

=======================================

213.Informatica - How do you create single lookup transformation using multiple tables?

QUESTION #213


Write an override SQL query and adjust the ports as per the SQL query.
======================================= No, it is not possible to create a single lookup on multiple tables, because a lookup is created upon the target table.
======================================= For a connected lookup transformation: 1. create the lookup transformation; 2. go for Skip; 3. manually enter the port names that you want to look up; 4. connect them with the input ports from the source table; 5. give the condition; 6. go for Generate SQL, then modify it according to your requirement and validate; it will work.
======================================= We can just create a view using the two tables and then take that view as the lookup table.

======================================= If you want single lookup values to be used in multiple target tables, this can be done! For this we can use an unconnected lookup and collect the values from the source table into any target table, depending upon the business rule.
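The view route keeps the mapping down to a single Lookup transformation. A minimal sketch, assuming hypothetical EMP and DEPT tables joined on DEPTNO (names are only illustrative):

-- Database view spanning both tables; import it as the lookup table,
-- so one Lookup transformation effectively looks up two tables.
CREATE OR REPLACE VIEW V_EMP_DEPT_LKP AS
SELECT e.EMPNO, e.ENAME, d.DEPTNO, d.DNAME
FROM EMP e
JOIN DEPT d ON d.DEPTNO = e.DEPTNO;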

=======================================

214.Informatica - In update strategy target table or flat file which gives more performance ? why?

QUESTION #214

file:///C|/Perl/bin/result.html (219 of 363)4/1/2009 7:50:58 PM

file:///C|/Perl/bin/result.html

Click Here to view complete document we use stored procedure for populating and maintaing databases in our mapping ======================================= Pros: Loading Sorting Merging operations will be faster as there is no index concept and Data will be in ASCII mode. Cons: There is no concept of updating existing records in flat file. As there is no indexes while lookups speed will be lesser.

======================================= hi faltfile give more performance. beacause there is no index cosept and there is no constraints

=======================================

215.Informatica - How to load time dimension?

QUESTION #215 No best answer available. Please pick the good answer available or submit your answer.
August 15, 2006 04:08:18 Mahesh RE: How to load time dimension? #1

file:///C|/Perl/bin/result.html (220 of 363)4/1/2009 7:50:58 PM

file:///C|/Perl/bin/result.html

Click Here to view complete document We can use SCD Type 1/2/3 to load any Dimensions based on the requirement. ======================================= We can use SCD Type 1/2/3 to load data into any dimension tables as per the requirement. ======================================= U can load time dimension manually by writing scripts in PL/SQL to load the time dimension table with values for a period. Ex:- M having my business data for 5 years from 2000 to 2004 then load all the date starting from 1-12000 to 31-12-2004 its around 1825 records. Which u can do it fast writing scripts. Bhargav

======================================= hi thourgh type1 type2 type3 depending upon the codition thanks madhu

======================================= For loading data in to other dimensions we have respective tables in the oltp systems.. But for time dimension we have only one base in the OLTP database. Based on that we have to load time dimension. We can loan the time dimension using ETL procedures which calls the procedure or function created in the database. If the columns are more in the time dimension we have to creat it manually by using Excel sheet.

======================================= create a procedure to load data into Time Dimension. The procedure needs to run only once to popullate all the data. For eg the code below fills up till 2015. You can modify the code to suit the feilds in ur table.

file:///C|/Perl/bin/result.html (221 of 363)4/1/2009 7:50:58 PM

file:///C|/Perl/bin/result.html

create or replace procedure QISODS.Insert_W_DAY_D_PR as
  LastSeqID number default 0;
  loaddate  Date   default to_date('12/31/1979', 'mm/dd/yyyy');
begin
  loop
    LastSeqID := LastSeqID + 1;
    loaddate  := loaddate + 1;
    insert into QISODS.W_DAY_D values (
      LastSeqID,
      trunc(loaddate),
      decode(to_char(loaddate, 'Q'), '1', 1,
             decode(to_char(loaddate, 'Q'), '2', 1, 2)),   -- half of year
      to_number(to_char(loaddate, 'MM')),                  -- month number
      to_number(to_char(loaddate, 'Q')),                   -- quarter number
      trunc((round(to_number(to_char(loaddate, 'DDD'))) +
             round(to_number(to_char(trunc(loaddate, 'YYYY'), 'D'))) + 5) / 7),  -- week of year
      to_number(to_char(loaddate, 'YYYY')),                -- year
      to_number(to_char(loaddate, 'DD')),                  -- day of month
      to_number(to_char(loaddate, 'D')),                   -- day of week
      to_number(to_char(loaddate, 'DDD')),                 -- day of year
      1, 1, 1, 1, 1,
      to_number(to_char(loaddate, 'J')),                   -- Julian day
      ((to_number(to_char(loaddate, 'YYYY')) + 4713) * 12) + to_number(to_char(loaddate, 'MM')),
      ((to_number(to_char(loaddate, 'YYYY')) + 4713) * 4)  + to_number(to_char(loaddate, 'Q')),
      to_number(to_char(loaddate, 'J')) / 7,
      to_number(to_char(loaddate, 'YYYY')) + 4713,
      to_char(loaddate, 'Day'),
      to_char(loaddate, 'Month'),
      decode(to_char(loaddate, 'D'), '7', 'weekend', '6', 'weekend', 'weekday'),
      trunc(loaddate, 'DAY') + 1,
      decode(last_day(loaddate), loaddate, 'y', 'n'),      -- month-end flag
      to_char(loaddate, 'YYYYMM'),
      to_char(loaddate, 'YYYY') || ' Half' ||
        decode(to_char(loaddate, 'Q'), '1', 1,
               decode(to_char(loaddate, 'Q'), '2', 1, 2)),
      to_char(loaddate, 'YYYY / MM'),
      to_char(loaddate, 'YYYY') || ' Q ' || trunc(to_number(to_char(loaddate, 'Q'))),
      to_char(loaddate, 'YYYY') || ' Week' || trunc(to_number(to_char(loaddate, 'WW'))),
      to_char(loaddate, 'YYYY'));
    if loaddate >= to_date('12/31/2015', 'mm/dd/yyyy') then
      exit;
    end if;
  end loop;
  commit;
end Insert_W_DAY_D_PR;

=======================================

216.Informatica - what is the architecture of any Data warehousing project? what is the flow?

QUESTION #216

1) The basic step of data warehousing starts with data modelling, i.e. the creation of dimensions and facts. 2) The data warehouse starts with the collection of data from source systems such as OLTP, CRM, ERPs etc. 3) The cleansing and transformation process is done with an ETL (Extraction, Transformation, Loading) tool. 4) By the end of the ETL process, the target databases (dimensions, facts) are ready with data which accomplishes the business rules. 5) Now, finally, with the use of reporting (OLAP) tools we can get the information which is used for decision support.

=======================================





217.Informatica - What are the questions asked in PDM Round (Final Hr round)

QUESTION #217

We can't say which questions will be asked in which round; that depends on the interviewer. In the HR round they will ask about your current company: tell something about your company, why you are preferring this company, and so on; they may ask any type of question. These are sample questions.
=======================================

218.Informatica - What is the difference between materialized view and a data mart? Are they same?

QUESTION #218




Hi friend, please elaborate what you mean by materialized view; then I think I can help you clear your doubt. Mail me at satya.neerumalla@tcs.com

======================================= A materialized view provides indirect access to table data by storing the results of a query in a separate schema object, unlike an ordinary view, which does not take up any storage space or contain data. Materialized views are schema objects that can be used to summarize, precompute, replicate and distribute data, e.g. to construct a data warehouse.

The definition of a materialized view is very near to the concept of cubes, where we keep summarized data; but cubes occupy space. Coming to the data mart, that is a completely different concept: the data warehouse contains the overall view of the organization, but a data mart is specific to a subject area like Finance etc. We can combine the different data marts of a company to form a data warehouse, or we can split a data warehouse into different data marts.

======================================= hi. A view connects directly to the table: it won't contain data, and whatever we ask, it fires on the table and gives the data, e.g. sum(sal). Materialized views connect indirectly to the data and are stored in a separate schema; a materialized view is just like a cube in DW, and it will have data. Whatever summarized information we ask, it gives and stores it, and when we ask the next time it serves it directly from there itself. Performance-wise these views are faster than a normal view.
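As a concrete illustration of the "precompute and store" idea, here is a minimal sketch, assuming a hypothetical SALES table with REGION and AMOUNT columns (names are only illustrative):

-- Summary materialized view: the aggregate is computed once, stored,
-- and served from storage on later queries instead of re-scanning SALES.
CREATE MATERIALIZED VIEW MV_SALES_BY_REGION
AS
SELECT REGION, SUM(AMOUNT) AS TOTAL_AMOUNT
FROM SALES
GROUP BY REGION;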


=======================================

219.Informatica - In workflow can we send multiple email?

QUESTION #219
yes, we can send multiple e-mails in a workflow
======================================= Yes, but only on the UNIX version of Workflow, and not the Windows-based version.
=======================================

220.Informatica - How do we load from PL/SQL script into Informatica mapping?

QUESTION #220

You can use a Stored Procedure transformation: there you can specify the PL/SQL procedure name, and when we run the session containing this transformation, the PL/SQL procedure will get executed. If you want more clarification, either elaborate your question or mail me at satya.neerumalla@tcs.com
======================================= hi, for a database procedure we have the Stored Procedure transformation; we can use that one. thanks madhu
======================================= You can actually create a view and import it as a source in the mapping.
=======================================

221.Informatica - can any one tell why we are populating time dimension only with scripts not with mapping?

QUESTION #221

hi, because the time dimension is a rapidly growing dimension; if you use a mapping it is very big stuff, and it is also a very big problem performance-wise. thanks madhu
======================================= How can a time dimension be a rapidly changing dimension? The time dimension is one table where you load date- and time-related information, so that its key can be used in the facts; this way you don't have to store the entire date in the fact and can rather use the time key. You use a script to load the time dimension because you load it one time: as said earlier, all it contains are dates starting from one point in time, say 01/01/1800, to some date in the future, say 01/01/3001. There are a number of advantages in performance and simplicity of design with this strategy.
=======================================

222.Informatica - What about rapidly changing dimensions? Can u analyze with an example?

QUESTION #222

hi. Rapidly changing dimensions are those in which values are changing continuously, giving a lot of difficulty in maintaining them. I am giving one of the best real-world examples, which I found on some website while browsing; go through it, I am sure you will like it. Description of a rapidly changing dimension by that person: "I'm trying to model a retailing case. I'm having a SKU dimension of around 150 000 unique products which is already an SCD Type 2 for some attributes. In addition I'm willing to track changes of the sales and purchase price. However these prices change almost daily for quite a lot of these products, leading to a huge dimension table and requiring continuous updates. So a better option would be to shift those attributes into a fact table as facts, which solves the problem."
======================================= A rapidly changing dimension is one where the dimension changes quickly; the best example is ATM transactions (banks). The data changes continuously and concurrently every second, so it is very difficult to capture this dimension. Changing means it can be modified or added to.
======================================= hi, if you don't mind, please tell me how to create rapidly changing dimensions; and one more question: please tell me what is the use of the Custom transformation. thanking u. bye.

=======================================

223.Informatica - What are Data driven Sessions?

QUESTION #223 The Informatica server follows instructions coded into Update Strategy transformations within the session mapping to determine how to flag records for insert, update, delete or reject. If you do not choose the Data Driven option setting, the Informatica server ignores all Update Strategy transformations in the mapping.
Once you load the data in your DW, you can update the new data with the following options in your session properties: 1. update, 2. insert, 3. delete, and Data Driven; all these options are present in your session properties. If you select the Data Driven option, Informatica takes the logic to update, delete or reject data from the Update Strategy transformations in your Designer mapping. It will look something like this: IIF(JOB = 'MANAGER', DD_DELETE, DD_INSERT). This expression marks jobs with an ID of manager for deletion and all other items for insertion. Hope that answers the question.
=======================================

224.Informatica - what are the transformations that restrict the partitioning of sessions?

QUESTION #224

1) Source definition 2) Sequence Generator 3) Unconnected transformation 4) XML target definition
======================================= Advanced External Procedure transformation and External Procedure transformation: these transformations contain a check box on the properties tab to allow partitioning. Aggregator transformation: if you use sorted ports, you cannot partition the associated source. Joiner transformation: you cannot partition the master source for a Joiner transformation. Normalizer transformation. XML targets.
=======================================

225.Informatica - Wht is incremental loading? Wht is versioning in 7.1?

QUESTION #225
Incremental loading in a DWH means loading only the changed and new records, i.e. not loading the AS-IS records which already exist. Versioning in Informatica 7.1 is like a configuration management system, where you have every version of the mapping you ever worked upon. This is very helpful in an environment where you have several users working on a single feature: whenever you have checked in and created a lock, no one else can work on the same mapping version.
======================================= Hi. The Type 2 Dimension/Version Data mapping filters source rows based on user-defined comparisons and inserts both new and changed dimensions into the target. Changes are tracked in the target table by versioning the primary key and creating a version number for each dimension in the table. Version numbers and versioned primary keys track the order of changes to each dimension; in the Type 2 Dimension/Version Data target, the current version of a dimension has the highest version number and the highest incremented primary key. Use this mapping to update a slowly changing dimension table when you want to keep a full history of dimension data in the table. Shivaji Thaneru
======================================= Hi. Please see the incremental loading answer under CDC. Now I will tell you about versioning. Simply, if anybody is working in a programming language, then it is like this: see, I developed some software and kept it in one storage area. After that some enhancement happened, so I download the data, do the modification and keep it in the same area, but with some other file name. If at all another developer wants this one, he will simply download this data, modify or add to it, and again keep the source code in the same place, but with another file name; like this, the history will be maintained. If we find there is a bug in a previous version, what we do is simply revert back the changes by downloading that source. thanks madhu

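The usual way to implement incremental loading is to filter the source on an audit column. A minimal sketch, assuming a hypothetical ORDERS source with a LAST_UPDATED timestamp and an ETL_AUDIT table recording the previous run (all names are only illustrative):

-- Source qualifier filter for an incremental extract: pick up only rows
-- changed since the last successful load.
SELECT ORDER_ID, ORDER_AMT, LAST_UPDATED
FROM ORDERS
WHERE LAST_UPDATED > (SELECT MAX(LOAD_END_TIME) FROM ETL_AUDIT);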
=======================================

226.Informatica - What is ODS? what data loaded from it? What is DW architecture?

QUESTION #226
ODS: Operational Data Store, normally in 3NF form; data is stored with the least redundancy. General architecture of a DWH: OLTP System --> ODS --> DWH (denormalised star or snowflake, varying case to case).
======================================= This concept is really good; if anybody doesn't know about this concept, read the below. I will tell you clearly.

Assume that I have one 24/7 company, and peak hours are 9 to 9. In this one, around 40,000 records per day are added or modified. At 9 o'clock I take a backup and leave. From 9 p.m. to 9 a.m., instead of storing the data on the same server, I store the data arriving after 9 separately; assume that 10,000 records are added in this time. So the next day morning, when I am dumping the data, there is no need to take 40,000 + 10,000; I can take the 10,000 records directly. That other way is very slow performance-wise. This concept is what we call an ODS. The architecture can be in two ways: ODS to WH, or ODS to Staging Area to WH. Thanks & regards madhu
======================================= An ODS is an integrated view of the operational sources (OLTP).
======================================= thq. From your answer I can come to one conclusion: that the ODS is used to store the current data. OK, in this one, per day around 40,000 records are added or modified; I can assume that by default it will add those 40,000 records to this current data of 10,000 records and give the result 50,000. Could you please reply to this?
=======================================

227.Informatica - what are the type costing functions in informatica?

QUESTION #227

The question is not clear; can you repeat the question with a full description? Because there are no specific costing functions in Informatica. thanks madhu
=======================================

228.Informatica - what is the repository agent?

QUESTION #228
Hi. The Repository Agent is a multi-threaded process that fetches, inserts and updates metadata in the repository database tables. The Repository Agent uses object locking to ensure the consistency of metadata in the repository. Shivaji Thaneru

======================================= Hi. The Repository Server uses a process called the Repository Agent to access the tables in the repository database. The Repository Server uses multiple Repository Agent processes to manage multiple repositories on different machines on the network, using native drivers. chsrgeekI
======================================= Hi. The name itself says it: agent means a mediator between the Repository Server and the repository database tables. Simply put, the repository agent is what speaks with the repository. thanks madhu
=======================================

229.Informatica - what is the basic language of informatica?

QUESTION #229

Sql plus
======================================= The basic language of Informatica is SQL Plus; only then will it understand the database language. And Informatica itself was developed in VC++.
=======================================

230.Informatica - What is CDC?

QUESTION #230
Changed Data Capture (CDC) helps identify the data in the source system that has changed since the last extraction. With CDC, data extraction takes place at the same time as the insert, update or delete operations occur in the source tables, and the change data is stored inside the database in change tables. The change data thus captured is then made available to the target systems in a controlled manner. Mail me at satya.neerumalla@tcs.com to discuss anything related to Informatica; it's my pleasure to discuss.
======================================= CDC: Changed Data Capture. The name itself says it: if any data is changed, it tells how to get the changed values. For this we have Type 1, Type 2 and Type 3 CDCs; depending upon our requirement we can follow one. thanks madhu
======================================= Whenever any source data is changed, we need to capture it in the target system also. This can be done basically in 3 ways: the target record is completely replaced with the new record (Type 1); the complete changes are captured as different records and stored in the target table (Type 2); only the last change and the present data are captured (Type 3). CDC can generally be done by using a timestamp or a version key.

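Where the timestamp option is not available and the database itself has to record the changes, one common variant is a change table maintained by a trigger. A minimal sketch, assuming a hypothetical CUSTOMER source table and a CUSTOMER_CHANGES change table (all names are only illustrative):

-- Change table plus trigger: every insert/update on CUSTOMER is recorded,
-- and the ETL job later reads CUSTOMER_CHANGES instead of the full table.
CREATE TABLE CUSTOMER_CHANGES (
  CUST_ID     NUMBER,
  CHANGE_TYPE VARCHAR2(1),          -- 'I' = insert, 'U' = update
  CHANGED_ON  DATE DEFAULT SYSDATE
);

CREATE OR REPLACE TRIGGER TRG_CUSTOMER_CDC
AFTER INSERT OR UPDATE ON CUSTOMER
FOR EACH ROW
BEGIN
  INSERT INTO CUSTOMER_CHANGES (CUST_ID, CHANGE_TYPE)
  VALUES (:NEW.CUST_ID, CASE WHEN INSERTING THEN 'I' ELSE 'U' END);
END;
/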
=======================================

231.Informatica - what r the mapping specifications? how versioning of repository objects?

QUESTION #231
A mapping specification is a metadata document of a mapping. A typical mapping specification contains:
1. Mapping Name
2. Business Requirement Information
3. Source System: Initial Rows, Short Description, Refresh Frequency, Preprocessing, Post Processing, Error Strategy, Reload Strategy, Unique Source Fields
4. Target System: Rows/Load
5. Sources: Tables (Table Name, Schema/Owner, Selection/Filter), Files (File Name, File Owner, Unique Key)
6. Targets: information about the different targets
7. Information about lookups
8. Source To Target Field Matrix: Target Table, Target Column, Data-type, Source Table, Source Column, Data-type, Expression, Default Value if Null, Data Issues/Quality/Comments
Coming to versioning of objects in the repository, in this we have two things:
1. Checkout: when some user is modifying an object (source, target, mapping) he can check it out; that is, he can lock it, so that until he releases it nobody else can access it.
2. Checkin: when you want to commit an object, you use this checkin feature.

======================================= hi, a mapping is nothing but a flow of work: where the data is coming from and where the data is going. For this we need the mapping name, source table, target table and session. thanks madhu
=======================================

232.Informatica - what is bottleneck in informatica?

QUESTION #232

A bottleneck in ETL processing is the point at which the performance of the ETL process becomes slower; removing a bottleneck is performance tuning. Mostly a bottleneck occurs at the source qualifier (during fetching data from the source), at a Joiner, at an Aggregator, during lookup cache building, or in the session. When the ETL process is in progress, the first thing to do is log in to the Workflow Monitor and observe the performance statistics, i.e. observe the rows processed per second. (In SSIS and DataStage, when you run the job you can likewise see at every level how many rows per second are processed by the server.)
=======================================

233.Informatica - What is the differance between Local and Global repositary?

QUESTION #233
You can develop global and local repositories to share metadata:
Global repository. The global repository is the hub of the domain. Use the global repository to store common objects that multiple developers can use through shortcuts. These objects may include operational or application source definitions, reusable transformations, mapplets and mappings.
Local repositories. A local repository is a repository within the domain that is not the global repository. Use local repositories for development. These objects typically include source definitions, common dimensions and lookups, and enterprise-standard transformations. From a local repository you can create shortcuts to objects in shared folders in the global repository.

You can also create copies of objects in non-shared folders.
=======================================

234.Informatica - Explain in detail about Key Range & Round Robin partition with an example.

QUESTION #234
Key range: the Informatica server distributes the rows of data based on the set of ports that you specify as the partition key. Round robin: the Informatica server distributes an equal number of rows to each partition.
=======================================

235.Informatica - COMMITS: What is the use of Source-based commits? Please tell with an example.

QUESTION #235

It commits the data based on the source records.
======================================= If the selected commit type is source, then once (say) 10,000 records have been queried from the source, the server will commit immediately; here the server is least bothered about how many records were inserted into the target. If the selected commit type is target, then once the cache is holding some 10,000 records, the server will commit those records; here the server is least bothered about how many source records were processed.
=======================================

236.Informatica - What Bulk & Normal load? Where we use Bulk and where Normal?

QUESTION #236
Hello. When we try to load data in bulk mode there will be no entry in the database log files, so it will be tough to recover the data if the session fails at some point; whereas in the case of normal mode, every record is entered in the database log file and in the Informatica repository, so if the session fails it will be easy for us to restart from the last committed point. Bulk mode is very fast compared with normal mode. We use bulk mode to load data into databases; it won't work with text files as targets, whereas normal mode will work fine with all types of targets.

======================================= In the case of bulk, one DML statement is created and executed for a group of records, but in the case of normal, a DML statement is created and executed for every record; so selecting bulk increases performance.
======================================= Bulk mode is used for Oracle/SQL Server/Sybase. This mode improves performance by not writing to the database log; as a result, when using this mode, recovery is unavailable. Further, this mode doesn't work when an Update transformation is used, and there shouldn't be any indexes or constraints on the table. Of course, one can use the pre-session and post-session SQLs to drop and rebuild indexes/constraints.
=======================================

237.Informatica - Which transformation has the most complexity Lookup or Joiner?

QUESTION #237
The Lookup transformation checks a condition like a Joiner, but it has more features, like getting a related value and updating slowly changing dimension tables.
======================================= Lookup; but it will reduce the complexity of the solutions and improve the performance of the workflows.
=======================================

238.Informatica - Where we use Star Schema & where Snowflake?

QUESTION #238

It depends on the client's requirements. Initially we implement a high-level design; the client may want normalized data (snowflake schema) or de-normalized data (star schema) for their analysis, so we implement whatever their requirements are.
=======================================

239.Informatica - Can we create duplicate rows in star schema?

QUESTION #239
A duplicate row has nothing to do with a star schema: a star schema is a methodology, and a duplicate row is part of the actual implementation in the DB. If you look at the surrogate key concept, then no; besides the surrogate key, the other columns can be duplicated. And in that case there is another special case, if you have implemented a unique index. Boom
=======================================

240.Informatica - Where persistent cache will be stored?

QUESTION #240

The Informatica server saves the cache files for a session and reuses them for the next session; by that, the queries against the lookup table are reduced, so there will be some performance improvement.
======================================= The persistent cache is stored in the server folder.
=======================================

241.Informatica - What is SQL override? In which transformation we use override?

QUESTION #241
Hello. By default the Informatica server will generate a SQL query for every action; if that query is not able to perform the exact task, we can modify that query, or generate a new one with new conditions and new constraints. We use overrides in: 1. source qualifier 2. lookup 3. target
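A lookup SQL override is again plain SQL over the lookup table. A minimal sketch, assuming a hypothetical CUSTOMER_DIM lookup table where only current rows should match (names and the current-flag column are only illustrative):

-- Lookup SQL override: restrict the cached lookup rows with a WHERE clause
-- and keep an ORDER BY on the condition/output columns.
SELECT CUST_KEY, CUST_ID, CUST_NAME
FROM CUSTOMER_DIM
WHERE CURRENT_FLAG = 'Y'
ORDER BY CUST_ID, CUST_KEY;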

1. October 01. this update can be done by two type. using update stratergy 2.K RE: Can you update the Target table? Click Here to view complete document hello we have to update target table. target update override.html answer. if you are loading type-1 dimension type-2 dimension data at target surly you have to do. October 04.html (249 of 363)4/1/2009 7:50:58 PM .file:///C|/Perl/bin/result.Informatica . 2006 09:55:26 Vamsi Krishna.At what frequent u load the data? QUESTION #243 No best answer available. Thanks vamsi. 2006 01:39:58 opbang RE: At what frequent u load the data? Member Since: March 2006 Contribution: 46 #1 file:///C|/Perl/bin/result. Please pick the good answer available or submit your answer. #1 ======================================= 243.

Loading frequency depends on the requirements of the business users. It could be daily (during midnight) or weekly. Other factors are how fast the OLTP system is updated, the data volume, and the available time window for extracting data. Depending on the frequency, the ETL process should take care of how the updated transaction data will replace the data in the fact tables.
=======================================

244.Informatica - What is a Materialized view? Diff. between Materialized view and view

QUESTION #244
Materialized views are used in data warehousing to precompute and store aggregated data, such as the sum of sales, and they increase the speed of queries against large databases. A view is nothing but a small virtual table defined by a query that meets our criteria; it will not occupy space.
======================================= A view doesn't occupy any storage space in the tablespace, but a materialized view does occupy space.
======================================= A materialized view stores the result set of a query, but a normal view does not. We can refresh the materialized view when any changes are made in the master table. We can perform DML operations and direct-path insert operations on a materialized view. A normal view is only for viewing the records.
=======================================

245.Informatica - Is it possible to refresh the Materialized view?

QUESTION #245

Yes, we can refresh a materialized view. While creating materialized views we can give options such as REFRESH FAST, and we can mention a time so that the view refreshes automatically and fetches the new data, i.e. updated and inserted rows. For an active data warehouse, i.e. in order to have a (near) real-time data warehouse, we can use materialized views. EX: see this:

CREATE MATERIALIZED VIEW mv_emp_pk
REFRESH FAST
START WITH SYSDATE
NEXT SYSDATE + 1/48
WITH PRIMARY KEY
AS SELECT * FROM emp@remote_db;
=======================================
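A view defined this way can also be refreshed on demand; a minimal sketch using Oracle's DBMS_MVIEW package against the mv_emp_pk example above ('F' asks for a fast/incremental refresh, 'C' would force a complete one):

BEGIN
  -- on-demand refresh of the example view; 'F' = fast, 'C' = complete
  DBMS_MVIEW.REFRESH('mv_emp_pk', 'F');
END;
/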

246.Informatica - What are the common errors that you face daily?

QUESTION #246
Mostly we get an Oracle fatal error, while the server is not able to connect to the Oracle server.
=======================================

247.Informatica - What is Shortcut? What is use of it?

QUESTION #247
A shortcut is a facility provided by Informatica to share metadata objects across folders without copying the objects to every folder. We can create shortcuts for source definitions, reusable transformations, mapplets, mappings, target definitions, and business components. There are two different types of shortcuts: 1. local shortcut 2. global shortcut.
=======================================

248.Informatica - what is the use of Factless Facttable?

QUESTION #248

A factless fact table is a fact table which does not have any measures. For example, say you want to store the attendance information of students: the table will tell you, date-wise, whether a student attended the class or not, but there are no measures, because things like fees paid are not recorded daily.
======================================= A transaction can occur without a measure; for example, a victim id.
=======================================

249.Informatica - while Running a Session, what are the two files it will create?

QUESTION #249
Session log file and session detail file.
======================================= Besides the session log, it also creates the following files, if applicable: reject files, target output file, incremental aggregation file, cache file.
=======================================

250.Informatica - give me an scenario where flat files are used?

QUESTION #250

Loading data from flat files to a database is faster. Say you are receiving data from a remote location: at the remote location the required data can be converted into a flat file, and you can use the same file at the target location for loading. This minimizes the bandwidth requirement and gives faster transmission.
======================================= Hi. Flat files have some advantages which a normal table does not have. Secondly, a flat file can deal with case-sensitivity issues in the data easily, where a normal table runs into errors.
=======================================

251.Informatica - Architectural diff b/w informatica 7.1 and 5.1?

QUESTION #251
1. v7 has a repository server & pmserver; v5 had pmserver only. The pmserver does not directly talk to the repository database: it talks to the repository server, which in turn talks to the database.
=======================================

252.Informatica - what r the types of data flows in workflow manager

QUESTION #252

Types of data flows: 1. sequential execution 2. parallel execution 3. control flow execution.
=======================================

253.Informatica - what r the types of target loads

QUESTION #253
Hi. The target load plan is the process through which you can decide the precedence of the target loads. Let's say you have three Source Qualifiers and three instances of targets: by default Informatica will load the data into the first target, but using the target load plan you can change this sequence by selecting which target you want to be loaded first.
======================================= We define target load plans in 2 ways: 1. in the mapping 2. in the session.
======================================= There are two types of target load types: 1. bulk 2. normal.
=======================================

254.Informatica - Can we use lookup instead of join? reason

QUESTION #254
Yes, we can use a Lookup transformation instead of a Joiner, but only for homogeneous sources; if you take a Joiner, we can join heterogeneous sources.
======================================= If the relationship to the other table is a one-to-one or many-to-one join, then we can use a Lookup to get the required fields. In case the relationship is an outer join, joining both the tables will give correct results, as a Lookup returns only one row if multiple rows satisfy the join condition.
=======================================

255.Informatica - what is sql override? where do we use it, and in which transformations?

QUESTION #255

The default SQL returned by the Source Qualifier can be overwritten; the transformation to use is the Source Qualifier transformation. Regards, Rajesh
======================================= Besides the answers above: in Session properties -> Transformations tab -> Source Qualifier -> SQL query, by default it will be the query from the mapping. If you overwrite it, this is also called 'SQL override'.
======================================= When the source is homogeneous, we use SQL override to do the work of 3 transformations: 1. Joiner transformation (join in the query) 2. Filter transformation (WHERE clause) 3. Sorter transformation (ORDER BY clause).
=======================================

256.Informatica - In a flat file, will sql override work or not? What is the extension of a flat file?

QUESTION #256
Nope. In a flat file, SQL override will not work. If you are talking about the flat file as a source, it can have any extension, like .dat, .doc, etc. Yes, if it is a target file, it will have the extension .out. We have different properties to set for a flat file, which can be altered in the Target Properties.

======================================= Using flat files, SQL override will not work. The extensions of the source flat files are .doc, .txt, or .dat, and the output or target flat file extension is .out.
=======================================

257.Informatica - what is cost effective transformation b/w lookup and joiner

QUESTION #257
Are you looking up a flat file or a database table? Generally a sorted Joiner is more effective on flat files than a Lookup, because a sorted Joiner uses a merge join and caches fewer rows, while a Lookup always caches the whole file; if the file is not sorted, the two can be comparable. Lookups into a database table can be effective if the database can return sorted data fast and the amount of data is small, because the Lookup can create the whole cache in memory. If the database responds slowly, or a big amount of data is processed, lookup cache initialization can be really slow (the Lookup waits for the database and stores the cached data on disk). Then it can be better to use a sorted Joiner, which sends rows to the output as it reads them on the input.
=======================================

258.Informatica - WHAT IS THE DIFFERENCE BETWEEN LOGICAL DESIGN AND PHYSICAL DESIGN IN A DATAWAREHOUSE

QUESTION #258

During the logical design phase you define a model for your data warehouse consisting of entities, attributes, and relationships; the entities are linked together using relationships. During the physical design process you translate the expected schemas into actual database structures. At this time you have to map:
- Entities to tables
- Relationships to foreign key constraints
- Attributes to columns
- Primary unique identifiers to primary key constraints
- Unique identifiers to unique key constraints
=======================================
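To make the mapping concrete, here is a minimal DDL sketch; the CUSTOMER/ORDERS entities and their columns are invented for the example:

-- Entity CUSTOMER -> table; attributes -> columns;
-- primary unique identifier -> primary key constraint;
-- unique identifier -> unique key constraint.
CREATE TABLE customer (
  customer_id NUMBER       CONSTRAINT pk_customer PRIMARY KEY,
  email       VARCHAR2(80) CONSTRAINT uq_customer_email UNIQUE
);

-- Relationship "CUSTOMER places ORDERS" -> foreign key constraint.
CREATE TABLE orders (
  order_id    NUMBER CONSTRAINT pk_orders PRIMARY KEY,
  customer_id NUMBER CONSTRAINT fk_orders_customer REFERENCES customer
);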

259.Informatica - what is surrogate key? how many surrogate keys used in ur dimensions?

QUESTION #259
A surrogate key is used as a replacement for the primary key. The DWH does not depend upon the primary key; the surrogate key is used to identify the internal records. Each dimension should have at least one surrogate key.
======================================= A surrogate key, or warehouse key, acts like a composite primary key. It is like a primary key in the target: if the target doesn't have a unique key, the surrogate key helps to address a particular row. It also reduces redundancy.
=======================================

260.Informatica - what r the advantages and disadvantages of a star schema and snowflake schema?

QUESTION #260
Schemas are of two types: star schema and snowflake schema. In a star schema the fact table is in normalized format and the dimension tables are in denormalized format. In a snowflake schema both the fact and dimension tables are in normalized format; taking a snowflake requires more dimension tables and more foreign keys, and it will reduce query performance.
======================================= Main advantages of a star schema:

1) it supports drilling and drill-down options
2) fewer tables
3) less database space
In a snowflake schema:
1) query performance degrades because more joins are involved
2) it saves a small amount of storage space
=======================================

261.Informatica - why do we use reusable sequencegenerator transformation only in mapplet?

QUESTION #261
The relation between a reusable Sequence Generator and a mapplet is indirect. Reusable Sequence Generators are preferably used when we want the same sequence (that is, the next value of the sequence) to be used in more than one mapping (maybe because this next value loads the same field of the same table in different mappings, and to maintain continuity it is required). Suppose there are two mappings using a reusable Sequence Generator and we run the two mappings one by one: if the last value of the sequence generator for the mapping1 run is 999, then the sequence generator value for the second mapping will start from 1000. This is how we relate the reusable Sequence Generator and the mapplet.

======================================= Hi, the question is: are you sure it is going to start with 1000 after the 999? I would like to add more information to it. By default a reusable SEQUENCE has a cache value set to 1000; that's why it takes 1000. Even if you only have 595 records for the 1st session, the 2nd session will automatically start with 1000 as the sequence number, because the cache value is set to it. I have another question: is it possible to change Number of Cached Values always to 1, instead of changing it after each time the session/sessions is/are run? Thanks, Philip
======================================= Hi, the solution provided is correct. A reusable Sequence Generator is a must for a mapplet: a mapplet is basically for reusing a mapping, and as such, if a non-reusable sequence generator is used in a mapplet, the sequence of numbers it generates mismatches and creates problems. Thus it is made a must.
=======================================

262.Informatica - in which particular situation we use unconnected lookup transformation?

QUESTION #262
Submitted by: sridhar39
Hi. Both unconnected and connected lookups provide a single output. If it is the case that we can use either unconnected or connected, I prefer unconnected. Why? Because the unconnected lookup does not participate in the data flow, the Informatica server creates a separate cache for it and processing takes place in parallel, so performance increases.

Above answer was rated as good by the following members: Vamshidhar
======================================= We can use the unconnected lookup transformation when we need to return only one port. We can also use a connected lookup to return one port, but an unconnected lookup transformation is not connected to the other transformations and is not a part of the data flow; that is why performance will increase.
======================================= The major advantage of an unconnected lookup is its reusability. We can call an unconnected lookup multiple times in the mapping, unlike a connected lookup. However, using an unconnected lookup takes more time, as it breaks the flow and goes to the unconnected lookup to fetch the results.
======================================= Use of connected and unconnected Lookup is completely based on the logic which we need. However, I just wanted to make clear that we can get multiple rows' data from an unconnected lookup also: just concatenate all the values which you want, get the result from the return row of the unconnected lookup, and then further split it in an Expression transformation.
======================================= We can use the unconnected lookup transformation when we need to return the output from a single port; if we want the output from multiple ports, then we have to use the connected lookup transformation.
=======================================

263.Informatica - in which particular situation we use dynamic lookup?

QUESTION #263

There is no one specific situation in which to use a dynamic lookup, but if you use a dynamic lookup it will increase performance and also minimize transformations like the Sequence Generator.
======================================= If the number of records is in the hundreds, one doesn't see much difference whether a static cache or a dynamic cache is used. If there are thousands of records, a dynamic cache kills time, because it commits to the database for each insert or update it makes.
=======================================

264.Informatica - is there any relationship between java & inforematica?

QUESTION #264

Like Java, Informatica is platform-independent, portable, and architecturally neutral.
=======================================

265.Informatica - How many types of flatfiles available in Informatica?

QUESTION #265
There are two types of flat files: 1. Delimited width 2. Fixed width
======================================= Thank you very much, Subbarao.
=======================================

266.Informatica - what is the event-based scheduling?

QUESTION #266

In time-based scheduling the jobs run at the specified time. In some situations we have to run a job based on some event: for example, if a file arrives, only then does the job have to run, whatever the time is. In such cases event-based scheduling is used.
======================================= Event-based scheduling uses a row-indicator file. When you don't know where the source data is, we use a shell command, script, or batch file to send a row-indicator file to the local directory of the Informatica server; the server waits for the indicator file before running the session.
=======================================

267.Informatica - what is the new lookup port in look-up transformation?

QUESTION #267
I hope you're asking about the 'add a new port' button in the Lookup transformation. If so, this button creates a port where we can enter the name, datatype, etc. of a port. This is mainly used with an unconnected lookup, where it reflects the datatype of the input port.

======================================= Seems u r talking about NewLookupRow: when you configure your lookup with a dynamic lookup cache, by default it generates NewLookupRow, which tells Informatica whether the row u got is an existing or a new row; if it is a new row it passes the data, or else it discards it.
======================================= This port is added by the PowerCenter Client (Designer) to a Lookup transformation whenever a dynamic cache is used. This port is an indicator to the Informatica server of whether it inserts or updates the dynamic cache, through a numeric value.
======================================= The new port in the lookup transformation is the Associative port. Cheers, Bobby
=======================================

268.Informatica - what is dynamic insert?

QUESTION #268
When we select the dynamic cache in a Lookup transformation, the Informatica server creates the new lookup row port; it indicates through a numeric value [0 1 2] whether the Informatica server inserts, updates, or makes no changes to the lookup cache, and if u associate a sequence id, the Informatica server creates a sequence id for newly inserted records.
======================================= Hi srinu, can u explain how to associate the sequence? Thanks, Sri.
=======================================

269.Informatica - how did you handle errors? (ETL-Row-Errors)

QUESTION #269

Hi friend, if an error occurs, it is stored in the bad file. The errors are of two types: 1. row-based errors 2. column-based errors. Column-based errors are identified by the indicators D - GOOD DATA, N - NULL DATA, O - OVERFLOW DATA, R - REJECTED DATA; the data stored in the .bad file looks like D1232234O877NDDDN23. If you have any doubt, send a msg to my mail id.
=======================================

270.Informatica - HOw do u setup a schedule for data loading from scratch?

QUESTION #270

Whether you are loading data from scratch (for the first time) or doing subsequent loads, there are no changes to the scheduling. The change is how to pick up the delta data. Hanu.
======================================= can u explain in detail?
=======================================
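As an illustration of picking up only the delta, one common pattern is to filter the source on a last-updated timestamp recorded by the previous run; the etl_control table, its columns, and src_orders below are assumptions made up for this sketch:

-- Extract only rows changed since the previous successful load.
SELECT *
FROM   src_orders
WHERE  last_update_ts > (SELECT last_extract_ts
                         FROM   etl_control
                         WHERE  job_name = 'ORDERS_LOAD');

-- After a successful load, advance the watermark.
UPDATE etl_control
SET    last_extract_ts = SYSDATE
WHERE  job_name = 'ORDERS_LOAD';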

271.Informatica - HOw do u select duplicate rows using informatica?

QUESTION #271
I thought we could identify duplicates by using a Rank transformation.
======================================= You can write a SQL override in the Source Qualifier to eliminate duplicates; for that we can use the DISTINCT keyword. For example, consider a table dept(dept_no, dept_name) having duplicate records in it. Then write the following queries in the Source Qualifier SQL override:
1) select distinct(deptno) deptname from dept_test;
2) select avg(deptno) deptname from dept_test group by deptname;
If you want to have only the duplicate records, then write the following query in the Source Qualifier SQL override:
select distinct(deptno) deptname from dept_test a
where deptno in (select deptno from dept_test b group by deptno having count(1) > 1);
======================================= I think we can't select duplicates from a Rank transformation; if it is possible, please explain how to do it.
======================================= Can u explain what the steps are for identifying the duplicates with the help of the Rank transformation?
======================================= We can get the duplicate records by using the Rank transformation.
======================================= We can also use a Sorter transformation and select the Distinct check box.
=======================================
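Another way to pick out duplicates in the override, shown here as a sketch against the same hypothetical dept_test table, is an analytic ROW_NUMBER(); rows numbered 2 and above are the duplicates:

SELECT deptno, deptname
FROM  (SELECT deptno, deptname,
              ROW_NUMBER() OVER (PARTITION BY deptno ORDER BY deptname) AS rn
       FROM   dept_test)
WHERE rn > 1;   -- change to rn = 1 to keep exactly one row per key instead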

272.Informatica - How to load data to target where the source and targets are XML'S?

QUESTION #272
If you don't have the structures, create the source or target structure of the XML file by going to the Sources or Targets menu, selecting 'Import XML', and following the steps. Then follow the regular steps you would do to create an ordinary mapping/session. In the session you have to mention the location and name of the source/target. Once the session succeeds, the XML file will be generated in the specified location.
=======================================

273.Informatica - What is TOAD and for what purpose it will be used?

QUESTION #273
After creating the mapping we can execute the session for that mapping, and after that we can use unit testing to test the data. While unit testing the development, or querying the tables, one can use TOAD with basic knowledge of database commands. And we can use Oracle or TOAD to test how many records are loaded into the target and whether those loaded records are correct.
======================================= TOAD is a user-friendly interface for relational databases like Oracle, SQL Server, DB2.
======================================= Toad is an application development tool built around an advanced SQL and PL/SQL editor. Using Toad you can build and test PL/SQL packages, procedures, triggers, and functions. You can create and edit database tables, views, indexes, constraints, and users. The Schema Browser and Project Manager provide quick access to database objects. Toad's SQL Editor provides an easy and efficient way to write and test scripts and queries, and its powerful data grids provide an easy way to view and edit Oracle data. With Toad you can:

- View the Oracle Dictionary
- Create, browse, or alter objects
- Graphically build, execute, and tune queries
- Edit PL/SQL and profile stored procedures
- Manage your common DB tasks from one central window
- Find and fix database problems with constraints, triggers, extents, indexes, and grants
- Create code from shortcuts and templates
- Create custom code templates
- Control code access and development (with or without a third-party version control product) using Toad's cooperative source control feature
=======================================

274.Informatica - What is target load order?

QUESTION #274
If there is more than one target table in a mapping, we need to give the order in which the target tables should be loaded. Example: suppose in our mapping there are 2 target tables, 1. customer 2. audit table; first the customer table should be populated, and then the audit table. For that we use the target load order. Hope u understood.

Informatica .Informatica . store the file in this external directory 3. file:///C|/Perl/bin/result.html Target oad pan specifies the order in which the data being extracted from the source qualifier.How to extract 10 records out of 100 records in a flat file QUESTION #275 No best answer available. query the external table to access records like u would do a normal table #1 ======================================= hi For falt file sourec source--sq--sequence generator tran. Please pick the good answer available or submit your answer. October 31. create external directory 2. Click Here to view complete document 1. create a external table corresponding to the file 4..html (273 of 363)4/1/2009 7:50:59 PM . 2006 16:52:24 sridhar RE: How to extract 10 records out of 100 records in a . ======================================= 275.How many types of TASKS we have in Workflomanager? What r they? QUESTION #276 No best answer available..file:///C|/Perl/bin/result.on new fileld id--filter tran. Please pick the good answer available or submit your answer. id< 10--traget ======================================= 276.

276.Informatica - How many types of TASKS we have in Workflowmanager? What r they?

QUESTION #276
Session, Command, Email notification.
======================================= Workflow tasks: 1) session 2) command 3) email 4) control 5) command 6) pre-session 7) post-session 8) assignment
======================================= The following tasks we r having in Workflow Manager: Assignment, Control, Command, Decision, E-mail, Session, Event-Wait, Event-Raise, and Timer. Among these tasks only Session, Command, and E-mail r reusable; the remaining tasks r non-reusable. The tasks developed in the Task Developer r reusable tasks, and tasks which r developed by using a workflow or worklet r non-reusable. Regards, rma
======================================= 1) session 2) command 3) email 4) event-wait 5) event-raise 6) assignment 7) control 8) decision 9) timer 10) worklet. 3), 8), and 9) are self-explanatory; 1) runs mappings; 2) runs OS commands/scripts; 4 + 5) raise user-defined or pre-defined events and wait for the event to be raised; 6) assigns values to workflow variables; 10) runs worklets.
======================================= We have Session, Command and Email tasks. Cheers, Thana
=======================================

277.Informatica - What is user defined Transformation?

QUESTION #277

User Defined Transformations are Stored Procedure and Source Qualifier.
=======================================

278.Informatica - what is the difference between connected stored procedure and unconnected stored procedure?

QUESTION #278
A connected stored procedure will execute as each row passes through the mapping; an unconnected stored procedure will execute when it is called by another transformation. A connected stored procedure is part of the data flow in a pipeline, but an unconnected one is not.
=======================================

279.Informatica - what is semi additive measures and fully additive measures

QUESTION #279

Hi, this is ramesh. (If anyone feels there is some conceptual problem with my solution, plz let me know.) There are three types of facts:
1. additive
2. semi-additive
3. non-additive
Additive means: when any measure is queried from the fact table, the result relates to all of the dimension tables which are linked to the fact. Semi-additive: when any measure is queried from the fact table, the result relates to only some of the dimension tables. Non-additive: when any measure is queried from the fact table, it doesn't relate to any of the dimensions, and the result comes directly from the measures of the same fact table. Ex: to calculate the total percentage of a loan, we just take the value from the fact measure (loan) and divide it by 100; we get it without any dimension.
======================================= Hi, additive means it can be summarized by any other column; semi-additive means it can be summarized by some columns only.

======================================= Hi, can u give one example each of additive and semi-additive facts? It will be better for understanding. Akhi
======================================= The average monthly balance of your bank account is a semi-additive fact.
======================================= A measurable data item on which simple addition can be performed is called fully additive; such a measure has no need to be combined with two or more dimensions for its meaning. Ex: product-wise total sales; branch-wise total sales. A measurable data item on which simple addition can't be performed is called semi-additive; such a measure needs two or more dimensions combined for its meaning. Ex: customer-wise total sales amount has no meaning by itself, whereas customer-wise, product-wise total sales amount does. If anyone has a doubt about this, let me know.
=======================================
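To make the bank-balance example concrete, here is a sketch; acct_balance_fact and its columns are invented for the illustration. Balance is additive across accounts but not across time:

-- OK: balances of different accounts can be summed within one month.
SELECT month_id, SUM(balance) AS total_balance
FROM   acct_balance_fact
GROUP  BY month_id;

-- Summing one account's balance across months would be meaningless;
-- use an average (or the closing value) across the time dimension instead.
SELECT account_id, AVG(balance) AS avg_monthly_balance
FROM   acct_balance_fact
GROUP  BY account_id;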

280.Informatica - what is DTM process?

QUESTION #280
DTM means Data Transformation Manager. In Informatica this is the main background process; it runs after completion of the load manager. In this process the Informatica server looks up the source and target connections in the repository; if they are correct, the Informatica server fetches the data from the source and loads it into the target.
======================================= DTM means data transmission manager: it collects the data and loads the data. It is one of the components in the Informatica architecture.
======================================= Load Manager process: starts the session, creates the DTM process, and sends the post-session email when the session completes. The DTM process creates threads to initialize the session, read, write, and transform data, and handle pre- and post-session operations.
=======================================

281.Informatica - what is Powermart and Power Center?

QUESTION #281
In Power Center we can register multiple servers, but in PowerMart it is not possible. Another difference is that in PC we can create a global repository, but not in PM.
======================================= PC we will use in the production environment; PM we will use in the development environment.
======================================= Hi, Power Center supports global and local repositories and also supports ERP packages, but

PowerMart supports local repositories only, and it doesn't support ERP packages. Power Center is normally used for enterprise data warehouses, whereas PowerMart is used for low/mid-range data warehouses. Thanks and Regards, Siva Prasad.
======================================= Power Center supports the partitioning process, whereas PowerMart only does a simple pass-through. Power Center has a high cost, whereas PowerMart's is low.
=======================================

282.Informatica - what are the differences between informatica6.1 and informatica7.1

QUESTION #282

The load manager is a manager which takes care of the loading process from source to target.
======================================= In Informatica 7.1:
1. we can take a flat file as a target
2. a flat file as a lookup table
3. data profiling & versioning
4. the Union transformation; it works like UNION ALL.
Thanks & regards, sivaprasad
=======================================

283.Informatica - hi, how we validate all the mappings in the repository at once

QUESTION #283

U r not able to validate all mappings at a time; each time, one mapping can be validated.
======================================= Hi, you can't validate all mappings at a time; you should go one by one. Thanks, madhu
======================================= Still we don't have such a facility in Informatica.
======================================= Hi. You cannot validate all the mappings in one go, but you can validate all the mappings in a folder in one go, and continue the process for all the folders. For doing this, log on to the Repository Manager, open the folder, then the mapping subfolder, then select all or some of the mappings (by pressing the Shift or Control key; Ctrl+A does not work), and then right-click and validate.
======================================= Yes, we can validate all mappings using the Repository Manager.
=======================================

284.Informatica - hw to work with pmcmd on windows platform

QUESTION #284

Hi, in Workflow Manager take the pmcmd command and establish a link between the session and pmcmd; if the session executes successfully, the pmcmd command executes. In Workflow Manager we execute the session task directly. Thanks, madhu
======================================= Hi Frnd, can u plz tell me where the PMCMD option is in Workflow Manager? Thanks, Swati.
======================================= Hi Swathi, only on the command line can we execute the pmcmd and pmrep commands.
======================================= C:\Program Files\Informatica PowerCenter 7.1.3\Server\bin\pmcmd.exe
======================================= C: --> Program Files --> Informatica PowerCenter 7.1.3 --> Server --> bin --> pmcmd.exe
=======================================
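For illustration, a typical pmcmd invocation from the Windows command line looks like the sketch below; the service, domain, user, folder, and workflow names are placeholders, and the exact flags vary between PowerCenter versions, so check the pmcmd reference for yours:

pmcmd startworkflow -sv MyIntService -d MyDomain -u Administrator -p MyPassword -f MyFolder wf_daily_load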

285.Informatica - interview question: why do u use a reusable sequence generator transformation in mapplets?

QUESTION #285
If u r not using a reusable Sequence Generator transformation, duplicates will occur; in order to avoid this we use the reusable Sequence Generator transformation.
======================================= You can use a non-reusable sequence generator, but in the mapping it may create non-unique values.
======================================= If we use the same sequence generator multiple times to load the data at the same time, a DEADLOCK occurs; to avoid deadlocks we don't use the same sequence generator.
=======================================

286.Informatica - interview question: tell me what would the size of ur warehouse project be?

QUESTION #286
U can say 900 mb.
======================================= The size of an EDW will be in terabytes; it varies depending upon ur project structure and how many data marts and EDWHs there are.
======================================= Mr Srinuv, by saying 900 MB r u kiddin with folks over here? It's a data warehouse's size, not some client-server software. The server will run on either Unix or Linux with a SAN box.
======================================= U can say 600-900 GB including ur marts. Thanks, Vivek
======================================= You can answer this question with the number of facts and dimensions in ur warehouse. For example: Insurance Datawarehouse, 2 Facts: Claims and Policy,

8 Dimensions: Coverage, Customer, Claim, etc.
======================================= I think the interviewer wants the number of facts and dimensions in the warehouse, and not the size in GBs or TBs of the actual database.
=======================================

287.Informatica - what is grouped cross tab?

QUESTION #287
It's one kind of report; generally we will use this one in Cognos.
=======================================

288.Informatica - what is aggregate awareness?

QUESTION #288
It is the ability to dynamically re-write SQL to the level of granularity needed to answer a business question.
=======================================
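In other words, the same business question is answered from the smallest table that can satisfy it; a sketch, where both table names and columns are invented for the example:

-- Detail query a tool would generate without aggregate awareness:
SELECT region, SUM(amount) FROM sales_detail GROUP BY region;

-- Aggregate-aware rewrite: same answer, read from a precomputed summary.
SELECT region, SUM(amount) FROM sales_by_region_month GROUP BY region;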

289.Informatica - Can we revert back reusable transformation to normal transformation?

QUESTION #289
Yes.
======================================= No.
======================================= No, it is not reversible.
======================================= When you open a transformation in edit mode there is a check box named REUSABLE; if you tick it, it will give you a message saying that making it reusable is not reversible. Once we select it, the reusable column will be enabled.
======================================= No, we CANNOT revert back the reusable transformation. Once we have declared a transformation as reusable, we are not able to revert.
======================================= We can. There is a Revert button that can revert the last changes made in the transformation.
======================================= Reverting to the original reusable transformation: if you change the properties of a reusable transformation in a mapping, you can revert to the original reusable transformation properties by clicking the Revert button.
======================================= YES:
1) Drag the reusable transformation from the Repository Navigator into the Mapping Designer by pressing the

Informatica .html left button of the mouse 2 ) and then press the Ctrl key before releasing the left button of mouse. 2006 03:59:08 narayana RE: How do we delete staging area in our project? Click Here to view complete document If your database is oracle then we can apply CDC(change data capture) and load data which is only changed recently after previous data load.. ======================================= I think if the transformation is created in Mapping designer and make it as reusable then we can revert back that one by the option "revert Back".:-) Thanks Santu ======================================= the last answer though correct in a way is not completely correct.html (286 of 363)4/1/2009 7:50:59 PM . #1 ======================================= if staging area is storing only incremental data ( means changed or new data with respect to previous load) then you can truncate the staging area. Please pick the good answer available or submit your answer. 3) Release the left button of mouse. November 15. But if we create transformation in Mapplet we can make it as non reusable one ======================================= 290..How do we delete staging area in our project? QUESTION #290 No best answer available. by using the ctrl key we are making a copy of the original transformation but not changing the original transformation into a non-reusable one. But if you maintain historical information in staging area then you can not truncate your staging area..file:///C|/Perl/bin/result. 4) Enjoy. file:///C|/Perl/bin/result.

Referential integrity is all about the foreign key relationship between tables. You need to check the primary and foreign key relationship and the existing data, if any (see if the child table has any records which are pointing to master table records that are no longer in the master table).
=======================================

292.Informatica - what is constraint based error? how ll u clarify it?

QUESTION #292
You have set the session for constraint-based loading, but the PowerCenter Server is unable to determine the dependencies between the target tables, possibly due to errors such as circular key relationships. Action: ensure the validity of the dependencies between the target tables. Note: the target tables must have primary-foreign key relationships.

======================================= A constraint-based error will occur if a PK is duplicated in a table.
======================================= It is a primary and foreign key relationship issue.
======================================= Hi, when data from a single source needs to be loaded into multiple targets, in that situation we use constraint-based load ordering. Regards, Phani.
=======================================

293.Informatica - why exactly the dynamic lookup? plz can any bady clarify it?

QUESTION #293
Dynamic lookup means that the changes are applied to the lookup cache; then we can say it is a dynamic lookup.
======================================= Hii, suppose u r looking up a table that is changing frequently, i.e. u want to look up recent data; then u have to go for a dynamic lookup. Ex: online transaction data (ATM).
======================================= A dynamic lookup is generally used with a connected lookup transformation: when the data is changed, it updates or inserts, or else leaves the row without changes.

Generally we use this in the case of slowly changing dimensions.
======================================= A dynamic lookup cache is used in the case of connected lookups only. It is also called a read-and-write cache: when a new record is inserted into the target table, the cache is also updated with the new record, and it is saved in the cache for faster lookup of data from the target. Pallavi
=======================================

294.Informatica - How many mappings you have done in your project (in a banking)?

QUESTION #294
It depends on the dimensions: for suppose, if u r taking a banking project, it requires dimensions like time, primary account holder, branch; we can take any number of dimensions depending on the project. And we can create mappings for cleansing or scrubbing the data as well, so we can't say exactly how many.
======================================= It depends upon the user requirements: according to the user requirements we can configure the modelling, and on that modelling basis we can identify the number of mappings, which ranges from simple mappings to complex mappings.
======================================= Exactly 93.133.
=======================================

295.Informatica - what are the UTP'S

QUESTION #295

It will remove the duplicates. You can also use an Aggegator Transformation and group by the PK. Apply a Sorter Transformation in the property tab select distinct ..html raja147 RE: what are the UTP'S Member Since: November 2006 Contribution: 3 Click Here to view complete document hiutps are done to check the mappings are done according to given business rules. ======================================= Sorter Transofrormation do the records in sorting order(for better performence) .. ======================================= After creating the mapping each mapping can be tested by the developer individualy ======================================= 296. #1 ======================================= Use Sorter Transformation and check Distinct option.utp is the (unit test plan ) done by deveploper. Gives the same result.html (290 of 363)4/1/2009 7:50:59 PM .file:///C|/Perl/bin/result. Please pick the good answer available or submit your answer. out put will give a sorter distinct data hence you get rid of duplicates. Click Here to view complete document by using aggregater ======================================= We can delete duplicate rows from flat files by using Sorter transformation. December 02. iam asking how can we delete duplicate rows ======================================= use a lookup by primary key ======================================= In the mapping read the flat file through a Source Definition and SQ. 2006 07:42:09 amarnath RE: how can we delete duplicate rows from flat files ?.Informatica .how can we delete duplicate rows from flat files ? QUESTION #296 No best answer available. file:///C|/Perl/bin/result.


=======================================

297.Informatica - 1. can u look up a flat file? how? 2. what is test load?

QUESTION #297
By using the Lookup transformation we can look up a flat file. When u click the Lookup transformation it shows u a message; follow that. Test load is nothing but checking whether the data is moving correctly to the target or not.

======================================= Test load is a property we can set at the session property level, by which Informatica performs all pre- and post-session tasks but does not save the target data (for an RDBMS target table it writes the data to check the constraints, but rolls it back). If the target is a flat file, then it does not write anything into the file. We can specify the number of source rows to test-load the mapping. This is another way of debugging the mapping without loading the target.
=======================================

298.Informatica - what is auxiliary mapping ?

QUESTION #298 No best answer available. Please pick the good answer available or submit your answer.
December 26, 2006 03:27:18 Kuldeep Kumar Verma RE: what is auxiliary mapping ? #1


Auxiliary Mapping is used to reflect a change in one table whenever there is a change in another table. Example: in Siebel we have the S_SRV_REQ and S_EVT_ACT tables. Let's say we have an image table defined for S_SRV_REQ, from which our mappings read data. Now if there is any change in S_EVT_ACT it won't be captured in S_SRV_REQ if our mappings are using the image table for S_SRV_REQ. To overcome this we define a mapping between S_SRV_REQ and S_EVT_ACT, such that if there is any change in the second table it will be reflected as an update in the first table.

=======================================

299.Informatica - what is authenticator ?

QUESTION #299 No best answer available. Please pick the good answer available or submit your answer.
December 18, 2006 02:38:20 Reddappa C. Reddy RE: what is authenticator ? #1

An authenticator is either a token of authentication (authentication being the act of establishing or confirming something or someone) or one who authenticates.

======================================= Authentication requests validate user names and passwords to access the PowerCenter repository. You can use the following authentication requests to access the PowerCenter repositories: Login and Logout. The Login function authenticates the user name and password for a specified repository; this is the first function a client application should call before calling any other functions. The Logout function disconnects you from the repository and its PowerCenter Server connections; you can call this function once you are done calling Metadata and Batch Web Services functions, to release resources at the Web Services Hub. =======================================

300.Informatica - how can we populate the data into a time dimension ?

QUESTION #300 No best answer available. Please pick the good answer available or submit your answer.
December 03, 2006 22:04:48 sravan kumar RE: how can we populate the data into a time dimension... #1

By using a Stored Procedure Transformation.

======================================= How? Can you explain?

======================================= Can you please explain about the time dimension?

=======================================
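One way to answer the "how" above: a minimal sketch of such a stored procedure, assuming a hypothetical TIME_DIM table (DAY_KEY, CAL_DATE, CAL_YEAR, CAL_MONTH, CAL_DAY). A fuller worked procedure appears under Question #304 below.

create or replace procedure load_time_dim as
  v_date date := to_date('01/01/2006', 'mm/dd/yyyy');  -- assumed start date
begin
  while v_date <= to_date('12/31/2015', 'mm/dd/yyyy') loop
    insert into TIME_DIM (DAY_KEY, CAL_DATE, CAL_YEAR, CAL_MONTH, CAL_DAY)
    values (to_number(to_char(v_date, 'YYYYMMDD')),   -- smart key derived from the date
            v_date,
            to_number(to_char(v_date, 'YYYY')),
            to_number(to_char(v_date, 'MM')),
            to_number(to_char(v_date, 'DD')));
    v_date := v_date + 1;                             -- next calendar day
  end loop;
  commit;
end load_time_dim;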

301.Informatica - how to create primary key only on odd numbers?
QUESTION #301 No best answer available. Please pick the good answer available or submit your answer.
December 07, 2006 02:50:14 Pavan.m RE: how to create primary key only on odd numbers? #1

Use a Sequence Generator to generate the warehouse keys and set the 'Increment By' property of the Sequence Generator to 2 (starting from 1, this generates 1, 3, 5, ...).

======================================= Use a Sequence Generator and set the 'Increment By' property in it to 2.

=======================================
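The same idea on the database side, as a sketch (the sequence and table names are made up for illustration):

-- surrogate keys 1, 3, 5, ... : start at 1 and step by 2
CREATE SEQUENCE odd_key_seq START WITH 1 INCREMENT BY 2;

INSERT INTO DIM_ACCOUNT (ACCT_KEY, ACCT_NO)
SELECT odd_key_seq.NEXTVAL, ACCT_NO FROM ACCT_STG;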

302.Informatica - How to load fact table ?

QUESTION #302 No best answer available. Please pick the good answer available or submit your answer.
December 11, 2006 16:59:11 hanug Member Since: June 2006 Contribution: 24 #1


RE: How to load fact table ? #1

Hi: there are two ways you have to load a fact table: 1. first-time load, 2. incremental load or delta load. In both cases you have to aggregate, get the keys of the dimension tables, and load them into the fact. In the case of increments you use a date value to pick up only the delta data while loading into the fact. Hanu.

======================================= Fact tables always maintain the history records and mostly consist of keys and measures. So after all the dimension tables are populated the fact tables can be loaded. The load is always going to be an incremental load, except for the first time, which is a history load.
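A sketch of the incremental (delta) pattern in SQL; every name here (SALES_STG staging table, TIME_DIM and DIM_PRODUCT dimensions, ETL_RUN_LOG) is an assumption for illustration:

-- load only rows that arrived since the previous run,
-- resolving surrogate keys from the dimension tables
INSERT INTO FACT_SALES (DATE_KEY, PROD_KEY, QTY, AMOUNT)
SELECT d.DAY_KEY, p.PROD_KEY, s.QTY, s.AMOUNT
FROM   SALES_STG s
JOIN   TIME_DIM    d ON d.CAL_DATE  = TRUNC(s.SALE_DATE)
JOIN   DIM_PRODUCT p ON p.PROD_CODE = s.PROD_CODE
WHERE  s.LOAD_TS > (SELECT LAST_RUN_TS FROM ETL_RUN_LOG
                    WHERE JOB_NAME = 'FACT_SALES');   -- the delta filter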

=======================================

303.Informatica - How to load the time dimension using Informatica ?

QUESTION #303 No best answer available. Please pick the good answer available or submit your answer.
December 22, 2006 00:32:54 srinivas RE: How to load the time dimension using Informatica ?... #1


Hi, use SCD Type 2 with an effective-date range.

======================================= Hi, use a Stored Procedure transformation to load the time dimension.

======================================= Hi Kiran, can you please tell me in detail how we do that? I am a fresher in Informatica... please help. Thanks.

=======================================

304.Informatica - What is the process of loading the time dimension?

QUESTION #304 No best answer available. Please pick the good answer available or submit your answer.
December 29, 2006 08:20:34 manisha.sinha Member Since: December 2006 Contribution: 30 #1

RE: What is the process of loading the time dimension?...


Create a procedure to load data into the time dimension. The procedure needs to run only once to populate all the data; for example, the code below fills it up till 2015. You can modify the code to suit the fields in your table.

create or replace procedure QISODS.Insert_W_DAY_D_PR as
  LastSeqID number default 0;
  loaddate  date   default to_date('12/31/1979', 'mm/dd/yyyy');
begin
  loop
    LastSeqID := LastSeqID + 1;
    loaddate  := loaddate + 1;
    insert into QISODS.W_DAY_D values(
      LastSeqID,
      trunc(loaddate),
      -- half-year number: quarters 1 and 2 map to 1, the rest to 2
      decode(to_char(loaddate, 'Q'), '1', 1, decode(to_char(loaddate, 'Q'), '2', 1, 2)),
      to_number(to_char(loaddate, 'MM')),
      to_number(to_char(loaddate, 'Q')),
      -- week number within the year
      trunc((round(to_number(to_char(loaddate, 'DDD'))) +
             round(to_number(to_char(trunc(loaddate, 'YYYY'), 'D'))) + 5) / 7),
      to_number(to_char(loaddate, 'YYYY')),
      to_number(to_char(loaddate, 'DD')),
      to_number(to_char(loaddate, 'D')),
      to_number(to_char(loaddate, 'DDD')),
      1, 1, 1, 1, 1,
      to_number(to_char(loaddate, 'J')),
      ((to_number(to_char(loaddate, 'YYYY')) + 4713) * 12) + to_number(to_char(loaddate, 'MM')),
      ((to_number(to_char(loaddate, 'YYYY')) + 4713) * 4) + to_number(to_char(loaddate, 'Q')),
      to_number(to_char(loaddate, 'J')) / 7,
      to_number(to_char(loaddate, 'YYYY')) + 4713,
      to_char(loaddate, 'Day'),
      to_char(loaddate, 'Month'),
      decode(to_char(loaddate, 'D'), '7', 'weekend', '6', 'weekend', 'weekday'),
      trunc(loaddate, 'DAY') + 1,
      decode(last_day(loaddate), loaddate, 'y', 'n'),
      to_char(loaddate, 'YYYYMM'),
      to_char(loaddate, 'YYYY') || ' Half' ||
        decode(to_char(loaddate, 'Q'), '1', 1, decode(to_char(loaddate, 'Q'), '2', 1, 2)),
      to_char(loaddate, 'YYYY / MM'),
      to_char(loaddate, 'YYYY') || ' Q ' || trunc(to_number(to_char(loaddate, 'Q'))),
      to_char(loaddate, 'YYYY') || ' Week' || trunc(to_number(to_char(loaddate, 'WW'))),
      to_char(loaddate, 'YYYY'));
    if loaddate >= to_date('12/31/2015', 'mm/dd/yyyy') then
      exit;
    end if;
  end loop;
  commit;
end Insert_W_DAY_D_PR;
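Once compiled, the procedure is executed once, e.g. from SQL*Plus:

EXEC QISODS.Insert_W_DAY_D_PR;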

=======================================

305.Informatica - how can we remove/optimize source bottlenecks using "query hints"

QUESTION #305 No best answer available. Please pick the good answer available or submit your answer.
January 08, 2007 16:29:36 creativehuang Member Since: January 2007 Contribution: 5 #1

RE: how can we remove/optmize source bottlenecks using...


First you must have proper indexes, and the table must be analyzed to gather stats so that the cost-based optimizer (CBO) is used. Use the hints after that; hints are powerful, so be careful with them. You can get free documentation from Oracle TechNet.

======================================= Is this an SQL Server question or an Informatica question?

======================================= Create indexes for the source table columns.

=======================================

306.Informatica - how can we eliminate source bottleneck using query hint

QUESTION #306 No best answer available. Please pick the good answer available or submit your answer.
March 12, 2007 06:28:50 sreedhark26 Member Since: January 2007 Contribution: 25 #1

RE: how can we eliminate source bottleneck using query...

You can identify source bottlenecks by executing the read query directly against the source database. Copy the read query directly from the session log and execute it against the source database with a query tool such as isql. Measure the query execution time and the time it takes for the query to return the first row. If there is a long delay between the two time measurements, you can use an optimizer hint to eliminate the source bottleneck. On Windows you can load the result of the query into a file; on UNIX systems you can load the result of the query into /dev/null.
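As an illustrative sketch of such an optimizer hint (the table, column and index names are assumptions, not from the posts above):

-- force an index range scan on the read query's filter column
SELECT /*+ INDEX(s EMP_SRC_LOADDT_IDX) */ s.EMP_ID, s.EMP_NAME, s.SAL
FROM   EMP_SRC s
WHERE  s.LOAD_DATE > TO_DATE('09/01/2007', 'DD/MM/YYYY');

-- or scan a large source table with parallel query slaves
SELECT /*+ FULL(s) PARALLEL(s, 4) */ *
FROM   EMP_SRC s;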

=======================================

307.Informatica - where from we get the source data or how we access the source data

QUESTION #307 No best answer available. Please pick the good answer available or submit your answer.
January 03, 2007 00:58:49 phani RE: where from we get the source data or how we access... #1

Hi, we get the source data in the form of Excel files, flat files, etc. By using the Source Analyzer we can access the source data.

======================================= Hi, source data exists in OLTP systems in any form (flat file, relational database, XML definitions). You can access the source data with any of the Source Qualifier transformations and the Normalizer transformation.

=======================================

308.Informatica - What are all the new features of informatica 8.1?

QUESTION #308 No best answer available. Please pick the good answer available or submit your answer.
February 03, 2007 01:57:27 Sonjoy RE: What are all the new features of informatica 8.1? #1


1. Java Custom Transformation support 2. HTTP transformation support 3. New name of Superglue: Metadata Manager 4. New name of PowerAnalyzer: Data Analyzer 5. Support for grid computing 6. Push-down optimization

======================================= 1. PowerCenter 8 release has the "Append to Target file" feature. 2. User-defined functions. 3. The Midstream SQL transformation has been added in 8.1.1, not in 8.1. 4. The Java transformation is introduced. 5. Informatica has added a new web-based administrative console. 6. Management is centralized, which means services can be started and stopped on nodes via a central web interface.

=======================================

309.Informatica - Explain the pipeline partition with real time example?

QUESTION #309 No best answer available. Please pick the good answer available or submit your answer.
January 11, 2007 14:56:16 saibabu Member Since: January 2007 Contribution: 14 #1

RE: Explain the pipeline partition with real time exam...

A PIPELINE SPECIFIES THE FLOW OF DATA FROM SOURCE TO TARGET. PIPELINE PARTITIONING MEANS PARTITIONING THE DATA BASED ON SOME KEY VALUES AND LOADING THE DATA TO THE TARGET IN CONCURRENT MODE, WHICH IMPROVES SESSION PERFORMANCE, i.e. the data loading time reduces. In real time we have some thousands of records arriving every day to be loaded into the targets, so pipeline partitioning definitely reduces the data loading time.

=======================================

310.Informatica - How to FTP a file to a remote server?

QUESTION #310 No best answer available. Please pick the good answer available or submit your answer.
January 11, 2007 16:04:23 creativehuang Member Since: January 2007 Contribution: 5 #1

RE: How to FTP a file to a remote server?

ftp targetaddress
cd target_directory
ascii (or bin)
put (or get)

======================================= Hi, you can transfer a file from one server to another. In UNIX there is a utility XCOMTCP which transfers a file from one server to another; you need to mention the target server name and the directory name where you need to send it, and the server directory should have write permission. But there are a lot of constraints for this. Check it in detail in UNIX by typing the MAN XCOMTCP command, which guides you, I guess.

=======================================

311.Informatica - What's the difference between source and target object definitions in Informatica?

QUESTION #311 No best answer available. Please pick the good answer available or submit your answer.
January 05, 2007 09:06:50 Sravan Kumar RE: What's the difference between source and target ob... #1

The source system is the system which provides the business data; the target system is the system to which the data is being loaded.

======================================= The source definition is the structure of the source database existing in the OLTP system; using the source definition you can extract the transactional data from the OLTP systems. The target definition is the structure given by the DBAs to populate the data from the source definition according to the business rules, for the purpose of making effective decisions for the enterprise.

======================================= Hai, what Saibabu wrote is correct. Source definition means defining the structure of the source from which we have to extract the data, to transform and then load to the target. Target definition means defining the structure of the target (relational table or flat file).

=======================================

312.Informatica - how many types of sessions are there in informatica. please explain them.

QUESTION #312 No best answer available. Please pick the good answer available or submit your answer.
January 08, 2007 15:56:28 creativehuang Member Since: January 2007 Contribution: 5 #1

RE: how many types of sessions are there in informatic...

Reusable and non-reusable sessions.

======================================= Total 10 SESSIONS: 1. SESSION: for mapping execution. 2. EMAIL: to send emails. 3. COMMAND: to execute OS commands. 4. CONTROL: fail/stop/abort. 5. EVENT WAIT: for pre-defined or post-defined events. 6. EVENT RAISE: to raise an active user-defined event. 7. DECISION: a condition to be evaluated for controlling the flow or process. 8. TIMER: to halt the process for a specific time. 9. WORKLET TASK: a reusable task. 10. ASSIGNMENT: to assign values to worklet or workflow variables.

======================================= A session is a type of workflow task: a set of instructions that describe how to move data from sources to targets using a mapping. There are two ways data moves in Informatica: 1. Sequential: data moves one after another from source to target. 2. Concurrent: the whole data moves simultaneously from source to target.

======================================= Hi Vidya, the above question is "how many sessions?"; your answer is about batches.

======================================= Hi Sreedhark26, your answer is wrong; don't misguide the members. What you wrote is the types of tasks used by the Workflow Manager. There are two types of sessions: 1. non-reusable session 2. reusable session.

=======================================

313.Informatica - What is meant by source is changing incrementally? explain with example

QUESTION #313 No best answer available. Please pick the good answer available or submit your answer.
January 09, 2007 07:30:59 Sravan Kumar RE: What is meant by source is changing incrementally?... #1

"Source is changing incrementally" means the data in the source keeps on changing, and you capture those changes with timestamps, key ranges or triggers, and then capture these changes to load the data incrementally. For example, we have a source where the data is changing. On 09/01/07 we loaded all the data into the target; on 10/01/07 the source is updated with some new rows. It is very difficult to load all the rows again into the target, so what we have to do is capture the data which is not loaded and load only the changed rows into the target. If we cannot capture this source data incrementally, the data loading process will be very difficult.

======================================= I have a good example to explain this. Think about HR data for a very big company; the data keeps changing every minute. In fact most of the sources in OLTP are incrementally, constantly changing. Now, we had built a downstream system to capture a chunk of data for a specific purpose, say new hires. We used the incremental refresh method to process such data. Every record in the source will have a timestamp; when we load data today we check the records which were updated or inserted today and load only them (this avoids reprocessing all the data).

=======================================

314.Informatica - what is the diffrence between SCD and INCREMENTAL Aggregation?

QUESTION #314 No best answer available. Please pick the good answer available or submit your answer.
January 09, 2007 23:19:44 srinivas RE: what is the diffrence between SCD and INCREMENTAL ... #1

Hi, in an SCD the dimensional data is stored, with no aggregate calculations. For SCDs we use 3 ways: 1. TYPE-1: it maintains current data only. 2. TYPE-2: it maintains current data plus complete history records, in one of 3 ways: 1. flag data 2. effective date range 3. version number mapping. 3. TYPE-3: it maintains current data plus one-time history. In incremental aggregation, aggregate values are stored according to the user requirements.

======================================= Hi, SCD means "slowly changing dimensions". Since a dimension table maintains master data, the column values occasionally change; so dimension tables are called SCD tables and the fields in the SCD tables are called slowly changing dimensions. In order to maintain those changes we follow three types of methods: 1. SCD Type 1: this method maintains only current data. 2. SCD Type 2: this method maintains the whole history of the dimensions; here there are three methods to identify which record is the current one — flag current data, version number mapping, and effective date range. 3. SCD Type 3: this method maintains current data and one-time historical data. Incremental aggregation: some requirements (daily, weekly, every 15 days, quarterly...) need to aggregate the values of certain columns. Here you have to do the same job every time (according to the requirement) and add the aggregate value to the previous aggregate value (the previous run's value) of those columns. This process is called incremental aggregation.

=======================================

315.Informatica - when informatica 7.1.1 version was introduced into market

QUESTION #315 No best answer available. Please pick the good answer available or submit your answer.
March 12, 2007 06:19:29 sreedhark26 Member Since: January 2007 Contribution: 25 #1

RE: when informatica 7.1.1 version was introduced into...

2004

=======================================

316.Informatica - what is the advantages of converting stored procedures into Informatica mappings?

QUESTION #316 No best answer available. Please pick the good answer available or submit your answer.
January 18, 2007 12:41:55 Gayathri84 Member Since: January 2007 Contribution: 2 #1

RE: what is the advantages of converting stored proced...

Stored procedures are hard to maintain and debug. Given the logic, it is easier to create a mapping in Informatica than to write a stored procedure; it is a user-friendly tool, and maintenance is simpler in Informatica. Thanks, Gayathri

======================================= A stored procedure call is made through an ODBC connection over a network (sometimes the Informatica server resides on the same box as the database); since there is an overhead in making the call, it is inherently slower.

=======================================

317.Informatica - How a LOOKUP is passive?

QUESTION #317 No best answer available. Please pick the good answer available or submit your answer.
January 19, 2007 15:54:06 monicageller Member Since: January 2007 Contribution: 3 #1

RE: How a LOOKUP is passive?

Hi, an unconnected lookup is used for updating slowly changing dimensions, so it is used to determine whether the rows are already in the target or not, but it doesn't change the number of rows; so it is passive. Connected lookup transformations are used to get a related value based on some value, or to perform a calculation; in either case they do not increase the number of rows or columns. In the lookup SQL override property we can add a WHERE statement to the default SQL statement, but that doesn't change the number of rows passing through it; it just reduces the number of rows included in the cache. Cheers, Monica.

======================================= The fact that a failed lookup does not erase the row makes it passive.

======================================= Lookup is used to get a related value and perform calculations; a lookup searches for a value in a relational table, but it doesn't change the row count, so it is passive.

=======================================

318.Informatica - How to partition the Session? (Interview question of CTS)

QUESTION #318 No best answer available. Please pick the good answer available or submit your answer.
January 23, 2007 15:53:48 kirangvr Member Since: January 2007 Contribution: 5 #1

RE: How to partition the Session?(Interview question o...

o Round-Robin: the PowerCenter server distributes rows of data evenly to all partitions (at any valid partition point, e.g. a Filter).
o Hash keys: distributes rows to the partitions by group (at Rank, Sorter, Joiner, and unsorted Aggregator).
o Key range: distributes rows based on a port or set of ports that you specify as the partition key.
o Pass-through: processes data without redistributing rows among partitions (at source and target).

Hope it helps you.

======================================= When you create or edit a session you can change the partitioning information for each pipeline in a mapping. If the mapping contains multiple pipelines, you can specify multiple partitions in some pipelines and single partitions in others. You update partitioning information using the Partitions view on the Mapping tab in the session properties. You can configure the following information in the Partitions view on the Mapping tab: add and delete partition points; specify the partition type at each partition point; add a partition key and key ranges for certain partition types; enter a description for each partition.

======================================= By default, when we create the session, the Workflow Manager creates pass-through partition points at Source Qualifier transformations and target instances.

=======================================

319.Informatica - which one is better performance wise joiner or lookup

QUESTION #319 No best answer available. Please pick the good answer available or submit your answer.
January 25, 2007 13:41:43 anu RE: which one is better performance wise joiner or loo... #1

Are you looking up a flat file or a database table? Generally a sorted joiner is more effective on flat files than a lookup, because the sorted joiner uses a merge join and caches fewer rows, while a lookup always caches the whole file. If the file is not sorted they can be comparable. Lookups into a database table can be effective if the database can return sorted data fast and the amount of data is small, because the lookup can create the whole cache in memory. If the database responds slowly or a big amount of data is processed, lookup cache initialization can be really slow (the lookup waits for the database and stores cached data on disk). Then it can be better to use a sorted joiner, which sends data to the output as it reads it from the input.

=======================================

320.Informatica - what is associated port in look up.

QUESTION #320 No best answer available. Please pick the good answer available or submit your answer.
February 01, 2007 16:08:08 kirangvr Member Since: January 2007 Contribution: 5 #1

RE: what is associated port in look up...

When you use a dynamic lookup cache you must associate each lookup/output port with an input/output port or a sequence ID. The PowerCenter Server uses the data in the associated port to insert or update rows in the lookup cache. The Designer associates the input/output ports with the lookup/output ports used in the lookup condition.

======================================= Whenever you are using SCD Type 2 you have to use a dynamic cache; then one port must be specified for updating the cache, and that port is called the associated port.

=======================================

321.Informatica - what is session recovery?

QUESTION #321 No best answer available. Please pick the good answer available or submit your answer.
February 09, 2007 16:16:53 pal RE: what is session recovery? #1

Session recovery is used when you want the session to continue loading from the point where it stopped last time it ran. For example, if the session failed after loading 10,000 records and you want the 10,001st record to be loaded, then you can use session recovery to load the data from 10,001. The recovery option is set in the Informatica server. I have just forgotten where to do recovery; I will tell you next time, or you can search for it.

======================================= If the recover option is not set and 10,000 records are committed, then delete the records from the target table using audit fields like update_date.

======================================= Whenever a session fails we have to follow these steps: if no commit is performed by the Informatica server, then run the session again; if at least one commit is performed by the session, recover the session; if recovery is not possible, truncate the target table and load again.

=======================================


322.Informatica - what are the steps follow in performance tuning ?

QUESTION #322 No best answer available. Please pick the good answer available or submit your answer.
February 09, 2007 16:12:47 pal RE: what are the steps follow in performance tuning ? #1

Steps for performance tuning: 1) target bottleneck 2) source bottleneck 3) mapping bottleneck etc. As developers we can clear the bottlenecks at the mapping level and at the session level, for example: 1) removing transformation errors; 2) filtering the records at the earliest; 3) using sorted data before an aggregator; 4) using fewer of those transformations which use a cache; 5) using an external loader like SQL*Loader etc. to load the data faster; 6) fewer conversions like numeric to char and char to numeric etc.; 7) writing an override instead of using a filter etc.; 8) increasing the network packet size; 9) having all the source systems on the server machine to make it run faster, etc. pal.

======================================= Hai Friends, performance tuning means techniques for improving the performance: 1. Identify the bottlenecks (issues that reduce performance). 2. Configure the bottlenecks. The hierarchy we have to follow in performance tuning is: a) Target b) Source c) Mapping d) Session e) System. If anything is wrong in this, please tell me, because I am still in the learning stage.

======================================= Hi Friends, please clarify for me what is meant by "bottleneck". For example:



1) target bottleneck 2) source bottleneck 3) mapping bottleneck — I am not aware what this "bottleneck" means. Thanks for your help in advance. Thanks, Thana

======================================= Bottleneck means drawbacks or problems. ======================================= Target commit intervals and source-based commit intervals. =======================================

323.Informatica - How to use incremental aggregation in real time?
QUESTION #323 No best answer available. Please pick the good answer available or submit your answer.
February 21, 2007 22:48:09 Hanu Ch Rao RE: How to use incremental aggregation in real time? #1

The first time you run a session with incremental aggregation enabled, the server processes the entire source and saves the aggregate cache; on subsequent runs it processes only the new source rows and applies them to the saved cache. ======================================= 

324.Informatica - Which transformation replaces the look up transformation?

QUESTION #324 No best answer available. Please pick the good answer available or submit your answer.

February 21, 2007 08:11:18 hegde RE: Which transformation replaces the look up transfor... #1

Can you please clarify your question?

======================================= Is there any transformation that could do the function of a lookup transformation, or is there any transformation that we can use instead of a lookup transformation?

======================================= The question is misleading... do you mean the lookup transformation will become obsolete since a new transformation was created to do the same thing in a better way? Which version?

======================================= Yes, a Joiner with an outer join and a Stored Procedure can replace a look-up transformation.

======================================= You can use a Joiner transformation instead of a lookup: a lookup is only for relational tables which are not sources in your Source Analyzer, but by using a joiner we can join relational and flat-file sources, and relational sources on different platforms.

======================================= Vicky, can you please validate your answer with some example?

======================================= Hi, actually with a lookup transformation we are looking up the required fields in the target and comparing them with the source ports. Without a lookup we can do this with a Joiner using a master outer join, which gives the matched rows of both sources and the unmatched rows from the master table; so you can get all the changes. If any doubt, reply to me and I will give another general example.

======================================= Please don't mislead......... A master outer join keeps all rows of data from the detail source and the matching rows from the master source. It discards the unmatched rows from the master source.

=======================================


I hope the Joiner transformation is only for joining heterogeneous sources, e.g. when one source is from a flat file and another one is from a relational source. I have a question: how does the Joiner transformation override the Lookup, when both transformations perform different actions?

Cheers Thana

======================================= A lookup is nothing but an outer join. So a lookup transformation can easily be replaced by a joiner in any tool.

=======================================

325.Informatica - What is Dataware house key?

QUESTION #325 No best answer available. Please pick the good answer available or submit your answer.
February 21, 2007 23:26:44 Ravichandra RE: What is Dataware house key? #1

A data warehouse key is a warehouse key or surrogate key, i.e. generated by a Sequence Generator; this key is used in loading data into the fact table from the dimensions.

======================================= Is it really called a data warehouse key? Is there such a term?

======================================= I never heard that this type of key exists in DWH terminology!!!!!!!!!

======================================= How is the data warehouse key called a surrogate key?

======================================= There is a data warehouse key........ please go through the Kimball book. Every data warehouse key should be a surrogate key, because the data warehouse DBA must have the flexibility to respond to changing descriptions and abnormal conditions in the raw data. =======================================

A surrogate key is known as the data warehouse key. The surrogate key is used as a unique key in the warehouse, where duplicate business keys may exist due to Type 2 changes. =======================================

326.Informatica - What is inline view?

QUESTION #326 No best answer available. Please pick the good answer available or submit your answer.
February 21, 2007 22:40:51 Hanu Ch Rao RE: What is inline view? #1

The inline view is a construct in Oracle SQL where you can place a query in the SQL FROM clause, just as if the query was a table name. A common use for inline views in Oracle SQL is to simplify complex queries by removing join operations and condensing several separate queries into a single query.

======================================= Used in Oracle to simplify queries, e.g. a top-N query: select ename from (select ename from emp order by sal desc) where rownum < 3 =======================================

327.Informatica - What is the exact difference between joiner and lookup transformation

QUESTION #327 No best answer available. Please pick the good answer available or submit your answer.
February 20, 2007 12:21:06 pal RE: What is the exact difference between joiner and lo... #1


A joiner is used to join data from different sources, and a lookup is used to get related values from another table, or to check for updates etc. in the target table. For a lookup to work the table need not exist in the mapping, but for a joiner to work the table has to exist in the mapping. pal.

======================================= A lookup may be unconnected, while a joiner may not.

======================================= The lookup table does not have to participate in the mapping, and a lookup can do a non-equi join; the joiner table must participate in the mapping, and a joiner does only equi joins (normal or outer). =======================================

328.Informatica - Explain about scheduling real time in informatica

QUESTION #328 No best answer available. Please pick the good answer available or submit your answer.
March 01, 2007 06:45:31 sanghala Member Since: April 2006 Contribution: 111 #1

RE: Explain about scheduling real time in informatica

Scheduling of Informatica jobs can be done in the following ways:
- Informatica Workflow Manager
- Cron in Unix
- the Opcon scheduler

=======================================

329.Informatica - How do you handle two sessions in Informatica
QUESTION #329 Can anyone tell me the option between the two sessions: if the previous session executes successfully, then run the next session...

No best answer available. Please pick the good answer available or submit your answer. February 28, 2007 02:09:40 Divya Ramanathan RE: How do you handle two sessions in Informatica #1

======================================= You can handle two sessions by using a link condition ($PrevTaskStatus = SUCCEEDED) or you can have a Decision task between them. Since it is only one session dependent on the other, I feel a link condition is enough. ======================================= By giving a link condition like $PrevTaskStatus = SUCCEEDED

======================================= Where exactly do we need to use this link condition ($PrevTaskStatus = SUCCEEDED)? ======================================= You can drag and drop more than one session in a workflow, and the linking can be different: sequential linking or concurrent linking. In sequential linking you can run whichever session you require, or the workflow runs all the sessions sequentially. In concurrent linking you can't run just any session you want.

=======================================

330.Informatica - How do you change column to row in Informatica

QUESTION #330 No best answer available. Please pick the good answer available or submit your answer.
March 02, 2007 11:38:19 #1


Hanu Ch Rao RE: How do you change column to row in Informat...

Hi, we can achieve this with the Normalizer transformation.

======================================= First you can ask what type of data needs changing from columns to rows in Informatica. Next, we can use an Expression and an Aggregator transformation; the Aggregator uses group by for the duplicate rows.

=======================================

331.Informatica - write a query to retrieve the latest records from the table sorted by version(scd).

QUESTION #331 No best answer available. Please pick the good answer available or submit your answer.
February 27, 2007 06:05:37 sunil RE: write a query to retrieve the latest records from ... #1

You can write a query using an inline view clause: you compare the previous version to the new highest version, and then you can get your result.

======================================= Hi Sunil, can you please explain your answer somewhat in detail?

======================================= Hi, assume you put the surrogate key in the target (Dept table) like p_key, along with version, dno and loc fields. Then:

select a.p_key, a.dno, a.loc, a.version
from t_dept a
where a.version = (select max(b.version) from t_dept b where a.dno = b.dno);

This is the query; if you write it in the lookup it retrieves the latest (max) version from the target in the lookup, and in this way performance increases.

======================================= select * from (select Acct.*, rank() over (partition by ch_key_id order by version desc) rnk from Acct) where rnk = 1

======================================= select business_key, max(version) from tablename group by business_key

=======================================

332.Informatica - Explain about Informatica server process that how it works relates to mapping variables?

QUESTION #332 No best answer available. Please pick the good answer available or submit your answer.
March 09, 2007 09:52:01 hemasundarnalco Member Since: December 2006 Contribution: 2 #1

RE: Explain about Informatica server process that how ...


Informatica primarily uses the Load Manager and the Data Transformation Manager (DTM) to perform extraction, transformation and loading. The Load Manager reads the parameters and variables related to the session, mapping and server, and passes the mapping parameter and variable information to the DTM. The DTM uses this information to perform the data movement from source to target. ======================================= The PowerCenter Server holds two different values for a mapping variable during a session run:
- Start value of a mapping variable
- Current value of a mapping variable

Start Value: The start value is the value of the variable at the start of the session. The start value could be a value defined in the parameter file for the variable, a value saved in the repository from the previous run of the session, a user-defined initial value for the variable, or the default value based on the variable datatype. The PowerCenter Server looks for the start value in the following order: 1. Value in parameter file 2. Value saved in the repository 3. Initial value 4. Default value

Current Value The current value is the value of the variable as the session progresses. When a session starts the current value of a variable is the same as the start value. As the session progresses the PowerCenter Server calculates the current value using a variable function that you set for the variable. Unlike the start value of a mapping variable the current value can change as the PowerCenter Server evaluates the current value of a variable as each row passes through the mapping.

======================================= First the Load Manager starts the session; it performs verifications and validations of variables and manages post-session tasks such as mail. Then it creates the DTM process. This DTM in turn creates a master thread, which creates the remaining threads: the master thread creates the read thread,

the write thread, the transformation thread, pre- and post-session threads etc. Finally the DTM hands over to the Load Manager after writing into the target.

=======================================

333.Informatica - What are the types of loading in Informatica?

QUESTION #333 No best answer available. Please pick the good answer available or submit your answer.
March 02, 2007 11:29:17 Hanu Ch Rao RE: What are the types of loading in Informatica? #1

Hi, in Informatica there are mainly 2 types of loading: 1. Normal 2. Bulk (and you can say one more: incremental loading). Normal means it loads record by record and writes logs for that; it takes time. Bulk load means it loads a number of records at a time to the target; it ignores logs and ignores the tracing level, so it takes less time to load data to the target.

======================================= 2 types of loading: 1. Normal 2. Bulk. Normal loading creates a database log; it is very slow. Bulk loading bypasses the database log; it is very fast, but it disables the constraints. =======================================

Hanu, I agree with you. You told about incremental loading; can you please let me know in detail about this type of loading? Thanks in advance..........

======================================= Loadings are of 3 types: 1. One-time data loading 2. Complete data loading 3. Incremental loading.

======================================= Two types — Normal: data retrieval is very easy; it creates an index. Bulk: data retrieval is not possible; it stores the data in an improper way. =======================================

334.Informatica - What are the steps involved in to get source from OLTP system to staging area

QUESTION #334 No best answer available. Please pick the good answer available or submit your answer.
March 13, 2007 16:56:07 reddy RE: What are the steps involved in to get source from ... #1


Data profiling and data cleansing are used to verify the data types.

======================================= Hi Reddy, could you tell me what you mean by DATA PROFILE? Thanks in advance..

======================================= Go to the Source Analyzer and import the source databases; go to the Warehouse Designer and import the databases (target definitions), or else create them using generate SQL; go to the Mapping Designer, drag and drop the source and target definitions, link the ports properly, and save to the repository. Thank you.

=======================================

335.Informatica - What is use of event waiter?

QUESTION #335 No best answer available. Please pick the good answer available or submit your answer.
March 05, 2007 12:38:14 sreedhark26 RE: What is use of event waiter? Member Since: January 2007 Contribution: 25 #1


Event-Wait task: the Event-Wait task waits for an event to occur. Once the event triggers, the PowerCenter Server continues executing the rest of the workflow.

======================================= Event wait is of two types: 1) pre-defined event: this type waits for an indicator file to trigger it; 2) user-defined event: this type waits for an Event-Raise to trigger the event. Thanks, Ravinder.

======================================= The Event-Wait task is a file watcher: whenever a trigger file is touched or created, this task will kick off the rest of the sessions in the batch to execute. I have used only the user-defined Event-Wait task.

======================================= Event wait: will hold the workflow until it gets another instruction or a delay mentioned by the user. Cheers, Sithu sithusithu@hotmail.com =======================================

336.Informatica - which transformation can perform the non equi join?

QUESTION #336 No best answer available. Please pick the good answer available or submit your answer.
March 12, 2007 01:47:43 kasireddy RE: which transformation can perform the non equi join... #1


Hi, the Lookup transformation.

======================================= Here, lookup does not support outer joins but supports non-equi joins; the joiner supports outer joins but not non-equi joins.

======================================= Only a lookup can do a non-equi join; the joiner does only normal, detail outer and master outer joins.

======================================= It is the Lookup. Cheers, Sithu sithusithu@hotmail.com =======================================
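The kind of non-equi condition a lookup can express, shown as the equivalent SQL (a sketch with illustrative names):

-- match each transaction to the price valid on its date:
-- the lookup condition would be TXN_DATE >= EFF_START AND TXN_DATE < EFF_END
SELECT t.TXN_ID, p.PRICE
FROM   TXN_STG t, PRICE_LIST p
WHERE  t.PROD_ID  = p.PROD_ID
AND    t.TXN_DATE >= p.EFF_START
AND    t.TXN_DATE <  p.EFF_END;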

337.Informatica - Which objects cannot be used in a mapplet transformation

QUESTION #337 No best answer available. Please pick the good answer available or submit your answer.
March 07, 2007 08:35:11 tirumalesh RE: Which objects cannot be used in a mapplet transfor... #1


A non-reusable Sequence Generator cannot be used in a mapplet.

======================================= You cannot include the following objects in a mapplet:

- Normalizer transformations
- COBOL sources
- XML Source Qualifier transformations
- XML sources
- Target definitions
- Pre- and post-session stored procedures
- Other mapplets

======================================= When you add transformations to a mapplet, keep the following restrictions in mind: if you use a Sequence Generator transformation, you must use a reusable Sequence Generator transformation; if you use a Stored Procedure transformation, you must configure the Stored Procedure Type to be Normal; you cannot include PowerMart 3.5-style LOOKUP functions in a mapplet; and you cannot include the following objects in a mapplet: Normalizer transformations, COBOL sources, XML Source Qualifier transformations, XML sources, target definitions, or other mapplets. =======================================

We cannot use the following objects/transformations in a mapplet:

- Joiner transformation
- Normalizer transformations
- COBOL sources
- XML Source Qualifier transformations
- XML sources
- Target definitions


- Pre- and post-session stored procedures
- Other mapplets

=======================================

338.Informatica - How to do aggregation with out using AGGREGATOR Transformation?

QUESTION #338 No best answer available. Please pick the good answer available or submit your answer.
March 09, 2007 09:37:56 hemasundarnalco Member Since: December 2006 Contribution: 2 #1

RE: How to do aggregation with out using AGGREGAROR Tr...

Write a SQL query to perform the aggregation in the Source Qualifier transformation.

======================================= Overwrite the SQL query at the Source Qualifier transformation.

=======================================
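A sketch of such an override (the table and columns are illustrative; note that the select list must match the Source Qualifier ports in number and order):

-- aggregation pushed into the Source Qualifier SQL override,
-- replacing an Aggregator transformation
SELECT   DEPTNO,
         SUM(SAL)  AS TOTAL_SAL,
         COUNT(*)  AS EMP_CNT
FROM     EMP
GROUP BY DEPTNO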

339.Informatica - how do you test mapping and what is associate port?

QUESTION #339 No best answer available. Please pick the good answer available or submit your answer.
March 26, 2007 11:00:14 sreedhark26 Member Since: January 2007 Contribution: 25 #1

RE: how do you test mapping and what is associate port...

Hi, you can test a mapping in the Mapping Designer by using the Debugger; in the Debugger you can test the first instance and the next instance. The associated port is an output port. sreedhar

======================================= By specifying the number of test load rows in the session properties.

=======================================

340.Informatica - Why can't we use normalizer transformation in mapplet?

QUESTION #340 No best answer available. Please pick the good answer available or submit your answer.
March 26, 2007 10:56:14 sreedhark26 Member Since: January 2007 Contribution: 25 #1

RE: Why can't we use normalizer transformation in mapp...

Hi, the Normalizer transformation is used for COBOL sources; when you take a COBOL source into the Source Analyzer it automatically displays a Normalizer transformation. You cannot use it in a mapplet. sreedhar

=======================================

341.Informatica - Strategy Transformation

QUESTION #341 By using the Update Strategy Transformation we maintain historical data using Type 2 and Type 3. Of these two, which is better to use, and why?

No best answer available. Please pick the good answer available or submit your answer. March 26, 2007 09:12:45 #1

sai RE: Strategy Transformation

Using Type 2 you can maintain the total historical data along with the current data. Using Type 3 you can maintain only one-time historical data and the current data. Based upon the requirement we have to choose the best suitable one.

=======================================

342.Informatica - Source Qualifier in Informatica

QUESTION #342 What is the technical reason for having the Source Qualifier in Informatica? Can a mapping be implemented without it? (Please don't mention the functionality of the SQ, but the main reason why a mapping can't do without it.)

No best answer available. Please pick the good answer available or submit your answer. March 27, 2007 06:32:28 chowdary RE: Source Qualifier in Informatica #1

In Informatica the Source Qualifier reads data from the sources; passing data to any other transformation is not allowed without this SQ transformation. For reading data from sources, SQL is mandatory, and the SQ reads the data from the sources when the Informatica server runs the session. It has got other qualities also: it can be used as a filter, it can be used as a joiner if the data is coming from the same source, and it can be used as a sorter and to select distinct values.

======================================= In Informatica the Source Qualifier acts as a staging area, and it creates its own data types, which are related to the source data types.

=======================================

343.Informatica - what is the difference between mapplet and reusable Transformation?

QUESTION #343 No best answer available. Please pick the good answer available or submit your answer.
March 30, 2007 02:38:57 veera_kk Member Since: March 2007 Contribution: 3 #1

RE: what is the difference between mapplet and reusabl...

A set of reusable transformations is called a mapplet, whereas a single reusable transformation is called a reusable transformation.

=======================================

344.Informatica - What is the difference between SQL Overriding in Source qualifier and Lookup transformation?

QUESTION #344 No best answer available. Please pick the good answer available or submit your answer.
April 08, 2007 11:10:38 ggk.krishna Member Since: February 2007 Contribution: 12 #1

RE: What is the difference between SQL Overriding in ...

Hi, 1. In a LOOKUP SQL override, if you add or subtract ports from the SELECT statement, the session fails. 2. In a LOOKUP, if you override the ORDER BY statement, the session fails if the ORDER BY statement does not contain the condition ports in the same order they appear in the lookup condition.

======================================= You can use a SQL override in a lookup if you have: 1. more than one lookup table, or 2. a WHERE condition to reduce the records in the cache.

======================================= If you write a query in the Source Qualifier (overriding it using the SQL editor) and press Validate, you can recognise whether the query written is right or wrong. But in a lookup override, if the query is wrong and you press the Validate button, you cannot recognise it; only when you run the session will you get an error message, and the session fails.

=======================================
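A sketch of a lookup SQL override that reduces the cache with a WHERE clause (the names are illustrative; the condition ports keep their order, and a trailing "--" is commonly used to comment out the ORDER BY the server appends):

SELECT CUST_KEY, CUST_ID, EFF_DATE
FROM   DIM_CUSTOMER
WHERE  CURRENT_FLAG = 'Y'   -- cache only current rows
ORDER BY CUST_ID --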

345.Informatica - How can you call trigger in stored procedure transformation

QUESTION #345 No best answer available. Please pick the good answer available or submit your answer.
May 07, 2007 22:54:43 hanug Member Since: June 2006 Contribution: 24 #1

RE: How can you call trigger in stored procedure trans...

Hi: a trigger cannot be called from a stored procedure. A trigger executes implicitly when you perform a DML operation on the table or view (INSTEAD OF triggers in the case of views). You can get the difference between a trigger and a stored procedure anywhere in the docs. Hanu.

=======================================

346.Informatica - How to assign a work flow to multiple servers?

QUESTION #346 I have multiple servers. I want to assign a workflow to multiple servers.

No best answer available. Please pick the good answer available or submit your answer. October 19, 2007 15:08:36 krishna RE: How to assign a work flow to multiple servers? #1

The Informatica server will use the Load Manager process to run the workflow; the Load Manager will assign the workflow process to the multiple servers.

=======================================

347.Informatica - what types of Errors occur when you run a session. can you describe them with real time example

QUESTION #347 No best answer available. Please pick the good answer available or submit your answer.
January 24, 2008 19:12:31 kasisarath Member Since: January 2008 Contribution: 6 #1

RE: what types of Errors occur when you run a session...

There are several errors you will get; a couple of them are: 1. if Informatica failed to connect to the database; 2. if the source file does not exist in the location; 3. if parameters are not initialized with the parameter file; 4. incompatible piped data types, etc.

=======================================

348.Informatica - how do you add and delete header, footer records from flat file during load to oracle?

QUESTION #348 No best answer available. Please pick the good answer available or submit your answer.
September 21, 2007 10:10:41 Abhishek RE: how do you add and delete header, footer records ... #1

We can add header and footer records in two ways. 1) Within the Informatica session we can sequence the data such that the header flows in first and the footer flows in last; this only holds true when the header and footer have the same format as the detail record. 2) As soon as the session that generates the detail record file finishes, we can call a Unix script or Unix command through a Command task, which will concatenate the header file, detail file and footer file and generate the required file.

=======================================

349.Informatica - Can we update target table without using update strategy transformation? why?

QUESTION #349 No best answer available. Please pick the good answer available or submit your answer.
May 22, 2007 07:00:56 rahul RE: Can we update target table without using update s... #1

Yes, we can update the target table by using the session properties; there are some options in the session properties.

======================================= Using :TU — the target update override.

=======================================
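A sketch of the :TU target update override mentioned above (the target table and ports are illustrative; :TU references the ports of the target transformation):

UPDATE DIM_CUSTOMER
SET    CUST_NAME = :TU.CUST_NAME,
       CITY      = :TU.CITY
WHERE  CUST_ID   = :TU.CUST_ID

With this set on the target definition, the session can be configured to treat source rows as Update, so no Update Strategy transformation is needed in the mapping.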

349. Informatica - Can we update a target table without using an Update Strategy transformation? Why? QUESTION #349
=======================================
Yes, we can update the target table by using the session properties. There are options in the session properties for this (for example, "Treat source rows as").
=======================================
Using the :TU target update override.
=======================================
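A minimal sketch of such a target update override (the target table T_EMP and its ports are hypothetical; :TU references the ports of the target definition):

    UPDATE T_EMP
    SET    EMP_NAME = :TU.EMP_NAME,
           SALARY   = :TU.SALARY
    WHERE  EMP_ID   = :TU.EMP_ID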

350. Informatica - How to create a slowly changing dimension in Informatica? QUESTION #350
=======================================
We can create them manually, or use the Slowly Changing Dimension wizard to avoid the hassles. Use the Slowly Changing Dimension wizard and make the necessary changes as per your logic. Hanu.
=======================================
351. Informatica - What are the general reasons for session failure with a lookup having a dynamic cache? QUESTION #351
=======================================
Hi, if you are using a dynamic lookup and it tries to return more than one value, the session fails.
=======================================
352. Informatica - What is the difference between a Source Qualifier transformation and a Filter transformation? QUESTION #352
=======================================
There is no SQL override in the FILTER, whereas in the Source Qualifier we have a SQL override; in the SQ transformation we also have options like SELECT DISTINCT, JOIN, FILTER CONDITIONS, SORTED PORTS, etc.
=======================================
By using a Source Qualifier transformation we can filter out records only from relational sources, but by using a Filter transformation we can filter out records from any source. In a Filter transformation we can use any expression which evaluates to TRUE or FALSE as the filter condition. Simply put, before a Filter transformation the data from the source system may or may not already have been processed (by an Expression transformation, etc.); in the Filter transformation we filter those records which we need to update or process further. The same cannot be done using a Source Qualifier.
=======================================
A Source Qualifier transformation is the starting point in the mapping, where we bring in the incoming data; the source data is extracted from this transformation after connecting to the source database. A Filter transformation is placed in the mapping pipeline in order to pass on the records that satisfy the specified conditions. Of course, the same purpose can be served by the Source Qualifier transformation if it is extracting data from a relational source, whereas if the data is extracted from a flat file we cannot do it using the Source Qualifier.
=======================================

353. Informatica - How do you recover a session or folder if you accidentally dropped them? QUESTION #353
=======================================
You can find your backup and restore from the backup. If you don't have a backup, you have lost everything; you can't get it back. That's why we should always take a backup of the objects that we create. Hanu.
=======================================

354. Informatica - How do you automatically execute a batch or session? QUESTION #354
=======================================
You can use scripting (either UNIX shell or Windows DOS scripting) and then schedule the script, as in the sketch below. Hanu.
=======================================
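A minimal sketch of that approach on a UNIX host; the service, domain, folder, workflow name and credentials are hypothetical, and pmcmd flags vary by PowerCenter version:

    # crontab entry: start the load every day at 02:00
    0 2 * * * /opt/etl/scripts/run_nightly_load.sh >> /opt/etl/logs/nightly.log 2>&1

    # /opt/etl/scripts/run_nightly_load.sh
    #!/bin/sh
    pmcmd startworkflow -sv INT_SVC -d DOMAIN_DEV -u etl_user -p etl_pwd \
         -f FOLDER_SALES wf_nightly_load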

355. Informatica - What is the best way to modify a mapping if the target table name is changed? QUESTION #355
=======================================
There is no single best way as such, but you have to incorporate the following steps:
1. Change the name in the Mapping Designer.
2. Reset the target connections.
3. Refresh the mapping with a right-click on the session.
4. Save.
=======================================
356. Informatica - What is a homogeneous transformation? QUESTION #356
=======================================
Pulling the data from the same type of sources using the same database link is called homogeneous. The key phrase is "same database link" (that is, the system DSN we use).
=======================================
357. Informatica - If a mapping is running slow, what steps will you take to correct it? QUESTION #357
=======================================
Provide optimizer hints to Informatica, such as forcing Informatica to use indexes, synonyms, etc. To see how the database executes the plan, the EXPLAIN PLAN command in Oracle helps you (see the sketch below). The best technique is to find which transformation is making the mapping slow, build a query in the database with that logic, and see how the database executes the plan. For more information try the help topics in the Designer; in Help, look for optimizing hints.
=======================================
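A minimal sketch of the Oracle side of that technique, where the query stands in for the logic of the suspect transformation (table and column names are hypothetical):

    EXPLAIN PLAN FOR
      SELECT e.EMP_ID, d.DEPT_NAME
      FROM   EMP e
      JOIN   DEPT d ON e.DEPT_ID = d.DEPT_ID
      WHERE  e.STATUS = 'ACTIVE';

    -- display the plan that was just produced
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);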

358. Informatica - How can you join two tables without using Joiner and SQL override transformations? QUESTION #358
=======================================
You can use the Lookup transformation to perform the join. Here you may not get the same results as a SQL OVERRIDE (data coming from the same source) or a Joiner (same source or different sources), because a lookup takes a single record in case of a multi-match. If one of the tables contains a single record, you don't need a SQL OVERRIDE or Joiner to join records: let it perform a Cartesian product. This way you don't need to use either (SQL override or Joiner). Hanu.
=======================================
You can join two tables within the same database by using a lookup query override.
=======================================
If the sources are homogeneous we use the Source Qualifier; if the sources have the same structure we can use a Union transformation.
=======================================
359. Informatica - What does the Check-In and Check-Out option refer to in the Mapping Designer? QUESTION #359
=======================================
Check-In and Check-Out refer to versioning your mapping. It is like using VSS or CVS. When you right-click your mapping you have an option called Versioning, if you have that facility enabled. It is a way of maintaining the changes you have made.
=======================================

360. Informatica - Where and why do we use the Joiner cache? QUESTION #360
=======================================
Hi, the Joiner cache is used in the Joiner transformation to improve performance. While using the Joiner cache, the Informatica server first reads the data from the master source and builds the index and data caches on the master rows. After building the cache, the Joiner transformation reads records from the detail source and performs the joins.
=======================================
361. Informatica - Where do the records go which do not satisfy the condition in a Filter transformation? QUESTION #361
=======================================
They go to the default group. If you connect the default group to an output, PowerCenter processes the data; otherwise it doesn't process the default group.
=======================================
There is no default group in the Filter transformation. The rows which do not satisfy the filter condition are discarded; they do not appear in the session log file or reject files.
=======================================

362. Informatica - How can we access MAINFRAME tables in Informatica as a source? QUESTION #362
Example: suppose a table EMP is on a MAINFRAME; how can we access this table as a SOURCE TABLE in Informatica?
=======================================
Use Informatica PowerConnect to connect to external systems like mainframes and import the source tables.
=======================================
Use the Normalizer transformation to take in the mainframe (COBOL) sources.
=======================================

363. Informatica - How to run a workflow without using the GUI, i.e. Workflow Manager, Workflow Monitor and pmcmd? QUESTION #363
=======================================
pmcmd is not a GUI. It is a command you can use within a Unix script to run the workflow.
=======================================
Unless the job is scheduled, you cannot manually run a workflow without using a GUI.
=======================================
364. Informatica - How to implement the de-normalization concept in Informatica mappings? QUESTION #364
=======================================
Use the Normalizer. This transformation is used to normalize data.
=======================================

365. Informatica - What are the data cleansing tools used in the DWH? What are the data profiling tools used for the DWH? QUESTION #365
=======================================
Data cleansing tool: Trillium. Profiling tool: Informatica Profiler.
=======================================
366. Informatica - How do you use the Normalizer to convert columns into rows? QUESTION #366
=======================================
Using the Normalizer: in the Normalizer properties we can add columns, and for a column we can specify the occurs level, which converts the columns into rows.
=======================================
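For intuition, a Normalizer with an occurs level of 4 on quarterly columns has the same effect as this SQL sketch (a hypothetical SALES source with one column per quarter):

    SELECT ACCT_ID, 1 AS QTR, Q1_SALES AS SALES FROM SALES
    UNION ALL
    SELECT ACCT_ID, 2, Q2_SALES FROM SALES
    UNION ALL
    SELECT ACCT_ID, 3, Q3_SALES FROM SALES
    UNION ALL
    SELECT ACCT_ID, 4, Q4_SALES FROM SALES
    -- one input row with four quarter columns becomes four output rows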

367. Informatica - How to use the shared cache feature in a Lookup transformation? QUESTION #367
=======================================
If you are using a single lookup and you are using it only for reading the data, you can use a static cache.
=======================================
Instead of creating multiple cache files, the Informatica server creates only one cache file for all the lookups within the mapping for which the shared cache option is selected.
=======================================
368. Informatica - What is the repository size? What are its minimum and maximum sizes? QUESTION #368
=======================================
10 GB.
=======================================

369. Informatica - What is the way to send only duplicate records to the target? QUESTION #369
=======================================
You can use the following query in the SQ to get only the duplicate records into the target:
SELECT COL_1, COL_2, ..., COL_N FROM SOURCE GROUP BY COL_1, COL_2, ..., COL_N HAVING COUNT(*) > 1
Best regards, Samir Desai.
=======================================
If you take, for example, an EMP table having duplicate records, the query is:
SELECT * FROM EMP WHERE EMPNO IN (SELECT EMPNO FROM EMP GROUP BY EMPNO HAVING COUNT(*) > 1)
Sanjeeva Reddy.
=======================================
370. Informatica - What are commits and commit intervals? QUESTION #370
=======================================
The commit interval is the interval at which the Informatica server commits the data into the target.
=======================================

371. Informatica - Explain the session recovery process? QUESTION #371
=======================================
You have 3 cases in session recovery:
- If the Informatica server performed no commit, run the session again.
- If at least one commit was performed, perform recovery.
- If performing recovery is not possible, truncate the target table and run the session again.
Rekha.
=======================================
Hi, when the Informatica server starts a recovery session, it reads the OPB_SRVR_RECOVERY table and notes the row ID of the last row committed to the target database. When it starts the recovery process, it continues from the next row ID. For session recovery to take place, at least one commit must have been executed.
=======================================

372. Informatica - Explain pmcmd? QUESTION #372
=======================================
pmcmd performs the following tasks:
1. Start and stop sessions and batches.
2. Process session recovery.
3. Stop the Informatica server.
4. Check whether the Informatica server is working or not.
=======================================
pmcmd means the PowerMart command prompt, which is used to perform tasks from the command prompt and not from the Informatica GUI windows.
=======================================
It is a command-line program. It performs the following tasks: start and stop sessions and batches; stop the Informatica server; check whether the server is working or not.
=======================================
Hi, PMCMD is a command-line utility program used to communicate with the Informatica server. PMCMD performs the following tasks:
1) Start and stop batches and sessions.
2) Recover sessions.
3) Stop the Informatica server.
4) Schedule the sessions by shell scripting.
5) Schedule the sessions by using operating-system scheduling tools like CRON.
=======================================
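A couple of illustrative invocations, as a sketch only: subcommand names and flags differ across product versions (older releases used session/batch commands, newer PowerCenter releases are workflow-oriented), and the service, domain, folder and credentials here are hypothetical. A startworkflow call was already sketched under question #354.

    pmcmd pingservice  -sv INT_SVC -d DOMAIN_DEV              # is the server up?
    pmcmd stopworkflow -sv INT_SVC -d DOMAIN_DEV -u etl_user \
          -p etl_pwd -f FOLDER_SALES wf_nightly_load          # stop a running workflow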

373. Informatica - What does the first column of the bad file (rejected rows) indicate? Explain. QUESTION #373
=======================================
The first column of the bad file indicates the row indicator, and the second column indicates the column indicator. The row indicator tells the writer what to do with the row of wrong data:

  Row indicator   Meaning   Rejected by
  0               Insert    target/writer
  1               Update    target/writer
  2               Delete    target/writer
  3               Reject    writer

If the row indicator is 3, the writer rejected the row because the Update Strategy expression marked it for reject.
=======================================

374. Informatica - What is the size of a data mart? QUESTION #374
=======================================
A data mart is around 1 GB; it is based on your project.
=======================================
Hi, a data mart is a part of the data warehouse. For example, in an organization we can manage employee personal information as one data mart and project information as another data mart; one data warehouse may have any number of data marts. The size of the data mart depends on your business needs; it varies from business to business. Sometimes an OLTP database may act as a data mart for the warehouse. Cheers, Thana.
=======================================
375. Informatica - What is meant by a named cache? In what situation can we use it? QUESTION #375
=======================================
By default there is no name for the cache in a Lookup transformation, and every time you run the session the cache is rebuilt. If you give a name to it, it is called a persistent cache. In this case, the first time you run the session the cache is built, and the same cache is used for any number of runs. This means the cache doesn't have any changes reflected in it even if the lookup source is changed. You can rebuild it by deleting the cache.
=======================================

376. Informatica - How do you define a factless fact table in Informatica? QUESTION #376
=======================================
A fact table without measures is called a factless fact table.
=======================================
A fact table without measures is called a factless fact table; factless fact tables are used to capture date transaction events.
=======================================
377. Informatica - What are the main issues while working with flat files as sources and as targets? QUESTION #377
=======================================
We need to specify the correct path in the session and mention whether the file is 'direct' or 'indirect'. Keep the file in the exact path which you have specified in the session. - Regards, rasmi.
=======================================
1. We have to use transformations for all our requirements.
2. Testing the flat files is a very tedious job.
3. We cannot use a SQL override. Most of the time, erroneous results come when the data file layout is not in sync with the actual file:
(i) Your data file may be fixed-width but the definition is delimited ---> truncated data.
(ii) Your data file as well as the definition is delimited, but you specify a wrong delimiter: (a) a delimiter other than the one present in the actual file, or (b) a delimiter that comes as a character in some field of the file ---> wrong data again (see the example after this list).
(iii) Not specifying the NULL character properly may result in wrong data.
(iv) There are other settings/attributes while creating the file definition with which one should be very careful.

4. If you miss the link to any column of the target, then all the data will be placed in the wrong fields, and the missed column won't exist in the target data file.
There are tremendous challenges which can be overcome by being a bit careful. Please keep adding to this list.
=======================================
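A small illustration of issue (ii)(b), with hypothetical data: in a comma-delimited file, a comma occurring inside a field shifts every following field.

    EMP_ID,EMP_NAME,CITY
    101,John Smith,Boston       <- parses into 3 fields, as intended
    102,Smith, John,Boston      <- the comma inside the name yields 4 fields,
                                   so CITY receives " John" and "Boston" is extra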

378. Informatica - When do you use normal loading and bulk loading? Tell the difference. QUESTION #378
=======================================
If we use SQL*Loader connections then it is better to go for bulk loading, and if we use ODBC connections for the source and target definitions then it is better to go for normal loading. If we use bulk loading then the session performance will be increased, because with bulk loading the data bypasses the database logs, so performance automatically increases.
=======================================
Normal load: it loads the records one by one, and the server writes a log entry for each record, so it takes more time to load the data. Bulk load: it loads a number of records at a time; it does not write any log files or tracing levels, so it takes less time.
=======================================
You would use normal loading when the target table is indexed, and bulk loading when the target table is not indexed. Running a bulk load into an indexed table will cause the session to fail.
=======================================
379. Informatica - What is the key factor in BRD? QUESTION #379
=======================================
Could you please elaborate on what BRD is?
=======================================
380. Informatica - How do you measure slowly changing dimensions using a lookup table? QUESTION #380
=======================================
The lookup table is used to split the data by comparing the source and target data; the data is branched accordingly, whether to update or insert.
=======================================
381. Informatica - Have you implemented a Lookup in your mapping? If yes, give some example. QUESTION #381
=======================================

We do. We have to update or insert a row in the target depending upon the data from the sources. So, in order to split the rows (either to update or to insert into the target table), we use the Lookup transformation with reference to the target table and compare it with the source table, as sketched below.
=======================================
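A minimal sketch of that routing logic (the constants and functions are standard PowerCenter ones, but the lookup port name is hypothetical): a lookup on the target returns the existing key, and an Update Strategy expression flags each row.

    -- Update Strategy expression:
    IIF(ISNULL(LKP_EMP_ID), DD_INSERT, DD_UPDATE)
    -- LKP_EMP_ID is the key returned by the lookup on the target;
    -- NULL means the row does not yet exist, so the row is inserted.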

382. Informatica - Which SDLC suits best for a data warehousing project? QUESTION #382
=======================================
Data warehousing projects are different from traditional OLTP projects. First of all, they are ongoing: a data warehouse project is never "complete". Here, most of the time, the business users would say "give us the data and then we will tell you what we want". So a traditional waterfall model is not the optimal SDLC approach here. The best approach is a phased iteration, where you implement and deliver projects in small manageable chunks (90 days ~ 1 quarter) and keep maturing your data warehouse.
=======================================
383. Informatica - What is the difference between the source definition database and the Source Qualifier? QUESTION #383
=======================================
A source definition database contains the data types that are used in the original database from which the source is extracted, whereas the Source Qualifier is used to convert the source definition data types to the Informatica data types.
=======================================
384. Informatica - What logic will you implement to load data into a fact table from n dimension tables? QUESTION #384
=======================================
We can do this by using the mapping wizard. There are basically two types: 1) the Getting Started wizard, and 2) SCD. The Getting Started wizard is used when there is no need to change the previous data, which is easy to work with; SCD can hold the historical data.
=======================================

385. Informatica - What is a shortcut, and what is the difference between a shortcut and a reusable transformation? QUESTION #385
=======================================
A reusable transformation can only be used within the folder, but a shortcut can be used anywhere in the repository and will point to the actual transformation.
=======================================
To add to this, the shortcut points to objects of a shared folder only.
=======================================
A shortcut is a reference (link) to an object in a shared folder; these are commonly used for sources and targets that are to be shared between different environments or projects. A shortcut is created by assigning 'Shared' status to a folder within the Repository Manager and then dragging objects from this folder into another open folder. This provides a single point of control/reference for the object, so multiple projects don't all have to import sources and targets into their local folders.
A reusable transformation is usually something that is kept local to a folder. An example would be the use of a reusable Sequence Generator for allocating warehouse customer IDs, which would be useful if you were loading customer details from multiple source systems and allocating unique IDs to each new source key. Many mappings could use the same sequence, and the sessions would all draw from the same continuous pool of generated sequence numbers.
=======================================

386. Informatica - In what transformations can mapplets not be used in Informatica? QUESTION #386
=======================================
Mapplets cannot use the following transformations: XML Source Qualifier, Normalizer, and a non-reusable Sequence Generator (a mapplet can use only a reusable Sequence Generator).
=======================================
387. Informatica - Eliminate duplicate records. QUESTION #387
Hi, I have 10000 records in a flat file, of which 100 records are duplicates. We want to eliminate those records. Which is the best method to follow? Regards, Mahesh Reddy.
=======================================
Submitted by: vivek1708
In order to be able to delete those entries, I think you'll have to write SQL queries on the database table using the ROWNUM/ROWID concept (see the sketch below). Or, by using the Sorter and its distinct option, load the unique rows into a temp table, followed by a truncate on the original table and moving the data back to it from the temp table. Hope it helps.
(The above answer was rated as good by: ayappan.a)
=======================================
You can put a Sorter transformation after the Source Qualifier transformation and enable the distinct property in the Sorter transformation's properties. Thanks, kumar.
=======================================
Use an Aggregator on the primary keys.
=======================================
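A sketch of the ROWID approach mentioned above, in Oracle, assuming a staging table EMP_STG where duplicates are defined by EMPNO (both names hypothetical):

    -- keep one arbitrary copy of each EMPNO and delete the rest
    DELETE FROM EMP_STG
    WHERE  ROWID NOT IN (SELECT MIN(ROWID)
                         FROM   EMP_STG
                         GROUP  BY EMPNO);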

388. Informatica - What is the difference between a reusable transformation and a mapplet? QUESTION #388
=======================================
A reusable transformation is a single transformation which can be reused, whereas a mapplet is a set of transformations which can be reused.
=======================================
389. Informatica - What is the functionality of the Lookup transformation (connected & unconnected)? QUESTION #389
=======================================

A Lookup transformation compares the source with the specified lookup table and forwards only the matched records to the next transformation. If there is no match, it returns NULL.
=======================================
390. Informatica - How do you maintain historical data, and how do you retrieve the historical data? QUESTION #390
=======================================
By using Update Strategy transformations.
=======================================
You can maintain the historical data by designing the mapping using the slowly changing dimension types; the design changes as per your requirement. If you need to maintain the history of the data (for example, the cost of a product changes frequently but you would like to maintain all the rate history), go for SCD Type 2. If you need to insert new data and update old data, it is best to go for an Update Strategy. If you make your question clearer, I can provide you more information. Cheers, Thana.
=======================================

391. Informatica - What is the difference between CBL (constraint-based commit) and target-based commit? When do we use CBL? QUESTION #391
=======================================
Could you please clarify your question: does CBL stand for constraint-based loading? Thanks.
=======================================
CBL means constraint-based loading: the data is loaded into the target tables based on their constraints, i.e. if we want to load EMP & DEPT data, it first loads the data of DEPT and then EMP, because DEPT is the PARENT table and EMP is the CHILD table. Simply put, it loads the PARENT table first, then the CHILD table (see the sketch below).
=======================================
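A sketch of the parent/child relationship that drives that load order (hypothetical DDL):

    CREATE TABLE DEPT (
      DEPT_ID   NUMBER PRIMARY KEY,
      DEPT_NAME VARCHAR2(30)
    );

    CREATE TABLE EMP (
      EMP_ID   NUMBER PRIMARY KEY,
      EMP_NAME VARCHAR2(30),
      DEPT_ID  NUMBER REFERENCES DEPT (DEPT_ID)  -- child rows need their parent row first
    );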

392. Informatica - Repository deletion. QUESTION #392
What happens when a repository is deleted? If it is deleted for some time and we want to delete it permanently, where is it stored (the address of the file)?
=======================================
I too want to know the answer to this question. I tried to delete the repository, and it was deleted at that time; but when I then tried to create a repository with the same name as the one I had deleted earlier, it showed that the repository exists in the target. We have to delete the repository in the target database.
=======================================
The repository is stored in a database. Go to the Repository Admin Console, right-click on the repository, and choose Delete. A dialog box is displayed; fill in the user name (the database user where your repository resides), give the password and fill in all the fields; thereafter you can delete it. If you have any queries, please mail me: sajjan.s25@gmail.com
=======================================
393. Informatica - What is pre-session and post-session? QUESTION #393
=======================================
Pre-session: a command or script configured in the session properties that runs before the session starts. Post-session: a command configured to run after the session completes (on success or on failure).
=======================================

394. Informatica - What is the Informatica basic data flow? QUESTION #394
=======================================
The Informatica basic data flow means Extraction, Transformation and Loading of data from the source to the target. Cheers, Nick.
=======================================
395. Informatica - Which activities can be performed using the Repository Manager? QUESTION #395
=======================================
Using the Repository Manager we can create new folders for the existing repositories and manage the repository from it.
=======================================
Using the Repository Manager:
* We can create folders under the repository.
* Create subfolders and perform folder management.
* Create users and user groups.
* Set security/access privileges for the users, and many more.
=======================================

396. Informatica - What is the economic comparison of all the Informatica versions? QUESTION #396
=======================================
Version controlling.
=======================================
The economic comparison is nothing but the price tags of the available Informatica versions.
=======================================
397. Informatica - Why do we need a lookup SQL override? Do we write the SQL override in a lookup with a special aim? QUESTION #397
=======================================
Yes: a SQL override in a lookup is used to look up more than one value from more than one table. The advantage is that the whole table need not be looked up.
=======================================
You can join the data from multiple tables in the same database by using a lookup override. You can use a SQL override 1. to use more than one lookup table in the mapping, and 2. to filter records in the cache to remove unwanted data.
=======================================
A lookup override can be used to get specific records (using filters in the WHERE clause) from the lookup table.
=======================================

398. Informatica - What are tracing levels in a transformation? QUESTION #398
=======================================
The tracing level keeps the information about your mapping and transformations. There are 4 kinds of tracing level, each responsible for giving more information on the basis of its characteristics. The 4 supported tracing levels are:
1. Normal: specifies the initialization and status information, a summarization of the successful rows and target rows, and information about the skipped rows due to transformation errors.
2. Terse: logs only initialization information, error messages, and notification of rejected data; it is the least detailed level.
3. Verbose Initialization: in addition to Normal tracing, specifies the location of the data cache files and index cache files that are created, and detailed transformation statistics for each and every transformation within the mapping.
4. Verbose Data: along with Verbose Initialization, records each and every record processed by the Informatica server.
For better performance of mapping execution, the tracing level should be specified as Terse; Verbose Initialization and Verbose Data are used for debugging purposes. Thanks, Abhishek Shukla.
=======================================
The tracing level in Informatica specifies the level of detail of the information that is recorded in the session log file while executing the workflow.
=======================================
