Informatica interview questions and answers.

What is the difference between Informatica 7.0 and 8.0? The architecture of PowerCenter 8 changed considerably: 1. PC8 is service-oriented, for modularity, scalability and flexibility. 2. The Repository Service and Integration Service (replacing the Repository Server and Informatica Server) can run on different machines in a network (so-called nodes), even redundantly. 3. Management is centralized: services can be started and stopped on nodes through a central web interface. 4. Client tools access the repository via that centralized machine, and resources are distributed dynamically. 5. Running all services on one machine is still possible. 6. It supports unstructured data such as spreadsheets, email, Microsoft Word files, presentations and PDF documents, and it provides high availability and seamless failover, eliminating single points of failure. 7. It adds performance improvements such as "pushdown optimization", which moves data transformation processing to the native relational database engine whenever appropriate. 8. Data profiling, cleansing and matching capabilities are more tightly integrated. 9. A new web-based administration console has been added. 10. You can write a Custom transformation in C++ or Java. 11. The midstream SQL transformation was added in 8.1.1 (it is not in 8.1). 12. Caches and partitioning can be configured dynamically. 13. The Java transformation was introduced. 14. User-defined functions are supported. 15. PowerCenter 8 adds an "Append to Target file" option.

In a scenario I have col1, col2, col3 with rows (1, x, y) and (2, a, b), and I want the output as col1, col2 with rows (1, x), (1, y), (2, a) and (2, b). What is the procedure? Use a Normalizer transformation: keep the first field with occurs = 1 and define the repeating field with occurs = 2. The Normalizer creates the output ports that generate two rows per input row; connect them to the target.

On one day I load 10 rows into my target, and the next day I get 10 more rows, of which 5 are updated rows. How can I send them to the target, i.e. how can I insert and update the records? Identify the granularity (key) of the target table, then use a CRC external procedure to compare the newly generated CRC value with the stored one; if they do not match, update the row, and rows with new keys are inserted.
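As a rough illustration of the same insert-versus-update decision expressed in plain SQL, here is a sketch using hypothetical names (STG_CUST, DIM_CUST, CUST_ID, CRC_VALUE); in Informatica this logic would normally sit in an Update Strategy together with the CRC comparison described above, so treat this only as an Oracle-style outline:

-- Rows whose key is not yet in the target are inserted
INSERT INTO DIM_CUST (CUST_ID, CUST_NAME, CRC_VALUE)
SELECT s.CUST_ID, s.CUST_NAME, s.CRC_VALUE
FROM   STG_CUST s
WHERE  NOT EXISTS (SELECT 1 FROM DIM_CUST d WHERE d.CUST_ID = s.CUST_ID);

-- Rows whose key exists but whose CRC changed are updated
UPDATE DIM_CUST d
SET    (CUST_NAME, CRC_VALUE) = (SELECT s.CUST_NAME, s.CRC_VALUE
                                 FROM   STG_CUST s
                                 WHERE  s.CUST_ID = d.CUST_ID)
WHERE  EXISTS (SELECT 1 FROM STG_CUST s
               WHERE  s.CUST_ID = d.CUST_ID
               AND    s.CRC_VALUE <> d.CRC_VALUE);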

What is the method of loading 5 flat files having the same structure into a single target, and which transformations can I use? Use a Union transformation, or else write the paths of all five files into one list file and use that file in the session properties as an indirect source.

In a scenario I want to change the dimensions of a table and normalize a denormalized table. Which transformation can I use? You can use the Normalizer transformation; it will normalize the records.

What is meant by the direct and indirect loading options in sessions? Direct loading reads a single file, whereas indirect loading reads a list file that points to multiple files. With direct loading we can perform the recovery process, but with indirect loading we cannot.

SCD mappings mostly use an unconnected Lookup transformation, and an unconnected Lookup uses a static cache only. In that case, how can you insert or update data in the target using a static cache?

How many types of dimensions are available in Informatica? There are 3 schema types: 1. star schema 2. snowflake schema 3. galaxy schema. There are also 3 slowly changing dimension types: 1. SCD Type 1 2. SCD Type 2 3. SCD Type 3.

When we create the target as a flat file and the source is Oracle, how can I make the first row of the flat file contain the column names? Use a pre-SQL statement to write the header, but this is a hard-coded method: if you change the column names or add extra columns to the flat file, you will have to change the statement. You can also achieve this by changing the setting in the Informatica Repository Manager to display the column headings; the only disadvantage is that it is applied to all files generated by that server.

1. Can you explain one critical mapping? 2. Performance-wise, which is better: a connected or an unconnected Lookup transformation? It depends on your data and the type of operation you are doing. If you need to look up a value for all rows, or for most of the rows coming out of the source, go for a connected lookup; otherwise go for an unconnected lookup, especially in conditional cases. For example, we have to get the value of the field 'customer' from either the order table or the customer_data table using the rule: if customer_name is null then customer = customer_data.Customer_Id, otherwise customer = order.customer_name. In this case we would go for an unconnected lookup.
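For the indirect option, the file named in the session properties simply lists the data files to be read, one per line. A hypothetical list file (the paths and names are only examples) might look like this:

/data/src/sales_region1.dat
/data/src/sales_region2.dat
/data/src/sales_region3.dat
/data/src/sales_region4.dat
/data/src/sales_region5.dat

Set the source filetype to Indirect in the session so that the server reads each listed file in turn.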

Dimensions are of several kinds: 1. Slowly Changing Dimensions 2. Rapidly Changing Dimensions 3. Junk Dimensions 4. Large Dimensions 5. Degenerate Dimensions 6. Conformed Dimensions.

How can you improve the performance of an Aggregator transformation? We can improve Aggregator performance in the following ways: 1. send sorted input (use a Sorter transformation and enable the Sorted Input property); 2. increase the Aggregator cache sizes, i.e. the index cache and the data cache; 3. pass only the input/output ports you actually need through the transformation, i.e. reduce the number of ports; 4. filter the records before they reach the Aggregator.

Why did you use a stored procedure in your ETL application? Using a stored procedure has the following advantages: 1. it checks the status of the target database so that data can be integrated easily; 2. it drops and re-creates indexes; 3. it determines whether enough space exists in the database; 4. it performs a specialized calculation.

Why did you use an Update Strategy in your application? The Update Strategy transformation is used to drive the data to be inserted, updated or deleted depending on some condition. For example, if you want to do both an update and an insert in one mapping, you create two flows and mark one as insert and one as update depending on the condition. I have used it where I wanted to insert and update records in the same mapping. You can also control this at the session level, but there you cannot define any condition. Refer to Update Strategy in the Transformation Guide for more information.

After dragging the ports of three sources (SQL Server, Oracle, Informix) to a single Source Qualifier, can you map these three ports directly to the target? No. If you drag three heterogeneous sources into one Source Qualifier and populate the target without any join, you are creating a Cartesian product; even homogeneous sources will show the same error if you do not join them. In a Source Qualifier we can join tables from the same database only; if you are not going to join at the Source Qualifier level, you can add the joins separately (for example with a Joiner transformation).

What is a Source Qualifier? When you add a relational or flat file source definition to a mapping, you need to connect it to a Source Qualifier transformation. The Source Qualifier represents the rows that the Informatica Server reads when it executes a session. Its basic purpose is to convert database-specific data types into Informatica-specific data types.

How do you load a time dimension? We can use SCD Type 1/2/3 to load any dimension based on the requirement; we can also use a stored procedure to populate the time dimension.

In an update strategy, which gives more performance, a target table or a flat file, and why? Pros of a flat file target: loading, sorting and merging operations are faster because there is no index concept and the data is in ASCII mode. Cons: there is no concept of updating existing records in a flat file, and lookup speed against it will be lower.

How do you create a single Lookup transformation using multiple tables? Write an override SQL query that joins the tables, and adjust the Lookup ports to match the columns returned by that query.
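A minimal sketch of such a lookup SQL override, assuming two hypothetical tables CUSTOMER and CUSTOMER_ADDRESS joined on CUST_ID (the table and column names are illustrative only):

SELECT c.CUST_ID   AS CUST_ID,
       c.CUST_NAME AS CUST_NAME,
       a.CITY      AS CITY
FROM   CUSTOMER c,
       CUSTOMER_ADDRESS a
WHERE  c.CUST_ID = a.CUST_ID

The ports defined in the Lookup transformation should then match CUST_ID, CUST_NAME and CITY.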

Define the Informatica repository. The Informatica repository is at the center of the Informatica suite. You create a set of metadata tables within the repository database, which the Informatica applications and tools access; the Informatica client and server access the repository to save and retrieve metadata. The PowerCenter repository is used to store Informatica's metadata: information such as mapping name, location, transformations and data flow is stored as metadata in the repository.

What are the types of metadata stored in the repository? Source definitions: definitions of database objects (tables, views, synonyms) or files that provide source data. Target definitions: definitions of database objects or files that contain the target data. Multi-dimensional metadata: target definitions that are configured as cubes and dimensions. Mappings: sets of source and target definitions along with transformations containing the business logic you build into them; these are the instructions that the Informatica Server uses to transform and move data. Reusable transformations: transformations that you can use in multiple mappings. Mapplets: sets of transformations that you can use in multiple mappings. Sessions and workflows: these store information about how and when the Informatica Server moves data. A workflow is a set of instructions that describes how and when to run tasks related to extracting, transforming and loading data. A session is a type of task that you can put in a workflow; each session corresponds to a single mapping.

How can you work with a remote database in Informatica? Did you work directly using remote connections? You can work with a remote database, but you have to configure the FTP connection details: IP address and user authentication.

What is a PowerCenter repository? Standalone repository: a repository that functions individually, unrelated and unconnected to other repositories. Global repository (PowerCenter only): the centralized repository in a domain, a group of connected repositories; each domain can contain one global repository, and the global repository can contain common objects to be shared throughout the domain through global shortcuts. Local repository (PowerCenter only): a repository within a domain that is not the global repository; each local repository in the domain can connect to the global repository and use objects in its shared folders.

What is the difference between partitioning of relational targets and partitioning of file targets? Partitioning can be done on both relational and flat file targets. Informatica supports the following partition types: 1. Database partitioning 2. Round-robin 3. Pass-through 4. Hash-key partitioning 5. Key-range partitioning. All of these are applicable for relational targets; for flat file targets only database partitioning is not applicable. Informatica supports N-way partitioning. For a flat file target you can just specify the name of the target file and create the partitions; the rest is taken care of by the Informatica session.

What is the difference between a mapplet and a reusable transformation? Mapplet: a set of one or more transformations that is reusable. Reusable transformation: a single transformation that is reusable.

What is a parameter file? When you start a workflow, you can optionally enter the directory and name of a parameter file. The Informatica Server runs the workflow using the parameters in the file you specify. For UNIX shell users, enclose the parameter file name in single quotes: -paramfile '$PMRootDir/myfile.txt'. For Windows command prompt users, the parameter file name cannot have beginning or trailing spaces.

If the name includes spaces, enclose the file name in double quotes: -paramfile "$PMRootDir\my file.txt". Note: when you write a pmcmd command that includes a parameter file located on another machine, use the backslash (\) with the dollar sign ($); this ensures that the machine where the variable is defined expands the server variable. For example: pmcmd startworkflow -uv USERNAME -pv PASSWORD -s SALES:6258 -f east -w wSalesAvg -paramfile '\$PMRootDir/myfile.txt'

What is a batch, and what are the types of batches? A batch is a group of sessions. There are two types of batches: 1. Concurrent 2. Sequential.

Can you start a batch within a batch? You cannot. If you want to start a batch that resides inside another batch, create a new independent batch and copy the necessary sessions into it.

Can you copy a session to a different folder or repository? Yes, it is possible. To copy a session to a folder in the same repository or to another repository, use the Repository Manager (a client-side tool): simply drag the session to the target destination, and the mapping, associated sources, targets and session will be copied to the target folder. In addition, you can copy the whole workflow from the Repository Manager, which automatically copies the mapping, associated sources, targets and session.

Why do we use partitioning of the session in Informatica? Performance can be improved by processing data in parallel in a single session by creating multiple partitions of the pipeline. The Informatica server can achieve high performance by partitioning the pipeline and performing the extract, transformation and load for each partition in parallel.

How does the Informatica server increase session performance through partitioning of the source? For relational sources the Informatica server creates multiple connections, one for each partition of a single source, and extracts a separate range of data for each connection, so it reads multiple partitions of a single source concurrently. Similarly, for loading, the Informatica server creates multiple connections to the target and loads partitions of data concurrently.
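For illustration, a parameter file is just a plain text file of section headings and name-value pairs. The folder, workflow, session and parameter names below are hypothetical, and the layout is only a sketch of the usual format rather than a definitive template:

[SalesFolder.WF:wf_load_sales.ST:s_m_load_sales]
$$LoadDate=2004-01-31
$$CountryCode=US
$PMSessionLogFile=sales_load.log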

Slowly Changing Dimension types: Type 1 keeps the most recent values only; Type 2 keeps the full history (by version number, flag or date); Type 3 keeps the current value and one previous value.

Which tool do you use to create and manage sessions and batches and to monitor and stop the Informatica server? Informatica Workflow Manager and Informatica Workflow Monitor.

Can you generate reports in Informatica? It is an ETL tool; you cannot build business reports from it, but you can generate metadata reports.

What are the different types of Type 2 dimension mapping? 1. Version number 2. Flag 3. Date.

What are the mappings that we use for a slowly changing dimension table? (I want the whole information on slowly changing dimensions, and also a project on slowly changing dimensions in Informatica.) We can use the Slowly Changing Dimension wizard mappings, which are built from the following transformations: Expression, Lookup, Filter, Sequence Generator and Update Strategy.

What are the types of mapping in the Getting Started Wizard? 1. Simple Pass Through 2. Slowly Growing Target (a target that is not going to be used for business analysis of history).

What are the types of mapping wizards provided in Informatica? The Getting Started Wizard (Simple Pass Through and Slowly Growing Target) and the Slowly Changing Dimension wizard (Type 1, Type 2 and Type 3).

How can you recognise whether or not the newly added rows in the source got inserted into the target? If it is a Type 2 dimension the above answer is fine, but if you want the details of all the insert and update statements you need to use the session log file, with tracing configured to verbose; there you will get the complete picture of which records were inserted and which were not.

What are the options in the target session of an Update Strategy transformation? Insert, Update (with the variants update as update, update as insert, and update else insert), Delete, and Data Driven. Update as Insert: all update records from the source are flagged as inserts in the target; in other words, instead of updating existing records in the target they are inserted as new records. Update else Insert: this option enables Informatica to flag records for update if they are old (already in the target) or for insert if they are new records from the source.

What is Data Driven? The Informatica Server follows the instructions coded into Update Strategy transformations within the session mapping to determine how to flag rows for insert, delete, update or reject. If the mapping for the session contains an Update Strategy transformation, this field is marked Data Driven by default.

What is the default source option for the Update Strategy transformation? DATA DRIVEN.

What is the Update Strategy transformation? The model you choose constitutes your update strategy: how to handle changes to existing rows. In PowerCenter and PowerMart you set your update strategy at two different levels. Within a session: when you configure a session, you can instruct the Informatica Server either to treat all rows in the same way (for example, treat all rows as inserts) or to use the instructions coded into the session mapping to flag rows for different database operations. Within a mapping: you use the Update Strategy transformation to flag rows for insert, delete, update or reject.

What are the basic needs to join two sources in a Source Qualifier? Both tables should have a common field with the same datatype. It is not necessary that they follow a primary key-foreign key relationship, though if such a relationship exists it will help from a performance point of view.

What is the default join that the Source Qualifier provides? An inner (equi) join is the default join in the Source Qualifier. (The Joiner transformation, by contrast, supports the following join types, set on the Properties tab: Normal (default), Master Outer, Detail Outer and Full Outer.)

What is the target load order? A target load order group is the collection of source qualifiers, transformations and targets linked together in a mapping.

Why do we use the Stored Procedure transformation? A Stored Procedure transformation is an important tool for populating and maintaining databases. Database administrators create stored procedures to automate time-consuming tasks that are too complicated for standard SQL statements.

What is the Source Qualifier transformation? When you add a relational or a flat file source definition to a mapping, you need to connect it to a Source Qualifier transformation. The Source Qualifier represents the rows that the Informatica Server reads when it executes a session.

What are the tasks that the Source Qualifier performs? Join data originating from the same source database: you can join two or more tables with primary key-foreign key relationships by linking the sources to one Source Qualifier. Filter records when the Informatica Server reads source data: if you include a filter condition, the Informatica Server adds a WHERE clause to the default query. Specify an outer join rather than the default inner join: if you include a user-defined join, the Informatica Server replaces the join information specified by the metadata in the SQL query. Specify sorted ports: if you specify a number for sorted ports, the Informatica Server adds an ORDER BY clause to the default SQL query. Select only distinct values from the source: if you choose Select Distinct, the Informatica Server adds a SELECT DISTINCT statement to the default SQL query. Create a custom query to issue a special SELECT statement for the Informatica Server to read source data: for example, you might use a custom query to perform aggregate calculations or execute a stored procedure. A sketch of how these options change the generated query follows.
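Assuming a hypothetical EMPLOYEES source with EMP_ID, NAME and DEPT_ID, the default generated query and a customized one might look roughly like this (illustrative only):

-- Default query generated by the Source Qualifier
SELECT EMPLOYEES.EMP_ID, EMPLOYEES.NAME, EMPLOYEES.DEPT_ID
FROM   EMPLOYEES

-- With a filter condition, Select Distinct and one sorted port applied
SELECT DISTINCT EMPLOYEES.EMP_ID, EMPLOYEES.NAME, EMPLOYEES.DEPT_ID
FROM   EMPLOYEES
WHERE  EMPLOYEES.DEPT_ID = 10
ORDER BY EMPLOYEES.EMP_ID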

What is the Router transformation? A Router transformation is similar to a Filter transformation because both let you use a condition to test data. However, a Filter transformation tests data for one condition and drops the rows that do not meet it, whereas a Router transformation tests data for one or more conditions and gives you the option of routing the rows that do not meet any of the conditions to a default output group.

What are the types of groups in a Router transformation? A Router transformation has input and output groups. There is one input group, and there are two types of output groups: user-defined groups and the default group. You cannot modify or delete output ports or their properties; the Designer copies property information from the ports of the input group to create a set of output ports for each output group.

How does the Informatica server sort string values in a Rank transformation? We can run the Informatica server in either UNICODE or ASCII data movement mode. In UNICODE mode the server sorts string data according to the sort order configured in the session properties; in ASCII mode it sorts the data in binary order.

What is the rank index in a Rank transformation? The port on which you want to generate the rank is the rank port; the generated values are known as the rank index.

Which transformation should we use to normalize COBOL and relational sources? The Normalizer transformation normalizes records from COBOL and relational sources, allowing you to organize the data according to your own needs. A Normalizer transformation can appear anywhere in a data flow when you normalize a relational source. Use a Normalizer transformation instead of the Source Qualifier transformation when you normalize a COBOL source; when you drag a COBOL source into the Mapping Designer workspace, the Normalizer transformation appears automatically, creating input and output ports for every column in the source.

What is the difference between a static cache and a dynamic cache?

Static cache: you cannot insert rows into or update the cache. Dynamic cache: you can insert rows into the cache as you pass them to the target; the Informatica server inserts a row into the cache when the lookup condition is false, which indicates that the row is not yet in the cache or the target table, and you can then pass these rows on to the target table. With either cache, the Informatica server returns a value from the lookup table or cache when the condition is true; when the condition is not true, it returns the default value for connected transformations and NULL for unconnected transformations.

What are the types of lookup caches? 1. Static cache 2. Dynamic cache 3. Persistent cache.

Differences between connected and unconnected lookup? A connected lookup receives input values directly from the pipeline; an unconnected lookup receives input values from the result of a :LKP expression in another transformation. A connected lookup can use a dynamic or a static cache; an unconnected lookup can use only a static cache. In a connected lookup the cache includes all lookup columns used in the mapping; in an unconnected lookup the cache includes only the lookup output ports in the lookup condition and the lookup/return port. A connected lookup supports user-defined default values; an unconnected lookup does not.

What is the Lookup transformation? Using it we can access data from a relational table that is not a source in the mapping. For example, suppose the source contains only Empno but we also want Empname in the mapping; instead of adding another table containing Empname as a source, we can look up the table and get the Empname into the target.

What are the joiner caches? The cache directory specifies the directory used to cache master records and the index to these records. By default the cache files are created in a directory specified by the server variable $PMCacheDir; if you override the directory, make sure it exists and contains enough disk space for the cache files. The directory can be a mapped or mounted drive. There are two types of cache in the Joiner: 1. Data cache 2. Index cache.

What are the settings that you use to configure the Joiner transformation? The master and detail sources, the type of join, and the condition of the join. The Joiner transformation supports the following join types, which you set on the Properties tab: Normal (default), Master Outer, Detail Outer and Full Outer.

In which conditions can we not use a Joiner transformation (limitations of the Joiner)? This limitation is no longer valid from version 7.2: we can now use a Joiner even if the data is coming from the same source.

Can you use the mapping parameters or variables created in one mapping in another mapping? No. You might want to use a workflow parameter/variable if you want it to be visible to other mappings/sessions.

What is the difference between the Joiner transformation and the Source Qualifier transformation? A Source Qualifier joins homogeneous sources (tables from the same database); a Joiner can join heterogeneous sources.

What is the aggregate cache in the Aggregator transformation? When you run a workflow that uses an Aggregator transformation, the Informatica Server creates index and data caches in memory to process the transformation. If the Informatica Server requires more space, it stores overflow values in cache files.

What are mapping parameters and mapping variables? (Please refer to the documentation for more detail.) Mapping variables have two identities: a start value and a current value. Start value = current value when the session starts executing the underlying mapping; start value <> current value while the session is in progress and the variable value changes on one or more occasions. The current value at the end of the session becomes the start value for the subsequent run of the same session.

What are active and passive transformations? Transformations can be active or passive. An active transformation can change the number of rows that pass through it, such as a Filter transformation that removes rows that do not meet the filter condition. A passive transformation does not change the number of rows that pass through it, such as an Expression transformation that performs a calculation on the data and passes all rows through.

What are reusable transformations? A transformation that can be reused in multiple mappings is known as a reusable transformation.

What are the methods for creating reusable transformations? You can design them in two ways: build the transformation in the Transformation Developer, or create a normal transformation in a mapping and promote it to reusable.

What is a mapplet? A mapplet should have a mapplet Input transformation, which receives input values, and an Output transformation, which passes the final modified data back to the mapping. When the mapplet is displayed within a mapping, only the input and output ports are shown, so the internal logic is hidden from the end user's point of view.

What are the unsupported repository objects for a mapplet? Objects that cannot be used inside a mapplet include COBOL source definitions, Normalizer transformations, XML source definitions, target definitions, pre- and post-session stored procedures, and other mapplets.

Which transformation do you need when using COBOL sources as source definitions? The Normalizer transformation, which is used to normalize the data, since COBOL sources often consist of denormalized data.

How many ways can you update a relational source definition, and what are they? Two ways: 1. Edit the definition 2. Reimport the definition.

Where should you place the flat file to import the flat file definition into the Designer? There is no such restriction on where to place the source file; it does not have to be in any particular folder. From a performance point of view it is better to place the file in the server's local src folder: if we place it in the server src folder, it will be selected by default at session creation time. If you need the path, check the server properties available in the Workflow Manager.

To provide support for mainframe source data, which files are used as source definitions? COBOL copybook files.

What is data cleansing? Data cleansing is a two-step process consisting of DETECTION and then CORRECTION of errors in a data set.

What is a transformation? It is a process of converting a given input to the desired output.

What is the Load Manager? (This answer is taken from the Informatica 7.1.1 manual.) While running a workflow, the PowerCenter Server uses the Load Manager process and the Data Transformation Manager (DTM) process to run the workflow and carry out workflow tasks. When the PowerCenter Server runs a workflow, the Load Manager performs the following tasks: 1. Locks the workflow and reads workflow properties. 2. Reads the parameter file and expands workflow variables. 3. Creates the workflow log file. 4. Runs workflow tasks. 5. Distributes sessions to worker servers. 6. Starts the DTM to run sessions. 7. Runs sessions from master servers. 8. Sends post-session email if the DTM terminates abnormally. When the PowerCenter Server runs a session, the DTM performs the following tasks: 1. Fetches session and mapping metadata from the repository. 2. Creates and expands session variables.

3. Creates the session log file. 4. Validates session code pages if data code page validation is enabled; checks query conversions if data code page validation is disabled. 5. Verifies connection object permissions. 6. Runs pre-session shell commands. 7. Runs pre-session stored procedures and SQL. 8. Creates and runs mapping, reader, writer and transformation threads to extract, transform and load the data. 9. Runs post-session stored procedures and SQL. 10. Runs post-session shell commands. 11. Sends post-session email.

How can we partition a session in Informatica? The Informatica PowerCenter Partitioning option optimizes parallel processing on multiprocessor hardware by providing a thread-based architecture and built-in data partitioning. GUI-based tools reduce the development effort necessary to create data partitions and streamline ongoing troubleshooting and performance-tuning tasks, while ensuring data integrity throughout the execution process. As the amount of data within an organization expands and real-time demand for information grows, the PowerCenter Partitioning option enables hardware and applications to provide outstanding performance and to scale jointly to handle large volumes of data and users.

Explain the Informatica architecture in detail. The Informatica server connects to the source data and the target data using native ODBC drivers, and it connects to the repository for running sessions and retrieving metadata information: source -> Informatica server -> target, with the repository behind the server. The PowerCenter Server is a repository client application: it connects to the Repository Server and Repository Agent to retrieve workflow and mapping metadata from the repository database. When the PowerCenter Server requests a repository connection from the Repository Server, the Repository Server starts and manages the Repository Agent, and then redirects the PowerCenter Server to connect directly to the Repository Agent.

What is the difference between the Informatica Repository Server and the Informatica Server? Informatica Repository Server: it manages connections to the repository from client applications. Informatica Server: it extracts the source data, performs the data transformation, and loads the transformed data into the target.

How do you read rejected or bad data from the bad file and reload it to the target? Find the rejected data by using the column indicator and row indicator in the bad file, correct the rejected data, and send it to the target relational tables using the load order utility.

What are the tasks that the Load Manager process performs? Manages session and batch scheduling: when you start the Informatica server, the Load Manager launches and queries the repository for a list of sessions configured to run on that server; when you configure a session, the Load Manager maintains the list of sessions and session start times; when you start a session, the Load Manager fetches the session information from the repository to perform validations and verifications prior to starting the DTM process. Locking and reading the session: when the Informatica server starts a session, the Load Manager locks the session in the repository; locking prevents you from starting the same session again and again. Reading the parameter file: if the session uses a parameter file, the Load Manager reads it and verifies that the session-level parameters are declared in the file. Verifying permissions and privileges: when the session starts, the Load Manager checks whether the user has the privileges to run the session. Creating log files: the Load Manager creates a log file containing the status of the session. The Load Manager also sends the failure mails in case of failure in the execution of the subsequent DTM process.

How do you transfer data from the data warehouse to a flat file? You can write a mapping with the flat file as a target, using a DUMMY_CONNECTION. A flat file target is built by pulling a source into the target space using the Warehouse Designer tool.

How can you create or import a flat file definition into the Warehouse Designer? You can create a flat file definition in the Warehouse Designer: create a new target, select the type as flat file, save it, and then enter the various columns for that target by editing its properties. Once the target is created, save it. You can also import it from the Mapping Designer.

What are connected and unconnected transformations? An unconnected transformation is not connected to another transformation in the data flow; instead it is called from inside another transformation.

What is a time dimension? Give an example. In a relational data model, for normalization purposes the year lookup, quarter lookup, month lookup and week lookup tables are not merged into a single table. In a dimensional data model (star schema), these tables are merged into a single table called the TIME DIMENSION, for performance and for slicing data. This dimension helps to find the sales done on a daily, weekly, monthly and yearly basis, and we can do trend analysis by comparing this year's sales with the previous year's, or this week's sales with the previous week's.

Discuss the advantages and disadvantages of the star and snowflake schemas.

In a STAR schema there is no relation between any two dimension tables, whereas in a SNOWFLAKE schema there are relations between the dimension tables.

What is a junk dimension? A "junk" dimension is a collection of random transactional codes, flags and/or text attributes that are unrelated to any particular dimension. The junk dimension is simply a structure that provides a convenient place to store these junk attributes. A good example would be a trade fact in a company that brokers equity trades.

Difference between a summary filter and a detail filter? Summary filter: applied to groups of records that contain common values. Detail filter: applied to each and every record in the database.

What is the difference between normal load and bulk load? Normal load writes information to the database log file, so it is helpful if any recovery is needed; when the source file is a text file and you are loading data to a table, you should use normal load only, otherwise the session will fail. Bulk mode does not write information to the database log file, so if any recovery is needed we cannot do anything; in comparison, bulk load is considerably faster than normal load.

Can we look up a table from a Source Qualifier transformation, like an unconnected lookup? No, we cannot, and I will explain why: 1) unless you assign the output of the Source Qualifier to another transformation or to a target, there is no way it will include the field in the query; 2) the Source Qualifier does not have any variable fields to use in an expression.

How do you get the first 100 rows from a flat file into the target? 1. Use the test download option if you only want it for testing. 2. Put a counter/sequence generator in the mapping and use its value to pick the first 100 rows.

What are the main advantages and purpose of using the Normalizer transformation in Informatica? The Normalizer transformation is used mainly with COBOL sources, where most of the time the data is stored in denormalized format. A Normalizer transformation can also be used to create multiple rows from a single row of data.

At most how many transformations can be used in a mapping? In a mapping we can use any number of transformations, depending on the project and on the transformations involved.

Compare the data warehousing top-down approach with the bottom-up approach. Top-down: ODS --> ETL --> Data warehouse --> Data mart --> OLAP. Bottom-up: ODS --> ETL --> Data mart --> Data warehouse --> OLAP.

Discuss which is better among incremental load, normal load and bulk load. It depends on the requirement; otherwise an incremental load can be better, as it takes only the data that is not already present in the target.

What is the difference between connected and unconnected stored procedures? Unconnected: the unconnected Stored Procedure transformation is not connected directly to the flow of the mapping; it either runs before or after the session, or is called by an expression in another transformation in the mapping. Connected: the flow of data through the mapping also passes through the Stored Procedure transformation, and all data entering the transformation through the input ports affects the stored procedure. You should use a connected Stored Procedure transformation when you need data from an input port sent as an input parameter to the stored procedure, or the results of a stored procedure sent as an output parameter to another transformation.

Differences between Informatica 6.2 and Informatica 7.0? The features in 7.1 are: 1. Union and Custom transformations 2. lookup on a flat file 3. grid servers working on different operating systems can coexist on the same server 4. we can use pmcmdrep 5. we can export independent and dependent repository objects 6. we can move mappings in any web application 7. version control 8. data profiling.

What is the difference between a view and a materialized view? Materialized views are schema objects that can be used to summarize, precompute, replicate and distribute data, e.g. to construct a data warehouse. A materialized view provides indirect access to table data by storing the results of a query in a separate schema object, unlike an ordinary view, which does not take up any storage space or contain any data.
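As a rough Oracle-style illustration (the SALES table and names are hypothetical), the difference shows up directly in the DDL: a view stores only the query, while a materialized view physically stores the query results and can be refreshed:

-- Ordinary view: no storage, rows are computed from SALES at query time
CREATE VIEW v_sales_summary AS
SELECT region, SUM(amount) AS total_amount
FROM   sales
GROUP BY region;

-- Materialized view: the summarized rows are stored and refreshed on demand
CREATE MATERIALIZED VIEW mv_sales_summary
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
AS
SELECT region, SUM(amount) AS total_amount
FROM   sales
GROUP BY region;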

What do the Expression and Filter transformations do in the Informatica Slowly Growing Target wizard? The Expression transformation detects and flags the rows coming from the source, and the Filter transformation filters out the rows that are not flagged and passes the flagged rows on to the Update Strategy transformation.

In a filter expression we want to compare one date field with the DB2 system field CURRENT DATE, but our syntax datefield = CURRENT DATE (we didn't define it by ports; it's a system field) is not valid (PMParser: Missing Operator). Can someone help us? The DB2 date format is "yyyymmdd", whereas sysdate in Oracle gives "dd-mm-yy", so conversion of the DB2 date format to the local database date format is compulsory; otherwise you will get that type of error.

What's the difference between the Informatica PowerCenter Server, the Repository Server and the repository? The repository is a database in which all Informatica components are stored in the form of tables. The Repository Server controls the repository and maintains data integrity and consistency across the repository when multiple users use Informatica. The PowerCenter Server (Informatica Server) is responsible for the execution of the components (sessions) stored in the repository.

What are the differences between Informatica PowerCenter versions 6.2 and 5.1? The main difference between Informatica 5.1 and 6.x is that in 6.x a new component called the Repository Server was introduced, and in place of the Server Manager (5.1) the Workflow Manager and Workflow Monitor were introduced.

How do you create the staging area in your database? A staging area in a DW is used as a temporary space to hold all the records from the source system, so it should be more or less an exact replica of the source systems, except for the load strategy, where we use truncate and reload options. So create it using the same layout as in your source tables, or use the Generate SQL option in the Warehouse Designer tab.

Briefly explain the versioning concept in PowerCenter 7.0.

When you create a version of a folder referenced by shortcuts, all shortcuts continue to reference their original object in the original version; they do not automatically update to the current folder version. For example, if you have a shortcut to a source definition in the Marketing folder, version 1.0.0, and you then create a new folder version, 1.5.0, the shortcut continues to point to the source definition in version 1.0.0. Maintaining versions of shared folders can therefore result in shortcuts pointing to different versions of the folder; although shortcuts to different versions do not affect the server, they might prove more difficult to maintain. To avoid this, do not version folders referenced by shortcuts when possible; you can recreate shortcuts pointing to earlier versions, but this solution is not practical for much-used objects.

Can Informatica be used as a cleansing tool? If yes, give examples of transformations that can implement a data cleansing routine. Yes, we can use Informatica for cleansing data; sometimes we use staging tables for cleansing, depending on performance, and otherwise we can use an Expression transformation to cleanse the data. For example, if a field X has some rows with values and others with nulls, and it is assigned to a target field that is a NOT NULL column, inside an Expression we can assign a space or some constant value to avoid session failure.

How do you join two tables without using the Joiner transformation? It is possible to join two or more tables by using the Source Qualifier, provided the tables have a relationship. When you drag and drop the tables you get a Source Qualifier for each table: delete all the Source Qualifiers and add a common Source Qualifier for all of them, then right-click the Source Qualifier, click Edit, go to the Properties tab, and in the SQL Query property write your SQL. You can also do it at the session level (mapping -> source), where you have an option called User Defined Join in which you can write your join SQL.

Identifying bottlenecks in various components of Informatica and resolving them: the best way to find the bottleneck is to write to a flat file and see where the bottleneck is.

Explain the commit points for source-based commit and target-based commit. Assume appropriate values wherever required. Source-based commit commits the data into the target based on the commit interval: suppose the session is configured with a commit interval of 10,000 rows and the source has 50,000 rows; then for every 10,000 rows it commits the data. Target-based commit commits the data into the target based on the buffer size of the target: it commits the data whenever the buffer fills; if we assume the buffer size is 6,000, then for every 6,000 rows it commits the data into the target.

How do we estimate the number of partitions that a mapping really requires? Is it dependent on the machine configuration? It depends on the Informatica version we are using: for example, Informatica 6 supports only 32 partitions, whereas Informatica 7 supports 64 partitions.

How do we estimate the depth of the session scheduling queue? Where do we set the number of maximum concurrent sessions that Informatica can run at a given time? (Please be more specific on the first half of the question.) You set the maximum number of concurrent sessions in the Informatica server configuration.

How do you decide whether you need to do aggregations at the database level or at the Informatica level? It depends on the requirement. If you have a good processing database you can create an aggregation table or view at the database level; otherwise it is better to use Informatica. Whatever it may be, Informatica is a third-party tool, so it will take more time to process the aggregation compared to the database; however, the database does not have an incremental aggregation facility, whereas Informatica has an "Incremental Aggregation" option that updates the stored aggregates with the new values, so there is no need to process the entire data set again and again (unless somebody deletes the cache files, in which case the total aggregation has to be executed in Informatica again). Informatica also helps where the input data is in one format and the target is in another, since we can change the format in an Expression, and we can assign default values to the target to represent a complete set of data in the target.

We are using an Update Strategy transformation in the mapping; how can we know whether the insert, update, reject or delete option has been selected for a row during the running of the session in Informatica?

In the Designer, while creating the Update Strategy transformation, uncheck "forward to next transformation"; if there are any rejected rows they will then automatically be written to the session log file. Updated and inserted rows are known only by checking the target file or table.

Which is better among connected and unconnected Lookup transformations in Informatica (or any other ETL tool)? If you have a well-defined source you can use a connected lookup; if the source is not well defined or comes from a different database you can go for an unconnected lookup. We use them like that.

In dimensional modeling, is the fact table normalized or denormalized, in the case of a star schema and in the case of a snowflake schema? In dimensional modeling: Star schema: a single fact table is surrounded by a group of dimension tables composed of denormalized data. Snowflake schema: a single fact table is surrounded by a group of dimension tables composed of normalized data.

What is the limit to the number of sources and targets you can have in a mapping? As far as I know there is no such restriction on the number of sources or targets inside a mapping. The question really is: if you make N tables participate at a time in processing, what is the position of your database? From an organizational point of view it is never encouraged to use N tables at a time, as it reduces database and Informatica server performance.

Which objects are required by the Debugger to create a valid debug session? Initially the session should be a valid session: source, target, lookups and expressions should be available, and at least one breakpoint should be set for the Debugger to debug your session.

What is the procedure to write the query to list the highest salary of three employees? The following query finds the top three salaries in Oracle (using an EMP table):

select * from emp e where 3 > (select count(*) from emp where sal > e.sal) order by sal desc;

In SQL Server (using an EMP table):

select top 3 sal from emp order by sal desc;
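If the database supports analytic functions (Oracle 9i and later, for example), a sketch of an alternative top-three query is shown below; the availability of DENSE_RANK is an assumption about the database rather than part of the original answer:

SELECT empno, sal
FROM  (SELECT empno, sal,
              DENSE_RANK() OVER (ORDER BY sal DESC) AS sal_rank
       FROM   emp)
WHERE  sal_rank <= 3;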

More on star versus snowflake schemas: The Star Schema (sometimes referred to as a star join schema) is the simplest data warehouse schema, consisting of a single fact table with a compound primary key, with one segment for each dimension, and with additional columns of additive, numeric facts. The Star Schema makes multi-dimensional database (MDDB) functionality possible using a traditional relational database. Because relational databases are the most common data management system in organizations today, implementing multi-dimensional views of data using a relational database is very appealing; even if you are using a specific MDDB solution, its sources are likely to be relational databases. Another reason for using a star schema is its ease of understanding. Fact tables in a star schema are mostly in third normal form (3NF), but dimension tables are in denormalized second normal form (2NF). If you want to normalize the dimension tables, they look like snowflakes (see snowflake schema), and the same problems of relational databases arise: you need complex queries, and business users cannot easily understand the meaning of the data.

The Snowflake Schema is a more complex data warehouse model than a star schema, and is a type of star schema. It is called a snowflake schema because the diagram of the schema resembles a snowflake. Snowflake schemas normalize dimensions to eliminate redundancy; that is, the dimension data is grouped into multiple tables instead of one large table. For example, a product dimension table in a star schema might be normalized into a products table, a product-category table and a product-manufacturer table in a snowflake schema. While this saves space, it increases the number of dimension tables and requires more foreign key joins; the result is more complex queries and reduced query performance. Although query performance may be improved by advanced DBMS technology and hardware, highly normalized tables make reporting difficult and applications complex.

What is the difference between the IIF and DECODE functions? You can use nested IIF statements to test multiple conditions. The following example tests for various conditions and returns 0 if sales is zero or negative:

IIF( SALES > 0,
    IIF( SALES < 50, SALARY1,
    IIF( SALES < 100, SALARY2,
    IIF( SALES < 200, SALARY3,
    BONUS))), 0 )

You can use DECODE instead of IIF in many cases; DECODE may improve readability. The following shows how you can use DECODE instead of IIF:

DECODE( TRUE,
    SALES > 0 and SALES < 50, SALARY1,
    SALES > 49 AND SALES < 100, SALARY2,
    SALES > 99 AND SALES < 200, SALARY3,
    SALES > 199, BONUS)

How do you look up the data on multiple tables? If the two tables are relational, you can use the SQL Lookup Override option to join the two tables in the lookup properties; you cannot join a flat file and a relational table. The default lookup query is of the form: select lookup_table column_names from lookup_table. You can then continue this query: add the column names of the second table with their qualifier, and a WHERE clause with the join condition. If you want to use an ORDER BY, add your own and then put -- at the end so that the ORDER BY appended by the server is commented out.

How does the server recognise the source and target databases? By using the ODBC connection if it is relational, and the FTP connection if it is a flat file; you can see this in the connections set in the session properties for both sources and targets.

How do you retrieve the records from a rejected file? Explain with syntax or an example. During the execution of a workflow all the rejected rows are stored in bad files under the server installation directory (for example, C:\Program Files\Informatica PowerCenter 7.1\Server). These bad files can be imported as a flat file source, and then through a direct mapping we can load the records in the desired format.

What are variable ports, and list two situations when they can be used? We mainly have three kinds of ports: input, output and variable. An input port (Inport) represents data flowing into the transformation; an output port (Outport) is used when data is mapped to the next transformation; a variable port is used when mathematical calculations are required, for example to hold an intermediate calculation that is reused by several output ports, or to compare the current row with a value carried over from the previous row.

What is the procedure to load the fact table? Give it in detail. Based on the requirements for your fact table, choose the sources and the data, and transform it based on your business needs. For the fact table you need a primary key, so use a Sequence Generator transformation to generate a unique key and pipe it to the target (fact) table along with the foreign keys from the source/dimension tables.
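As a rough SQL sketch of the same fact-load idea, assuming hypothetical staging and dimension tables (STG_SALES, DIM_CUSTOMER, DIM_PRODUCT, DIM_TIME) and an Oracle sequence for the surrogate key; in Informatica this logic is normally carried by Lookup transformations and a Sequence Generator rather than hand-written SQL:

INSERT INTO FACT_SALES (SALES_KEY, CUSTOMER_KEY, PRODUCT_KEY, TIME_KEY, SALES_AMOUNT)
SELECT seq_fact_sales.NEXTVAL,          -- surrogate primary key for the fact row
       c.CUSTOMER_KEY,                  -- foreign keys looked up from the dimensions
       p.PRODUCT_KEY,
       t.TIME_KEY,
       s.SALES_AMOUNT
FROM   STG_SALES    s,
       DIM_CUSTOMER c,
       DIM_PRODUCT  p,
       DIM_TIME     t
WHERE  c.CUSTOMER_ID = s.CUSTOMER_ID
AND    p.PRODUCT_ID  = s.PRODUCT_ID
AND    t.CALENDAR_DT = s.SALE_DT;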

What is the use of incremental aggregation? Explain briefly with an example. It is a session option, and we use it for performance. When the Informatica server performs incremental aggregation, it passes new source data through the mapping and uses the historical cache data to perform the new aggregation calculations incrementally.

How do you delete duplicate rows in flat file sources? Is there any option in Informatica? Use a Sorter transformation; it has a "distinct" option, so make use of it.

How do you use mapping parameters, and what is their use? In the Designer you will find the mapping parameters and variables options, and you can assign a value to them in the Designer. Coming to their use: suppose you are doing incremental extractions daily and your source system contains a day column. Every day you would have to go to that mapping and change the day so that the particular data is extracted; doing that by hand is a layman's approach, and this is where mapping parameters and variables come in. Once you assign a value to a mapping variable, it changes between sessions: the variable value is saved to the repository after the completion of the session, and the next time you run the session the server takes the saved variable value from the repository and starts assigning the next value after the saved one.

For example, I ran a session and at the end it stored a value of 50 in the repository; the next time I run the session I want it to start with a value of 70, not 51. How do I do this? After running the mapping, go to the Workflow Manager, right-click on the session, and from the menu choose Persistent Values; there you will find the last value stored in the repository for the mapping variable. Remove it, put in your desired value, and run the session; your task should be done.
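A minimal sketch of how such a mapping parameter or variable might be used in a Source Qualifier filter or SQL override; $$LAST_EXTRACT_DATE and the ORDERS table are hypothetical names, and the exact date handling depends on the source database:

SELECT ORDER_ID, CUSTOMER_ID, ORDER_AMOUNT, ORDER_DATE
FROM   ORDERS
WHERE  ORDER_DATE > TO_DATE('$$LAST_EXTRACT_DATE', 'YYYY-MM-DD')

The parameter file (or the saved mapping-variable value) then supplies $$LAST_EXTRACT_DATE for each run, so only the new rows are extracted.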

Can anyone comment on the significance of Oracle 9i in Informatica compared to Oracle 8 or 8i? I mean, how is Oracle 9i advantageous compared to Oracle 8 or 8i when used with Informatica? It's straightforward: Oracle 8i did not allow user-defined data types but 9i does; BLOB and CLOB are allowed only in 9i, not in 8i; and moreover, list partitioning is available only in 9i.

Can we use an Aggregator (or another active transformation) after an Update Strategy transformation? You can use an Aggregator after an Update Strategy. The problem is that once you have performed the update strategy, say you had flagged some rows to be deleted and you then perform an Aggregator transformation over all rows using a SUM function, the deleted rows will still be counted against (subtracted from) that aggregation.

In a sequential batch, how can we stop a single session? We can stop it using the pmcmd command, or in the Monitor by right-clicking on that particular session and selecting Stop; this will stop the current session and the sessions after it.

How do you handle decimal places while importing a flat file into Informatica? While importing the flat file definition, just specify the scale for the numeric data type. The flat file source supports only the number data type (no decimal and integer); in the Source Qualifier associated with that source, the corresponding port will have the decimal data type.

Why are dimension tables denormalized in nature? Because in data warehousing historical data should be maintained. Maintaining historical data means, for example, keeping an employee's details of where he worked previously and where he is working now, all in one table. If you maintain only the natural primary key, it will not allow duplicate records with the same employee id; here a "duplicate entry" is not exactly a duplicate record, but another record maintained in the table for the same employee number. So to maintain historical data in data warehousing we use surrogate keys (using, for example, an Oracle sequence for the critical column). All the dimensions maintain historical data, and so they are denormalized.
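A small illustration of the surrogate-key idea using an Oracle sequence; DIM_EMPLOYEE and its columns are hypothetical, and inside Informatica the same role is usually played by a Sequence Generator transformation:

CREATE SEQUENCE seq_dim_employee START WITH 1 INCREMENT BY 1;

-- Each historical version of the same EMP_ID gets its own surrogate key
INSERT INTO DIM_EMPLOYEE (EMP_KEY, EMP_ID, DEPARTMENT, EFFECTIVE_DATE)
VALUES (seq_dim_employee.NEXTVAL, 1001, 'Finance', DATE '2004-01-01');

INSERT INTO DIM_EMPLOYEE (EMP_KEY, EMP_ID, DEPARTMENT, EFFECTIVE_DATE)
VALUES (seq_dim_employee.NEXTVAL, 1001, 'Marketing', DATE '2005-06-15');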

The mapping flow for that case is: source -> number datatype port -> Source Qualifier -> decimal datatype. Integer is not supported; hence decimal takes care of it.

If your workflow is running slowly in Informatica, where do you start troubleshooting and what are the steps you follow? When the workflow is running slowly you have to find the bottlenecks, in this order: target, source, mapping, session, system.

If you have four lookup tables in the workflow, how do you troubleshoot to improve performance? There are many ways to improve a mapping that has multiple lookups: 1) we can create an index on the lookup table if we have the permissions (for example in a staging area); 2) we can divide the lookup mapping into two: (a) dedicate one mapping to inserts (rows in the source but not in the target): these are new rows, and only the new rows will come into the mapping, so the process will be fast; (b) dedicate the second one to updates (rows in both source and target): these are existing rows, and only the rows that already exist will come into the mapping; 3) we can increase the cache size of the lookup.

Can anyone explain error handling in Informatica with examples, so that it will be easy to explain in an interview? Go to the session log file; there we will find information regarding the session initialization process, the errors encountered and the load summary,

Can anyone explain error handling in Informatica, with examples, so that it is easy to explain in an interview?
Go to the session log file: there we find information about the session initiation process, the errors encountered and the load summary, and by looking at the errors encountered during the session run we can resolve them. There is also a file called the bad file, which generally has the format *.bad and contains the records rejected by the Informatica server. It carries two kinds of indicators, one for the rows and one for the columns. The row indicators signify what operation was going to take place (insertion, deletion, updation etc.), and the column indicators contain information about why the column was rejected (such as violation of a not-null constraint, value error, overflow etc.). If one rectifies the errors in the data present in the bad file and then reloads the data into the target, the table will contain only valid data.

What are the differences between Normalization and the Normalizer transformation?
Normalizer: a transformation mainly used for COBOL sources; it changes rows into columns and columns into rows. Normalization: the process of removing redundancy and inconsistency.

What is data merging, data cleansing and sampling?
Cleansing: identifying and removing redundancy and inconsistency in the data. Sampling: taking just a sample of the data and sending it from source to target.

What is an IQD file?
An IQD file is nothing but an Impromptu Query Definition. This file is mainly used with the Cognos Impromptu tool: after creating an IMR (report) we save the IMR as an IQD file, which is then used while creating a cube in PowerPlay Transformer (in the data source type we select Impromptu Query Definition).

How do I import VSAM files from source to target? Do I need a special plug-in?
As far as I know, use the PowerExchange tool to convert the VSAM file to Oracle tables, then do the mapping as usual to the target table.

Could anyone tell me the steps required for a type 2 dimension/version data mapping, and how we can implement it?
1. Determine whether the incoming row is 1) a new record, 2) an updated record, or 3) a record that already exists in the table, using two Lookup transformations.
2. Split the mapping into three separate flows using a Router transformation.
3. For case 1), create a pipe that inserts all the rows into the table. For case 2), create two pipes from the same source: one to insert the new version of the record and one to update the old record. Hope this makes sense.
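A hedged plain-SQL sketch of what the type 2 flows amount to at the target; the table, sequence and column names are assumptions for illustration only.

-- "Update" pipe: expire the current version of the changed record.
UPDATE customer_dim
SET    current_flag = 'N',
       end_date     = SYSDATE
WHERE  customer_no  = 101
AND    current_flag = 'Y';

-- "Insert" pipe: add the new version with a fresh surrogate key.
INSERT INTO customer_dim
       (customer_key, customer_no, customer_name, start_date, end_date, current_flag)
VALUES (customer_dim_seq.NEXTVAL, 101, 'New Name', SYSDATE, NULL, 'Y');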

Without using the Update Strategy transformation and session options, how can we update our target table?
In the session properties there is an option: Insert, Update, Insert as Update, Update as Update, and so on. By using this we can easily solve it.

Two relational tables are connected to a Source Qualifier transformation. What are the possible errors that can be thrown?
The only two possibilities, as far as I know, are: both tables should have a primary key/foreign key relationship, and both tables should be available in the same schema or the same database.

How do we append records to a flat file in Informatica? In DataStage we have the options to i) overwrite the existing file or ii) append to the existing file.
This is not there in Informatica v7.0, but I have heard that it is included in the latest version, 8.0, where you can append to a flat file. It is about to be shipping in the market.

If you had to split a source-level key going into two separate tables — one as a surrogate key and the other as a primary key — and since Informatica does not guarantee that keys are loaded properly (in order!) into those tables, what are the different ways you could handle this type of situation?
Use a foreign key.

What are partition points?
Partition points mark the thread boundaries in a source pipeline and divide the pipeline into stages.

What is the best way to show metadata (number of rows at source, at target and at each transformation level, plus error-related data) in a report format?
You can select these details from the repository tables; for example, you can use the view REP_SESS_LOG to get this data.
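A rough sketch of querying that repository view; the column names here are from memory and should be treated as assumptions to verify against the MX views of your PowerCenter version.

-- Session-level run statistics from the repository metadata view.
SELECT subject_area,
       workflow_name,
       session_name,
       successful_rows,
       failed_rows,
       first_error_msg
FROM   rep_sess_log
ORDER  BY actual_start DESC;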

What are cost-based and rule-based approaches, and what is the difference?
Cost-based and rule-based approaches are the optimization techniques used by databases when a SQL query needs to be optimized. Oracle basically provides two types of optimizer (indeed three, but we use only these two techniques because the third has some disadvantages). Whenever you process a SQL query in Oracle, the Oracle engine internally reads the query and decides the best possible way of executing it. In this process Oracle follows these optimization techniques:
1. Cost-based optimizer (CBO): if a SQL query can be executed in two different ways (say path 1 and path 2 for the same query), the CBO calculates the cost of each path, analyses which path has the lower execution cost, and executes that path so that it can optimize the query execution.
2. Rule-based optimizer (RBO): this basically follows the rules which are needed for executing a query, so the optimizer runs the query depending on the rules that apply.
Use: if the table you are trying to query has already been analysed, Oracle will go with the CBO; if the table is not analysed, Oracle follows the RBO. For the first time, if the table is not analysed, Oracle will go with a full table scan. (See the SQL sketch after this group of questions.)

What is a mystery dimension?
Using a mystery dimension you maintain the mystery data in your project.

What is MicroStrategy? What is it used for? Can anyone explain it in detail?
MicroStrategy is again a BI tool, which is HOLAP — basically a reporting tool. It has a full range of reporting on the web and also in Windows. You can create two-dimensional reports and also cubes in it.

Can I start and stop a single session in a concurrent batch?
Yes, sure. Just right-click on the particular session and go to the recovery option, or use event wait and event raise.
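As referenced above, a hedged sketch of the two sides; the table and column names are assumptions. Gathering statistics gives the CBO something to work with, while the RULE hint asks Oracle (in the 8i/9i era) to use the rule-based optimizer for one query.

-- Analyse the table so the cost-based optimizer has statistics to work with.
ANALYZE TABLE orders COMPUTE STATISTICS;

-- Ask the rule-based optimizer to be used for this one query.
SELECT /*+ RULE */ order_id, amount
FROM   orders
WHERE  order_id = 1001;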

I want to prepare a questionnaire. The details are as follows:
1. Identify a large company/organization that is a prime candidate for a DWH project (for example a telecommunication company, an insurance company or a bank may be a prime candidate).
2. Give at least four reasons for selecting the organization.
3. Prepare a questionnaire consisting of at least 15 non-trivial questions to collect requirements/information about the organization; this information is required to build the data warehouse.
Can you please tell me what those 15 questions to ask a company, say a telecom company, should be?
First of all meet your sponsors and make a BRD (business requirement document) about their expectations from this data warehouse (the main aim comes from them). For example, they need the customer billing process. Then go to the business management team: they can ask for metrics out of the billing process for their use. Then the management people: monthly usage, minutes used, billing metrics and rate plan, in order to perform sales rep and channel performance analysis and rate plan analysis. It also depends upon the granularity of your data. So your dimension tables can be: Customer (customer id, name, city, state etc.), Sales rep (sales rep number, name, id), Sales org (sales org id), Bill dimension (bill #, bill date, number) and Rate plan (rate plan code); and the fact table can be Billing details (bill #, customer id, sales rep number, sales organization, call details etc.). You can follow a star or snowflake schema in this case.

What is the difference between a cached lookup and an uncached lookup? Can I run the mapping without starting the Informatica server?
The difference between a cached and an uncached lookup is this: when you configure the Lookup transformation to cache the lookup, it stores all the lookup table data in the cache when the first input record enters the Lookup transformation; the select statement executes only once, and the values of each input record are compared with the values in the cache. In an uncached lookup the select statement executes for each input record entering the Lookup transformation, and it has to connect to the database each time a new record enters.
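Illustrative only (the table, column and port names are assumptions): roughly what the two lookup modes issue against the database.

-- Cached lookup: one bulk select when the first row arrives; after that,
-- input rows are matched against the in-memory cache.
SELECT customer_id, customer_name FROM customer_dim;

-- Uncached lookup: one select per input row, using that row's key value.
SELECT customer_name FROM customer_dim WHERE customer_id = :in_customer_id;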

What is the difference between stop and abort?
Stop: if the session you want to stop is part of a batch you must stop the batch, and if that batch is part of a nested batch, stop the outermost batch. Abort: you can issue the abort command; it is similar to the stop command except that it has a 60-second timeout — if the server cannot finish processing and committing data within 60 seconds, it kills the session.

Here is the difference in more depth. ABORT is equivalent to: 1. kill -9 on Unix (NOT kill -7, but yes, kill -9); 2. SIGTERM ABEND (force ABEND) on a mainframe; 3. a Windows force-quit on an application. What does this do? Each session uses SHARED/LOCKED memory blocks (semaphores). The ABORT function kills JUST THE CODE threads, leaving the memory locked, shared and allocated — "taken" from the system. The bad news: most operating systems DO NOT CLEAR that memory, and the only way to get it back is to warm-boot/cold-boot (restart) the Informatica server machine. The good news: the AIX operating system appears to clean up these lost memory blocks. If you find your box running slower and slower over time, or not having enough memory to allocate new sessions, chances are the entire box must be restarted to get the memory back. So if you use ABORT, you are choosing to "LOSE" memory on the server on which Informatica is running (except on AIX).

STOP, on the other hand, is a REQUEST to stop. It fires a request (equivalent to a Control-C in SQL*Plus) to the source database and waits for the source database to clean up, to maintain transaction consistency in the source database. The bigger the data in the source query (i.e. a join of huge tables, a big GROUP BY, a big ORDER BY), the more time it takes to roll back the source query.

It then fires a request to stop against the target database and waits for the target to roll back; the higher the commit point, the more data the target database has to roll back. FINALLY, it shuts the session down. It then cleans up the buffers in memory by releasing the data without writing it to the target, but it WILL run the data all the way through to the target buffers, never sending it to the target database; the bigger the session memory allocations, the longer this takes to clean up. So then the question is: when I ask for a STOP it takes forever — how do I get the session to stop fast? Well, first things first: if you use ABORT, be aware that not only have you lost memory, you now have TWO competing queries on the source system after the same data, and you have locked out any hope of performance in the source database — you are competing for resources with a defunct query that is STILL rolling back. If you intend to re-start the session afterwards, then I suggest ABORT not be used. What if I need the session stopped NOW? Pick up the phone, call the source system DBA and have them KILL the source query IN THE DATABASE. This sends an EOF (end of file) downstream to Informatica, and Informatica will take less time to stop the session.

If a session fails after loading 10,000 records into the target, how can you load the records from the 10,001st record when you run the session next time in Informatica 6.1?
As per my knowledge: running the session in recovery mode will work, but the target load type should be Normal; if it is Bulk then recovery will not work as expected.

Can we run a group of sessions without using the Workflow Manager?
Yes, it is possible to run the group of sessions using the pmcmd command, without using the Workflow Manager.

Explain the use of the Update Strategy transformation.
It flags source records as INSERT, DELETE, UPDATE or REJECT for the target database; the default flag is Insert. This is a must for incremental data loading.
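Not Informatica syntax — just a hedged plain-SQL analogy (all table and column names are assumptions) of what flagging rows as insert or update ultimately achieves at the target.

-- Rows that match an existing key are updated; rows that do not are inserted.
MERGE INTO customer_tgt t
USING customer_stg s
ON (t.customer_id = s.customer_id)
WHEN MATCHED THEN
  UPDATE SET t.customer_name = s.customer_name
WHEN NOT MATCHED THEN
  INSERT (customer_id, customer_name)
  VALUES (s.customer_id, s.customer_name);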

What are mapping parameters and variables, and in which situations can we use them?
If we need to change certain attributes of a mapping every time the session is run, it would be very difficult to edit the mapping and change the attribute each time. So we use mapping parameters and variables and define their values in a parameter file; then we only need to edit the parameter file to change the attribute values, which keeps the process simple. Mapping parameter values remain constant: if we need to change a parameter value we must edit the parameter file manually after every session run. The value of a mapping variable, on the other hand, can be changed by using a variable function; if we need to increment an attribute value by 1 after every session run, we can use a mapping variable.

What is a worklet, what is it used for, and in which situations can we use it?
A set of workflow tasks is called a worklet. Workflow tasks means 1) Timer, 2) Decision, 3) Command, 4) Event Wait, 5) Event Raise, 6) Email, etc. We use it in different situations, but with these tasks only.

What is the difference between a dimension table and a fact table, and what are the different dimension tables and fact tables?
A fact table contains measurable data, with fewer columns and many rows, and it contains a primary key. The different types of fact tables are additive, non-additive and semi-additive. A dimension table contains textual descriptions of the data, with many columns and fewer rows, and it also contains a primary key.

How do you configure a mapping in Informatica?

You should configure the mapping with the least number of transformations and expressions needed to do the most work possible, and you should minimize the amount of data moved by deleting unnecessary links between transformations. For transformations that use a data cache (such as the Aggregator, Joiner, Rank and Lookup transformations), limit the number of connected input/output or output ports; this reduces the amount of data the transformations store in the data cache. You can also perform the following tasks to optimize the mapping: configure single-pass reading, optimize datatype conversions, eliminate transformation errors, optimize transformations and optimize expressions.

What logic will you implement to load the data into one fact table from 'n' number of dimension tables?
Normally everyone uses 1) slowly changing dimensions and 2) slowly growing dimensions.
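A hedged plain-SQL sketch (all names are assumptions) of the usual pattern behind such a fact load: look up the surrogate key from each dimension and insert the measures with those keys.

-- Resolve each dimension's surrogate key, then load the measures.
INSERT INTO sales_fact (date_key, customer_key, product_key, amount)
SELECT d.date_key,
       c.customer_key,
       p.product_key,
       s.amount
FROM   sales_stg s
JOIN   date_dim d     ON d.calendar_date = s.sale_date
JOIN   customer_dim c ON c.customer_no = s.customer_no AND c.current_flag = 'Y'
JOIN   product_dim p  ON p.product_no  = s.product_no  AND p.current_flag = 'Y';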
