
Informatica CCP

1) Can we import VSAM sources in Informatica? Yes.
2) Applied rows and affected rows are always the same in Informatica: False.
3) If you have a mapping with version 4 in folder1, export it, and import it into folder2 that previously had no mappings, what is the version number of that mapping in folder2? Version 1.
4) Data maps and Controller.
5) Which nesting is possible? A worklet in a worklet.
6) Dependent/independent repository objects relate to reusable/non-reusable transformations.
7) How is error logging done in Informatica? With the PMERR tables.
8) Is row-level testing possible in Informatica?
9) Purging of objects: purging means deleting a version permanently from the repository. You do it from either the View History window or the Query Results window.
10) Default permissions when a folder is created:
11) External Loader.
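Item 7 refers to row error logging: when a session logs row errors to a relational database, PowerCenter writes them to the PMERR tables (PMERR_MSG, PMERR_DATA, PMERR_SESS, PMERR_TRANS). Below is a minimal sketch of inspecting PMERR_MSG, using an in-memory SQLite stand-in with a simplified subset of columns (an assumption for the sketch; the real tables live in the error-log database connection configured on the session and carry more columns):

```python
import sqlite3

# Stand-in for the error-log database. Simplified schema (assumption:
# the real PMERR_MSG has more columns, e.g. WORKFLOW_RUN_ID, TRANS_ROW_ID).
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE PMERR_MSG (
           SESS_INST_ID   INTEGER,
           TRANS_NAME     TEXT,
           ERROR_SEQ_NUM  INTEGER,
           ERROR_MSG      TEXT,
           ERROR_TYPE     INTEGER  -- e.g. reader/writer/transformation error
       )"""
)
conn.execute(
    "INSERT INTO PMERR_MSG VALUES "
    "(101, 'EXP_VALIDATE', 1, 'Invalid date in port IN_DATE', 3)"
)

# Typical triage query: error counts per transformation for one session run.
for row in conn.execute(
    """SELECT TRANS_NAME, COUNT(*) AS errors
         FROM PMERR_MSG
        WHERE SESS_INST_ID = 101
        GROUP BY TRANS_NAME"""
):
    print(row)  # ('EXP_VALIDATE', 1)
```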

12) Active and passive mapplets: an active mapplet contains one or more active transformations; a passive mapplet contains only passive transformations.
13) Master gateway: it handles authentication, authorization, and licensing.
14) Status of a user logged in to the repository for the first time: Enabled.
15) When does the PowerCenter server run in exclusive mode? During its downtime.
16) When do you need to validate a mapping? Changes might invalidate the mappings that use mapplets.
17) Which partition type is used for flat files? Pass-through.
18) Types of partitions: pass-through, round robin, hash (hash auto keys and hash user keys), key range, and database partitioning.
   1) Pass-through: rows pass through based on the partition points defined.
   2) Round robin: distributes data evenly across the partitions.
   3) Hash: used where we have groups of data; data is distributed unevenly. Two types: hash auto keys, where the key is generated automatically from the grouped and sorted ports (used for Rank, Sorter, and unsorted Aggregator), and hash user keys, where the user defines the partition key.
   4) Database partitioning: only for source and target tables (and only for DB2 target tables).
19) For flat files, the default partition is pass-through.
20) We can't add the following to a mapplet: Normalizer transformations, COBOL sources, XML sources, or another mapplet.
21) Suitable partitions per transformation for better performance: Filter: choose round robin to balance the load. Source Qualifier: N partitions for N flat files. Sorter: to eliminate overlapping groups, partition on the Sorter, and you can delete the default partition point on the Aggregator. Target: key range, to optimize writing to the target.
22) Can't create partition points for the following: source definitions, Sequence Generator, XML Parser, XML targets, and unconnected transformations.
23) For Rank and unsorted Aggregator, the default partition is hash auto keys.
24) For Transaction Control related transformations, upstream and downstream, the only partition type is pass-through.
25) Workflow recovery: recovering the workflow from the last failed task.
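To make the partition types in items 18 and 21 concrete, here is a small illustration in plain Python (not PowerCenter code) of why round robin balances load while hash keys keep groups together, which is what Rank, Sorter, and unsorted Aggregator need:

```python
from collections import defaultdict

rows = [("apple", 1), ("apple", 2), ("pear", 3), ("plum", 4), ("pear", 5)]
N = 2  # number of partitions

# Round robin: even row counts, but no grouping guarantee.
rr = defaultdict(list)
for i, row in enumerate(rows):
    rr[i % N].append(row)

# Hash keys: the same key always lands in the same partition, so a group
# never straddles partitions (row counts may be uneven).
hk = defaultdict(list)
for row in rows:
    hk[hash(row[0]) % N].append(row)

print(dict(rr))  # balanced; 'apple' rows may be split across partitions
print(dict(hk))  # all 'apple' rows together, all 'pear' rows together
```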

26) If workflow recovery is enabled and there is no Sorter or Rank transformation, then by default pass-through partitioning is set.
27) Deployment groups: can be static or dynamic.
28) Pushdown optimization works with all partition types except database partitioning.

Valid partition types per transformation:
- Expression: round robin, hash user keys, key range, pass-through.
- Sorted Aggregator: pass-through.
- Unsorted Aggregator: hash auto keys, pass-through.
- Joiner: hash auto keys, pass-through.
- Source (relational): key range, pass-through, database partitioning.
- Target (relational): round robin, hash user keys, key range, pass-through, database partitioning (DB2 only).
- Lookup: round robin, hash auto keys, hash user keys, key range, pass-through.
- Normalizer: round robin, hash user keys, key range, pass-through.
- Rank: hash auto keys, pass-through.
- Target (flat file): round robin, hash user keys, key range, pass-through.

29) You must run the Repository Service in exclusive mode to enable version control for the repository. If you have the team-based development option, you can enable version control for a new or existing repository. A versioned repository can store multiple versions of objects. If you enable version control, you can maintain multiple versions of an object, control development of the object, and track changes. You can also use labels and deployment groups to associate groups of objects and copy them from one repository to another. After you enable version control for a repository, you cannot disable it. When you enable version control for a repository, the repository assigns all versioned objects version number 1, and each object has an active status.
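A toy model of the versioning lifecycle in item 29 (and of the deleted-object behavior covered in item 39 below): check-ins add numbered versions, deletion adds a version with status Deleted instead of removing anything, recovery flips the status back to Active, and purging permanently removes a version. This is a conceptual sketch, not the repository API:

```python
class VersionedObject:
    """Toy model of a versioned repository object (conceptual sketch)."""

    def __init__(self, name):
        self.name = name
        # A newly versioned object starts at version 1, status Active.
        self.versions = [{"number": 1, "status": "Active"}]

    def check_in(self):
        # Each check-in adds a new version; old versions stay in history.
        self.versions.append({"number": len(self.versions) + 1, "status": "Active"})

    def delete(self):
        # Delete removes nothing: it adds a new version marked Deleted.
        self.versions.append({"number": len(self.versions) + 1, "status": "Deleted"})

    def recover(self):
        # Recovery changes the status back to Active (visible again).
        self.versions[-1]["status"] = "Active"

    def purge(self, number):
        # Purge permanently removes one version from the history.
        self.versions = [v for v in self.versions if v["number"] != number]

m = VersionedObject("m_load_orders")
m.check_in(); m.delete()
print(m.versions[-1])  # {'number': 3, 'status': 'Deleted'}
m.recover()
print(m.versions[-1])  # {'number': 3, 'status': 'Active'}
```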

30) For CDC sessions, PWXPC controls the timing of commit processing and uses source-based commit processing in PowerCenter.

31) The following properties can affect outbound IDoc session performance:
- Pipeline partitioning
- Outbound IDoc validation
- Row-level processing

32) Terminating conditions: terminating conditions determine when the Integration Service stops reading from the source and ends the session. You can define the following terminating conditions:
- Idle Time
- Packet Count
- Reader Time Limit

33) You can configure cache partitioning for a Lookup transformation. You can create multiple partitions for static and dynamic lookup caches. The cache for a pipeline Lookup transformation is built in an independent pipeline from the pipeline that contains the Lookup transformation. You can create multiple partitions in both pipelines.

34) Cache partitioning for Lookup transformations: use cache partitioning for static and dynamic caches, and named and unnamed caches. When you create a partition point at a connected Lookup transformation, use cache partitioning under the following conditions:
- Use the hash auto-keys partition type for the Lookup transformation.
- The lookup condition must contain only equality operators.
- The database is configured for case-sensitive comparison. For example, if the lookup condition contains a string port and the database is not configured for case-sensitive comparison, the Integration Service does not perform cache partitioning and writes the following message to the session log: CMN_1799 Cache partitioning requires case sensitive string comparisons. Lookup will not use partitioned cache as the database is configured for case insensitive string comparisons.

The Integration Service uses cache partitioning when you create a hash auto-keys partition point at the Lookup transformation. When the Integration Service creates cache partitions, it begins creating caches for the Lookup transformation when the first row of any partition reaches the Lookup transformation. If you configure the Lookup transformation for concurrent caches, the Integration Service builds all caches for the partitions concurrently.

35) If you configure multiple partitions in a session on a grid that uses an uncached Sequence Generator transformation, the sequence numbers the Integration Service generates for each partition are not consecutive.
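Item 35 follows from how an uncached Sequence Generator hands out values: each partition reserves its own block of numbers from the shared sequence, so the values seen within one partition jump between blocks. The block size and mechanics below are assumptions for illustration, not the documented internals:

```python
import itertools

counter = itertools.count(1)  # shared sequence counter
BLOCK = 3                     # assumed block size, for illustration only

def next_block():
    # A partition reserves a whole block of values at a time.
    return [next(counter) for _ in range(BLOCK)]

p1 = next_block()   # partition 1 gets [1, 2, 3]
p2 = next_block()   # partition 2 gets [4, 5, 6]
p1 += next_block()  # partition 1 continues with [7, 8, 9]

print(p1)  # [1, 2, 3, 7, 8, 9] -> not consecutive within the partition
print(p2)  # [4, 5, 6]
```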

36) Types of caches. Use the following types of caches to increase performance:
- Shared cache. You can share the lookup cache between multiple transformations. You can share an unnamed cache between transformations in the same mapping, and a named cache between transformations in the same or different mappings.
- Persistent cache. To save and reuse the cache files, you can configure the transformation to use a persistent cache. Use this feature when you know the lookup table does not change between session runs. Using a persistent cache can improve performance because the Integration Service builds the memory cache from the cache files instead of from the database.

37) Indexing the lookup table. The Integration Service needs to query, sort, and compare values in the lookup condition columns. The index needs to include every column used in a lookup condition. You can improve performance for the following types of lookups:
- Cached lookups. To improve performance, index the columns in the lookup ORDER BY statement. The session log contains the ORDER BY statement.
- Uncached lookups. To improve performance, index the columns in the lookup condition. The Integration Service issues a SELECT statement for each row that passes into the Lookup transformation.

38) If you create multiple partitions in a session with a Sorter transformation, the Integration Service uses cache partitioning. It creates one disk cache for the Sorter transformation and one memory cache for each partition; the Integration Service creates a separate cache for each partition and sorts each partition separately.

Cache partitioning per transformation:
- Aggregator transformation: you create multiple partitions in a session with an Aggregator transformation. You do not have to set a partition point at the Aggregator transformation (see Aggregator Caches).
- Joiner transformation: you create a partition point at the Joiner transformation (see Joiner Caches).
- Lookup transformation: you create a hash auto-keys partition point at the Lookup transformation (see Lookup Caches).
- Rank transformation: you create multiple partitions in a session with a Rank transformation. You do not have to set a partition point at the Rank transformation (see Rank Caches).
- Sorter transformation: you create multiple partitions in a session with a Sorter transformation. You do not have to set a partition point at the Sorter transformation (see Sorter Caches).
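The point of the persistent cache in item 36 is "build once, reload from disk on later runs". A minimal sketch of that idea, where load_lookup_rows() and the cache file name are hypothetical stand-ins for the expensive query against the lookup table and PowerCenter's cache files:

```python
import os
import pickle

CACHE_FILE = "lookup.cache"  # hypothetical cache file name

def load_lookup_rows():
    """Stand-in for the expensive SELECT against the lookup table."""
    return {1: "books", 2: "music", 3: "games"}

def get_lookup_cache():
    # Reuse the saved cache file if it exists (persistent cache);
    # only hit the database when there is no cache file yet.
    if os.path.exists(CACHE_FILE):
        with open(CACHE_FILE, "rb") as f:
            return pickle.load(f)
    cache = load_lookup_rows()
    with open(CACHE_FILE, "wb") as f:
        pickle.dump(cache, f)
    return cache

cache = get_lookup_cache()  # first run: builds and saves the cache
cache = get_lookup_cache()  # later runs: loads from disk, no query
print(cache[2])             # 'music'
```

As the note says, this is only safe when the lookup table does not change between session runs.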

39) When you delete an object in a versioned repository, the repository removes the object from view in the Navigator and the workspace but does not remove it from the repository database. Instead, the repository creates a new version of the object and changes the object status to Deleted.

Recovering a deleted object: you can recover a deleted object by changing the object status to Active. This makes the object visible in the Navigator and workspace. Use a query to search for deleted objects. You use the Repository Manager to recover deleted objects. Complete the following steps to recover a deleted object:
1. Create and run a query to search for deleted objects in the repository. You can search for all objects marked as deleted, or add conditions to narrow the search. Include the following condition when you query the repository for deleted objects: Version Status Is Equal To Deleted.
2. Change the status of the object you want to recover from Deleted to Active.
3. If the recovered object has the same name as another object that you created after you deleted the recovered object, you must rename the object.

The following list shows the Repository Manager commands that you can use to purge versions at the object, folder, or repository level:
- By object version (View History window): single or multiple object versions.
- By object version (Query Results window): single or multiple object versions.
- Based on criteria (Navigator): single or multiple object versions, plus versions at the folder and repository level.
- Based on criteria (View History window): single or multiple object versions.
- Based on criteria (Query Results window): single or multiple object versions.

Task recovery strategies: each task in a workflow has a recovery strategy. When the Integration Service recovers a workflow, it recovers tasks based on the recovery strategy:
- Restart task. When the Integration Service recovers a workflow, it restarts each recoverable task that is configured with a restart strategy. You can configure Session and Command tasks with a restart recovery strategy. All other tasks have a restart recovery strategy by default.
- Fail task and continue workflow. When the Integration Service recovers a workflow, it does not recover the task. The task status becomes failed, and the Integration Service continues running the workflow. You can configure Session and Command tasks with the fail task and continue workflow recovery strategy.
- Resume from the last checkpoint. You can configure a Session task with a resume strategy. The Integration Service recovers a stopped, aborted, or terminated session from the last checkpoint.

The following list describes each recoverable task status:
- Aborted. You abort the workflow or task in the Workflow Monitor or through pmcmd. You can also choose to abort all running workflows when you disable the service or service process in the Administration Console. You can recover the workflow in the Workflow Monitor to recover the task, or you can recover the workflow using pmcmd.
- Stopped. You stop the workflow or task in the Workflow Monitor or through pmcmd. You can also choose to stop all running workflows when you disable the service or service process in the Administration Console. You can recover the workflow in the Workflow Monitor to recover the task, or you can recover the workflow using pmcmd.
- Failed. The Integration Service failed the task due to errors. You can also configure a session to abort based on mapping conditions. You can recover a failed task using workflow recovery when the workflow is configured to suspend on task failure. When the workflow is not suspended, you can recover a failed task by recovering just the session or recovering the workflow from the session. You can fix the error and recover the workflow in the Workflow Monitor, or you can recover the workflow using pmcmd.
- Terminated. The Integration Service stops unexpectedly or loses network connection to the master service process. You can recover the workflow in the Workflow Monitor, or you can recover the workflow using pmcmd after the Integration Service restarts.

Configure a fail recovery strategy if you want to complete the workflow, but you do not want to recover the task.

The following list shows the recovery strategy for each task type:
- Assignment: restart task.
- Command: restart task; fail task and continue workflow. Default is fail task and continue workflow.
- Control: restart task.
- Decision: restart task.
- Email: restart task. The Integration Service might send duplicate email.
- Event-Raise: restart task.
- Event-Wait: restart task.
- Session: resume from the last checkpoint; restart task; fail task and continue workflow. Default is fail task and continue workflow.
- Timer: restart task. If you use a relative time from the start time of a task or workflow, set the timer with the original value less the passed time.
- Worklet: n/a. The Integration Service does not recover a worklet. You can recover the session in the worklet by expanding the worklet in the Workflow Monitor and choosing Recover Task, or you can recover the session without running the rest of the workflow.
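The strategies above reduce to three behaviors at recovery time. A schematic dispatch in plain Python (illustrative pseudologic with invented task records, not the actual Integration Service):

```python
def run(task, from_checkpoint=False):
    print("running", task["name"],
          "from checkpoint" if from_checkpoint else "from start")

def recover_task(task):
    """Schematic recovery dispatch for one task in a recovered workflow."""
    strategy = task["strategy"]
    if strategy == "restart":
        run(task)                         # run the task again from the start
    elif strategy == "resume":
        run(task, from_checkpoint=True)   # Session only: resume from checkpoint
    elif strategy == "fail_and_continue":
        task["status"] = "Failed"         # do not recover; workflow keeps going
    else:
        raise ValueError(strategy)

for t in [{"name": "cmd_copy", "strategy": "restart"},
          {"name": "s_load",   "strategy": "resume"},
          {"name": "s_audit",  "strategy": "fail_and_continue"}]:
    recover_task(t)
```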

Session task strategies: when you configure a session for recovery, you can choose a recovery strategy of fail, restart, or resume:
- Resume from the last checkpoint. The Integration Service saves the session state of operation and maintains target recovery tables. If the session aborts, stops, or terminates, the Integration Service uses the saved recovery information to resume the session from the point of interruption.
- Restart task. The Integration Service runs the session again when it recovers the workflow. When you recover with restart task, you might need to remove the partially loaded data in the target or design a mapping to skip the duplicate rows.
- Fail task and continue workflow. When the Integration Service recovers a workflow, it does not recover the session. The session status becomes failed, and the Integration Service continues running the workflow.

You can use parameters and variables in standalone Command tasks. You can use service, service process, workflow, and worklet variables in standalone Command tasks. You cannot use session parameters, mapping parameters, or mapping variables in standalone Command tasks; the Integration Service does not expand these types of parameters and variables there. In pre- and post-session shell commands, you can use any parameter or variable type that you can define in the parameter file.

If the Load Balancer has more Command tasks to dispatch than the Integration Service can run at the time, the Load Balancer places the tasks it cannot run in a queue. When the Integration Service becomes available, the Load Balancer dispatches tasks from the queue in the order determined by the workflow service level. For more information about how the Load Balancer uses service levels, see the PowerCenter Administrator Guide.

Executing commands in the Command task: the Integration Service runs shell commands in the order you specify them. If you configure multiple commands in a Command task to run on UNIX, each command runs in a separate shell. You can choose to run a command only if the previous command completed successfully, or you can choose to run all commands in the Command task, regardless of the result of the previous command. If you choose to run a command only if the previous command completes successfully, the Integration Service stops running the rest of the commands and fails the task when one of the commands in the Command task fails. If you do not choose this option, the Integration Service runs all the commands in the Command task and treats the task as completed, even if a command fails. If you want the Integration Service to perform the next command only if the previous command completes successfully, select Fail Task if Any Command Fails in the Properties tab of the Command task.
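The Fail Task if Any Command Fails option maps onto ordinary sequential shell execution. A sketch with subprocess (the runner and its return values are illustrative; only the option semantics come from the notes):

```python
import subprocess

def run_command_task(commands, fail_task_if_any_command_fails):
    """Run shell commands in order, mimicking the Command task option."""
    failed = False
    for cmd in commands:
        result = subprocess.run(cmd, shell=True)  # each command in its own shell
        if result.returncode != 0:
            failed = True
            if fail_task_if_any_command_fails:
                return "Failed"  # stop at the first failing command
            # otherwise keep going, regardless of the result
    return "Succeeded" if not failed else "Completed"  # completed even with failures

print(run_command_task(["echo step1", "false", "echo step3"],
                       fail_task_if_any_command_fails=False))  # runs all three
print(run_command_task(["echo step1", "false", "echo step3"],
                       fail_task_if_any_command_fails=True))   # stops after 'false'
```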

Control task: use the Control task to stop, abort, or fail the top-level workflow or the parent workflow based on an input link condition. A parent workflow or worklet is the workflow or worklet that contains the Control task.

Control options:
- Fail Me: marks the Control task as "Failed." The Integration Service fails the Control task if you choose this option. If you choose Fail Me in the Properties tab and choose Fail Parent If This Task Fails in the General tab, the Integration Service fails the parent workflow.
- Fail Parent: marks the status of the workflow or worklet that contains the Control task as failed after the workflow or worklet completes.
- Stop Parent: stops the workflow or worklet that contains the Control task.
- Stop Top-Level Workflow: stops the workflow that is running.
- Abort Parent: aborts the workflow or worklet that contains the Control task.
- Abort Top-Level Workflow: aborts the workflow that is running.
- Fail Top-Level Workflow: fails the workflow that is running.

Partitioning example: to increase performance, specify partition types at the following partition points in the pipeline:
- Source Qualifier transformation: to read data from multiple flat files concurrently, specify one partition for each flat file in the Source Qualifier transformation. Accept the default partition type, pass-through.
- Filter transformation: since the source files vary in size, each partition processes a different amount of data. Set a partition point at the Filter transformation, and choose round-robin partitioning to balance the load going into the Filter transformation.
- Sorter transformation: to eliminate overlapping groups in the Sorter and Aggregator transformations, use hash auto-keys partitioning at the Sorter transformation. This causes the Integration Service to group all items with the same description into the same partition before the Sorter and Aggregator transformations process the rows. You can delete the default partition point at the Aggregator transformation.
- Target: since the target tables are partitioned by key range, specify key range partitioning at the target to optimize writing data to the target.
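Key range partitioning at the target, as in the example above, just routes each row by comparing its partition key to configured range boundaries. A small sketch with invented boundary values:

```python
import bisect

# Upper bounds of each partition's key range (assumed example values):
# partition 0: key < 1000, partition 1: 1000-4999, partition 2: key >= 5000.
BOUNDS = [1000, 5000]

def target_partition(key):
    # bisect finds which configured range the key falls into.
    return bisect.bisect_right(BOUNDS, key)

for order_id in (42, 1000, 999999):
    print(order_id, "-> partition", target_partition(order_id))
# 42 -> partition 0, 1000 -> partition 1, 999999 -> partition 2
```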