Best Practices: Table of Contents

Best Practices
  Configuration Management
    Migration Procedures
  Development Techniques
    Development FAQs
    Data Cleansing
    Data Connectivity Using PowerConnect for BW Integration Server
    Data Connectivity using PowerConnect for Mainframe
    Data Connectivity using PowerConnect for MQSeries
    Data Connectivity using PowerConnect for PeopleSoft
    Data Connectivity using PowerConnect for SAP
    Incremental Loads
    Mapping Design
    Metadata Reporting and Sharing
    Naming Conventions
    Session and Data Partitioning
    Using Parameters, Variables and Parameter Files
  Error Handling
    A Mapping Approach to Trapping Data Errors
    Design Error Handling Infrastructure
    Documenting Mappings Using Repository Reports
    Error Handling Strategies
    Using Shortcut Keys in PowerCenter Designer
  Object Management
    Creating Inventories of Reusable Objects & Mappings
  Operations
    Updating Repository Statistics
    Daily Operations
    Load Validation
    Third Party Scheduler
    Event Based Scheduling
    Repository Administration
    High Availability
  Performance Tuning
    Recommended Performance Tuning Procedures
    Performance Tuning Databases
    Performance Tuning UNIX Systems
    Performance Tuning Windows NT/2000 Systems
    Tuning Mappings for Better Performance
    Tuning Sessions for Better Performance
    Determining Bottlenecks
  Platform Configuration
    Advanced Client Configuration Options
    Advanced Server Configuration Options
    Platform Sizing
  Recovery
    Running Sessions in Recovery Mode
  Project Management
    Developing the Business Case
    Assessing the Business Case
    Defining and Prioritizing Requirements
    Developing a WBS
    Developing and Maintaining the Project Plan
    Managing the Project Lifecycle
  Security
    Configuring Security

Migration Procedures

Challenge

To develop a migration strategy that ensures clean migration between development, test, QA, and production, thereby protecting the integrity of each of these environments as the system evolves.

Description

In every application deployment, a migration strategy must be formulated to ensure a clean migration between development, test, quality assurance, and production. The migration strategy is largely influenced by the technologies that are deployed to support the development and production environments. These technologies include the databases, the operating systems, and the available hardware. Informatica offers flexible migration techniques that can be adapted to fit the existing technology and architecture of various sites, rather than proposing a single fixed migration strategy. The means to migrate work from development to production depends largely on the repository environment, which is either:

• Standalone PowerCenter, or
• Distributed PowerCenter

This Best Practice describes several migration strategies, outlining the advantages and disadvantages of each. It also discusses an XML method provided in PowerCenter 5.1 to support migration in either a Standalone or a Distributed environment.

Standalone PowerMart/PowerCenter

In a standalone environment, all work is performed in a single Informatica repository that serves as the shared metadata store. In this standalone environment, segregating the workspaces ensures that the migration from development to production is seamless. Workspace segregation can be achieved by creating separate folders for each work area. For instance, we might build a single data mart for the finance division within a corporation. In this example, we would create a minimum of four folders to manage our metadata. The folders might look something like the following:

  FINANCE_DEV
  FINANCE_TEST
  FINANCE_QA
  FINANCE_PROD

In this scenario, mappings are developed in the FINANCE_DEV folder. As development is completed on particular mappings, they will be copied one at a time to the FINANCE_TEST folder. New sessions will be created or copied for each mapping in the FINANCE_TEST folder. When unit testing has been completed successfully, the mappings are copied into the FINANCE_QA folder. This process continues until the mappings are integrated into the production schedule. At that point, new sessions will be created in the FINANCE_PROD folder, with the database connections adjusted to point to the production environment.

Introducing shortcuts in a single standalone environment complicates the migration process, but offers an efficient method for centrally managing sources and targets. A common folder can be used for sharing reusable objects such as shared sources, target definitions, and reusable transformations. If a common folder is used, there should be one common folder for each environment (i.e., SHARED_DEV, SHARED_TEST, SHARED_QA, SHARED_PROD).

Migration Example Process

Copying the mappings into the next stage enables the user to promote the desired mapping to test, QA, or production at the lowest level of granularity. If the folder where the mapping is to be copied does not contain the referenced source/target tables or transformations, then these objects will automatically be copied along with the mapping. The advantage of this promotion strategy is that individual mappings can be promoted as soon as they are ready for production. However, because only one mapping at a time can be copied, promoting a large number of mappings into production would be very time consuming. Additional time is required to re-create or copy all sessions from scratch, especially if pre- or post-session scripts are used.

On the initial move to production, if all mappings are completed, the entire FINANCE_QA folder could be copied and renamed to FINANCE_PROD. With this approach, it is not necessary to promote all mappings and sessions individually. After the initial migration, however, mappings will be promoted on a "case-by-case" basis.


Follow these steps to copy a mapping from Development to Test:

1. If using shortcuts, follow these substeps; if not using shortcuts, skip to step 2:
   • Create four common folders, one for each migration stage (COMMON_DEV, COMMON_TEST, COMMON_QA, COMMON_PROD).
   • Copy the shortcut objects into the COMMON_TEST folder.
2. Copy the mapping from Development into Test.
   • In the PowerCenter Designer, open the appropriate test folder, and drag and drop the mapping from the development folder into the test folder.
3. If using shortcuts, follow these substeps; if not using shortcuts, skip to step 4:
   • Open the mapping that uses shortcuts, open it in the Designer and bring in the newly copied shortcut.
   • Using the old shortcut as a model, link all of the input ports to the new shortcut.
   • Using the old shortcut as a model, link all of the output ports to the new shortcut. However, if any of the objects are active, first delete the old shortcut before linking the output ports.
4. Using the newly copied mapping, create or copy a session in the Server Manager to run the mapping (make sure the mapping exists in the current repository first).
   • If copying the mapping, follow the copy session wizard.
   • If creating the mapping, enter all the appropriate information in the Session Wizard.
5. Implement appropriate security, such as:
   • In Development, the owner of the folders should be a user in the development group.
   • In Test and Quality Assurance, change the owner of the Test/QA folders to a user in the Test/QA group.
   • In Production, change the owner of the folders to a user in the Production group.
   • Revoke all rights to Public other than Read for the Production folders.

Performance Implications in the Single Environment

A disadvantage of the single environment approach is that even though the Development, Test, QA, and Production "environments" are stored in separate folders, they all reside on the same server. This can have negative performance implications. If Development or Test loads are running simultaneously with Production loads, the server machine may reach 100 percent utilization and Production performance will suffer. Often, Production loads run late at night, and most Development and Test loads run during the day, so this does not pose a problem. However, situations do arise where performance benchmarking with large volumes or other unusual circumstances can cause test loads to run overnight, contending with the pre-scheduled Production runs.

Distributed PowerCenter

In a distributed environment, there are separate, independent environments (i.e., hardware and software) for Development, Test, QA, and Production. This is the preferred method for handling Development to Production migrations. Because each environment is segregated from the others, work performed in Development cannot impact Test, QA, or Production. With a fully distributed approach, separate repositories provide the same function as the separate folders in the standalone environment described previously. Each repository has a similar name to the folders in the standalone environment; for instance, in our Finance example we would have four repositories: FINANCE_DEV, FINANCE_TEST, FINANCE_QA, and FINANCE_PROD. The mappings are created in the Development repository, moved into the Test repository, and then eventually into the Production environment.

There are three main techniques to migrate from Development to Production, each involving some advantages and disadvantages:

• Repository Copy
• Folder Copy
• Object Copy

Repository Copy

The main advantage to this approach is the ability to copy everything at once from one environment to another, including source and target tables, transformations, mappings, sessions, sequences, parameters/variables, database connections, etc. Another advantage is the ability to automate this process without having users perform it. The final advantage is that everything can be moved without breaking or corrupting any of the objects.

There are, however, three distinct disadvantages to the repository copy method. The first is that everything is moved at once (also an advantage). The trouble with this is that everything is moved, ready or not. For example, there may be 50 mappings in QA but only 40 of them are production-ready; the 10 unready mappings are moved into production along with the 40 production-ready maps. This leads to the second disadvantage: maintenance is required to remove any unwanted or excess objects. The third disadvantage is the need to adjust server variables, sequences, etc.; everything will need to be set up correctly on the new server that will now host the repository.

There are three ways to accomplish the Repository Copy method:

• Copying the Repository
• Repository Backup and Restore
• PMREP

Copying the Repository

The repository copy command is probably the easiest method of migration. To perform this, go to the File menu of the Repository Manager and select Copy Repository. From there the user is prompted to choose the location to which the repository will be copied. The following screen shot shows the dialog box used to input the new location information:

To successfully perform the copy, the user must delete the current repository in the new location, and then the Copy Repository routine must be run. For example, if a user was copying a repository from DEV to TEST, then the TEST repository must first be deleted using the Delete option in the Repository Manager to create room for the new repository.

Repository Backup and Restore

The Backup and Restore Repository is another simple method of copying an entire repository. To perform this function, go to the File menu in the Repository Manager and select Backup Repository. This will create a .REP file containing all repository information. To restore the repository, simply open the Repository Manager on the destination server and select Restore Repository from the File menu. Select the created .REP file to automatically restore the repository in the destination server. To ensure success, be sure to first delete any matching destination repositories, since the Restore Repository option does not delete the current repository.

PMREP

Using the PMREP commands is essentially the same as the Backup and Restore Repository method, except that it is run from the command line. The PMREP utilities can be utilized both from the Informatica Server and from any client machines connected to the server.

The following table documents the available PMREP commands:

The following is a sample of the command syntax used within a batch file to connect to and back up a repository. Using the code example below as a model, scripts can be written to run on a daily basis to perform functions such as connect, backup, restore, etc.
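A minimal sketch of such a connect-and-backup batch script, suitable for a daily cron or scheduler job. The repository name, user, password, and output path are placeholders, and pmrep option names differ between PowerCenter releases, so verify the exact syntax with pmrep help for the installed version.

    #!/bin/sh
    # Nightly repository backup sketch; all values below are placeholders.
    REPO=FINANCE_DEV
    BACKUP_DIR=/informatica/backups

    # Connect to the repository, then write a dated .rep backup file.
    pmrep connect -r "$REPO" -n repo_admin -x repo_password
    pmrep backup -o "$BACKUP_DIR/${REPO}_$(date +%Y%m%d).rep"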

After following one of the above procedures to migrate into Production, follow these steps to convert the repository to Production:

1. Disable sessions that schedule mappings that are not ready for Production, or simply delete the mappings and sessions.
   • Disable the sessions in the Server Manager by opening the session properties, and then clearing the Enable checkbox under the General tab.
   • Delete the sessions in the Server Manager and the mappings in the Designer.
2. Modify the database connection strings to point to the Production sources and targets.
   • In the Server Manager, select Database Connections from the Server Configuration menu.
   • Edit each database connection by changing the connect string to point to the production sources and targets.
   • If using lookup transformations in the mappings and the connect string is anything other than $SOURCE or $TARGET, then the connect string will need to be modified appropriately.
3. Modify the pre- and post-session scripts.
   • In the Server Manager, open the session properties, and from the General tab make the required changes to the pre- and post-session commands as necessary.
4. Implement appropriate security, such as:
   • In Development, ensure that the owner of the folders is a user in the Development group.
   • In Test and Quality Assurance, change the owner of the Test/QA folders to a user in the Test/QA group.
   • In Production, change the owner of the folders to a user in the Production group.
   • Revoke all rights to Public other than Read for the Production folders.

Folder Copy

Copying an entire folder allows you to quickly promote all of the objects in the Development folder to Test, for example. All source and target tables, reusable transformations, mappings, and sessions are promoted at once. Therefore, everything in the folder must be ready to migrate forward. If certain mappings are not ready, then after the folder is copied, developers (or the Repository Administrator) must manually delete these mappings from the new folder.

The advantages of Folder Copy are:

• Easy to move the entire folder and all objects in it
• Detailed Wizard guides the user through the entire process
• There's no need to update or alter any Database Connections, sequences or server variables

The disadvantages of Folder Copy are:

• User needs to be logged into multiple environments simultaneously
• The repository is locked while Folder Copy is being performed

If copying a folder, for example, from QA to Production, follow these steps:

1. If using shortcuts, follow these substeps, otherwise skip to step 2:
   • In each of the dedicated repositories, create a common folder using exactly the same name and case as in the "source" repository.
   • Copy the shortcut objects into the common folder in Production and make sure the shortcut has exactly the same name.
2. Open and connect to either the Repository Manager or Designer. Drag and drop the folder onto the production repository icon within the Navigator tree structure. (To copy the entire folder, drag and drop the folder icon just under the repository level.)
3. Follow the Copy Folder Wizard steps. If a folder with that name already exists, it must be renamed.
4. Point the folder to the correct shared folder if one is being used.

After performing the Folder Copy method, be sure to remember the following steps:

1. Implement appropriate security:
   • In Development, ensure the owner of the folders is a user in the Development group.
   • In Test and Quality Assurance, change the owner of the Test/QA folders to a user in the Test/QA group.
   • In Production, change the owner of the folders to a user in the Production group.
   • Revoke all rights to Public other than Read for the Production folders.
2. Modify the pre- and post-session commands as necessary:
   • In the Server Manager, open the session properties, and from the General tab make the required changes to the pre- and post-session scripts.

Object Copy

Copying mappings into the next stage within a networked environment has many of the same advantages and disadvantages as in the standalone environment, but the process of handling shortcuts is simplified in the networked environment. For additional information, see the previous description of Object Copy for the standalone environment.

Additional advantages and disadvantages of Object Copy in a distributed environment include:

Advantages:

• More granular control over objects

Disadvantages:

• Much more work to deploy an entire group of objects
• Shortcuts must exist prior to importing/copying mappings

Follow these steps to copy a mapping from QA into Production:

1. If using shortcuts, follow these substeps, otherwise skip to step 2:
   • In each of the dedicated repositories, create a common folder with the exact same name and case.
   • Copy the shortcuts into the common folder in Production, making sure the shortcut has the exact same name.
2. Copy the mapping from quality assurance (QA) into production.
   • In the Designer, connect to both the QA and Production repositories and open the appropriate folders in each.
   • Drag and drop the mapping from QA into Production.
3. Create or copy a session in the Server Manager to run the mapping (make sure the mapping exists in the current repository first).
   • If copying the mapping, follow the copy session wizard.
   • If creating the mapping, enter all the appropriate information in the Session Wizard.
4. Implement appropriate security.
   • In Development, ensure the owner of the folders is a user in the Development group.
   • In Test and Quality Assurance, change the owner of the Test/QA folders to a user in the Test/QA group.
   • In Production, change the owner of the folders to a user in the Production group.
   • Revoke all rights to Public other than Read for the Production folders.

Recommendations

Informatica recommends using the following process when running in a three-tiered environment with Development, Test/QA, and Production servers.

For migrating from Development into Test, Informatica recommends using the Object Copy method. This method gives you total granular control over the objects that are being moved. It ensures that the latest development maps can be moved over manually as they are completed. For recommendations on performing this copy procedure correctly, see the steps outlined in the Object Copy section.

When migrating from Test to Production, Informatica recommends using the Repository Copy method. Before performing this migration, all code in the Test server should be frozen and tested. After the Test code is cleared for production, use one of the repository copy methods. (Refer to the steps outlined in the Repository Copy section for recommendations to ensure that this process is successful.) If similar server and database naming conventions are utilized, there will be minimal or no changes required to sessions that are created or copied to the production server.

XML Object Copy Process

Another method of copying objects in a distributed (or centralized) environment is to copy objects by utilizing PM/PC's XML functionality. The XML Object Copy Process works in a manner very similar to the Repository Copy backup and restore method, as it allows you to copy sources, targets, reusable transformations, mappings, and sessions. This method is more useful in the distributed environment because it allows for backup into an XML file that can be moved across the network.

Once the XML file has been created, that XML file can be changed with a text editor to allow more flexibility. For example, if you had to copy one session many times, you would export that session to an XML file. Then, you could edit that file to find everything within the <Session> tag, copy that text, and paste that text within the XML file. You would then change the name of the session you just pasted to be unique. When you import that XML file back into your folder, two sessions will be created.

The following demonstrates the import/export functionality:

1. Objects are exported into an XML file.
2. Objects are imported into a repository from the corresponding XML file.
3. Sessions can be exported and imported into the Server Manager in the same way (the corresponding mappings must exist for this to work).


Development FAQs

Challenge

Using the PowerCenter product suite to most effectively develop, name, and document components of the analytic solution. While the most effective use of PowerCenter depends on the specific situation, this Best Practice addresses some questions that are commonly raised by project teams. It provides answers in a number of areas, including Scheduling, Backup Strategies, Server Administration, and Metadata. Refer to the product guides supplied with PowerCenter for additional information.

Description

The following pages summarize some of the questions that typically arise during development and suggest potential resolutions.

Q: How does source format affect performance? (i.e., is it more efficient to source from a flat file rather than a database?)

In general, a flat file that is located on the server machine loads faster than a database located on the server machine. Fixed-width files are faster than delimited files because delimited files require extra parsing. However, if there is an intent to perform intricate transformations before loading to target, it may be advisable to first load the flat file into a relational database, which allows the PowerCenter mappings to access the data in an optimized fashion by using filters and custom SQL SELECTs where appropriate.

Q: What are some considerations when designing the mapping? (i.e., what is the impact of having multiple targets populated by a single map?)

With PowerCenter, it is possible to design a mapping with multiple targets. You can then load the targets in a specific order using Target Load Ordering. The recommendation is to limit the amount of complex logic in a mapping. Not only is it easier to debug a mapping with a limited number of objects, but such mappings can also be run concurrently and make use of more system resources. When using multiple output files (targets), consider writing to multiple disks or file systems simultaneously. This minimizes disk seeks and applies to a session writing to multiple targets, and to multiple sessions running simultaneously.

Q: What are some considerations for determining how many objects and transformations to include in a single mapping?

There are several items to consider when building a mapping. The business requirement is always the first consideration, regardless of the number of objects it takes to fulfill the requirement. The most expensive use of the DTM is passing unnecessary data through the mapping. It is best to use filters as early as possible in the mapping to remove rows of data that are not needed. Using the filter condition in the Source Qualifier to filter out the rows at the database level is a good way to increase the performance of the mapping. This is the SQL equivalent of the WHERE clause.

Q: What documentation is available for the error codes that appear within the error log files?

Log file errors and descriptions appear in Appendix C of the PowerCenter User Guide. Error information also appears in the PowerCenter Help File within the PowerCenter client applications. For other database-specific errors, consult your Database User Guide.

Log File Organization

Q: Where is the best place to maintain Session Logs?

One often-recommended location is the default /SessLogs/ folder in the Informatica directory, keeping all log files in the same directory.

Scheduling Techniques

Q: What are the benefits of using batches rather than sessions?

Using a batch to group logical sessions minimizes the number of objects that must be managed to successfully load the warehouse. For example, a hundred individual sessions can be logically grouped into twenty batches. The Operations group can then work with twenty batches to load the warehouse, which simplifies the operations tasks associated with loading the targets.

There are two types of batches: sequential and concurrent.

• A sequential batch simply runs sessions one at a time, in a linear sequence. Sequential batches help ensure that dependencies are met as needed. For example, a sequential batch ensures that session1 runs before session2 when session2 is dependent on the load of session1. It's also possible to set up conditions to run the next session only if the previous session was successful, or to stop on errors, and so on.
• A concurrent batch groups logical sessions together, like a sequential batch, but runs all the sessions at one time. This can reduce the load times into the warehouse, taking advantage of hardware platforms' Symmetric Multi-Processing (SMP) architecture. A new batch is sequential by default; to make it concurrent, explicitly select the Concurrent check box.

Other batch options, such as nesting batches within batches, can further reduce the complexity of loading the warehouse. This capability allows for the creation of very complex and flexible batch streams without the use of a third-party scheduler.

Q: Assuming a batch failure, does PowerCenter allow restart from the point of failure?

Yes. When a session or sessions in a batch fail, you can perform recovery to complete the batch. The steps to take vary depending on the type of batch. If the batch is sequential, you can recover data from the session that failed and run the remaining sessions in the batch. If a session within a concurrent batch fails, but the rest of the sessions complete successfully, you can recover data from the failed session targets to complete the batch. However, if all sessions in a concurrent batch fail, you might want to truncate all targets and run the batch again.

Q: What guidelines exist regarding the execution of multiple concurrent sessions / batches within or across applications?

Session/batch execution needs to be planned around two main constraints:

• Available system resources
• Memory and processors

The number of sessions that can run at one time depends on the number of processors available on the server. The load manager is always running as a process. As a general rule, a session will be compute-bound, meaning its throughput is limited by the availability of CPU cycles. Most sessions are transformation intensive, so the DTM always runs. Also, some sessions require more I/O, so they use less processor time. Generally, a session needs about 120 percent of a processor for the DTM, reader, and writer in total.

For concurrent sessions:

• One session per processor is about right; you can run more, but all sessions will slow slightly.
• Remember that other processes may also run on the PowerCenter server machine; overloading a production machine will slow overall performance.

Even after available processors are determined, it is necessary to look at overall system resource usage. Determining memory usage is more difficult than the processor calculation; it tends to vary according to system load and the number of Informatica sessions running. The first step is to estimate memory usage, accounting for:

• Operating system kernel and miscellaneous processes
• Database engine
• Informatica Load Manager

Each session creates three processes: the Reader, Writer, and DTM.

• If multiple sessions run concurrently, each has three processes.
• More memory is allocated for lookups, aggregates, ranks, and heterogeneous joins in addition to the shared memory segment.
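A rough capacity check based on the 120-percent rule of thumb above. The CPU count and the one-processor allowance for the operating system, database engine, and Load Manager are illustrative assumptions; substitute measured figures from your own server.

    CPUS=8            # processors on the PowerCenter server machine (example value)
    PER_SESSION=1.2   # approximate processor demand per session (DTM + reader + writer)

    # Reserve roughly one processor for the OS, database engine, and Load Manager,
    # then divide what remains by the per-session demand.
    awk -v c="$CPUS" -v s="$PER_SESSION" \
        'BEGIN { printf "estimated concurrent sessions: %d\n", (c - 1) / s }'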

At this point, you should have a good idea of what is left for concurrent sessions. It is important to arrange the production run to maximize use of this memory. Remember to account for sessions with large memory requirements; you may be able to run only one large session, or several small sessions concurrently.

Load Order Dependencies are also an important consideration because they often create additional constraints. For example, load the dimensions first, then facts. Also, some sources may only be available at specific times, some network links may become saturated if overloaded, and some target tables may need to be available to end users earlier than others.

Q: Is it possible to perform two "levels" of event notification? One at the application level, and another at the PowerCenter server level to notify the Server Administrator?

The application level of event notification can be accomplished through post-session e-mail. Post-session e-mail allows you to create two different messages, one to be sent upon successful completion of the session, the other to be sent if the session fails. Messages can be a simple notification of session completion or failure, or a more complex notification containing specifics about the session. You can use the following variables in the text of your post-session e-mail:

E-mail Variable   Description
%s                Session name
%l                Total records loaded
%r                Total records rejected
%e                Session status
%t                Table details, including read throughput in bytes/second and write throughput in rows/second
%b                Session start time
%c                Session completion time
%i                Session elapsed time (session completion time - session start time)
%g                Attaches the session log to the message
%a<filename>      Attaches the named file. The file must be local to the Informatica Server. The following are valid filenames: %a<c:\data\sales.txt> or %a</users/john/data/sales.txt>. On Windows NT, you can attach a file of any type. On UNIX, you can only attach text files. If you attach a non-text file, the send might fail. Note: The filename cannot include the Greater Than character (>) or a line break.

The PowerCenter Server on UNIX uses rmail to send post-session e-mail. The repository user who starts the PowerCenter Server must have the rmail tool installed in the path in order to send e-mail.

To verify the rmail tool is accessible:

1. Login to the UNIX system as the PowerCenter user who starts the PowerCenter Server.
2. Type rmail <fully qualified email address> at the prompt and press Enter.
3. Type . to indicate the end of the message and press Enter.
4. You should receive a blank e-mail from the PowerCenter user's e-mail account. If not, locate the directory where rmail resides and add that directory to the path.
5. When you have verified that rmail is installed correctly, you are ready to send post-session e-mail.
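The same verification can be run non-interactively as the UNIX user that starts the PowerCenter Server. The address is a placeholder, and rmail behavior varies slightly across UNIX flavors, so treat this as a quick smoke test rather than a substitute for the steps above.

    # Confirm rmail is on the PATH of the account that starts the server.
    which rmail || echo "rmail not found - add its directory to PATH"

    # Send a one-line test message; the lone "." mirrors the end-of-message
    # marker used in the interactive steps above.
    printf 'Test message from the PowerCenter server account.\n.\n' | rmail admin@example.com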


The post-session e-mail output should look like the following:

Session complete.
Session name: sInstrTest
Total Rows Loaded = 1
Total Rows Rejected = 0

Table Name   Status      Rows Loaded   Rows Rejected   Read Throughput (bytes/sec)   Write Throughput (rows/sec)
t_Q3_sales   Completed   1             0               30                            1

No errors encountered.
Start Time: Tue Sep 14 12:26:31 1999
Completion Time: Tue Sep 14 12:26:41 1999
Elapsed time: 0:00:10 (h:m:s)

This information, or a subset, can also be sent to any text pager that accepts e-mail.

Backup Strategy Recommendation

Q: Can individual objects within a repository be restored from the back-up or from a prior version?

At the present time, individual objects cannot be restored from a back-up using the PowerCenter Server Manager (i.e., you can only restore the entire repository). But it is possible to restore the back-up repository into a different database and then manually copy the individual objects back into the main repository. Refer to Migration Procedures for details on promoting new or changed objects between development, test, QA, and production environments.

Server Administration

Q: What built-in functions does PowerCenter provide to notify someone in the event that the server goes down, or some other significant event occurs?

There are no built-in functions in the server to send notification if the server goes down. However, it is possible to implement a shell script that will sense whether the server is running or not. For example, the command "pmcmd pingserver" will give a return code or status which will tell you if the server is up and running. Using the results of this command as a basis, a complex notification script could be built.
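A minimal watchdog along the lines suggested above, suitable for running from cron. The host, port, and alert address are placeholders, and pmcmd argument order differs between PowerCenter releases, so confirm the pingserver syntax for your installation before relying on it.

    #!/bin/sh
    # Alert by e-mail when the PowerCenter server stops answering pmcmd pingserver.
    if ! pmcmd pingserver powercenter_host 4001 > /dev/null 2>&1
    then
        echo "PowerCenter server did not respond to pingserver at $(date)" \
            | rmail admin@example.com
    fi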


Q: What system resources should be monitored? What should be considered normal or acceptable server performance levels?

The pmprocs utility, which is available for UNIX systems only, shows the currently executing PowerCenter processes. Pmprocs is a script that combines the ps and ipcs commands. It is available through Informatica Technical Support. The utility provides the following information:

- CPID - Creator PID (process ID)
- LPID - Last PID that accessed the resource
- Semaphores - used to sync the reader and writer
- 0 or 1 - shows slot in LM shared memory

(See Chapter 16 in the PowerCenter Administrator's Guide for additional details.) You can also use ps -ef | grep pmserver to see if the server process (the Load Manager) is running.

Q: What cleanup (if any) should be performed after a UNIX server crash? Or after an Oracle instance crash?

If the UNIX server crashes, you should first check to see if the Repository Database is able to come back up successfully. If this is the case, then you should try to start the PowerCenter server. Use the pmserver.err log to check if the server has started correctly.

Metadata

Q: What recommendations or considerations exist as to naming standards or repository administration for metadata that might be extracted from the PowerCenter repository and used in others?

With PowerCenter, you can enter description information for all repository objects, sources, targets, transformations, etc., but the amount of metadata that you enter should be determined by the business requirements. You can also drill down to the column level and give descriptions of the columns in a table if necessary. All information about column size and scale, datatypes, and primary keys are stored in the repository. The decision on how much metadata to create is often driven by project timelines. While it may be beneficial for a developer to enter detailed descriptions of each column, expression, variable, etc., it is also very time consuming to do so. Therefore, this decision should be made on the basis of how much metadata will be required by the systems that use the metadata.

Q: What procedures exist for extracting metadata from the repository?

Informatica offers an extremely rich suite of metadata-driven tools for data warehousing applications. All of these tools store, retrieve, and manage their metadata in Informatica's central repository. The motivation behind the original Metadata Exchange (MX) architecture was to provide an effective and easy-to-use interface to the repository. Today, Informatica and several key Business Intelligence (BI) vendors, including Brio, Business Objects, Cognos, and MicroStrategy, are effectively using the MX views to report and query the Informatica metadata. Informatica does not recommend accessing the repository directly, even for SELECT access. Rather, views have been created to provide access to the metadata stored in the repository.


Data Cleansing

Challenge

Accuracy is one of the biggest obstacles blocking the success of many data warehousing projects. If users discover data inconsistencies, the user community may lose faith in the entire warehouse's data. However, it is not unusual to discover that as many as half the records in a database contain some type of information that is incomplete, inconsistent, or incorrect. The challenge is therefore to cleanse data online, at the point of entry into the data warehouse or operational data store (ODS), to ensure that the warehouse provides consistent and accurate data for business decision making.

Description

Informatica has several partners in the data cleansing arena. The partners and respective tools include the following:

DataMentors - Provides tools that are run before the data extraction and load process to clean source data. Available tools are:

• DMDataFuse - a data cleansing and householding system with the power to accurately standardize and match data.
• DMValiData - an effective data analysis system that profiles and identifies inconsistencies between data and metadata.
• DMUtils - a powerful non-compiled scripting language that operates on flat ASCII or delimited files. It is primarily used as a query and reporting tool. It also provides a way to reformat and summarize files.

FirstLogic - FirstLogic offers direct interfaces to PowerCenter during the extract and load process, as well as providing pre-data extraction data cleansing tools like DataRight and Merge/Purge. The online interface (ACE Library) integrates the TrueName Library and Merge/Purge Library of FirstLogic as Transformation Components, using the Informatica External Procedures protocol. These components can be invoked for parsing, standardization, cleansing, enhancement, and matching of the name and address information during the PowerCenter ETL stage of building a data mart or data warehouse.

Paladyne - The flagship product, Datagration, is an open, flexible data quality system that can repair any type of data (in addition to its name and address) by incorporating custom business rules and logic. It offers data analysis and investigation, and unique probabilistic and fuzzy matching capabilities. Datagration's Data Discovery Message Gateway feature assesses data cleansing requirements using automated data discovery tools that identify data patterns, reveal undocumented business practices, and discover metadata/field content discrepancies. Data Discovery enables Datagration to search through a field of free form data and re-arrange the tokens (i.e., words, data elements) into a logical order. Datagration supports relational database systems and flat files as data sources and any application that runs in batch mode.

Trillium - Trillium's eQuality customer information components (a web enabled tool) are integrated with Informatica's Transformation Exchange modules and reside on the same server as Informatica's transformation engine. Informatica users can invoke Trillium's four data quality components through an easy-to-use graphical desktop object. The four components are:

• Converter: data analysis and investigation module for discovering word patterns and phrases within free form text
• Parser: processing engine for data cleansing, elementizing and standardizing customer data
• Geocoder: an internationally-certified postal and census module for address verification and standardization
• Matcher: a module designed for relationship matching and record linking, which identifies business relationships (such as households) and duplications

Vality - Provides a product called Integrity. Vality is in the process of developing a "TX Integration" to PowerCenter. Delivery of this bridge was originally scheduled for May 2001, but no further information is available at this time.

Integration Examples

The following sections describe how to integrate two of the tools with PowerCenter.

FirstLogic - ACE

The following graphic illustrates a high level flow diagram of the data cleansing process.

Use the Informatica Advanced External Transformation process to interface with the FirstLogic module by creating a "Matching Link" transformation. That process uses the Informatica Transformation Developer to create a new Advanced External Transformation, which incorporates the properties of the FirstLogic Matching Link files. Once a Matching Link transformation has been created in the Transformation Developer, users can incorporate that transformation into any of their project mappings: it's reusable from the repository.

When an Informatica session starts, the transformation is initialized. The initialization sets up the address processing options, allocates memory, and opens the files for processing. This operation is only performed once. As each record is passed into the transformation it is parsed and standardized. Any output components are created and passed to the next transformation. When the session ends, the transformation is terminated. The memory is once again available and the directory files are closed.

The available functions / processes are as follows.

ACE Processing

There are four ACE transformations available to choose from. They will parse, standardize and append address components using FirstLogic's ACE Library. The transformation choice depends on the input record layout. A fourth transformation can provide optional components; this transformation must be attached to one of the three base transformations. The four transforms are:

1. ACE_discrete - where the input address data is presented in discrete fields
2. ACE_multiline - where the input address data is presented in multiple lines (1-6)
3. ACE_mixed - where the input data is presented with discrete city/state/zip and multiple address lines (1-6)
4. Optional transform - which is attached to one of the three base transforms and outputs the additional components of ACE for enhancement


All records input into the ACE transformation are returned as output. ACE returns Error/Status Code information during the processing of each address. This allows the end user to invoke additional rules before the final load is completed.

TrueName Process

TrueName mirrors the ACE transformation options with discrete, multi-line and mixed transformations. A fourth and optional transformation available in this process can be attached to one of the three transformations to provide genderization and match standards enhancements. TrueName will generate error and status codes. Similar to ACE, all records entered as input into the TrueName transformation can be used as output.

Matching Process

The matching process works through one transformation within the Informatica architecture. The input data is read into the Informatica data flow similar to a batch file. All records are read, the break groups created and, in the last step, matches are identified. Users set up their own matching transformation through the PowerCenter Designer by creating an advanced external procedure transformation. Users are able to select which records are output from the matching transformations by editing the initialization properties of the transformation. All matching routines are predefined and, if necessary, the configuration files can be accessed for additional tuning. The five predefined matching scenarios include: individual, family, household (the only difference between household and family is that household doesn't match on last name), firm individual, and firm. Keep in mind that the matching does not do any data parsing; this must be accomplished prior to using this transformation. As with ACE and TrueName, error and status codes are reported.

Trillium

Integration to Trillium's data cleansing software is achieved through the Informatica Trillium Advanced External Procedures (AEP) interface. The AEP modules incorporate the following Trillium functional components.

• Trillium Converter - The Trillium Converter facilitates data conversion such as EBCDIC to ASCII, integer to character, character length modification, literal constant and increasing values. It may also be used to create unique record identifiers, omit unwanted punctuation, or translate strings based on actual data or mask values. A user-customizable parameter file drives the conversion process. The Trillium Converter is a separate transformation that can be used standalone or in conjunction with the Trillium Parser module.
• Trillium Parser - The Trillium Parser identifies and/or verifies the components of free-floating or fixed field name and address data. The primary function of the Parser is to partition the input address records into manageable components in preparation for postal and census geocoding. The parsing process is highly table-driven to allow for customization of name and address identification to specific requirements.
• Trillium Postal Geocoder - The Trillium Postal Geocoder matches an address database to the ZIP+4 database of the U.S. Postal Service (USPS).
• Trillium Census Geocoder - The Trillium Census Geocoder matches the address database to U.S. Census Bureau information.

Each record that passes through the Trillium Parser external module is first parsed and then, optionally, postal geocoded and census geocoded. The level of geocoding performed is determined by a user-definable initialization property.

• Trillium Window Matcher - The Trillium Window Matcher allows the PowerCenter Server to invoke Trillium's deduplication and householding functionality. The Window Matcher is a flexible tool designed to compare records to determine the level of likeness between them. The result of the comparisons is considered a passed, a suspect, or a failed match, depending upon the likeness of data elements in each record, as well as a scoring of their exceptions.

Input to the Trillium Window Matcher transformation is typically the sorted output of the Trillium Parser transformation. The options for sorting include:

• Using the Informatica Aggregator transformation as a sort engine.
• Separating the mappings whenever a sort is required. The sort can be run as a pre/post session command between mappings. Pre/post sessions are configured in the Server Manager.
• Building a custom AEP Transformation to include in the mapping.


Data Connectivity Using PowerConnect for BW Integration Server

Challenge

Understanding how to use the PowerCenter Integration Server for BW (PCISBW) to load data into SAP BW.

Description

PowerCenter supports SAP Business Information Warehouse (BW) as a warehouse target only. PowerCenter Integration Server for BW enables you to include SAP Business Information Warehouse targets in your data mart or data warehouse. PowerCenter uses SAP's Business Application Program Interface (BAPI), SAP's strategic technology for linking components into the Business Framework, to exchange metadata with BW.

Key Differences of Using PowerCenter to Populate BW Instead of an RDBMS

• BW uses the pull model. BW must request data from an external source system, which is PowerCenter, before the source system can send data to BW. PowerCenter uses PCISBW to register with BW first, using SAP's Remote Function Call (RFC) protocol.
• External source systems provide transfer structures to BW. Data is moved and transformed within BW from one or more transfer structures to a communication structure according to transfer rules. Both transfer structures and transfer rules must be defined in BW prior to use; normally this is done from the BW side. An InfoCube is updated by one communication structure, as defined by the update rules.
• Staging BAPIs (an API published and supported by SAP) is the native interface to communicate with BW. Three PowerCenter product suites use this API. PowerCenter Designer uses the Staging BAPIs to import metadata for the target transfer structures. PCISBW uses the Staging BAPIs to register with BW and receive requests to run sessions. PowerCenter Server uses the Staging BAPIs to perform metadata verification and load data into BW.
• Programs communicating with BW use the SAP standard saprfc.ini file to communicate with BW. The saprfc.ini file is similar to the tnsnames file in Oracle or the interface file in Sybase. The PowerCenter Designer reads metadata from BW and the PowerCenter Server writes data to BW.


• BW requires that all metadata extensions be defined in the BW Administrator Workbench. The definition must be imported to Designer. An active structure is the target for PowerCenter mappings loading BW.
• Due to its use of the pull model, BW must control all scheduling. BW invokes the PowerCenter session when the InfoPackage is scheduled to run in BW.
• BW only supports insertion of data into BW. There is no concept of updates or deletes through the Staging BAPIs.
• BW supports two different methods for loading data: IDOC and TRFC (Transactional Remote Functional Call). The methods have to be chosen in BW. When using IDOC, all of the processing required to move data from a transfer structure to an InfoCube (transfer structure to transfer rules to communication structure to update rules to InfoCubes) is done synchronously with the InfoPackage. When using the TRFC method, you have four options for the data target when you execute the InfoPackage: 1) InfoCubes only, 2) ODS only, 3) InfoCubes then ODS, and 4) InfoCubes and ODS in parallel. Loading into the ODS is the fastest, since less processing is performed on the data as it is being loaded into BW (many customers choose this option); you can update the InfoCubes later.

Key Steps To Load Data Into BW

1. Install and Configure PowerCenter and PCISBW Components. The PCISBW server must be installed in the same directory as the PowerCenter Server. On NT you can have only one PCISBW. Informatica recommends installing the PCISBW client tools in the same directory as the PowerCenter Client. For more details on installation and configuration, refer to the Installation Guide.
2. Build the BW Components.
   Step 1: Create an External Source System
   Step 2: Create an InfoSource
   Step 3: Assign an External Source System
   Step 4: Activate the InfoSources
   Hint: You do not normally need to create an external Source System or an InfoSource. The BW administrator or project manager should tell you the name of the external source system and the InfoSource targets.
3. Configure the saprfc.ini file. This is required for PowerCenter and PCISBW to connect to BW. You need the same saprfc.ini on both the PowerCenter Server and the PowerCenter Client.
4. Start the PCISBW server.

   Start the PCISBW server only after you start the PowerCenter Server and before you create the InfoPackage in BW. Register the PCISBW as an RFC server at the SAP gateway so it acts as a listener; it can then receive the request from BW to run a session on the PowerCenter Server:

       pmbwserver [DEST_Entry_for_R_type] [repo_user] [repo_passwd] [port_for_PowerCenter_Server]

   Note: The & sign behind the start command does not work when you start up the PCISBW in a Telnet session.

5. Build mappings. Import the InfoSource into the PowerCenter Warehouse Designer and build a mapping using the InfoSource as a target.
6. Create a Database connection. Use the DEST entry for the A type of the saprfc.ini as the connect string in the PowerCenter Server Manager.
7. Load data. Create a session in PowerCenter and an InfoPackage in BW. You can only start a session from BW (Scheduler in the Administrator Workbench of BW). Before you can start a session, you have to enter the session_name into BW. To do this, open the Scheduler dialog box, go to the "Selection 3rd Party" tab and click on the "Selection Refresh" button (the symbol is a recycling sign), which then prompts you for the session name. To start the session, go to the last tab.

Saprfc.ini

PowerCenter uses two types of entries to connect to BW through the saprfc.ini file:

• Type A. Specifies the BW application server. Used by the PowerCenter Client and PowerCenter Server. The client uses Type A for importing the transfer structure (table definition) from BW into the Designer. The Server uses Type A to verify the tables and write into BW. Use the DEST of the A-type entry as the connect string.
• Type R. Used by the PowerCenter Integration Server for BW (PCISBW).

Set the RFC_INI environment variable on all Windows NT, Windows 2000 and Windows 95/98 machines; RFC_INI is used to locate the saprfc.ini file. Do not use Notepad to edit this file, because Notepad can corrupt the saprfc.ini file.
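A minimal saprfc.ini sketch covering the two entry types described above, written here from a UNIX shell (any editor other than Notepad works equally well). The DEST names, host names, system number, program ID, and gateway service are site-specific placeholders; keep the key names aligned with the saprfc.ini template delivered with the SAP RFC library.

    # Append a Type A and a Type R destination to the file pointed to by RFC_INI.
    cat >> "$RFC_INI" <<'EOF'
    DEST=BW_A
    TYPE=A
    ASHOST=bw_app_host
    SYSNR=00

    DEST=BW_R
    TYPE=R
    PROGID=INFORMATICA.PCISBW
    GWHOST=bw_gateway_host
    GWSERV=sapgw00
    EOF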

Restrictions on Mappings with BW InfoSource Targets

• You cannot use BW as a lookup table.
• You can use only one transfer structure for each mapping.
• You cannot execute stored procedures in a BW target.
• You cannot partition pipelines with a BW target.
• You cannot copy fields that are prefaced with /BIC/ from the InfoSource definition into other transformations.
• You cannot build an update strategy in a mapping. BW supports only inserts; it does not support updates or deletes. You can use an Update Strategy transformation in a mapping, but the PCISBW Server attempts to insert all records, even those marked for update or delete.

Error Messages

PCISBW writes error messages to the screen. In some cases PCISBW will generate a file with the extension *.trc in the PowerCenter Server directory. Look for error messages there.

Data Connectivity using PowerConnect for Mainframe

Challenge

Accessing important, but difficult to deal with, legacy data sources residing on mainframes and AS/400 systems, such as VSAM, IMS and IDMS, as well as relational sources such as DB2 and flat files, without having to write complex extract programs.

Description

When integrated with PowerCenter, PowerConnect for Mainframe and AS400 provides fast and seamless SQL access to non-relational sources, such as VSAM, IMS and IDMS, as well as to relational sources, such as DB2 and flat files. It is an agent-based piece of software infrastructure that must be installed on OS/390 or AS/400 as either a regular batch job or started task. In addition, the PowerConnect client agent must be installed on the same machine as the PowerCenter client or server. The PowerConnect client agent and listener work in tandem and, using TCP/IP, move the data at high speed between the two platforms in either direction. The data can also be compressed and encrypted as it is being moved.

The PowerConnect client agent and PowerCenter communicate via a thin ODBC layer. The ODBC layer works for both Windows and UNIX. PowerConnect for Mainframe/AS400 has a Windows design tool, called Navigator, which can directly import the following information, without using FTP:

• COBOL and PL/1 copybooks
• Database definitions (DBDs) for IMS
• Subschemas for IDMS
• FDTs, DDMs, PREDICT data and ADA-CMP data for ADABAS
• Physical file definitions (DDS's) for AS/400

After the above information has been imported and saved in the datamaps, PowerCenter uses SQL to access the data, which it sees as relational tables at runtime, so that as far as PowerCenter is concerned, the mainframe or AS400 data is just a regular ODBC data source.

Some of the key capabilities of PowerConnect for Mainframe/AS400 include:

• Full EBCDIC-ASCII conversion
• Multiple concurrent data movements
• Support of all binary mainframe datatypes (e.g., packed decimal)
• Ability to handle complex data structures, such as COBOL OCCURS, OCCURS DEPENDING ON, ADABAS MU and PE
• Support for REDEFINES
• Date/time field masking
• Multiple views from single data source
• Bad data checking
• Data filtering

Steps for Using the Navigator

If your objective is to import a COBOL copybook from OS/390, the process is as follows:

1. Create the datamap (give it a name). The datamap is stored on the mainframe. This is the physical view.
2. Specify the copybook name to be imported.
3. Run the import process. A relational table is created. This is the logical view.
4. Review and edit (if necessary) the default table created.
5. Perform a "row test" to source the data directly from OS/390.

Installing PowerConnect for Mainframe/AS400

Note: Be sure to complete the Pre-Install Checklist (included at the end of this document) prior to performing the install.

1. Perform the mainframe or AS/400 install. This includes entering the mainframe or AS/400 license key and updating the configuration file (dbmover.cfg) to change various default settings.
2. Start the Listener on the mainframe or the AS/400 system.
3. Perform the Windows install. This includes entering the Windows license key, updating the configuration file (dbmover.cfg) to add a node entry for communication between the client and the mainframe or AS/400, adding the PowerConnect ODBC driver and setting up a client ODBC DSN.
4. Ping the mainframe or AS/400 from Windows to ensure connectivity.
5. Access sample data in Navigator as a test, as opposed to a full data set.
6. Perform the UNIX or NT install. This includes entering the UNIX or NT license key, updating the configuration file (dbmover.cfg) to change various default settings, adding the PowerConnect ODBC driver and setting up the server ODBC DSN.

Guidelines for Integrating PowerConnect for Mainframe/AS400 with PowerCenter

• In Server Manager, a database connection is required to allow the server to communicate with PowerConnect. This should be of type ODBC. The DSN name and connect string should be the same as PowerConnect's ODBC DSN, which was created when PowerConnect was installed.
• Since the Informatica server communicates with PowerConnect via ODBC, an ODBC license key is required, along with the PowerConnect ODBC DSN that was created when PowerConnect was installed.
• To ensure smooth integration, apply the PowerCenter-PowerConnect for Mainframe/AS400 ODBC EBF.
• The "import from database" option in Designer is needed to pull in sources from PowerConnect.
• In Designer, before importing a source from PowerConnect for the first time, edit the powermrt.ini file by adding this entry at the end of the ODBCDLL section: DETAIL=EXTODBC.DLL
• When creating sessions in the Server Manager, modify the Tablename prefix in the Source Options to include the PowerConnect high-level qualifier (schema name).
• If entering a custom SQL override in the Source Qualifier to filter PowerConnect data, the statement must be qualified with the PowerConnect high-level qualifier (schema name).
• To handle large data sources, increase the default TIMEOUT setting in the PowerConnect configuration files (dbmover.cfg) to (15,1800,1800).

as opposed to a full data set. Queue Manager • • • Informatica connects to Queue Manager to send and receive messages. (2) Message Queue and (3) MQSeries Message. and controls queue operation. MQSeries Architecture MQSeries architecture has three parts: (1) Queue Manager. MQSeries Message has two components: PAGE BP-36 BEST PRACTICES INFORMATICA CONFIDENTIAL . Every message queue belongs to a Queue Manager. this is defined by the application. Applications can also request data using a ‘request message’ on a message queue. Because no open connections are needed between systems. creates queues. You must use actual server manager session to debug a queue mapping. Not Available to PowerCenter when using MQSeries • • • No Lookup on MQSeries sources. they can run independently of one another. and Rank transformations because they will only be performed on one queue. MQSeries enforces No Structure on the content or format of the message. Queue Manager administers queues. No Debug ‘Sessions’. Message Queue is a destination to which messages can be sent. Certain considerations also necessary when using Aggregators. Joiners.Data Connectivity using PowerConnect for MQSeries Challenge Understanding how to use MQSeries Applications in PowerCenter mappings. Description MQSeries Applications communicate by sending each other messages rather than calling each other directly.

A data component. You cannot use a MQ SQ to join two MQ sources. normal. COBOL). Flat File) or Normalizer (COBOL) is required if the data is not in binary. Filter Data – set filter conditions to filter messages using message header ports. MQ SQ – Must be used to read data from an MQ source. and control syncpoint queue clean-up. MSGID is the primary key.000.this is necessary if the file is not binary. You can create a session with an MQSeries mapping using the Session Wizard in the Server Manager. When extracting from a queue you need to use either of two Source Qualifiers: MQ Source Qualifier (MQ SQ) or Associated Source Qualifier (SQ). control incremental extraction. Set Tracing Level .’ Extraction from a Queue In order for PowerCenter to extract from a queue. XML.. Loading to a Queue There are two types of MQ Targets that can be used in a mapping: Static MQ Targets and Dynamic MQ Targets.verbose. Use mapping parameters and variables Associated SQ – either an Associated SQ (XML. Note that certain message headers in a MQSeries message require a predefined set of values assigned by IBM. MQ SQ can perform the following tasks: • • • • • Select Associated Source Qualifier . INFORMATICA CONFIDENTIAL BEST PRACTICES PAGE BP-37 . • Creating and Configuring MQSeries Sessions After you create mappings in the Designer. If an Associated SQ is used. the queue must be in a form of COBOL. XML. then make all adjustments in the session when using MQ Series. Design the mapping as if it were not using MQ Series. control end of file. Once the code is working correctly. Flat File or Binary. (??CORRECT INTERPRETATION??) Use the target definition specific to the format of the message data (i. etc.• • A header. MQ SQ is predefined and comes with 29 message headed fields. you can create and configure sessions in the Server Manager. which contains data about the queue. • Static MQ Targets – Does not load data to the message header fields. used for binary. flat file. which contains the application data or the ‘message body. Set Message Data Size – default 64.e. design the mapping as if it were not using MQ Series. Dynamic – Used for binary targets only and when loading data to a message header. test by actually pulling data from the queue. Only one type of MQ Target can be used in a single mapping. then add the MQ Source and Source Qualifier after the mapping logic has been tested.

MQSTR). the Source Type is set to the following: • • Heterogeneous when there is an associated source definition in the mapping. or COBOL datatypes associated with an MQSeries message data. and click OK. XML. • PAGE BP-38 BEST PRACTICES INFORMATICA CONFIDENTIAL . IBM MQSeries datatypes appear in the MQSeries source and target definitions in a mapping. • And the number of rows per message(only applies to flat file MQ Targets). select the MQ connection to use for the source message queue. • If you load data to a dynamic MQ target. Transformation datatypes. select File Target type from the list. Once this is done. Transformation datatypes are generic datatypes that PowerCenter uses during the transformation process. Configuring MQSeries Targets For Static MQSeries Targets.Configuring MQSeries Sources MQSeries mappings cannot be partitioned if an associated source qualifier is used. When the target is an XML file or XML message data for a target message queue. They appear in all the transformations in the mapping. and the message data is in flat file. Native datatypes also appear in flat file and XML target definitions in the mapping. the target type is automatically set to XML. This indicates that the source data is coming from an MQ source. You can alternate between the two pages to set configurations for each. For MQ Series sources. click Edit Object Properties and enter: • The Connection name of the target message Queue. Native datatypes appear in flat file. COBOL or XML format. Message Queue when there is no associated source definition in the mapping. • Be sure to select the MQ checkbox in Target Options for the Associated file type. • On the MQSeries page. • Enter the Format of the Message Data in the Target Queue (ex. Flat file. Native datatypes. the target type is automatically set to Message Queue. Appendix Information PowerCenter uses the following datatypes in MQSeries mappings: • • IBM MQSeries datatypes. Note that there are two pages on the Source Options dialog: XML and MQSeries. XML and COBOL source definitions.

IBM MQSeries Datatypes MQSeries Datatypes MQBYTE MQCHAR MQLONG Transformation Datatypes BINARY STRING INTEGER INFORMATICA CONFIDENTIAL BEST PRACTICES PAGE BP-39 .

Extracts data during a session by directly running against the physical database tables using PowerCenter server. On UNIX. Extracts data from PeopleSoft systems without compromising existing PeopleSoft security features. PeopleSoft saves metadata in tables that provide a description and logical view of data stored in underlying physical database table. to maintain consistent. Log onto the Server machine on Windows NT/2000 or UNIX and run the setup program to select and install the PowerConnect for PeopleSoft Server. make sure to set up the PATH environment variable to include current directory. PowerConnect for PeopleSoft uses SQL to communicate with the database server. Also. To begin. Description PowerConnect for PeopleSoft supports extraction from PeopleSoft systems. both the PowerCenter Client and Server have to be set up and configured. Certain drivers that enable PowerCenter to extract source data from PeopleSoft systems also need to be installed. reusable metadata across various systems and to understand the process for extracting data and metadata from PeopleSoft sources without having to write and sustain complex SQR extract programs. PowerConnect for PeopleSoft: • • • Imports PeopleSoft source definition metadata via PowerCenter Designer using ODBC to connect to PeopleSoft tables. Installing PowerConnect for PeopleSoft Installation of PowerConnect for PeopleSoft is a multi-step process. The overall process involves: Installing PowerConnect for PeopleSoft for the PowerCenter Server: • • Installation is simple like other Informatica products. PAGE BP-40 BEST PRACTICES INFORMATICA CONFIDENTIAL .Data Connectivity using PowerConnect for PeopleSoft Challenge To maintain data integrity by sourcing/targeting transactional PeopleSoft systems.

• • SQL table. Key columns contain duplicate values. PowerConnect for PeopleSoft also imports the metadata attached to those PeopleSoft structures. precision. departments 10700 and 10800 report to the same manager.Installing PowerConnect for PeopleSoft for the PowerCenter Client: • • Run the setup program and select PowerConnect for PeopleSoft client from the setup list. A tree defines the summarization rules for a database field. data for the PeopleSoft records AE_REQUEST is saved in the PS_AE_REQUEST database table. PowerConnect for PeopleSoft helps in importing from the following PeopleSoft records. PeopleSoft Trees A PeopleSoft tree is an object that defines the groupings and hierarchical relationships between the values of a database field. department 20200 is INFORMATICA CONFIDENTIAL BEST PRACTICES PAGE BP-41 . Has one-to-one relationship with underlying physical tables. The Designer uses the PS source name as the name of the source definition. For example. It specifies how the values of a database file are grouped together for purposes of reporting or for security access. You can use the Tree Manager to define the organizational hierarchy that specifies how each department relates to the other departments. Importing Sources PowerConnect for PeopleSoft aids data integrity by sourcing/targeting transactional PeopleSoft systems and by maintaining reusable consistent metadata across various systems. SQL view. the Designer imports both the PeopleSoft source name and the underlying database table name. For example. PowerConnect for PeopleSoft extracts source data from two types of PeopleSoft objects: • • Records Trees PeopleSoft Records A PeopleSoft record is a table-like structure that contains columns with defined datatypes. Client installation wizard points to the PowerCenter Client directory for the driver installation as a default. The PowerCenter Server uses the underlying database table name to extract source data. with the option to change the location. scale and keys. For example. PS_Record_Name. While importing the PeopleSoft objects. Provides an alternative view of information in one or more database tables. the values of the DEPTID field identify individual departments in your organization. When you import a PeopleSoft record. PeopleSoft names the underlying database tables after the records.

Node Oriented trees: In a node-oriented tree. which organize record definitions for PeopleSoft Query security. and each subsequent level defines a higher level grouping of the tree nodes. Types of Trees The Tree Manager enables you to create many kinds of trees for a variety of purposes. PowerConnect for PeopleSoft extracts data from loose-level and strict level summary trees. in which database field values appear as tree nodes. The next level is made up of tree nodes that group together the detail values. PowerConnect for PeopleSoft extracts data from the following PeopleSoft tree structure types: Detail Trees: In the most basic type of tree. but tree nodes from an existing detail tree. but all trees fall into these major types: • • • • Detail trees. but children can/do exist. the detail values aren't values from a database field. you build a treethat mirrors the hierarchy. Winter Trees: Extracts data from loose-level and strict level node-oriented trees.part of a different division. The Departmental Security tree in PeopleSoft HRMS is a good example of a node-oriented tree. PeopleSoft records are grouped into logical groups. the tree nodes represent the data values from the database field. which provide an alternative way to group nodes from an existing detail tree. the "lowest" level is the level farthest to the right in the Tree Manager window. Node-oriented trees. PowerConnect for PeopleSoft extracts data from loose-level and strictlevel detail trees with static detail ranges. PAGE BP-42 BEST PRACTICES INFORMATICA CONFIDENTIAL . This kind of tree is called a detail tree. This way. Flattening trees When you extract data from a PeopleSoft tree. Query access trees. and holds detail values. without duplicating the entire tree structure. The tree groups the nodes from a specific level in the detail tree differently from the higher levels in the detail tree itself. There are no branches in query trees. Winter trees contain no details ranges. In other words. the PowerCenter Server denormalizes the tree structure. and so on. It uses either of the following methods to denormalize trees. Summary Trees: In a summary tree. a query written by a certain logged in user within a group can only access the rows that are part of the records that are assigned to the group the user has access to. Query access trees: are used to maintain security within the PeopleSoft implementation. in which database field values appear as detail values. Summary trees. which are represented as nodes on the tree.

When creating an ODBC data source. Create mapping 3. Note: If PeopleSoft already establishes database connection names. You can use vertical flattening can be used with both strict-level and loose-level trees. You need a user with read access to PeopleSoft system to access the PeopleSoft physical and metadata tables via an ODBC connection. PowerCenter Client and Server require a database username and password. configure the data source to connect to the underlying database for the PeopleSoft system. create an ODBC data source for each PeopleSoft system you want to access. Winter and Summary Trees Detail. INFORMATICA CONFIDENTIAL BEST PRACTICES PAGE BP-43 . if PeopleSoft system resides on Oracle database. Use the Sources-Import command in PowerCenter Designer’s Source Analyzer tools to import PeopleSoft records and strict-level trees. you need to import its source definition. To import a PeopleSoft source definition. configure an ODBC data source to connect to the Oracle database. You can either create separate users for metadata and source extraction or alternatively use one for both. For example. Winter and Summary Trees Detail. use the PeopleSoft database connection names. Vertical flattening: The PowerCenter Server creates a row for each node or detail range represented in the tree.• • Horizontal flattening: The PowerCenter Server creates a single row for each final branch node or detail range in the tree. Extracting data from PeopleSoft is a three-step process: 1. Winter and Summary Trees Extracting Data from PeopleSoft PowerConnect for PeopleSoft extracts data from PeopleSoft systems without compromising existing PeopleSoft security To access PeopleSoft metadata and data. Create and run a session 1. You can use the database system names for ODBC names. Import or create source definition 2. Importing or Creating Source Definitions Before extracting data from a source. Flattening Method Horizontal Vertical Vertical only Tree Structure Metadata Extraction Method Import Source definition Create Source definition Create Source definition Tree Levels Strict-level tree Strict-level tree Loose-level tree Detail. You can only use horizontal flattening with strict level trees.

Note: PowerConnect for PeopleSoft works with all versions of PeopleSoft systems. When using the default join option between two PeopleSoft tables. there are certain tables that are stored on the database without that prefix. the Navigator displays and organizes sources by the PeopleSoft record or tree name by default. Take care when using user-defined primary-foreign key relationships with trees. An ERP Source Qualifier like the Source Qualifier allows you to use user-defined joins and filters. PeopleSoft etc. 3. so an override and a user-defined join will need to be made to correct this. PeopleTools based applications are table-based systems. select PeopleSoft as the source database type and then select a PeopleSoft database connection as source database. PeopleTools Tables contain information that you define using PeopleTools. When you configure the session. Creating and Running a Session You need a valid mapping. Panels tab.After you import or create a PeopleSoft record or tree. since changes made within Tree Manager may alter such relationships. Panels are referred to as Pages. Denormalization of the tables that made up the tree will be changed. However. enter the table owner name in the session as a source table prefix. Create a Mapping After you import or create the source definition. A database for a PeopleTools application contains three major sets of tables: • • • System Catalog Tables store physical attributes of tables and views. registered PowerCenter Server. In PeopleSoft 8. 2. An ERP Source Qualifier is used for all ERP sources like SAP. Importing Records You can import records from two tabs in the Import from PeopleSoft dialog box: • • Records tab. If the database user is not the owner of the source tables. and a Server Manager database connection to create a session. PowerConnect for PeopleSoft uses the Panels tab to import PeopleSoft 8 Pages. Application Data Tables house the actual data your users will enter and access through PeopleSoft application windows and panels. PAGE BP-44 BEST PRACTICES INFORMATICA CONFIDENTIAL . you connect to an ERP Source Qualifier to represent the records the PowerCenter Server queries from a PeopleSoft source. so simply altering the primary-foreign key relationship within Source Analyzer can be dangerous and it is advisable to re-import the whole tree. the query created will automatically append a PS_ prefix to the PeopleSoft tables. which your database management system uses to optimize performance.

Note: If the mapping contains a Source or ERP Qualifier with a SQL Override. you can partition the sources to improve session performance. INFORMATICA CONFIDENTIAL BEST PRACTICES PAGE BP-45 . performing code page translations when necessary. If you need to extract large amount of source data. the PowerCenter Server ignores the table name prefix setting for all connected sources. Note: You cannot partition an ERP Source Qualifier for PeopleSoft when it is connected to or associated with a PeopleSoft tree. PowerCenter uses SQL to extract data directly from the physical database tables.

This information is stored on the PowerCenter Server in a configuration file PAGE BP-46 BEST PRACTICES INFORMATICA CONFIDENTIAL . analytic applications. a language proprietary to SAP. hierarchies(Uniform & Non Uniform). Sales and Distribution. The database server stores the physical tables in the R/3 system. and run sessions to load SAP R/3 data into data warehouse. or ABAP). A transparent table definition on the application server is represented by a single physical table on the database server. extract data from SAP R/3. while the application server stores the logical tables. Other interfaces between the two include: • Common Program Interface-Communications (CPI-C). Description SAP R/3 is a software system that integrates multiple business applications. Communication Interfaces TCP/IP is the native communication interface between PowerCenter and SAP R/3. All of this is accomplished without writing ABAP code. Pool and cluster tables are logical definitions on the application server that do not have a one-to-one relationship with a physical table on the database server. pool tables. build mappings. PowerConnect for SAP R/3 provides the ability to integrate SAP R/3 data into data warehouses. cluster tables. and Human Resources. The R/3 system is programmed in Advance Business Application Programming-Fourth Generation (ABAP/4. CPI-C communication protocol enables online data exchange and data conversion between R/3 system and PowerCenter . and other applications.Data Connectivity using PowerConnect for SAP Challenge Understanding how to install PowerConnect for SAP R/3. SAP IDOCs and ABAP function modules. Materials Management. such as Financial Accounting. SAP R/3 requires information such as the host name of the application server and SAP gateway. PowerConnect extracts data from transparent tables. To initialize CPI-C communication with PowerCenter.

To execute remote calls from PowerCenter. you can customize properties of the ABAP program that the R/3 server uses to extract source data. Note: if the ABAP programs are installed in the $TMP class then they cannot be transported from development to production. PowerCenter makes remote function calls when importing source definitions. Extract data to buffers. The PowerCenter server accesses the buffers through CPI-C. the SAP protocol for program-toprogram communication. filters.• named sideinfo. Two ABAP programs can be installed for each mapping: • • File mode. When creating a mapping using an R/3 source definition. Generate and install ABAP program. you must use an ERP Source Qualifier. Create a mapping. Transport system. Extraction Process R/3 source definitions can be imported from the logical tables using RFC protocol. The PowerCenter Server accesses the file through FTP or NFS mount. and the service name and gateway on the application server. The Designer calls a function in the R/3 system to import source definitions. This information is stored on the PowerCenter Client and PowerCenter Server in a configuration file named saprfc. 2. Stream Mode. Extracting data from R/3 is a four-step process: 1. Import source definitions. and SAP functions to customize the ABAP program. RFC is the remote communication protocol used by SAP and is based on RPC (Remote Procedure Call). and running file mode sessions. The PowerCenter server uses parameters in the sideinfo file to connect to R/3 system when running the stream mode sessions. Transport ABAP programs from development to production. ABAP program variables. Designer connects to the R/3 application server using RFC. The transport system in SAP is a mechanism to transfer objects developed on one system to another system.ini. installing ABAP program. Remote Function Call (RFC). In the ERP Source Qualifier. SAP R/3 requires information such as the connection type. ABAP code blocks. Extract data to file. You can also use joins. 3. INFORMATICA CONFIDENTIAL BEST PRACTICES PAGE BP-47 . There are two situations when transport system is needed: • • PowerConnect for SAP R/3 installation.

File Mode. PAGE BP-48 BEST PRACTICES INFORMATICA CONFIDENTIAL . the PowerCenter Server accesses the file through FTP or NFS mount and continues processing the session. When the session runs. • Installation and Configuration Steps For SAP R/3 The R/3 system needs development objects and user profiles established to communicate with PowerCenter. the session must be configured to access the file through NFS mount or FTP. When running a session in file mode. In stream mode. the program streams the data to the PowerCenter Server using CPI-C. The program extracts source data and loads it into the buffers. When the file is complete. PowerCenter calls these objects each time it makes a request to the R/3 system. Run transport program that generate unique Ids.4. PowerCenter Server can process data when it is received. The program extracts source data and loads it into the file. the installed ABAP program creates a file on the application server. When a buffer fills. With this method. • Create and Run Session. Preparing R/3 for integration involves the following tasks: • • Transport the development objects on the PowerCenter CD to R/3. (File or Stream mode) Stream Mode. the installed ABAP program creates buffers on the application server.

Preparing PowerCenter for integration involves the following tasks: • • • • Run installation programs on PowerCenter Server and Client machines.ini • • • • DEST – logical name of the R/3 system TYPE – set to “A” to indicate connection to specific R/3 system. Configure the connection files: The sideinfo file on the PowerCenter Server allows PowerCenter to initiate CPI-C with the R/3 system. For PowerCenter The PowerCenter Server and Client need drivers and connection files to communicate with SAP R/3. GWSERV – set to sapgw<system number> PROTOCOL – set to “I” for TCP/IP connection. Configure Connections to run Sessions Configure database connections in the Server Manager to access the SAP R/3 system when running a session. it is located in /etc • • sapdp<system number> <port# of dispatcher service>/TCP sapgw<system number> <port# of gateway service>/TCP The system number and port numbers are provided by the BASIS administrator. INFORMATICA CONFIDENTIAL BEST PRACTICES PAGE BP-49 . ASHOST – host name of the SAP R/3 application server. it is located in \winnt\system32\drivers\etc On UNIX. Required Parameters for sideinfo • • • • • • DEST – logical name of the R/3 system LU – host name of the SAP application server machine TP – set to sapdp<system number> GWHOST – host name of the SAP gateway machine. SYSNR – system number of the SAP R/3 application server. Configuring the Services File On NT. Create a development class for the ABAP programs that PowerCenter installs on the SAP R/3 system.ini file on the PowerCenter Client and Server allows PowerCenter to connect to the R/3 system as an RFC client. Required Parameters for saprfc. Configure FTP connection to access staging file through FTP. The saprfc.• • Establish profiles in the R/3 system for PowerCenter users.

Use of static filters to reduce return rows.Steps to Configure PowerConnect on PowerCenter 1. 6. The transport process creates a development class called ZERP. production program files. dev4x. Creation of ABAP Program variables to represent SAP R/3 structures. Configure the saprfc.ini file. The installation CD includes devinit. 4. 2. Configure the FTP connection to access staging files through FTP. If you use R/3 source to create target definitions in the Warehouse Designer. Insert ABAP Code Block to add more functionality to the ABAP program flow. If your mapping has hierarchy definitions only. The R/3 administration needs to create authorization. Use a text editor. Key Capabilities of PowerConnect for SAP R/3 Some key capabilities of PowerConnect for SAP R/3 include: • • • • • • Import SAP function in the Source Analyzer.x system. and running sessions. edit the keys in the target definition before you build the physical targets. Use of outer join when two or more sources are joined in the ERP Source Qualifier. To avoid these errors. do not install the dev3x transport on a 4. Configure the database connection to run session. Install PowerConnect for SAP R/3 on PowerCenter. dev3x. or dev4x transport on a 3. to transport these objects files on the R/3 system. For example: qualifying table = table1field1 = table2-field2 where the qualifying table is the “last” table in the condition based on the join order. 5. profiles and userids for PowerCenter users. Be sure to note the following considerations regarding SAP R/3: You must have proper authorization on the R/3 system to perform integrated tasks. filters. SAP functions and code blocks. you cannot install the ABAP program. installing programs. structure fields or values in the ABAP program Removal of ABAP program Information from SAP R/3 and the repository when a folder is deleted. (MARA = MARA-MATNR = ‘189’) Customization of the ABAP program flow with joins. 3. you may encounter key constraint errors when you load the data warehouse. • • • • • • • • PAGE BP-50 BEST PRACTICES INFORMATICA CONFIDENTIAL . The R/3 system administrator must use the transport control program tp import. such as WordPad.x system. Import IDOCS. To avoid problems extracting metadata.ini Set the RFC_INI environment variable. Do not use Notepad to edit saprfc. R/3 does not always maintain referential integrity between primary key and foreign key relationship. Configure the sideinfo file.

you have to add that parameter as a string value to the key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\PowerMa rt\Parameters\MiscInfo PowerCenter has the ability to generate the ABAP code for the mapping. you should be able to just switch your mapping to point to either the development or production instance at the session level. You cannot use dynamic filters on IDOC source definitions in the ABAP program flow. So for migration purposes. The transport must need to be created manually within SAP and then transported to the Production environment Given that the development and production SAP systems are identical. When this ABAP code is generated however. it treats it as VARCHAR data and trims the trailing blanks.• • • • • • • • • Do not use the Select Distinct option for LCHR when the length is greater than 2000 and the underlying database is Oracle. If you are upgrading and your mappings use the blanks to compare R/3 data with other data. all you need to do is change the database connections at the session level. This causes the session to fail You cannot generate and install ABAP programs from mapping shortcuts. If a mapping contains both hierarchies and tables. INFORMATICA CONFIDENTIAL BEST PRACTICES PAGE BP-51 . You cannot use an ABAP code block.cfg If PowerCenter server is on NT/2000. you may not want the PowerCenter Server to trim the trailing blanks. The PowerCenter server also trims trailing blanks for CUKY and UNIT data. depending on which environment you’re in. To avoid trimming the trailing blanks. This allows you to compare R/3 data with other source data without having use the RTRIM function. you must generate the ABAP program using file mode. add the flag: AllowTrailingBlanksForSAPCHAR=Yes in the pmserver. When the PowerCenter extracts CHAR data from SAP R/3. an ABAP program variable and a source filter if the ABAP program flow contains a hierarchy and no other sources. it does not automatically create a transport for the ABAP code that it just generated. SAP R/3 stores all CHAR data with trailing blanks.

Slowly changing dimensions– Informatica Wizards for generic mappings (a good start to an incremental load strategy). making the process of loading into the warehouse without compromising its functionality increasingly difficult. reloading. it is important to understand the impact of a suitable incremental load strategy. updates and delete. In this scenario.Records that include columns that specify the intention of the record to be populated into the warehouse. Date stamped data . PAGE BP-52 BEST PRACTICES INFORMATICA CONFIDENTIAL .Data is organized by timestamps. The design should allow data to be incrementally added to the data warehouse with minimal impact to the overall system.Records supplied by the source system include only new or changed records. Data will be loaded into the warehouse based upon the last processing date or the effective date range. Records can be selected based upon this flag to all for inserts. The following pages describe several possible load strategies. Description As time windows shrink and data volumes increase. all records are generally inserted or updated into the data warehouse.Incremental Loads Challenge Data warehousing incorporates large volumes of data. Record Indicator or Flags . History tracking–keeping track of what has been loaded and when. Source Analysis Data sources typically fall into the following possible scenarios: • • • Delta Records . and unloading data. Considerations • • • • Incremental Aggregation –loading deltas into an aggregate table. Error-un/loading data– strategies for recovering. The goal is to create a load strategy that will minimize downtime for the warehouse and allow quick and robust data management.

Joins of Sources to Targets. Take care to ensure that the record exists for updates or deletes or the record can be successfully inserted. Determine if the record exists in the target table. No Key values present . INFORMATICA CONFIDENTIAL BEST PRACTICES PAGE BP-53 . lookup the keys or critical columns in the target relational database.• • Key values are present . keys or surrogate keys. Record indicators can be beneficial when lookups into the target are not necessary.Surrogate keys will be created and all data will be inserted into the warehouse based upon validity of the records. Keep in mind the caches and indexing possibilities 3. Generate a log table of records that have been already inserted into the target system. Here are some considerations: • Compare with the target table. Lookup on target. For example. insert the record as a new row. depending on the need and volume. Identify Which Records Need to be Compared Once the sources are identified. When using joiner transformations. data must be checked against what has already been entered into the warehouse.When only key values are present. All values must be checked before entering the warehouse. inserted as a new record. 2. Using the lookup transformation. More design effort may be needed to manage errors in these situations. This occurs in cases of delta loads. There is no additional overhead produced in moving these sources into the warehouse. Loading Method Data can be loaded directly from these locations into the data warehouse. If it does exist. If the record does not exist. Records are directly joined to the target using Source Qualifier join conditions or using joiner transformations after the source qualifiers (for heterogeneous sources). Load table log. Source Based Load Strategies Complete Incremental Loads in a Single File/Table The simplest method of incremental loads is from flat files or a database in which all records will be loaded. You can use this table for comparison with lookups or joins. • Determine the Method of Comparison 1. or removed (deleted from target or filtered out and not added to the warehouse). it is necessary to determine which records will be entered into the warehouse and how. with no overhead on processing of the sources or sorting the source records. Record indicators. store keys in the a separate table and compare source records against this log table to determine load strategy. This particular strategy requires bulk loads into the warehouse. timestamps. determine if the record needs to be updated. take care to ensure the data volumes are manageable.

the records can be selected based on this effective date and only those records past a certain date will be loaded into the warehouse. Non-relational data can be filtered as records are loaded based upon the effective dates or sequenced keys. For example. To compare the effective dates. Target Based Load Strategies Load Directly into the Target Loading directly into the target is possible when the data will be bulk loaded. Placing the load strategy into the ETL component is much more flexible and controllable by the ETL developers and metadata. The incremental load can be determined by dates greater than the previous load date or data that has an effective key greater than the last key processed. For detailed instruction on how to select dates. The mapping will be responsible for error control. Load Method It may be possible to do a join with the target tables in which new data can be selected and loaded into the target. The alternative is to use control tables to store the date and update the control table after each load. If they exist. alternate keys etc can be used to determine if they have already been entered into the data warehouse. you can also check to see if you need to update these records or discard the source record. recovery and update strategy. refer to Best Practice: Variable and Mapping Parameters. Changed Data based on Keys or Record Information Data that is uniquely identified by keys can be selected based upon selection criteria. Load into Flat Files and Bulk Load using an External Loader PAGE BP-54 BEST PRACTICES INFORMATICA CONFIDENTIAL .Date Stamped Data This method involves data that has been stamped using effective dates or sequences. Loading Method With the use of relational sources. A router transformation or a filter can be placed after the source qualifier to remove old records. It may also be feasible to lookup in the target to see if the data exists or not. records that contain key information such as primary keys. you can use mapping variables to provide the previous date processed. Views can also be created to perform the selection criteria so the processing will not have to be incorporated into the mappings.

in this case. Using Mapping Variables and Parameter Files A mapping variable can be used to perform incremental loading. the databases are switched. The mapping variable is used in the join condition in order to select only the new data that has been entered based on the create_date or the modify_date. After data has been loaded. For the Aggregation option.The mapping will load data directly into flat files. Here are the steps involved in this method: Step 1: Create Mapping Variable In the Informatica Designer. The date must follow one of these formats: • • • • MM/DD/RR MM/DD/RR HH24:MI:SS MM/DD/YYYY MM/DD/YYYY HH24:MI:SS Step 2: Use the Mapping Variable in the Source Qualifier The select statement will look like the following: Select * from tableA Where CREATE_DATE > to_date('$$INCREMENT_DATE'. whichever date can be used to identify a newly inserted record. 'MM-DD-YYYY HH24:MI:SS') Step 3: Use the Mapping Variable in an Expression INFORMATICA CONFIDENTIAL BEST PRACTICES PAGE BP-55 . then select Parameters and Values. making the mirror the active database and the active as the mirror. This method reduces the load times (with less downtime for the data warehouse) and also provide a means of maintaining a history of data being loaded into the target. An external loader can be invoked at that point to bulk load the data into the target. This is a very important issue that everyone should understand.. This is the date at which the load should start. select MAX. The source system must have a reliable date to use. Load into a Mirror Database The data will be loaded into a mirror database to avoid down time of the active data warehouse. Typically this method is only used for updates into the warehouse. make your variable a date/time. go to the menu and select Mappings. state your initial value. In the same screen. with the mapping designer open. Name the variable and.

After the mapping completes. You can view the value of the mapping variable in the session log file. The value of the mapping variable and incremental loading is that it allows the session to use only the new rows of data. then the variable gets that value. PAGE BP-56 BEST PRACTICES INFORMATICA CONFIDENTIAL . So if one row comes through with 9/1/2001.CREATE_DATE) CREATE_DATE is the date for which you would like to store the maximum value.For the purpose of this example. In the expression create a variable port and use the SETMAXVARIABLE variable function and do the following: SETMAXVARIABLE($$INCREMENT_DATE. use an expression to work with the variable functions to set and use the mapping variable. then 9/1/2001 is preserved. If all subsequent rows are LESS than that. No table is needed to store the max(date)since the variable takes care of it. that is the PERSISTENT value stored in the repository for the next run of your session. You can use the variable functions in the following transformations: • • • • Expression Filter Router Update Strategy The variable constantly holds (per row) the max value between source and variable.

Only connect what is used. particularly in the Source Qualifier. Description Although PowerCenter environments vary widely.Mapping Design Challenge Use the PowerCenter tool suite to create an efficient execution environment. use many times. 3.. Use mapplets to leverage the work of critical developers and minimize mistakes when performing similar functions. if you exchange transformations (e. 2. and set a True/False flag. most sessions and/or mappings can benefit from the implementation of common objects and optimization procedures. General Suggestions for Optimizing 1. Consider more shared memory for large number of transformations. use variables to calculate a value used several times.g. Calculate once. a Source Qualifier). Calculate it once in an expression. Session shared memory between 12M and 40MB should suffice. • • Delete unnecessary links between transformations to minimize the amount of data moved. Within an expression. • • • Avoid calculating or testing the same value over and over. Follow these procedures and rules of thumb when creating mappings to help ensure optimization. Reduce the number of transformations • • There is always overhead involved in moving data between transformations. INFORMATICA CONFIDENTIAL BEST PRACTICES PAGE BP-57 . This is also helpful for maintenance.

The rule of thumb is not to cache any table over 500. the server reads the source for each Source Qualifier. This is only true if the standard row byte count is 1. This typically improves performance by 10-20%. a 2. Lookup Transformation Optimizing Tips • • When your source is large. one for delete and one for update/insert). cache lookup table columns for those lookup tables of 500. If you have different Source Qualifiers for the same source (e. 5. • • 9.024 or less.g. Remove or reduce field-level stored procedures. Watch the data types. If the row byte count is more than 1. • • Single-pass reading is the server’s ability to use one Source Qualifier to populate multiple targets. • • • The engine automatically converts compatible types. Sometimes conversion is excessive.. then the 500k rows will have to be adjusted down as the number of bytes increase (i.000 rows or less. • • • Delete unused ports particularly in Source Qualifier and Lookups. For any additional Source Qualifier. Facilitate reuse. and happens on every transformation. aggregators as close to source as possible).. use tracing levels to identify which transformation is causing the bottleneck (use the Test Load option in session properties). When DTM bottlenecks are identified and session optimization has not helped. Only manipulate data that needs to be moved and transformed. 6. the server reads this source.048 byte row PAGE BP-58 BEST PRACTICES INFORMATICA CONFIDENTIAL . Select appropriate driving/master table while using joins. Use variables. 7. If you use field-level stored procedures. Utilize single-pass reads. 8.000 rows.4.024.e. PowerMart has to make a call to that stored procedure for every row so performance will be slow. Use mapplets to encapsulate multiple reusable transformations. The table with the lesser number of rows should be the driving/master table..e. • • • Plan for reusable transformations upfront. placing filters. Minimize data type changes between transformations by planning data flow prior to developing the mapping. Reducing the number of records used throughout the mapping provides better performance Use active transformations that reduce the number of records as early in the mapping as possible (i.

replace with string. 12.. 13. so the lookup table will not be cached in this case).e. Use Flat Files Using flat files located on the server machine loads faster than a database located in the server machine. • • • • • • 14. If caching lookups and performance is poor. If working with data that is not able to return sorted data (e. consider replacing with an unconnected.• • • • can drop the cache row count to 250K – 300K.g. • • • Operations and Expression Optimizing Tips Numeric operations are faster than string operations. || vs. When using a Lookup Table Transformation. less than 5. 10. Cache only lookup tables if the number of lookup calls is more than 10-20% of the lookup table rows. For fewer number of lookup calls. improve lookup performance by placing all conditions that use the equality operator ‘=’ first in the list of conditions under the condition tab.. Avoid date comparisons in lookup.e. Fixed-width files are faster to load than delimited files because delimited files require extra parsing. Test expression timing by replacing with constant. Replace Aggregate Transformation object with an Expression Transformation object and an Update Strategy Transformation for certain types of Aggregations. cache for more than 5-10 lookup calls. Operators are faster than functions (i. 15. Examine mappings via Repository Reporting. Optimize IIF expressions. Web Logs) consider using the Sorter Advanced External Procedure. Suggestions for Using Mapplets INFORMATICA CONFIDENTIAL BEST PRACTICES PAGE BP-59 . 11. CONCAT). 11.000 rows. For small lookup tables. Optimize char-varchar comparisons (i. Replace lookup with decode or IIF (for small sets of values). uncached lookup Review complex expressions. If processing intricate transformations. which allows the PowerCenter mappings to access the data in an optimized fashion by using filters and custom SQL Selects where appropriate. trim spaces before comparing). Minimize aggregate function calls.. do not cache if the number of lookup table rows is big. consider loading first to a source flat file into a relational database.

and configure transformations to complete the desired transformation logic. Each port in an Output transformation connected to another transformation in the mapplet becomes a mapplet output port. There are several unsupported transformations that should not be used in a mapplet. if you have several fact tables that require a series of dimension keys. normalizer. A mapplet can be active or passive depending on the transformations in the mapplet. and PowerMart 3. When you use the mapplet in a mapping. the mapplet provides source data for the mapping and is the first object in the mapping data flow. you can use it in a mapping to represent the transformations within the mapplet. all changes made to the parent mapplet logic are inherited by every ‘child’ instance of the mapplet. joiner. target definitions. After you save a mapplet. Create a mapplet when you want to use a standardized set of transformation logic in several mappings. nonreusable sequence generator. 3. passing data through each transformation in the mapplet as designed. Use one or more source definitions connected to a Source Qualifier or ERP Source Qualifier transformation. Passive mapplets only contain passive transformations. connect. create mapplet output ports. Being aware of this property when using mapplets can save time when debugging invalid mappings. data passes through the mapplet as part of the mapping data flow. 4. Use a mapplet Input transformation to define input ports. To create a mapplet. Active mapplets contain at least one active transformation. Do not reuse mapplets if you only need one or two transformations of the mapplet while all other calculated ports and transformations are obsolete 6. pre. rather than recreate the same lookup logic in each mapping. To pass data out of a mapplet. it expands the mapplet. All uses of a mapplet are all tied to the ‘parent’ mapplet. You can then use the mapplet in each fact table mapping. Source data for a mapplet can originate from one of two places: • Sources within the mapplet. these include: COBOL source definitions. you can create a mapplet containing a series of Lookup transformations to find each dimension key. Sources outside the mapplet. PAGE BP-60 BEST PRACTICES INFORMATICA CONFIDENTIAL . you use an instance of the mapplet. 1. Hence. 2. When you use a mapplet in a mapping. It allows you to reuse transformation logic and can contain as many transformations as necessary.A mapplet is a reusable object that represents a set of transformations. When you use the mapplet in a mapping. add. The server then runs the session as it would any other session. When the server runs a session using a mapplet. For example.5 style lookup functions 5.or post-session stored procedures. • 7.

• • Active mapplets with more than one Output transformations. Passive mapplets with more than one Output transformations. INFORMATICA CONFIDENTIAL BEST PRACTICES PAGE BP-61 . You cannot use only one data flow of the mapplet in a mapping. This means you cannot use only one data flow of the mapplet in a mapping. Reduce to one Output Transformation otherwise you need one target in the mapping for each Output transformation in the mapplet. You need one target in the mapping for each Output transformation in the mapplet.

0. The architecture of the Metadata Reporter is web-based. etc. Informatica PowerCenter contains a Metadata Reporter. The Metadata Reporter allows report access to every Informatica object stored in the repository. • Metadata Reporter The need for the Informatica Metadata Reporter arose from the number of clients requesting custom and complete metadata reports from their repositories. Also. Because Informatica does not support or recommend direct reporting access to the repository. etc. sources. The Metadata Reporter is a web-based application that allows you to run reports against the repository metadata. and primary keys are stored in the repository. The decision on how much metadata to create is often driven by project timelines.Metadata Reporting and Sharing Challenge Using Informatica’s suite of metadata tools effectively in the design of the end-user analysis application. with an Internet PAGE BP-62 BEST PRACTICES INFORMATICA CONFIDENTIAL . targets. even for Select only queries. Description information can be entered for all repository objects. The amount of metadata that is entered is dependent on the business requirements. variable. it will also require a substantial amount of time to do so. the second way of repository metadata reporting is through the use of views written using Metadata Exchange (MX). While it may be beneficial for a developer to enter detailed descriptions of each column. • Effective with the release of version 5. These views can be found in the Informatica Metadata Exchange (MX) Cookbook. all information about column size and scale. expression. transformations. Description The levels of metadata available in the Informatica tool suite are quite extensive. data types. this decision should be made on the basis of how much metadata will be required by the systems that use the metadata. You also can drill down to the column level and give descriptions of the columns in a table if necessary. Therefore. Informatica offers two recommended ways for accessing the repository metadata.

The reports provide information about all types of metadata objects. You can run reports on any repository.3 with Jserv 1. The name of any metadata object that displays on a report links to an associated report. The Metadata Reporter connects to your Informatica repository using JDBC drivers. even without the other Informatica Client tools being installed on that computer. The Metadata Reporter is easily accessible. As you view a report.1 or higher Apache 1.1 Jrun 2. You do not need direct access to the repository database.jdbc:odbc:<data_source_name>) Although the Repository Manager provides a number of Crystal Reports.browser front end. you can generate reports from any machine that has access to the web server where the Metadata Reporter is installed. Syntax . The currently supported web servers are: • • • iPlanet 4.3 (Note: The Metadata Reporter will not run directly on Microsoft IIS because IIS does not directly support servlets. Because the Metadata Reporter is web-based.3. (Note: You can also use the JDBC to ODBC bridge to connect to the repository. You can install the Metadata Reporter on a server running either UNIX or Windows that contains a supported web server. you can generate reports for objects on which you need more information. Ex. The Metadata Reporter contains servlets that must be installed on a web server that runs the Java Virtual Machine and supports the Java Servlet API. The reports are as follows: • • • • Batch Report Executed Session Report Executed Session Report by Date Invalid Mappings Report INFORMATICA CONFIDENTIAL BEST PRACTICES PAGE BP-63 . Make sure the proper JDBC drivers are installed for your database platform. The Metadata Reporter allows you to go easily from one report to another.) The Metadata Reporter is accessible from any computer with a browser that has access to the web server where the Metadata Reporter is installed. your sources or targets or PowerMart or PowerCenter The reports in the Metadata Reporter are customizable. the Metadata Reporter has several benefits: • The Metadata Reporter is comprehensive. The Metadata Reporter allows you to set parameters for the metadata objects to include in the report. • • • The Metadata Reporter provides 15 standard reports that can be customized with the use of parameters and wildcards.

Although SQL provides a powerful mechanism for accessing and manipulating records of data in a relational paradigm. and MicroStrategy. especially multidimensional models for OLAP. MX2 is implemented in C++ and offers an advanced object-based API for accessing and manipulating the PowerCenter Repository from various programming languages. Today. Business Objects. and various relationships. The primary requirements and features of MX2 are: Incorporation of object technology in a COM-based API. Although the overall motivation for creating the second generation of MX remains consistent with the original intent. Extensive metadata content. thus leading to the development of a self-contained API Software Development Kit that can be used independently of the client or server products. The result was a set of relational views that encapsulated the underlying repository tables while exposing the metadata in several categories that were more suitable for external parties. One of the key advantages of MX views is that they are part of the repository database and thus could be used independent of any of the Informatica’s software products. Informatica currently supports the second generation of Metadata Exchange called MX2. it’s not suitable for procedural programming tasks that can be achieved by C. Cognos. or Visual Basic. The same requirement also holds for MX2. PAGE BP-64 BEST PRACTICES INFORMATICA CONFIDENTIAL . Java. Metadata Exchange: The Second Generation (MX2) The MX architecture was intended primarily for Business Intelligence (BI) vendors who wanted to create a PowerCenter-based data warehouse and then display the warehouse metadata through their own products. such as hierarchies. A number of BI tools and upstream data warehouse modeling tools require complex multidimensional metadata. are effectively using the MX views to report and query the Informatica metadata. Self-contained Software Development Kit (SDK). the increasing popularity and use of object-oriented software tools require interfaces that can fully take advantage of the object technology. consult the Metadata Reporter Guide included in your PowerCenter Documentation. the requirements and objectives of MX2 supersede those of MX. Furthermore. Informatica and several key vendors. levels. C++.• • • • • • • • • • • Job Report Lookup Table Dependency Report Mapping Report Mapplet Report Object to Mapping/Mapplet Dependency Report Session Report Shortcut Report Source Schema Report Source to Target Dependency Report Target Schema Report Transformation Report For a detailed description of how to run these reports. including Brio.

Complete encapsulation of the underlying repository organization by means of an API. Support for Microsoft’s UML-based Open Information Model (OIM). One of the main challenges with MX views and the interfaces that access the repository tables is that they are directly exposed to any schema changes of the underlying repository database. Therefore. Ability to write (push) metadata into the repository. Integration with third-party tools.This type of metadata was specifically designed and implemented in the repository to accommodate the needs of our partners by means of the new MX2 interfaces. thus providing an easier mechanism for managing schema evolution. based on the standard Unified Modeling Language (UML). Because of the limitations associated with relational views. As a result. Informatica has worked in close cooperation with Microsoft to ensure that the logical object model of MX2 remains consistent with the data warehousing components of the Microsoft Repository. With the advent of the Internet and distributed computing. maintenance of the MX views and direct interfaces becomes a major undertaking with every major upgrade of the repository. INFORMATICA CONFIDENTIAL BEST PRACTICES PAGE BP-65 . The object-based technology used in MX2 provides the infrastructure needed to implement automatic metadata synchronization and change propagation across different tools that access the Informatica Repository. multi-tier architectures are becoming more widely accepted for accessing and managing metadata and data. MX2 alleviates this problem by offering a set of object-based APIs that are abstracted away from the details of the underlying relational tables. The Microsoft Repository and its OIM schema. MX2 interfaces comply with Microsoft’s Component Object Model (COM) interoperability protocol. MX2 offers the object-based interfaces needed to develop more sophisticated procedural programs that can tightly integrate the repository with the third-party data warehouse modeling and query/reporting tools. Interoperability with other COM-based programs and repository interfaces. As a result. This also facilitates robust metadata exchange with the Microsoft Repository and other software that support this repository. MX could not be used for writing or updating metadata in the Informatica repository. such tasks could only be accomplished by directly manipulating the repository’s relational tables. any existing or future program that is COMcompliant can seamlessly interface with the Informatica Repository by means of MX2. The object-based technology of MX2 supports a multi-tier architecture so that a future Informatica Repository Server could be accessed from a variety of thin client programs running on different operating systems. Framework to support a component-based repository in a multi-tier architecture. could become a de facto general-purpose repository standard. The MX2 interfaces provide metadata write capabilities along with the appropriate verification and validation features to ensure the integrity of the metadata in the repository. Synchronization of metadata based on changes from up-stream and downstream tools. synchronizing changes and updates ensures the validity and integrity of the metadata. Given that metadata will reside in different databases and files in a distributed software environment.

MX2 Architecture

MX2 provides a set of COM-based programming interfaces on top of the C++ object model used by the client tools to access and manipulate the underlying repository. This architecture not only encapsulates the physical repository structure, but also leverages the existing C++ object model to provide an open, extensible API based on the standard COM protocol. The MX2 COM APIs support the PowerCenter XML Import/Export feature and provide a COM-based programming interface with which to import and export repository objects.

MX2 can be installed automatically on Windows 95, 98, or Windows NT using the install program provided with its SDK. After the successful installation of MX2, its interfaces are automatically registered and available to any software through standard COM programming techniques.

Naming Conventions

Challenge

Choosing a good naming standard for the repository and adhering to it.

Description

Repository Naming Conventions

Although naming conventions are important for all repository and database objects, the suggestions in this document focus on the former. Choosing a convention and sticking with it is the key point, and sometimes the most difficult one, in determining naming conventions. Having a good naming convention helps facilitate a smooth migration and improves readability for anyone reviewing the processes.

FAQs

The following paragraphs present some of the questions that typically arise in naming repositories and suggest answers:

Q: What naming convention is recommended for repository folders?

•	Something specific (e.g., Company_Department_Project-Name_Prod) is appropriate if multiple repositories are expected for various projects and/or departments.

Q: What are the implications of numerous repositories, or numerous folders within a repository, given that multiple development groups need to use the PowerCenter server and each group works independently?

•	One consideration for naming conventions is how to segregate different projects and data mart objects from one another. Whenever an object is shared between projects, the object should be stored in a shared work area so that each of the individual projects can utilize a shortcut to the object. Mappings are listed in alphabetical order.

Note that incorporating functions in the object name makes the name more descriptive at a higher level. The drawback is that when an object needs to be modified to incorporate some other business logic, the name no longer accurately describes the object. It is not advisable to rename an object that is currently being used in a production environment, so use descriptive names cautiously and at a high enough level.

The following tables illustrate some naming conventions for transformation objects (e.g., expressions, aggregators, lookups, joiners, etc.) and repository objects (e.g., mappings, sessions, sources, targets, etc.).

Transformation Objects – Naming Convention
Advanced External Procedure Transform: aep_ProcedureName
Aggregator Transform: agg_TargetTableName(s) that leverages the expression and/or a name that describes the processing being done
Expression Transform: exp_TargetTableName(s) that leverages the expression and/or a name that describes the processing being done
External Procedure Transform: ext_ProcedureName
Filter Transform: fil_TargetTableName(s) that leverages the expression and/or a name that describes the processing being done
Joiner Transform: jnr_SourceTable/FileName1_SourceTable/FileName2
Lookup Transform: lkp_LookupTableName
Mapplet: mplt_Description
Mapping Variable: $$Function or process that is being done
Mapping Parameter: $$Function or process that is being done
Normalizer Transform: nrm_TargetTableName(s) that leverages the expression and/or a name that describes the processing being done
Rank Transform: rnk_TargetTableName(s) that leverages the expression and/or a name that describes the processing being done
Router: rtr_TARGETTABLE that leverages the expression and/or a name that describes the processing being done; Group Name: Function_TargetTableName(s) (e.g., INSERT_EMPLOYEE or UPDATE_EMPLOYEE)
Sequence Generator: seq_Function
Source Qualifier Transform: sq_SourceTable1_SourceTable2
Stored Procedure: SpStoredProcedureName
Update Strategy: UpdTargetTableName(s) that leverages the expression and/or a name that describes the processing being done

Repository Objects – Naming Convention
Mapping Name: m_TargetTable1_TargetTable2
Session Name: s_MappingName
Batch Names: bs_BatchName for a sequential batch and bc_BatchName for a concurrent batch

Batch Names

Batch names follow basically the same rules as session names. A prefix, such as 'b_', should be used, and there should be a suffix indicating whether the batch is serial or concurrent.

Folder Names

Folder names should logically group sessions and mappings. The grouping can be based on project, subject area, promotion group, or some combination of these.

Target Table Names

There are often several instances of the same target, usually because of different actions. To make observing a session run easier, targets should be named according to the action being executed on that target. For example, if a mapping has four instances of the CUSTOMER_DIM table according to update strategy (Update, Insert, Delete, Reject), the tables should be named as follows:

•	CUSTOMER_DIM_UPD
•	CUSTOMER_DIM_INS
•	CUSTOMER_DIM_DEL
•	CUSTOMER_DIM_REJ

When looking at a session run, there will be several instances, each with its own successful rows, failed rows, etc.

Port Names

Port names should remain the same as the source unless some other action is performed on the port. In that case, the port should be prefixed with the appropriate name. When you bring a source port into a lookup or expression, the port should be prefixed with "IN_". This helps the user immediately identify the ports that are being input without having to line up the ports with the input checkbox. It is also a good idea to prefix generated output ports; this helps trace the port value throughout the mapping as it travels through many other transformations. For variables inside a transformation, use the prefix 'var_' plus a meaningful name.

Session Postfixes
init_load – Initial Load; indicates this session should be used only one time, to load initial data to the targets.
incr_load – Incremental Load; an update of the target, normally run periodically.
wkly – indicates a weekly run of this session/batch.
mtly – indicates a monthly run of this session/batch.

Shared Objects

Any object within a folder can be shared. These objects are sources, targets, mappings, transformations, and mapplets. To share objects in a folder, the folder must be designated as shared. Once the folder is shared, users are allowed to create shortcuts to objects in the folder. If you have an object that you want to use in several mappings or across multiple folders, like an Expression transformation that calculates sales tax, you can place the object in a shared folder. You can then use the object in other folders by creating a shortcut to the object; in this case the naming convention is 'SC_', for instance SC_mltCREATION_SESSION, SC_DUAL.

ODBC Data Source Names

Set up all Open Database Connectivity (ODBC) data source names (DSNs) the same way on all client machines. PowerCenter uniquely identifies a source by its Database Data Source (DBDS) and its name. The DBDS is the same name as the ODBC DSN, since the PowerCenter Client talks to all databases through ODBC. If ODBC DSNs are different across multiple machines, there is a risk of analyzing the same table using different names, creating confusion for developers, testers, and potentially end users. For example, machine1 has an ODBC DSN Name0 that points to database1. TableA gets analyzed in on machine 1, so TableA is uniquely identified as Name0.TableA in the repository. Machine2 has an ODBC DSN Name1 that points to database1. TableA gets analyzed in on machine 2, so TableA is uniquely identified as Name1.TableA in the repository. The result is that the repository may refer to the same object by multiple names.

Also, refrain from using environment tokens in the ODBC DSN. For example, do not call it dev_db01. As you migrate objects from dev, to test, to QA, to prod, you are likely to wind up with source objects called dev_db01 in the production repository. ODBC database names should clearly describe the database they reference to ensure that users do not incorrectly point sessions to the wrong databases.

Database Connection Information

A good convention for database connection information is UserName_ConnectString. Database Connection names must be very generic to be understandable and to enable a smooth migration. Be careful not to include machine names or environment tokens in the Database Connection Name. Using a convention like User1_DW allows you to know who the session is logging in as and to what database; you should know which DW database it points to based on which repository environment you are working in. For example, if you are creating a session in your QA repository using connection User1_DW, the session will write to the QA DW database because you are in the QA repository.

Using this convention will also allow for easier migration if you choose to use the Copy Folder method. When you use Copy Folder, session information is also copied. If the Database Connection information does not already exist in the folder you are copying to, it is also copied. So, if you use connections with names like Dev_DW in your development repository, they will eventually wind up in your QA, and even in your

Production repository as you migrate folders. Manual intervention would then be necessary to change connection names, user names, passwords, and possibly even connect strings. Instead, if you have a User1_DW connection in each of your three environments, your sessions are ready to go into the QA repository with no manual intervention required. Now, when you copy a folder from Dev to QA, your sessions will automatically hook up to the connection that already exists in the QA repository.

Session and Data Partitioning

Challenge

Improving performance by identifying strategies for partitioning relational tables, XML, COBOL and standard flat files, and by coordinating the interaction between sessions, partitions, and CPUs. These strategies take advantage of the enhanced partitioning capabilities in PowerCenter 5.1.

Description

On hardware systems that are under-utilized, it may be possible to improve performance through parallel execution of the Informatica server engine. However, parallel execution may impair performance on over-utilized systems or systems with smaller I/O capacity. Besides hardware, there are several other factors to consider when determining if a session is an ideal candidate for partitioning. These considerations include source and target database setup, the setup of tablespaces, target type, and mapping design. When these factors have been considered and a partitioning strategy has been selected, the iterative process of adding partitions can begin. (The Designer client tool is used to implement session partitioning; see the Partitioning Rules and Validation section of the Designer Help.)

Follow these three steps when partitioning your session.

1. First, determine if you should partition your session. Parallel execution benefits systems that have the following characteristics:

•	Under-utilized or intermittently used CPUs. To determine if this is the case, check the CPU usage of your machine. UNIX – type VMSTAT 1 10 on the command line. The column "id" displays the percentage utilization of CPU idling during the specified interval without any I/O wait. If there are CPU cycles available (twenty percent or more idle time), this session's performance may be improved by adding a partition.

NT – check the Task Manager Performance tab.

•	Sufficient I/O. To determine the I/O statistics: UNIX – type IOSTAT on the command line. The column "%idle" displays the total percentage of the time that the CPU spends idling (i.e., the unused capacity of the CPU). The column "%iowait" displays the percentage of CPU time spent idling while waiting for I/O requests. NT – check the Task Manager Performance tab.

•	Sufficient memory. If too much memory is allocated to your session, you will receive a memory allocation error. Check to see that you are using as much memory as you can. If the session is paging, increase the memory. To determine if the session is paging, follow these steps: UNIX – type VMSTAT 1 10 on the command line. PI displays the number of pages swapped in from the page space during the specified interval; PO displays the number of pages swapped out to the page space during the specified interval. If these values indicate that paging is occurring, it may be necessary to allocate more memory, if possible. NT – check the Task Manager Performance tab.

Continue adding partitions to the session until the desired performance threshold is met or degradation in performance is observed.

2. The next step is to set up the partition. See the Session and Server Guide for further directions on setting up partitioned sessions. The following are selected hints for session setup:

•	Add one partition at a time. To best monitor performance, add one partition at a time, and note your session settings before you add each partition.
•	Set DTM Buffer Memory. For a session with n partitions, this value should be at least n times the original value for the non-partitioned session.
•	Set cached values for Sequence Generator. For a session with n partitions, there should be no need to use the "Number of Cached Values" property of the Sequence Generator. If you must set this value to a value greater than zero, make sure it is at least n times the original value for the non-partitioned session.
•	Partition the source data evenly. The source data should be partitioned into equal-sized chunks for each partition.
•	Partition tables, if possible. A notable increase in performance can also be realized when the actual source and target tables are partitioned. Work with the DBA to discuss the partitioning of source and target tables.
•	Consider using an external loader. As with any session, using an external loader may increase session performance. You can only use Oracle external loaders for partitioning.

Refer to the Session and Server Guide for more information on using and setting up the Oracle external loader for partitioning.

When you partition a session and there are cached lookups, you must make sure that DTM memory is increased to handle the lookup caches. When you partition a source that uses a static lookup cache, the Informatica Server creates one memory cache for each partition and one disk cache for each transformation. Therefore, the memory requirements grow for each partition. If the memory is not bumped up, the system may start paging to disk, causing degradation in performance.

3. The third step is to monitor the session to see if the partition is degrading or improving session performance.

•	Write throughput. Check the session statistics to see if you have increased the write throughput.
•	Paging. Check to see if the session is now causing the system to page. If the session performance is improved and the session meets the requirements of step 1, add another partition.

Assumptions

The following assumptions pertain to the source and target systems of a session that is a candidate for partitioning. These conditions can help to maximize the benefits that can be achieved through partitioning.

•	Indexing has been implemented on the partition key when using a relational source.
•	All possible constraints are dropped or disabled on relational targets.
•	All possible indexes are dropped or disabled on relational targets.
•	Table spaces and database partitions are properly managed on the target system.
•	Target files are written to the same physical machine that hosts the PMServer process, in order to reduce network overhead and delay.
•	Oracle external loaders are utilized whenever possible (Parallel Mode).
•	Source files are located on the same physical machine as the PMServer process when partitioning flat files, COBOL, and XML, to reduce network overhead and delay.
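The CPU, paging, and I/O checks described above can be scripted so that the before-and-after effect of each added partition is easy to compare. The following is a minimal UNIX sketch using only the vmstat and iostat commands mentioned in this Best Practice; the snapshot directory is an assumption and should be adjusted for your environment.

#!/bin/sh
# partition_snapshot.sh -- capture CPU, paging and I/O statistics before and
# after adding a partition (directory below is illustrative).
SNAPSHOT_DIR=/tmp/partition_checks
STAMP=`date +%Y%m%d%H%M%S`
mkdir -p $SNAPSHOT_DIR

echo "CPU and paging statistics (vmstat 1 10):" >  $SNAPSHOT_DIR/stats_$STAMP.txt
vmstat 1 10                                     >> $SNAPSHOT_DIR/stats_$STAMP.txt
echo "I/O statistics (iostat):"                 >> $SNAPSHOT_DIR/stats_$STAMP.txt
iostat                                          >> $SNAPSHOT_DIR/stats_$STAMP.txt

# Review the "id" column for idle CPU (twenty percent or more suggests room for
# another partition) and the pi/po columns for paging before adding a partition.
echo "Snapshot written to $SNAPSHOT_DIR/stats_$STAMP.txt"

Run the script once before and once after each partition is added, and compare the snapshots against the session statistics noted in step 3.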

Using Parameters, Variables and Parameter Files

Challenge

Understanding how parameters, variables, and parameter files work, and using them for maximum efficiency.

Description

Prior to the release of PowerCenter 5.x, the only variables inherent to the product were defined for specific transformations, along with those Server variables that were global in nature. Transformation variables were defined as variable ports in a transformation and could only be used in that specific transformation object (e.g., Expression, Aggregator, and Rank transformations). Similarly, global parameters defined within Server Manager would affect the subdirectories for Source Files, Target Files, and Log Files.

PowerCenter 5.x has made variables and parameters available across the entire mapping rather than for a specific transformation object. In addition, it provides built-in parameters for use within Server Manager. Using parameter files, these values can change from session-run to session-run.

Variables, by definition, are objects that can change value dynamically.

Mapping Variables

You declare mapping variables in PowerCenter Designer using the menu option Mappings -> Parameters and Variables. After mapping variables are selected, you use the pop-up window to create a variable by specifying its name, data type, initial value, aggregation type, precision, and scale. This is similar to creating a port in most transformations.

Informatica added four functions to affect change to mapping variables:

•	SetVariable
•	SetMaxVariable
•	SetMinVariable
•	SetCountVariable

A mapping variable can store the last value from a session run in the repository, to be used as the starting value for the next session run.

1. Name. The name of the variable should be descriptive and preceded by '$$' (so that it is easily identifiable as a variable). A typical variable name is $$Procedure_Start_Date.

2. Aggregation Type. This entry creates specific functionality for the variable and determines how it stores data. For example, with an aggregation type of Max, the value stored in the repository would be the max value across ALL session runs until the value is deleted.

3. Initial Value. This value is used during the first session run when there is no corresponding and overriding parameter file. This value is also used if the stored repository value is deleted. If no initial value is identified, then a data type-specific default value is used.

4. Order of Evaluation. The start value is the value of the variable at the start of the session. The start value can be a value defined in the parameter file for the variable, a value saved in the repository from the previous run of the session, a user-defined initial value for the variable, or the default value based on the variable data type. The PowerCenter Server looks for the start value in the following order:

1. Value in session parameter file
2. Value saved in the repository
3. Initial value
4. Default value

Variable values are not stored in the repository when the session:

•	Fails to complete.
•	Is configured for a test load.
•	Is a debug session.
•	Runs in debug mode and is configured to discard session output.

Mapping Parameters and Variables

Since parameter values do not change over the course of the session run, the value used is based on:

•	Value in session parameter file
•	Initial value
•	Default value

Once defined, mapping parameters and variables can be used in the Expression Editor section of the following transformations:

•	Expression
•	Filter
•	Router
•	Update Strategy

Mapping parameters and variables can also be used within the Source Qualifier in the SQL query, user-defined join, and source filter sections.

Parameter Files

Parameter files can be used to override values of mapping variables or mapping parameters, or to define Server-specific values for a session run. A parameter file is declared for use by a session, either within the session properties, at the outer-most batch a session resides in, or as a parameter value when utilizing the PMCMD command.

Parameter files have a very simple and defined format: they are divided into session-specific sections, with each section defined within brackets as FOLDER.SESSION_NAME. The naming is case sensitive. Parameter files do not globally assign values; parameters or variables must be defined in the mapping to be used. A line can be 'REMed' out by placing a semicolon at the beginning.

Some parameter file examples:

[USER1.s_m_subscriberstatus_load]
$$Post_Date_Var=10/04/2001

[USER1.s_test_var1]
$$PMSuccessEmailUser=XXX@informatica.com
;$$Help_User

The following parameters and variables can be defined or overridden within the parameter file:

String Mapping Parameter: $$State = MA
Datetime Mapping Variable: $$Time = 10/1/2000 00:00:00
Source File (Session Parameter): $InputFile1 = Sales.txt
Database Connection (Session Parameter): $DBConnection_Target = Sales (database connection)
Session Log File (Session Parameter): $PMSessionLogFile = d:/session logs/firstrun.txt
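Pulling the entries above together, a complete parameter file for a single session might look like the following sketch; the folder and session names are illustrative, not taken from a real project.

[USER1.s_m_load_sales]
; mapping parameter and mapping variable overrides
$$State=MA
$$Time=10/1/2000 00:00:00
; session parameter overrides
$InputFile1=Sales.txt
$DBConnection_Target=Sales
$PMSessionLogFile=d:/session logs/firstrun.txt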

Parameters and variables cannot be used in the following:

•	Lookup SQL Override
•	Lookup Location (Connection String)
•	Schema/Owner names within Target Objects/Session Properties

Example: Variables and Parameters in an Incremental Strategy

Variables and parameters can enhance incremental strategies. The following example uses a mapping variable, an Expression transformation object, and a parameter file for restarting.

Scenario

Company X wants to start with an initial load of all data, but wants subsequent process runs to select only new information. The source data has an inherent Post_Date, defined within a column named Date_Entered, that can be used. The process will run once every twenty-four hours.

Sample Solution

Create a mapping with source and target objects. From the menu, create a new mapping variable named $$Post_Date with the following attributes:

•	TYPE – Variable
•	DATATYPE – Date/Time
•	AGGREGATION TYPE – MAX
•	INITIAL VALUE – 01/01/1900

Note that there is no need to encapsulate the INITIAL VALUE with quotation marks. However, if this value is used within the Source Qualifier SQL, it is necessary to use the native RDBMS function to convert it (e.g., TO_DATE(--,--)).

Within the Source Qualifier transformation, use the following in the Source Filter attribute:

DATE_ENTERED > to_Date('$$Post_Date','MM/DD/YYYY HH24:MI:SS')

Also note that the initial value 01/01/1900 will be expanded by the PowerCenter Server to 01/01/1900 00:00:00, hence the need to convert the parameter to a date/time.

The next step is to bring $$Post_Date and Date_Entered into an Expression transformation. This is where the function for setting the variable will reside. An output port named Post_Date is created with a data type of date/time. In the expression code section, place the following function:

SETMAXVARIABLE($$Post_Date,DATE_ENTERED)

The function evaluates each value for DATE_ENTERED and updates the variable with the Max value to be passed forward. For example:

DATE_ENTERED    Resultant POST_DATE
9/1/2000        9/1/2000
10/30/2001      10/30/2001
9/2/2000        10/30/2001

Consider the following with regard to this functionality:

1. In order for the function to assign a value and ultimately store it in the repository, the port must be connected to a downstream object. It need not go to the target, but it must go to another Expression transformation. The reason is that the memory will not be instantiated unless it is used in a downstream transformation object.

2. In order for the function to work correctly, the rows have to be marked for insert. If the mapping is an update-only mapping (i.e., Treat Rows As is set to Update in the session properties), the function will not work. In this case, make the session Data Driven and add an Update Strategy after the transformation containing the SETMAXVARIABLE function, but before the Target.

3. If the intent is to store the original Date_Entered per row, and not the evaluated date value, then add an ORDER BY clause to the Source Qualifier. That way the dates are processed and set in order, and the data is preserved.

The first time this mapping is run, the SQL will select from the source where Date_Entered is greater than 01/01/1900, providing an initial load. As data flows through the mapping, the variable gets updated to the Max Date_Entered it encounters. Upon successful completion of the session, the variable is updated in the Repository for use in the next session run. To view the current value for a particular variable associated with the session, right-click on the session and choose View Persistent Values. The following graphic shows that after the initial run, the Max Date_Entered was 02/03/1998. The next time this session is run, based on the variable in the Source Qualifier filter, only sources where Date_Entered > 02/03/1998 will be processed.
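For reference, the SQL that the Source Qualifier generates with the variable-based filter (and the optional ORDER BY from point 3) might look like the following sketch; the ORDERS table and column list are assumptions used only to illustrate the shape of the statement.

SELECT ORDERS.ORDER_ID, ORDERS.DATE_ENTERED, ORDERS.CUSTOMER_ID
FROM   ORDERS
WHERE  ORDERS.DATE_ENTERED > to_Date('$$Post_Date','MM/DD/YYYY HH24:MI:SS')
ORDER BY ORDERS.DATE_ENTERED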

Resetting or Overriding Persistent Values

To reset the persistent value to the initial value declared in the mapping, view the persistent value from Server Manager (see the graphic above) and press Delete Values. This deletes the stored value from the Repository, causing the Order of Evaluation to use the Initial Value declared in the mapping.

To override the variable, use a parameter file. There are two basic ways to accomplish this:

•	Create a generic parameter file, place it on the server, and point all sessions to that parameter file. A session may (or may not) have a variable, and the parameter file need not have variables and parameters defined for every session 'using' the parameter file. If a session run is needed for a specific date, either change, uncomment, or delete the variable in the parameter file. Parameter files can be declared in Session Properties under the Log & Error Handling tab.
•	Run PMCMD for that session, but declare the specific parameter file within the PMCMD command.

In this example, after the initial session is run, the parameter file contents may look like:

[Test.s_Incremental]
;$$Post_Date=

By using the semicolon, the variable override is ignored and the Initial Value or Stored Value is used. If, in a subsequent run, the data processing date needs to be set to a specific date (for example, 04/21/2001), then a simple Perl script can update the parameter file to:

[Test.s_Incremental]
$$Post_Date=04/21/2001

Upon running the session, the order of evaluation looks to the parameter file first, sees a valid variable and value, and uses that value for the session run. After successful completion, run another script to reset the parameter file.

Example: Using Session and Mapping Parameters in Multiple Database Environments

Reusable mappings that can source a common table definition across multiple databases, regardless of differing environmental definitions (e.g., instances, schemas, user/logins), are required in a multiple database environment.

Scenario

Company X maintains five Oracle database instances. All instances have a common table definition for sales orders, but each instance has a unique instance name, schema, and login.

DB Instance   Schema     Table        User    Password
ORC1          aardso     orders       Sam     max
ORC99         environ    orders       Help    me
HALC          hitme      order_done   Hi      Lois
UGLY          snakepit   orders       Punch   Judy
GORF          gmer       orders       Brer    Rabbit

Each sales order table has a different name, but the same definition:

ORDER_ID        NUMBER (28)   NOT NULL
DATE_ENTERED    DATE          NOT NULL
DATE_PROMISED   DATE          NOT NULL
DATE_SHIPPED    DATE          NOT NULL
EMPLOYEE_ID     NUMBER (28)   NOT NULL
CUSTOMER_ID     NUMBER (28)   NOT NULL
SALES_TAX_RATE  NUMBER (5,4)  NOT NULL
STORE_ID        NUMBER (28)   NOT NULL

Sample Solution

Using Server Manager, create multiple connection strings. In this example, the strings are named according to the DB Instance name.

Using Designer, create the mapping that sources the commonly defined table. Then create a Mapping Parameter named $$Source_Schema_Table with the following attributes. Note that the parameter attributes vary based on the specific environment. Also, the initial value is not required because this solution uses parameter files.

Open the source qualifier and use the mapping parameter in the SQL Override as shown in the following graphic.
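Since the graphic is not reproduced here, the following sketch shows what the SQL override with the mapping parameter might look like; the column list simply follows the common sales order definition given above.

SELECT ORDER_ID, DATE_ENTERED, DATE_PROMISED, DATE_SHIPPED,
       EMPLOYEE_ID, CUSTOMER_ID, SALES_TAX_RATE, STORE_ID
FROM   $$Source_Schema_Table

At run time, the parameter file substitutes the schema and table name (for example, aardso.orders) before the statement is sent to the source database.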

Open the Expression Editor and select Generate SQL. The generated SQL statement will show the columns. Override the table name in the SQL statement with the mapping parameter.

Using Server Manager, create a session based on this mapping. Within the Source Database connection drop-down, place the following parameter: $DBConnection_Source. Point the target to the corresponding target and finish.

Now create the parameter files. In this example, there will be five separate parameter files.

Parmfile1.txt
[Test.s_Incremental_SOURCE_CHANGES]
$$Source_Schema_Table=aardso.orders
$DBConnection_Source=ORC1

Parmfile2.txt
[Test.s_Incremental_SOURCE_CHANGES]

txt ‘ 1 1 INFORMATICA CONFIDENTIAL BEST PRACTICES PAGE BP-85 .0.orders $DBConnection_Source= GORF Use PMCMD to run the five sessions in parallel.s_Incremental_SOURCE_CHANGES] $$Source_Schema_Table=hitme.s_Incremental_SOURCE_CHANGES] $$Source_Schema_Table= gmer.txt [Test.s_Incremental_SOURCE_CHANGES] $$Source_Schema_Table=snakepit.txt [Test.$$Source_Schema_Table=environ.orders $DBConnection_Source= UGLY Parmfile5.order_done $DBConnection_Source= HALC Parmfile4.txt [Test.0.orders $DBConnection_Source= ORC99 Parmfile3.1:4001 Test: s_Incremental_SOURCE_CHANGES:pf=’\$PMRootDir\ParmFiles\Parmfile1. The syntax for PMCMD for starting sessions is as follows: pmcmd start {user_name | %user_env_var} {password | %password_env_var} {[TCP/IP:][hostname:]portno | IPX/SPX:ipx/spx_address} [folder_name:]{session_name | batch_name}[:pf=param_file] session_flag wait_flag In this environment there would be five separate commands: pmcmd start tech_user pwd 127.

pmcmd start tech_user pwd 127.0.0.1:4001 Test:s_Incremental_SOURCE_CHANGES:pf='\$PMRootDir\ParmFiles\Parmfile2.txt' 1 1

pmcmd start tech_user pwd 127.0.0.1:4001 Test:s_Incremental_SOURCE_CHANGES:pf='\$PMRootDir\ParmFiles\Parmfile3.txt' 1 1

pmcmd start tech_user pwd 127.0.0.1:4001 Test:s_Incremental_SOURCE_CHANGES:pf='\$PMRootDir\ParmFiles\Parmfile4.txt' 1 1

pmcmd start tech_user pwd 127.0.0.1:4001 Test:s_Incremental_SOURCE_CHANGES:pf='\$PMRootDir\ParmFiles\Parmfile5.txt' 1 1

Alternatively, you could run the sessions in sequence with one parameter file. In this case, a pre- or post-session script would change the parameter file for the next session.
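A minimal sketch of such a script is shown below; it simply copies the next environment's parameter file over the file that the session reads. The directory location and the name of the "current" parameter file are assumptions for illustration only.

#!/bin/sh
# next_parmfile.sh -- stage the parameter file for the next sequential run.
# Usage: next_parmfile.sh <n>   where <n> is 1 through 5 in this example.
PARM_DIR=/export/home/pmserver/ParmFiles      # assumed location of the files
NEXT=$1

if [ ! -f "$PARM_DIR/Parmfile$NEXT.txt" ]; then
    echo "Parmfile$NEXT.txt not found in $PARM_DIR" >&2
    exit 1
fi

# The session is assumed to point at Parmfile_current.txt in this sketch.
cp "$PARM_DIR/Parmfile$NEXT.txt" "$PARM_DIR/Parmfile_current.txt"
echo "Parameter file for run $NEXT staged."

Called as a post-session script (for example, next_parmfile.sh 2 after the ORC1 run), it leaves the correct overrides in place for the next invocation of the session.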

A Mapping Approach to Trapping Data Errors

Challenge

Addressing data content errors within mappings to facilitate re-routing erroneous rows to a target other than the original target table.

Description

Identifying errors and creating an error handling strategy is an essential part of a data warehousing project. In the production environment, data must be checked and validated prior to entry into the data warehouse. One strategy for handling errors is to maintain database constraints. Another approach is to use mappings to trap data errors. Capturing data errors within a mapping and re-routing these errors to an error table allows for easy analysis by the end users and improves performance.

The first step in using mappings to trap errors is understanding and identifying the error handling requirement. The following questions should be considered:

•	What types of errors are likely to be encountered?
•	Of these errors, which ones should be captured?
•	What process can capture the possible errors?
•	Should errors be captured before they have a chance to be written to the target database?
•	Should bad files be used?
•	Will any of these errors need to be reloaded or corrected?
•	How will the users know if errors are encountered?
•	How will the errors be stored?
•	Should descriptions be assigned for individual errors?
•	Can a table be designed to store captured errors and the error descriptions?

For example, suppose it is necessary to identify foreign key constraint errors within a mapping. This can be accomplished by creating a lookup into a dimension table prior to loading the fact table. Referential integrity is assured by including this functionality in a mapping. The database still enforces the foreign key constraints, but erroneous data will not be written to the target table. Also, if constraint errors are captured within

the mapping, the PowerCenter server will not have to write the error to the session log and the reject/bad file.

Data content errors can also be captured in a mapping. Mapping logic can identify data content errors and attach descriptions to the errors. This approach can be effective for many types of data content errors, including date conversion, null values intended for not-null target fields, and incorrect data formats or data types.

Error Handling Example

In the following example, we want to capture null values before they enter a target field that does not allow nulls. After we have identified the type of error, the next step is to separate the error from the data flow. Use the Router transformation to create a stream of data that will be the error route. Any row containing an error (or errors) is separated from the valid data and uniquely identified with a composite key consisting of a MAPPING_ID and a ROW_ID. The MAPPING_ID refers to the mapping name, and the ROW_ID is generated by a Sequence Generator. The composite key allows developers to trace rows written to the error tables.

Error tables are important to an error handling strategy because they store the information useful for error identification and troubleshooting. In this example, the two error tables are ERR_DESC_TBL and TARGET_NAME_ERR.

The TARGET_NAME_ERR table is an exact replica of the target table with two additional columns: ROW_ID and MAPPING_ID. These two columns allow the TARGET_NAME_ERR and ERR_DESC_TBL tables to be linked, enabling the user to trace the error rows back to the source. The TARGET_NAME_ERR table provides the user with the entire row that was rejected. The ERR_DESC_TBL table holds information about the error, such as the mapping name, the ROW_ID, and a description of the error; it is designed to hold all error descriptions for all mappings within the repository for reporting purposes. The error handling functionality must assign a unique description for each error in the rejected row. In this example, any null value intended for a not-null target field will generate an error message such as 'Column1 is NULL' or 'Column2 is NULL'. These two tables might look like the following:
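The original document shows these structures as table layouts. As a sketch in SQL, they might be created as follows; Column1 through Column3 and the data types are placeholders standing in for the real target definition.

CREATE TABLE TARGET_NAME_ERR (
    Column1     VARCHAR2(30),   -- replica of the target table columns
    Column2     VARCHAR2(30),
    Column3     VARCHAR2(30),
    ROW_ID      NUMBER(10),     -- generated by the Sequence Generator
    MAPPING_ID  VARCHAR2(80)    -- mapping name
);

CREATE TABLE ERR_DESC_TBL (
    FOLDER_NAME  VARCHAR2(80),
    MAPPING_ID   VARCHAR2(80),
    ROW_ID       NUMBER(10),
    ERROR_DESC   VARCHAR2(200), -- e.g. 'Column 1 is NULL'
    LOAD_DATE    DATE,
    SOURCE       VARCHAR2(80),
    TARGET       VARCHAR2(80)
);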

After a single row of data is separated based on the number of possible errors in it, we need to filter the columns within the row that are actually errors. For example, one row of data may have as many as three possible errors, but the row actually has only one error, so we need to write only one error with its description to the ERR_DESC_TBL. This step can be done in an Expression transformation. After field descriptions are assigned, we need to break the error row into several rows, each containing the same content except for a different error description. You can use the Normalizer transformation to break one row of data into many rows. When the row is written to the ERR_DESC_TBL, we can link this row to the row in the TARGET_NAME_ERR table using the ROW_ID and the MAPPING_ID.

The following chart shows how the two error tables can be linked. Focus on the bold selections in both tables.

TARGET_NAME_ERR
Column1  Column2  Column3  ROW_ID  MAPPING_ID
NULL     NULL     NULL     1       DIM_LOAD

ERR_DESC_TBL
FOLDER_NAME  MAPPING_ID  ROW_ID  ERROR_DESC        LOAD_DATE  SOURCE  TARGET
CUST         DIM_LOAD    1       Column 1 is NULL  SYSDATE    DIM     FACT
CUST         DIM_LOAD    1       Column 2 is NULL  SYSDATE    DIM     FACT
CUST         DIM_LOAD    1       Column 3 is NULL  SYSDATE    DIM     FACT

The solution example would look like the following in a mapping:
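The original graphic of the mapping is not reproduced here. As a textual sketch of the Expression transformation logic described above, each output port could assign a description only when its column is in error; the port names are illustrative, not part of the original example.

ERROR_DESC1:  IIF(ISNULL(Column1), 'Column 1 is NULL', NULL)
ERROR_DESC2:  IIF(ISNULL(Column2), 'Column 2 is NULL', NULL)
ERROR_DESC3:  IIF(ISNULL(Column3), 'Column 3 is NULL', NULL)
ERROR_COUNT:  IIF(ISNULL(Column1), 1, 0) + IIF(ISNULL(Column2), 1, 0) + IIF(ISNULL(Column3), 1, 0)

Downstream, the Normalizer turns the populated ERROR_DESC ports into individual rows for ERR_DESC_TBL, while the full rejected row flows to TARGET_NAME_ERR with its ROW_ID and MAPPING_ID.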

The mapping approach is effective because it takes advantage of reusable objects, using the same logic repeatedly within a mapplet. This makes error detection easy to implement and manage in a variety of mappings. The most important aspect of the mapping approach, however, is its flexibility: the error handling logic can be placed anywhere within a mapping. The advantage of the mapping approach is that all errors are identified as either data errors or constraint errors and can be properly addressed. The mapping approach also reports errors based on projects or categories by identifying the mappings that contain errors.

By adding another layer of complexity within the mappings, errors can be flagged as 'soft' or 'hard'. A 'hard' error can be defined as one that would fail when being written to the database, such as a constraint error. A 'soft' error can be defined as a data content error. Once an error type is identified, a record flagged as a hard error is written to the error route, while a record flagged as a soft error can be written to the target system and the error tables. This gives business analysts an opportunity to evaluate and correct data imperfections while still allowing the records to be processed for end-user reporting.

By using the mapping approach to capture identified errors, data warehouse operators can effectively communicate data quality issues to the business users. Ultimately, business organizations need to decide whether the analysts should fix the data in the reject table or in the source systems.

Design Error Handling Infrastructure

Challenge

Understanding the need for an error handling strategy, identifying potential errors, and determining an optimal plan for error handling.

Description

It is important to realize the need for an error handling strategy, and then devise an infrastructure to resolve the errors. Although source systems vary widely in functionality and data quality standards, at some point a record with incorrect data will be introduced into the data warehouse from a source system. Although error handling varies from project to project, the typical requirement of an error handling system is to address data quality issues (i.e., dirty data). The error handling strategy should reject these rows, provide a place to put the rejected rows, and set a limit on how many errors can occur before the load process stops. It should also report on the rows that are rejected by the load process.

Implementing an error handling strategy requires a significant amount of planning and understanding of the load process. You should prepare a high-level data flow design to illustrate the load process and the role that error handling plays in it.

Error handling is an integral part of any load process and directly affects the process when it starts and stops. An error handling strategy should be capable of accounting for unrecoverable errors during the load process and provide crash recovery, stop, and restart capabilities. Stop and restart processes can be managed through the pre- and post-session shell scripts for each PowerCenter session.

Regardless of whether an error requires manual inspection, correction of data, or a rerun of the process, it is critical to have a notification process in place, especially if a response is critical to the continuation of the process. The owner needs to know if any rows were loaded or changed during the load. PowerCenter includes post-session e-mail functionality that can trigger the delivery of e-mail. Post-session scripts can be written to increase the functionality of the notification process to send detailed messages upon receipt of an error or file.
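As one example of extending the notification process, a post-session shell script along the following lines could scan the session log and notify the support team. This is a sketch only: the log location, mail command, and address are assumptions and must be adapted to the environment.

#!/bin/sh
# post_session_notify.sh -- sketch of a post-session notification script.
# Argument: <session_log_file> (location is environment-specific).
SESSION_LOG=$1
SUPPORT_EMAIL=prod.support@company.com     # assumed distribution list

if [ ! -r "$SESSION_LOG" ]; then
    echo "Session log $SESSION_LOG not readable" >&2
    exit 1
fi

# Count log lines that mention rejected rows and notify if any are found.
REJECTS=`grep -ci "rejected" "$SESSION_LOG"`
if [ "$REJECTS" -gt 0 ]; then
    mailx -s "Load completed with $REJECTS reject message(s)" "$SUPPORT_EMAIL" < "$SESSION_LOG"
fi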

The following table presents examples of one company's error conditions and the associated notification actions:

Error Condition: Arrival of the .DAT files and .SENT file; a timer checks that the files have arrived by 3:00 AM for daily loads and by 2:00 PM Saturday for weekly loads.
Notification Action: If the .DAT and .SENT files do not arrive on time, send an e-mail notification to Production Support and the on-call resource. 1) E-mail 2) Page

Error Condition: Timer to check if the load has completed by 5:00 AM.
Notification Action: If the load has not completed within the 2-hour window, by 5:00 AM, send an e-mail notification to Production Support. 1) E-mail

Error Condition: Tablespace check and database constraints check for creating target tables.
Notification Action: If the required tablespace is not available, the system load for all the loads that are part of the system is aborted, and notification is sent to the DBA and Production Support. 1) E-mail 2) Page

Error Condition: The rejected record number crosses the error threshold limit, OR the Informatica PowerCenter session fails for any other reason.
Notification Action: Load the rejected records to a reject file and send an e-mail notification to Production Support. 1) E-mail 2) Page

Error Condition: Match the Hash Total and the Column Totals loaded in the target tables with the contents of the .SENT file.
Notification Action: If the Hash Total and the total number of records do not match, roll back the records loaded in the target and send an e-mail notification to Production Support. 1) E-mail 2) Page

Infrastructure Overview

A better way of identifying and trapping errors is to create tables within the mapping to hold the rows that contain errors.

A Sample Scenario: Each target table should have an identical error table, named <TARGET_TABLE_NAME>_RELOAD, with two additional columns, MAPPING_NAME and SEQ_ID. An additional error table, ENTERPRISE_ERR_TBL, captures descriptions for all errors committed during loading. The two tables look like the following:

The <TARGET_TABLE_NAME>_RELOAD table is target specific; the ENTERPRISE_ERR_TBL is a target table in each mapping that requires error capturing.

<TARGET_TABLE_NAME>_RELOAD
Fields:  LKP1  LKP2  LKP3  ASOF_DT   SEQ_ID  MAPPING_NAME
Values:  test  OCC   VAL   12/21/00  1       DIM_LOAD

ENTERPRISE_ERR_TBL
FOLDER_NAME  MAPPING_NAME  SEQ_ID  ERROR_DESC    LOAD_DATE  SOURCE  TARGET  LKP_TBL
Project_1    DIM_LOAD      1       LKP1 Invalid  SYSDATE    DIM     DIM     SAL
Project_1    DIM_LOAD      1       LKP2 Invalid  SYSDATE    DIM     DIM     CUST
Project_1    DIM_LOAD      1       LKP3 Invalid  SYSDATE    DIM     DIM     DEPT

The <TARGET_TABLE_NAME>_RELOAD table captures the rows of data that failed the validation tests. By looking at the first row in the ENTERPRISE_ERR_TBL, the error description states that 'LKP1 was Invalid'. By using the MAPPING_NAME and SEQ_ID, we can identify that mapping DIM_LOAD with the SEQ_ID of 1 had three errors. Since rows in TARGET_RELOAD have a unique SEQ_ID, we can determine that the row of data in the TARGET_RELOAD table with the SEQ_ID of 1 had three errors. By looking at the data rows stored in ENTERPRISE_ERR_TBL, we can determine which values failed the lookup. Thus, we know that ('test') is the failed value in LKP1.

The entire process of defining the error handling strategy within a particular mapping depends on the type of errors that you expect to capture. The following examples illustrate what is necessary for successful error handling.
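For instance, a reporting query that ties the rejected rows back to their error descriptions might look like the following sketch; the table and column names follow the structures above, and CUSTOMER_DIM_RELOAD is an illustrative reload table name.

SELECT r.SEQ_ID,
       r.MAPPING_NAME,
       e.ERROR_DESC,
       e.LKP_TBL,
       e.LOAD_DATE
FROM   CUSTOMER_DIM_RELOAD r,
       ENTERPRISE_ERR_TBL  e
WHERE  e.MAPPING_NAME = r.MAPPING_NAME
AND    e.SEQ_ID       = r.SEQ_ID
ORDER BY r.SEQ_ID, e.ERROR_DESC;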

Documenting Mappings Using Repository Reports

Challenge

Documenting and reporting comments contained in each of the mapping objects.

Description

It is crucial to take advantage of the metadata contained in the repository to document your Informatica mappings. With PowerCenter, you can enter description information for all repository objects (sources, targets, transformations, etc.), but the amount of metadata that you enter should be determined by the business requirements. You can also drill down to the column level and give descriptions of the columns in a table if necessary. All information about column size and scale, data types, and primary keys is stored in the repository.

PowerCenter provides several ways to access the metadata contained within the repository, but the Informatica mappings must be properly documented to take full advantage of this metadata. This means that comments must be included at all levels of a mapping, from the mapping itself down to the objects and ports within the mapping. Once the mappings and sessions contain the proper metadata, it is important to develop a plan for extracting this metadata.

One way of doing this is through the generic Crystal Reports that are supplied with PowerCenter. These reports are accessible through the Repository Manager. (Open the Repository Manager, and click Reports.) You can choose from the following four reports:

•	Mapping report (map.rpt). Lists source column and transformation details for each mapping in each folder or repository.
•	Source and target dependencies report (S2t_dep.rpt). Shows the source and target dependencies as well as the transformations performed in each mapping.
•	Target table report (Trg_tbl.rpt). Provides target field transformation expressions, descriptions, and comments for each target table.
•	Executed session report (sessions.rpt). Provides information about executed sessions (such as the number of successful rows) in a particular folder.

Note: If your mappings contain shortcuts, these will not be displayed in the generic Crystal Reports. You will have to use the MX2 views to access the repository, or create a custom SQL view.

In PowerCenter 5.1, you can develop a metadata access strategy using the Metadata Reporter. The Metadata Reporter allows for customized reporting of all repository information without direct access to the repository itself. For more information on the Metadata Reporter, consult Metadata Reporting and Sharing, or the Metadata Reporter Guide included with the PowerCenter documentation.

A printout of the mapping object flow is also useful for clarifying how objects are connected. To produce such a printout, arrange the mapping in Designer so the full mapping appears on the screen, then use Alt+PrtSc to copy the active window to the clipboard. Use Ctrl+V to paste the copy into a Word document.

Error Handling Strategies

Challenge

Efficiently load data into the Enterprise Data Warehouse (EDW) and Data Mart (DM). This Best Practice describes various loading scenarios, the use of data profiles, an alternate method for identifying data errors, methods for handling data errors, and alternatives for addressing the most common types of problems.

Description

When loading data into an EDW or DM, the loading process must validate that the data conforms to known rules of the business. When the source system data does not meet these rules, the process needs to handle the exceptions in an appropriate manner. The business needs to be aware of the consequences of either permitting invalid data to enter the EDW or rejecting it until it is fixed. Both approaches present complex issues. The business must decide what is acceptable and prioritize two conflicting goals:

•	The need for accurate information
•	The ability to analyze the most complete information, with the understanding that errors can exist.

Data Integration Process Validation

In general, there are three methods for handling data errors detected in the loading process:

•	Reject All. This is the simplest to implement since all errors are rejected from entering the EDW when they are detected. This provides a very reliable EDW that the users can count on as being correct, although it may not be complete. Both dimensional and factual data are rejected when any errors are encountered. Reports indicate what the errors are and how they affect the completeness of the data.

Dimensional errors cause valid factual data to be rejected because a foreign key relationship cannot be created. These errors need to be fixed in the source systems and reloaded on a subsequent load of the EDW. Once the corrected rows have been loaded, the factual data will be reprocessed

and it would then be loaded into the data mart using the normal process. This approach gives users a complete picture of the data without having to consider data that was not available due to it being rejected during the load process. Inserts are important for dimensions because subsequent factual data may rely on the existence of the dimension data row in order to load properly.loaded. and determining the particular data elements to be rejected. The development effort to fix this scenario is significant. This method provides a balance between missing information and incorrect information. these changes need to be loaded into the DM. since the rejected data can be processed through existing mappings once it has been fixed. Minimal additional code may need to be written since the data will only enter the EDW if it is correct. • Reject None. Attributes provide additional descriptive information per key element. but the data may not support correct aggregations. with detail information being redistributed along different hierarchies. The problem is that the data may not be accurate. After the errors are corrected. Both the EDW and DM may contain incorrect information that can lead to incorrect decisions. Factual data can be allocated to dummy or incorrect dimension rows. assuming that all errors have been fixed. Updates do not affect the data integrity as much because the factual data can usually be loaded with the existing dimensional data unless the update is to a Key Element. restoring backup tapes for each night’s load. All changes that are valid are processed into the EDW to allow for the most complete picture. INFORMATICA CONFIDENTIAL BEST PRACTICES PAGE BP-97 . • Reject Critical. a new loading process needs to correct both the EDW and DM. Key elements are required fields that maintain the data integrity of the EDW and allow for hierarchies to be summarized at different levels in the organization. After the data is fixed. which can be a time-consuming effort based on the delay between an error being detected and fixed. but incorrect detail numbers. resulting in grand total numbers that are correct. reports may change. With Reject None. and 2) as Inserts or Updates. This delay may cause some user dissatisfaction since the users need to take into account that the data they are looking at may not be a complete picture of the operational systems until the errors are fixed. This approach requires categorizing the data in two ways: 1) as Key Elements or Attributes. Rejected elements are reported as errors so that they can be fixed in the source systems and loaded on a subsequent run of the ETL process. This approach involves examining each row of data. and reprocessing the data. The development strategy may include removing information from the EDW. data integrity is intact. Once the EDW is fixed. The development effort required to fix a Reject All scenario is minimal.

If a third field is changed before the second field is fixed. This allows power users to analyze the EDW using either current (As-Is) or past (As-Was) views of dimensional data. If a field value was invalid. business management needs to understand that some information may be held out of the EDW. which is invalid. but Field 2 is still invalid. but the logic needed to perform this UPDATE instead of an INSERT is complicated. The following hypothetical example represents three field values in a source system. On 1/5/2000. Date 1/1/2000 1/5/2000 1/10/2000 1/15/2000 Field 1 Value Closed Sunday Open Sunday Open Sunday Open Sunday Field 2 Value Black BRed BRed Red Field 3 Value Open Open Open Open 9–5 9–5 24hrs 24hrs Three methods exist for handling the creation and update of Profiles: 1. Informatica generally recommends using the Reject Critical strategy to maintain the accuracy of the EDW. while at the same time screening out the unverifiable data fields. On 1/10/2000 Field 3 changes from Open 9-5 to Open 24hrs.The development effort for this method is more extensive than Reject All since it involves classifying fields as critical or non-critical. When the second field is fixed. As the source systems change. Date 1/1/2000 Profile Date Field 1 Value 1/1/2000 Closed Sunday Field 2 Value Black Field 3 Value Open 9 – 5 PAGE BP-98 BEST PRACTICES INFORMATICA CONFIDENTIAL . When this error is fixed. Profile records are created with date stamps that indicate when the change took place. and developing logic to update the EDW and flag the fields that are in error. which produces a new Profile record. the correction process cannot be automated. Profiles should occur once per change in the source systems. and Field 2 changes from Black to BRed. this method allows the greatest amount of valid data to enter the EDW on each run of the ETL process. On 1/15/2000. it is difficult for the ETL process to produce a reflection of data changes since there is now a question whether to update a previous Profile or create a new one. Using Profiles Profiles are tables used to track history of dimensional data in the EDW. Problems occur when two fields change in the source system and one of those fields produces an error. However. The first value passes validation. Field 2 is finally fixed to Red. while the second value is rejected and is not included in the new Profile. By providing the most fine-grained analysis of errors. it would be desirable to update the existing Profile rather than creating a new one. The first row on 1/1/2000 shows the original values. The effort also incorporates some tasks from the Reject None approach in that processes must be developed to fix incorrect data in the EDW and DM. Field 1 changes from Closed to Open. The first method produces a new Profile record each time a change is detected in the source. then the original field value is maintained. and also that some of the information in the EDW may be at least temporarily allocated to the wrong hierarchies.

we show the third field changed at the same time as the first. we run the risk of losing Profile information.is applied as a new change that creates a new Profile.Date 1/5/2000 1/10/2000 1/15/2000 Profile Date Field 1 Value 1/5/2000 1/10/2000 1/15/2000 Open Sunday Open Sunday Open Sunday Field 2 Value Black Black Red Field 3 Value Open 9 – 5 Open 24hrs Open 24hrs By applying all corrections as new Profiles in this method. If an error is never fixed in the source system. a mistake was entered on the first change and should be reflected in the first Profile. Date 1/1/2000 1/5/2000 1/10/2000 1/15/2000 1/15/2000 Profile Date Field 1 Value 1/1/2000 1/5/2000 1/10/2000 1/5/2000 (Update) 1/10/2000 (Update) Closed Sunday Open Sunday Open Sunday Open Sunday Open Sunday Field 2 Value Field 3 Value Black Black Black Red Red Open Open Open Open 9–5 9–5 24hrs 9-5 Open 24hrs If we try to implement a method that updates old Profiles when errors are fixed. as in this option. but then causes an update to the Profile records on 1/15/2000 to fix the Field 2 value in both. The second method updates the first Profile created on 1/5/2000 until all fields are corrected on 1/15/2000. INFORMATICA CONFIDENTIAL BEST PRACTICES PAGE BP-99 . The third method creates only two new Profiles. even if we create the algorithms to handle these methods. This incorrectly shows in the EDW that two changes occurred to the source information when. 2. It involves being able to determine when an error occurred and examining all Profiles generated since then and updating them appropriately. we still have an issue of determining if a value is a correction or a new value. which loses the Profile record for the change to Field 3. we simplify the process by directly applying all changes to the source system directly to the EDW. which incorrectly reflects the changes in the source system. in reality. The second Profile should not have been created. Date 1/1/2000 1/5/2000 1/10/2000 1/15/2000 Profile Date Field 1 Value 1/1/2000 1/5/2000 1/5/2000 (Update) 1/5/2000 (Update) Closed Sunday Open Sunday Open Sunday Open Sunday Field 2 Value Field 3 Value Black Black Black Red Open 9 – 5 Open 9 – 5 Open 24hrs Open 24hrs If we try to apply changes to the existing Profile. 3. causing an automated process to update old Profile records. we need to create complex algorithms that handle the process correctly. If the third field changes before the second field is fixed. as in this method. but a new value is entered. we would identify it as a previous error. when in reality a new Profile record should have been entered. Each change .regardless if it is a fix to a previous error -. When the second field was fixed it would also be added to the existing Profile. And.

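To make the Profile mechanics concrete, the following sketch shows how a detected change could be applied as a new Profile record, as in the first method above. The table and column names (CUSTOMER_PROFILE, STG_CUSTOMER, and so on) are hypothetical and are shown only to illustrate the date-stamping pattern; the actual EDW model and load logic will differ.

-- Minimal sketch: insert a new, date-stamped Profile row whenever at least one
-- field differs from the most recent Profile already in the EDW. Handles changes
-- to existing entities only; brand-new entities would be loaded separately.
INSERT INTO CUSTOMER_PROFILE
    (CUSTOMER_ID, PROFILE_DATE, FIELD1_VALUE, FIELD2_VALUE, FIELD3_VALUE)
SELECT s.CUSTOMER_ID,
       s.EXTRACT_DATE,          -- date stamp indicating when the change took place
       s.FIELD1_VALUE,
       s.FIELD2_VALUE,
       s.FIELD3_VALUE
FROM   STG_CUSTOMER s
WHERE  EXISTS
      (SELECT 1
       FROM   CUSTOMER_PROFILE p
       WHERE  p.CUSTOMER_ID  = s.CUSTOMER_ID
       AND    p.PROFILE_DATE = (SELECT MAX(p2.PROFILE_DATE)
                                FROM   CUSTOMER_PROFILE p2
                                WHERE  p2.CUSTOMER_ID = s.CUSTOMER_ID)
       AND   (p.FIELD1_VALUE <> s.FIELD1_VALUE
           OR p.FIELD2_VALUE <> s.FIELD2_VALUE
           OR p.FIELD3_VALUE <> s.FIELD3_VALUE));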
Recommended Method

A method exists to track old errors so that we know when a value was rejected. Then, when the process encounters a new, correct value, it flags it as part of the load strategy as a potential fix that should be applied to old Profile records. Once an action is decided, another process examines the existing Profile records and corrects them as necessary. In this way, the corrected data enters the EDW as a new Profile record, while the process of fixing old Profile records is delayed until the data is examined and an action is decided. This method only delays the As-Was analysis of the data until the correction method is determined, because the current information is already reflected in the new Profile.

Data Quality Edits

Quality indicators can be used to record definitive statements regarding the quality of the data received and stored in the EDW. The indicators can be appended to existing data tables or stored in a separate table linked by the primary key. Quality indicators may be used to record several types of errors, e.g., fatal errors (missing primary key value), missing data in a required field, wrong data type/format, or invalid data value. If a record contains even one error, data quality (DQ) fields are appended to the end of the record, one field for every field in the record. When an error is detected during ingest and cleansing, the identified error type is recorded in the DQ field corresponding to the original field in error.

Quality indicators can be used to:
• show the record and field level quality associated with a given record at the time of extract
• identify data sources and errors encountered in specific records
• support the resolution of specific record error types via an update and resubmission process

Records containing a fatal error are stored in a Rejected Record Table and associated to the original file name and record number. The following types of errors cannot be processed:

• A source record does not contain a valid key. These records cannot be loaded to the EDW because they lack a primary key field to be used as a unique record identifier in the EDW. The record would be sent to a reject queue. Metadata will be saved and used to generate a notice to the sending system indicating that x number of invalid records were received and could not be processed. However, in the absence of a primary key, no tracking is possible to determine whether the invalid record has been replaced or not.

• The source file or record is illegible. The file or record would be sent to a reject queue. Metadata indicating that x number of invalid records were received and could not be processed may or may not be available for a general notice to be sent to the sending system. Here too, no tracking is possible to determine whether the invalid record has been replaced or not. While information can be provided to the source system site indicating there are file errors for x number of records, due to the nature of the error it is likely that individual unique records within the file are not identifiable.

In these cases, specific problems may not be identifiable on a record-by-record basis. Typically, the following types of records can be processed, but they contain errors:

• A required (non-key) field is missing.
• The value in a field does not fall within the range of acceptable values identified for the field. Typically, a reference table is used for this validation.
• The value in a numeric or date field is non-numeric.

Quality Indicators (Quality Code Table)

The requirement to validate virtually every data element received from the source data systems mandates the development, implementation, capture and maintenance of quality indicators. These are used to indicate the quality of incoming data at an elemental level. The quality indicators -- "0"-No Error, "1"-Fatal Error, "2"-Missing Data from a Required Field, "3"-Wrong Data Type/Format, "4"-Invalid Data Value and "5"-Outdated Reference Table in Use -- apply a concise indication of the quality of the data within specific fields for every data type. A data quality indicator code is included in the DQ fields corresponding to the original fields in the record where the errors were encountered.

These indicators provide the opportunity for operations staff, data quality analysts, and users to readily identify issues potentially impacting the quality of the data. At the same time, they provide the level of detail necessary for acute quality problems to be remedied in a timely manner. Aggregated and analyzed over time, these indicators provide the information necessary to identify acute data quality problems, systemic issues, business process problems, and information technology breakdowns.

Handling Data Errors

The need to periodically correct data in the EDW is inevitable. But how often should these corrections be performed? The correction process can be as simple as updating field information to reflect actual values, or as complex as deleting data from the EDW, restoring previous loads from tape, and then reloading the information correctly. Although we try to avoid performing a complete database restore and reload from a previous point in time, we cannot rule this out as a possible solution.

Reject Tables vs. Source System

As errors are encountered, they are written to a reject file so that business analysts can examine reports of the data and the related error messages indicating the causes of error. The business needs to decide whether analysts should be allowed to fix data in the reject tables, or whether data fixes will be restricted to source systems. If errors are fixed in the reject tables, the EDW will not be synchronized with the source systems. This can present credibility problems when trying to track the history of changes in the EDW and DM.

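To make the DQ-field layout described under Data Quality Edits concrete, here is a minimal sketch of a rejected-record table with one quality-code column per source field. The table and column names are assumptions for illustration only; an actual implementation would mirror the real source record layout.

-- One DQ column per source column, holding the quality code ("0" through "5")
-- recorded for that field during ingest and cleansing. Names are illustrative.
CREATE TABLE REJECTED_RECORD
(
    SOURCE_FILE_NAME   VARCHAR(100),   -- original file the record came from
    SOURCE_RECORD_NUM  INTEGER,        -- record number within that file
    CUSTOMER_KEY       VARCHAR(20),    -- may be null for fatal (no-key) errors
    FIELD1_VALUE       VARCHAR(30),
    FIELD2_VALUE       VARCHAR(30),
    DQ_FIELD1          CHAR(1),        -- quality indicator for FIELD1
    DQ_FIELD2          CHAR(1)         -- quality indicator for FIELD2
);

-- Example: list records whose second field failed reference-table validation.
SELECT SOURCE_FILE_NAME, SOURCE_RECORD_NUM, FIELD2_VALUE
FROM   REJECTED_RECORD
WHERE  DQ_FIELD2 = '4';   -- "4"-Invalid Data Value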
Attribute Errors and Default Values

Attributes provide additional descriptive information about a dimension concept. Attributes include things like the color of a product or the address of a store. Attribute errors are typically things like an invalid color or inappropriate characters in the address. These types of errors do not generally affect the aggregated facts and statistics in the EDW; the attributes are most useful as qualifiers and filtering criteria for drilling into the data (e.g., to find specific patterns for market research).

Attribute errors can be fixed by waiting for the source system to be corrected and reapplied to the data in the EDW. When attribute errors are encountered for a new dimensional value, default values can be assigned to let the new record enter the EDW. The business should provide default values for each identified attribute. Some rules that have been proposed for handling defaults are as follows:

Value Types         Description                                    Default
Reference Values    Attributes that are foreign keys to other      Unknown
                    tables
Small Value Sets    Y/N indicator fields                           No
Other               Any other type of attribute                    Null or Business
                                                                   provided value

Reference tables are used to normalize the EDW model to prevent the duplication of data. When a source value does not translate into a reference table value, we use the 'Unknown' value, which means "undefined" in the EDW. (All reference tables contain a value of 'Unknown' for this purpose.)

Fields that are restricted to a limited domain of values (e.g., On/Off or Yes/No indicators) are referred to as small value sets. When errors are encountered in translating these values, we use the value that represents off, or 'No', as the default.

Other values, like numbers, are handled on a case-by-case basis. In many cases, the data integration process is set to populate 'Null' into these fields. After a source system value is corrected and passes validation, it is corrected in the EDW.

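As a sketch of how the default-value rules above might be applied during a dimension load, the query below substitutes 'Unknown' for reference values that do not translate and 'No' for unparseable indicator fields. The table and column names are illustrative assumptions, not the actual mapping logic.

-- Illustrative only: apply the proposed defaults while preparing a dimension row.
SELECT s.PRODUCT_ID,
       CASE WHEN r.COLOR_CODE IS NULL
            THEN 'Unknown'             -- reference value that did not translate
            ELSE r.COLOR_CODE END      AS COLOR_CODE,
       CASE WHEN s.ACTIVE_FLAG IN ('Y', 'N')
            THEN s.ACTIVE_FLAG
            ELSE 'N' END               AS ACTIVE_FLAG,   -- small value set defaults to 'No'
       s.LIST_PRICE                    AS LIST_PRICE      -- "other" attribute; left null when invalid
FROM   STG_PRODUCT s
       LEFT OUTER JOIN REF_COLOR r
         ON r.SOURCE_VALUE = s.COLOR_VALUE;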
Primary Key Errors

The business also needs to decide how to handle new dimensional values such as locations. When a new location is encountered, a location number is assigned and the new location is transferred to the EDW using the normal process. Problems occur when the new key is actually an update to an old key in the source system, for example when the location number is changed due to some source business rule such as: all Warehouses should be in the 5000 range. The process assumes that the change in the primary key is actually a new warehouse and that the old warehouse was deleted. This type of error causes a separation of fact data, with some data being attributed to the old primary key and some to the new, and an analyst would be unable to get a complete picture.

Fixing this type of error involves integrating the two records in the EDW, along with the related facts. Integrating the two rows involves combining the Profile information, taking care to coordinate the effective dates of the Profiles to sequence properly. If two Profile records exist for the same day, then a manual decision is required as to which is correct. If facts were loaded using both primary keys, then the related fact rows must be added together and the originals deleted in order to correct the data.

The situation is more complicated when the opposite condition occurs (i.e., two primary keys mapped to the same EDW ID really represent two different IDs). In this case, it is necessary to restore the source information for both dimensions and facts from the point in time at which the error was introduced, deleting the affected records from the EDW and reloading from the restore to correct the errors.

DM Facts Calculated from EDW Dimensions

If information is captured as dimensional data from the source but used as measures residing on the fact records in the DM, we must decide how to handle the facts when we encounter errors that would cause a fact to be rejected. From a data accuracy view, we would like to reject the fact until the value is corrected. If we let the facts enter the EDW, and subsequently the DM, with the incorrect data, then we need to create processes that update the DM after the dimensional data is fixed. This involves updating the measures in the DM to reflect the changed data, and the process to fix the DM can be time consuming and difficult to implement. If we reject the facts when these types of errors are encountered, the fix process becomes simpler: after the errors are fixed, the affected rows can simply be loaded and applied to the DM.

Fact Errors

If there are no business rules that reject fact records except for relationship errors to dimensional data, then when we encounter errors that would cause a fact to be rejected, we save these rows to a reject table for reprocessing the following night. This nightly reprocessing continues until the data successfully enters the EDW. After the rows are loaded, they are populated into the DM as usual. Initial and periodic analyses should be performed on the errors to determine why they are not being loaded.

Data Stewards

Data Stewards are generally responsible for maintaining reference tables and translation tables, creating new entities in dimensional data, and designating one primary data source when multiple sources exist. Multiple source data occurs when two source systems can contain different data for the same dimensional entity.

Reference Tables

The EDW uses reference tables to maintain consistent descriptions. Each table contains a short code value as a primary key and a long description for reporting purposes. Reference data and translation tables enable the EDW to maintain consistent descriptions across multiple source systems, regardless of how the source system stores the data. A translation table is associated with each reference table to map the codes to the source system values.

The translation tables contain one or more rows for each source value and map the value to a matching row in the reference table. For example, the SOURCE column in FILE X on System X can contain 'O', 'S' or 'W'. The data steward would be responsible for entering the following values in the Translation table:

Source Value    Code Translation
O               OFFICE
S               STORE
W               WAREHSE

These values are used by the data integration process to correctly load the EDW. Other source systems that maintain a similar field may use a two-letter abbreviation like 'OF', 'ST' and 'WH'. The data steward would make the following entries into the translation table to maintain consistency across systems:

Source Value    Code Translation
OF              OFFICE
ST              STORE
WH              WAREHSE

The data stewards are also responsible for maintaining the Reference table that translates the codes into descriptions. The ETL process uses the Reference table to populate the following values into the DM:

Code Translation    Code Description
OFFICE              Office
STORE               Retail Store
WAREHSE             Distribution Warehouse

Using both of these tables, the ETL process can load data from the source systems into the EDW and then load from the EDW into the DM.

Error handling is required when the data steward enters incorrect information for these mappings and needs to correct them after data has been loaded. Correcting the above example could be complex (e.g., if the data steward entered ST as translating to OFFICE by mistake). The only way to determine which rows should be changed is to restore and reload source data from the first time the mistake was entered. (Other similar translation issues may also exist.)

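To make the translation flow concrete, the following query sketches how a raw source code might be resolved through a translation table and a reference table of the kind shown above. The table and column names are assumptions for illustration; the actual EDW model will differ.

-- Resolve a raw source code (e.g., 'ST' or 'S') to the EDW code and its
-- reporting description. Rows with no translation would receive 'Unknown'
-- per the default-value rules discussed earlier. Names are illustrative only.
SELECT s.LOCATION_ID,
       t.CODE_TRANSLATION   AS LOCATION_TYPE_CODE,   -- e.g., 'STORE'
       r.CODE_DESCRIPTION   AS LOCATION_TYPE_DESC    -- e.g., 'Retail Store'
FROM   STG_LOCATION s
       LEFT OUTER JOIN TRANS_LOCATION_TYPE t
         ON t.SOURCE_VALUE = s.SOURCE_TYPE_CODE
       LEFT OUTER JOIN REF_LOCATION_TYPE r
         ON r.CODE_TRANSLATION = t.CODE_TRANSLATION;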
Dimensional Data

New entities in dimensional data present a more complex issue. New entities in the EDW may include Locations and Products. For location, this is straightforward, but Products serves as a good example for error handling. Dimensional data uses the same concept of translation as reference tables: translation tables map the source system value to the EDW value, and over time products may have multiple source system values that map to the same product in the EDW. Processes should be built to handle these types of situations and should, at a minimum, include correction of the EDW and DM.

There are two possible methods for loading new dimensional entities: either require the data steward to enter the translation data before allowing the dimensional data into the EDW, or create the translation data through the ETL process and force the data steward to review it. The first option requires the data steward to create the translation for new entities. The second lets the ETL process create the translation, but marks the record as 'Pending Verification' until the data steward reviews it and changes the status to 'Verified' before any facts that reference it can be loaded. This requires the data stewards to review the status of new values on a daily basis; a potential solution is to generate an e-mail each night if there are any translation table entries pending verification, after which the data steward opens a report that lists them. When a dimensional value is left as 'Pending Verification', however, facts may be rejected or allocated to dummy values, requiring manual intervention.

A problem specific to Product is that a value created as new may really be just a changed SKU number. This causes additional fact rows to be created, which produces an inaccurate view of the product when reporting. When this is fixed, the fact rows for the various SKU numbers need to be merged and the original rows deleted. Profiles would also have to be merged, including beginning and ending effective dates. These dates are useful for both Profile and Date Event fixes. The situation is more complicated when the opposite condition occurs (i.e., two products are mapped to the same product, but really represent two different products). In this case, it is necessary to restore the source information for all loads since the error was introduced. Affected records from the EDW should be deleted and then reloaded from the restore to correctly split the data. Facts should be split to allocate the information correctly, and dimensions split to generate correct Profile information.

Manual Updates

Over time, any system is likely to encounter errors that are not correctable using the source systems. A method needs to be established for manually entering fixed data and applying it correctly to the EDW, and subsequently to the DM. Further, a log of these fixes should be maintained to enable identifying the source of the fixes as manual rather than part of the normal load process.

Multiple Sources

The data stewards are also involved when multiple sources exist for the same data. This occurs when two sources contain subsets of the required information. For example, one system may contain Warehouse and Store information while another contains Store and Hub information. Because they share Store information, both sources have the ability to update the same row in the EDW. If both sources are allowed to update the shared information, data accuracy and Profile problems are likely to occur, and it is difficult to decide which source contains the correct information.

If we update the shared information on only one source system, the two systems then contain different information. If the changed system is loaded into the EDW, it creates a new Profile indicating the information changed. When the second system is loaded, it compares its old, unchanged value to the new Profile, assumes a change occurred, and creates another new Profile with the old, unchanged value. If the two systems remain different, the process causes two Profiles to be loaded every day until the two source systems are synchronized with the same information.

To avoid this type of situation, the business analysts and developers need to designate, at a field level, a primary source where information can be shared from multiple sources. One solution is to develop a system of record for all sources; this allows developers to pull the information from the system of record, knowing that there are no conflicts for multiple sources. Another solution is to indicate, at the field level, the source that should be considered primary for the field; only if the field changes on the primary source would it be changed, and developers can use the field-level information to update only the fields that are marked as primary. While this sounds simple, it requires complex logic when creating Profiles, because multiple sources can provide information toward the one Profile record created for that day. It also requires additional effort by the data stewards to mark the correct source fields as primary, and by the data integration team to customize the load process.

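One way to record the field-level primary-source designation just discussed is a small control table that the load process can consult. The layout below is purely illustrative and assumes hypothetical names; it is not part of the PowerCenter repository or any prescribed design.

-- Illustrative control table: which source system is primary for each field.
CREATE TABLE SOURCE_FIELD_OWNERSHIP
(
    TARGET_TABLE    VARCHAR(30),
    TARGET_FIELD    VARCHAR(30),
    PRIMARY_SOURCE  VARCHAR(30)    -- the only system allowed to update this field
);

INSERT INTO SOURCE_FIELD_OWNERSHIP VALUES ('STORE_DIM', 'STORE_NAME', 'SYSTEM_A');
INSERT INTO SOURCE_FIELD_OWNERSHIP VALUES ('STORE_DIM', 'HUB_NUMBER', 'SYSTEM_B');

-- A load for SYSTEM_B would only consider the fields it owns:
SELECT TARGET_FIELD
FROM   SOURCE_FIELD_OWNERSHIP
WHERE  TARGET_TABLE   = 'STORE_DIM'
AND    PRIMARY_SOURCE = 'SYSTEM_B';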
Using Shortcut Keys in PowerCenter Designer

Challenge

Using shortcut keys in PowerCenter Designer to edit repository objects.

Description

General Suggestions

• To open a folder with the workspace open as well, click on an Open folder icon (rather than double-clicking on it). Alternatively, right-click on the folder name, then scroll down and click on "open".
• To use a Docking\UnDocking window, such as Repository Navigator, double-click on the window's title bar.
• To use Create Customized Toolbars to tailor a toolbar for the functions you commonly perform, press <Alt><T> then <C>.
• To add or delete customized icons, go into customize toolbars under the Tools menu. From here you can either add new icons to your toolbar by "dragging and dropping" them from the toolbar menu, or you can "drag and drop" an icon from the current toolbar if you no longer want to use it.
• If possible, use an icon in the toolbar rather than a command from a drop-down menu.
• To expedite mapping development, use multiple fields/ports selection to copy or link.
• To copy a mapping from a shared folder, press and hold <Ctrl> and highlight the mapping with the left mouse button, then drag and drop it into another folder or mapping and click OK. The same action, without holding <Ctrl>, creates a shortcut to an object.
• To quickly select multiple transformations, hold the mouse button down and drag to view a box. Be sure the box touches every object you want to select.
• When using the "drag & drop" approach to create Foreign Key/Primary Key relationships between tables, be sure to start in the Foreign Key table and drag the key/field to the Primary Key table. Set the Key Type value to "NOT A KEY" prior to dragging.
• To start the Debugger, press <F9>.

Edit Tables/Transformations

• To edit any cell in the grid, press <F2>, then move the cursor to the character you want to edit and click OK when you have finished.
• To cancel an edit in the grid, press <Esc>.
• For all combo/dropdown list boxes, just type the first letter on the list to select the item you want.
• To copy a selected item in the grid, press <Ctrl><C>.
• To paste a selected item from the Clipboard to the grid, press <Ctrl><V>.
• To copy a selected row from the grid, first highlight it, then press <Alt><O>.
• To paste a selected row from the grid, press <Alt><P>.
• To delete a selected field or port from the grid, first highlight it, then press <Alt><C>.
• To move the current field in a transformation up, first highlight it, then press <Alt><u> and click OK.
• To move the current field in a transformation down, first highlight it, then press <Alt><w> and click OK.
• To add a new field or port, first highlight an existing field or port, then press <Alt><f> to insert the new field/port below it and click OK. When adding a new port, just begin typing; you don't need to press DEL first to remove the 'NEWFIELD' text.
• To validate the Default value, first highlight the port you want to validate, then press <Alt><v> and click OK.
• When moving about the expression fields via arrow keys:
  o Use the SPACE bar to check/uncheck the port type. The box must be highlighted in order to check/uncheck the port type.
  o Press <F2> then <F3> to quickly open the Expression Editor of an OUT/VAR port.

Expression Editor

• To expedite the validation of a newly created expression, simply press OK to initiate the parsing/validation of the expression (the expression must be highlighted), then press OK once again in the "Expression parsed successfully" pop-up.
• To select PowerCenter functions and ports during expression creation, use the Functions and Ports tabs.

Creating Inventories of Reusable Objects & Mappings

Challenge

Successfully creating inventories of reusable objects and mappings, including identifying potential economies of scale in loading multiple sources to the same target.

Description

Reusable Objects

The first step in creating an inventory of reusable objects is to review the business requirements and look for any common routines/modules that may appear in more than one data movement. These common routines are excellent candidates for reusable objects. In PowerCenter, reusable objects can be single transformations (lookups, filters, expressions, etc.) or even a string of transformations (mapplets).

Evaluate potential reusable objects by two criteria:

• Is there enough usage and complexity to warrant the development of a common object?
• Are the data types of the information passing through the reusable object the same from case to case, or is it simply the same high-level steps with different fields and data?

Common objects are sometimes created just for the sake of creating common components when, in reality, creating and testing the object does not save development time or future maintenance. Carefully consider whether the effort to create, test, and document the common object is worthwhile. For example, if there is a simple calculation like subtracting a current rate from a budget rate that will be used for two different mappings, it is simpler to add the calculation to both mappings. However, if the calculation were to be performed in a number of mappings, or if it was very difficult, and if all occurrences would be updated following any change or fix, then this would be an ideal case for a reusable object.

The second criterion for a reusable object concerns the data that will pass through the reusable object. Many times developers see a situation where they may perform a certain type of high-level process (e.g., update strategy) in two or more mappings. At first look, this seems like a great candidate for a mapplet. However, after performing half of the mapplet work, the developers may realize that the actual data or ports passing through the high-level logic are totally different from case to case, thus making the use of a mapplet impractical. Remember, when creating a reusable object, the same size and number of ports must pass into and out of the mapping/reusable object. Consider whether there is a practical way to generalize the common logic so that it can be successfully applied to multiple cases.

Document the list of the reusable objects that pass this criteria test, providing a high-level description of what each object will accomplish. Keep in mind that it will be impossible to identify 100 percent of the reusable objects at this point; the goal is to create an inventory of as many as possible, and hopefully the most difficult ones. The remainder will be discovered while building the data integration processes. The detailed design will occur in a future subtask, but at this point the intent is to identify the number and functionality of reusable objects that will be built for the project.

Mappings

A mapping is an individual movement of data from a source system to a target system. In a simple world, a single source table would populate a single target table. While often true, in practice this is usually not the case. Sometimes multiple sources of data need to be combined to create a target table, and sometimes a single source of data creates many target tables. The latter is especially true for mainframe data sources, where COBOL OCCURS statements litter the landscape; in a typical warehouse or data mart model, each OCCURS statement decomposes to a separate table. Similarly, while the business may consider a fact table and its three related dimensions as a single 'object' in the data mart or warehouse, five mappings may be needed to populate the corresponding star schema with data (i.e., one for each of the dimension tables and two for the fact table, each from a different source system). Thus, when creating an inventory of mappings, the challenge is to think in individual components of data movement. The goal here is to create an inventory of the mappings needed for the project.

A comprehensive approach to creating the inventory of mappings is to create a spreadsheet listing all of the target tables, with an assumption that each target table has its own mapping. Create a column with a number next to each target table. For each of the target tables, list in another column the source file or table that will be used to populate the table. In the case of multiple source tables per target, create two rows for the target, each with the same number, and list the additional source(s) of data. The table would look similar to the following:

Number    Target Table     Source
1         Customers        Cust_File
2         Products         Items
3         Customer_Type    Cust_File
4         Orders_Item      Tickets
4         Orders_Item      Ticket_Items

When completed, the spreadsheet can be sorted either by target table or by source table. Sorting by source table can help determine potential mappings that create multiple targets. Efficiencies can sometimes be realized by loading multiple tables from a single source; by simply focusing on the target tables, these efficiencies can be overlooked. In this example, the Customers table and the Customer_Type table can potentially be loaded in the same mapping, since both come from Cust_File, so give both targets the same number. When merging targets into one mapping in this manner, be sure to keep restartability/reloadability in mind: the mapping will always load two or more target tables from the source, so there will be no easy way to rerun a single table.

For the mappings with multiple sources or targets, merge the data back into a single row, then re-sort the spreadsheet by number, with each number representing a separate mapping. The resulting inventory would look similar to the following:

Number    Target Table                Source
1         Customers, Customer_Type    Cust_File
2         Products                    Items
4         Orders_Item                 Tickets, Ticket_Items

At this point, give each mapping a name, applying the naming standards generated in 2.2 DESIGN DEVELOPMENT ARCHITECTURE. These names can then be used to distinguish mappings from each other and can also be put on the project plan as individual tasks.

It is often helpful to record some additional information about each mapping to help with planning and maintenance. First, determine for the project a threshold for a High, Medium, or Low number of target rows. For example, in a warehouse where dimension tables are likely to number in the thousands and fact tables in the hundred thousands, the following thresholds might apply:

Low  - 1 to 10,000 rows
Med  - 10,000 to 100,000 rows
High - 100,000 rows +

Then, assign a likely row volume (High, Med or Low) to each of the mappings based on the expected volume of data to pass through the mapping. These high-level estimates will help to determine how many mappings are of 'High' volume; these mappings will be the first candidates for performance tuning. Add any other columns of information that might be useful to capture about each mapping, such as a high-level description of the mapping functionality, initial estimate, actual completion time, resource (developer) assigned, or complexity rating.

Updating Repository Statistics

Challenge

The PowerCenter repository has more than eighty tables, and nearly all use one or more indexes to speed up queries. Most databases keep and use column distribution statistics to determine which index to use in order to optimally execute SQL queries. Database servers do not update these statistics continuously, so they quickly become outdated in frequently-used repositories, and SQL query optimizers may choose a less-than-optimal query plan. In large repositories, choosing a sub-optimal query plan can drastically affect performance; as a result, the repository becomes slower and slower over time.

Description

The Database Administrator needs to continually update the database statistics to ensure that they remain up-to-date. The frequency of updating depends on how heavily the repository is used. Because the statistics need to be updated table by table, it is useful for Database Administrators to create scripts to automate the task. The following information is useful for generating scripts to update distribution statistics. For the repository tables, it is helpful to understand that all PowerCenter repository table and index names begin with "OPB_" or "REP_".

Oracle

Run the following queries:

select 'analyze table ', table_name, ' compute statistics;'
from user_tables
where table_name like 'OPB_%'

select 'analyze index ', index_name, ' compute statistics;'
from user_indexes
where index_name like 'OPB_%'

This will produce output like:

analyze table OPB_ANALYZE_DEP compute statistics;
analyze table OPB_ATTR compute statistics;
analyze table OPB_BATCH_OBJECT compute statistics;
. . .
analyze index OPB_DBD_IDX compute statistics;
analyze index OPB_DIM_LEVEL compute statistics;
analyze index OPB_EXPR_IDX compute statistics;
. . .

Save the output to a file, then edit the file and remove the header information (the lines that look like 'ANALYZETABLE' TABLE_NAME 'COMPUTESTATISTICS;' and 'ANALYZEINDEX' INDEX_NAME 'COMPUTESTATISTICS;'). Run the edited file as a SQL script. This updates statistics for the repository tables.

Sybase

Run the following query:

select 'update statistics ', name
from sysobjects
where name like 'OPB_%'

This will produce output like:

update statistics OPB_ANALYZE_DEP
update statistics OPB_ATTR
update statistics OPB_BATCH_OBJECT
. . .

Save the output to a file, edit the file to remove the header information, and add a 'go' at the end of the file. Run this as a SQL script. This updates statistics for the repository tables.

MS SQL Server

Run the following query:

select 'update statistics ', name
from sysobjects
where name like 'OPB_%'

This will produce output like:

update statistics OPB_ANALYZE_DEP
update statistics OPB_ATTR
update statistics OPB_BATCH_OBJECT
. . .

Save the output to a file, then edit the file to remove the header information (i.e., the top two lines) and add a 'go' at the end of the file. Run this as a SQL script. This updates statistics for the repository tables.

Informix

Run the following query:

select 'update statistics low for table ', tabname, ' ;'
from systables
where tabname like 'opb_%' or tabname like 'OPB_%';

This will produce output like:

update statistics low for table OPB_ANALYZE_DEP ;
update statistics low for table OPB_ATTR ;
update statistics low for table OPB_BATCH_OBJECT ;
. . .

Save the output to a file, then remove the header information (the top line that looks like (constant) tabname (constant)). Run this as a SQL script. This updates statistics for the repository tables.

DB2

Run the following query:

select 'runstats on table ', (rtrim(tabschema)||'.')||tabname, ' and indexes all;'
from sysstat.tables
where tabname like 'OPB_%'

This will produce output like:

runstats on table PARTH.OPB_ANALYZE_DEP and indexes all;
runstats on table PARTH.OPB_ATTR and indexes all;
runstats on table PARTH.OPB_BATCH_OBJECT and indexes all;
. . .

Save the output to a file. Run this as a SQL script to update statistics for the repository tables.

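For the Oracle case above, the generate-and-run pattern can be scripted end to end in SQL*Plus so that no manual header editing is needed. The sketch below is one possible approach; the spool file name and formatting settings are illustrative, and the statements are built with string concatenation so that no column headers appear in the generated script.

-- Generate and execute the ANALYZE statements in one SQL*Plus session.
set heading off
set feedback off
set pagesize 0
spool analyze_repository.sql
select 'analyze table ' || table_name || ' compute statistics;'
from   user_tables  where table_name like 'OPB_%';
select 'analyze index ' || index_name || ' compute statistics;'
from   user_indexes where index_name like 'OPB_%';
spool off
@analyze_repository.sql

A similar wrapper can be built for the other databases using their own scripting tools.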
Daily Operations

Challenge

Once the data warehouse has been moved to production, the most important task is keeping the system running and available for the end users.

Description

In most organizations, the day-to-day operation of the data warehouse is the responsibility of a Production Support Team. This team is typically involved with the support of other systems and has expertise in database systems and various operating systems. The Data Warehouse Development team becomes, in effect, a customer to the Production Support team. To that end, the Production Support team needs two documents to help in the support of the production data warehouse: a Service Level Agreement and an Operations Manual.

Service Level Agreement

The Service Level Agreement outlines how the overall data warehouse system will be maintained. This is a high-level document that discusses the system to be maintained and the components of the system, and identifies the groups responsible for monitoring the various components of the system. At a minimum, it should contain the following information:

• Times when the system should be available to users
• Scheduled maintenance window
• Who is expected to monitor the operating system
• Who is expected to monitor the database
• Who is expected to monitor the Informatica sessions
• How quickly the support team is expected to respond to notifications of system failures
• Escalation procedures that include data warehouse team contacts in the event that the support team cannot resolve the system failure

Operations Manual

The Operations Manual is crucial to the Production Support team because it provides the information needed to perform the maintenance of the data warehouse system. This manual should be self-contained, providing all of the information necessary for a production support operator to maintain the system and resolve most problems that may arise, and it should cover how to maintain all components of the data warehouse system. At a minimum, the Operations Manual should contain:

• Information on how to stop and re-start the various components of the system
• Ids and passwords (or how to obtain passwords) for the system components
• Information on how to re-start failed PowerCenter sessions
• A listing of all jobs that are run, their frequency (daily, weekly, monthly, etc.), and the average run times
• Who to call in the event of a component failure that cannot be resolved by the Production Support team

Load Validation

Challenge

Knowing that all data for the current load cycle has loaded correctly is essential for good data warehouse management. However, the need for load validation varies, depending on the extent of error checking, data validation, or data cleansing functionality inherent in your mappings.

Description

Methods for validating the load process range from simple to complex. The first step is to determine what information you need for load validation (e.g., batch names, session names, session start times, session completion times, successful rows and failed rows). Then, you must determine the source of this information. All of this information is stored as metadata in the repository, but you must have a means of extracting it. Finally, you must determine how you want this information presented to you. Do you want it stored as a flat file? Do you want it e-mailed to you? Do you want it available in a relational table, so that history can easily be preserved? All of these factors weigh in finding the correct solution for you.

The following paragraphs describe three possible solutions for load validation, beginning with a fairly simple solution and moving toward the more complex:

1. Post-session e-mails on either success or failure

Post-session e-mail is configured in the session, under the General tab and 'Session Commands'. A number of variables are available to simplify the text of the e-mail:

%s Session name
%e Session status
%b Session start time
%c Session completion time
%i Session elapsed time
%l Total records loaded
%r Total records rejected
%t Target table details
%m Name of the mapping used in the session
%n Name of the folder containing the session
%d Name of the repository containing the session
%g Attach the session log to the message

TIP: One practical application of this functionality is the situation in which a key business user waits for completion of a session to run a report. You can configure e-mail to this user, notifying him/her that the session was successful and the report can run.

2. Query the repository

Almost any query can be put together to retrieve data about the load execution from the repository. The MX view REP_SESS_LOG is a great place to start; this view is likely to contain all the information you need. The following sample query shows how to extract folder name, session name, session end time, successful rows, and session duration:

select subject_area, session_name, session_timestamp, successful_rows,
       (session_timestamp - actual_start) * 24 * 60 * 60
from rep_sess_log a
where session_timestamp = (select max(session_timestamp)
                           from rep_sess_log
                           where session_name = a.session_name)
order by subject_area, session_name

The sample output would look like this:

Folder Name     Session Name               Session End Time      Successful Rows   Failed Rows   Session Duration (secs)
Web Analytics   SMW DYNMIC KEYS FILE LOAD  5/8/2001 7:49:18 AM   12900             0             126
Web Analytics   SMW LOAD WEB FACT          5/8/2001 7:53:01 AM   125000            0             478
Finance         SMW NEW LOANS              5/8/2001 8:06:01 AM   35987             0             178
Finance         SMW UPD LOANS              5/8/2001 8:10:32 AM   45                0             12
HR              SMW NEW PERSONNEL          5/8/2001 8:15:27 AM   5                 0             10

3. Use a mapping

A more complex approach, and the most customizable, is to create a PowerCenter mapping to populate a table or flat file with the desired information. You can do this by sourcing the MX view REP_SESS_LOG and then performing lookups to other repository tables or views for additional information. The following graphic illustrates a sample mapping:

This mapping selects data from REP_SESS_LOG and performs lookups to retrieve the absolute minimum and maximum run times for that particular session. This enables you to compare the current execution time with the minimum and maximum durations.

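If you prefer a pure SQL check instead of the mapping, a query along the following lines compares each session's most recent duration with its historical minimum and maximum. It uses only the REP_SESS_LOG columns shown in the sample query above; verify the view definition against your repository version before relying on it.

select a.subject_area,
       a.session_name,
       (a.session_timestamp - a.actual_start) * 24 * 60 * 60      as last_duration_sec,
       min(b.session_timestamp - b.actual_start) * 24 * 60 * 60   as min_duration_sec,
       max(b.session_timestamp - b.actual_start) * 24 * 60 * 60   as max_duration_sec
from   rep_sess_log a,
       rep_sess_log b
where  b.session_name = a.session_name
and    a.session_timestamp = (select max(c.session_timestamp)
                              from   rep_sess_log c
                              where  c.session_name = a.session_name)
group by a.subject_area, a.session_name,
         (a.session_timestamp - a.actual_start) * 24 * 60 * 60
order by a.subject_area, a.session_name;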
Third Party Scheduler

Challenge

Successfully integrate a third-party scheduler with PowerCenter. This Best Practice describes the various levels at which a third-party scheduler can be integrated.

Description

When moving into production, many companies require the use of a third-party scheduler that is the company standard. A third-party scheduler can start and stop an Informatica session or batch using the PMCMD commands. Because PowerCenter has a scheduler of its own, there are several levels at which to integrate a third-party scheduler with PowerCenter. The correct level of integration depends on the complexity of the batch/schedule and the level and type of production support.

Third Party Scheduler Integration Levels

In general, there are three levels of integration between a third-party scheduler and Informatica: Low Level, Medium Level, and High Level.

Low Level

Low level integration refers to a third-party scheduler kicking off only one Informatica session or batch. That initial PowerCenter process subsequently kicks off the rest of the sessions and batches. In this level of integration, nearly all control lies with the PowerCenter scheduler, which handles all processes and dependencies after the third-party scheduler has kicked off the initial batch or session. The third-party scheduler is not adding any functionality that cannot be handled by the PowerCenter scheduler. A low level of integration is very simple to implement because the third-party scheduler kicks off only one process, and this type of integration should only be used as a loophole to fulfill a corporate mandate on a standard scheduler.

Low level integration requires production support personnel to have a thorough knowledge of PowerCenter. One of the main disadvantages of this level of integration is that if a batch fails at some point, the Production Support personnel may not be able to determine the exact breakpoint. Because Production Support personnel in many companies are knowledgeable only about the company's standard scheduler, the majority of the production support burden falls back on the Project Development team.

Medium Level

Medium level integration is when a third-party scheduler kicks off many different batches or sessions, but not all sessions. A third-party scheduler may kick off several PowerCenter batches and sessions, but within those batches PowerCenter may have several sessions defined with dependencies, and PowerCenter controls those dependencies. In this level of integration, the control is shared between PowerCenter and the third-party scheduler. Many of the PowerCenter sessions may be left in batches to reduce the total amount of work required to integrate the third-party scheduler and PowerCenter; this also reduces the integration chores because the third-party scheduler is only communicating with a limited number of PowerCenter batches. This type of integration is more complex than low level integration because there is much more interaction between the third-party scheduler and PowerCenter.

Medium level integration requires Production Support personnel to have a fairly good knowledge of PowerCenter. If a batch fails at some point, the Production Support personnel are probably able to determine the general area, but not necessarily the specific session. Because Production Support personnel in many companies are knowledgeable only about the company's standard scheduler, the production support burden is shared between the Project Development team and the Production Support team.

High Level

High level integration is when a third-party scheduler has full control of scheduling and kicks off all PowerCenter sessions. Because the PowerCenter sessions are not part of any batches, the third-party scheduler controls all dependencies among the sessions. This type of integration is the most complex to implement because there are many more interactions between the third-party scheduler and PowerCenter. However, high level integration allows the Production Support personnel to have only limited knowledge of PowerCenter. One of the main advantages of this level of integration is that if a session fails at some point, the Production Support personnel are usually able to determine the exact breakpoint. Thus, the production support burden lies with the Production Support team.


Event Based Scheduling

Challenge

In an operational environment, the start of a session often needs to be triggered by another session or other event. The best method of event-based scheduling with the PowerCenter Server is the use of indicator files.

Description

The indicator file configuration is specified in the session configuration, under advanced options. The file used as the indicator file must be able to be located by the PowerCenter Server. When the session starts, the PowerCenter Server looks for the existence of this file and removes it as soon as it is found. The indicator file can be an empty, or dummy, file, much like a flat file source; its mere existence is enough to indicate that the session should start.

If the session is waiting on its source file to be FTP'ed from another server, the FTP process should be scripted so that it creates the indicator file upon successful completion of the source file FTP. It is, therefore, essential that you do not use your flat file source as the indicator file, since the indicator file is deleted immediately after it is located.

Repository Administration

Challenge

The task of managing the repository, including connectivity to the PowerCenter repository, is extremely important, whether in development or production. A number of best practices are available to facilitate the tasks involved with this responsibility.

Description

The following paragraphs describe several of the key tasks involved in managing the repository.

Backing Up the Repository

Two back-up methods are advisable for repository backup: (1) the PowerCenter Repository Manager or the 'pmrep' command line utility, and (2) the traditional database backup method. Informatica recommends using both methods, although both are not essential. The native PowerCenter backup is required; if database corruption occurs, the native PowerCenter backup provides a clean backup that can be restored to a new database.

Analyzing Tables in the Repository

If operations in any of the client tools are slowing down, either in development or production, you may need to analyze the tables in the repository to facilitate data retrieval, thereby increasing performance.

Purging Old Session Log Information

Similarly, if folder copies are taking an unusually long time, the OPB_SESSION_LOG and/or OPB_SESS_TARG_LOG tables may be being transferred. Removing unnecessary data from these tables will expedite the repository backup process as well as the folder copy operation. To determine which logs to eliminate, execute the following select statement to retrieve the sessions with the most entries in OPB_SESSION_LOG:

select subj_name, sessname, count(*)
from opb_session_log a, opb_subject b, opb_load_session c
where a.session_id = c.session_id
and b.subj_id = c.subj_id
group by subj_name, sessname
order by count(*) desc

Two methods can then be used to purge the logs:

1. Log into Repository Manager and expand the sessions in a particular folder. When you select one of the sessions, all of the session logs will appear on the right-hand side of the screen. You can manually delete any of these by highlighting a particular log, then selecting Delete from the Edit menu. Respond 'Yes' when the system prompts you with the question "Delete these logs from the Repository?"

2. Copy the original session, then delete the original session. When a session is copied, the entries in the repository tables do not duplicate. When you delete the original session, the entries in the tables are deleted, eliminating all rows for an individual session.

pmrep Utility

The pmrep utility was introduced in PowerCenter 5.0 to facilitate repository administration and server level administration. It is a command-line program for Windows 95/98 or Windows NT/2000 to update session-related parameters in a PowerCenter repository. It is a standalone utility that installs in the PowerCenter Client installation directory; it is not currently available for UNIX.

The pmrep utility has two modes: command line and interactive.

• Command line mode lets you execute pmrep commands from the Windows command line. This mode invokes and exits each time a command is issued. Command line mode is useful for batch files or scripts.
• Interactive mode invokes pmrep and allows you to issue a series of commands from a pmrep prompt without exiting after each command.

The following examples illustrate the use of pmrep:

Example 1: Script to backup PowerCenter Repository

echo Connecting to repository <Informatica Repository Name>...
d:\PROGRA~1\INFORM~1\pmrep\pmrep connect -r <Informatica Repository Name> -n <Repository User Name> -x <Repository Password> -t <Database Type> -u <Database User Name> -p <Database Password> -c <Database Connection String>
echo Starting Repository Backup...
d:\PROGRA~1\INFORM~1\pmrep\pmrep backup -o <Output File Name>
echo Clearing Connection...
d:\PROGRA~1\INFORM~1\pmrep cleanup
echo Repository Backup is Complete.

Example 2: Script to update database connection information

echo Connecting to repository <Informatica Repository Name>...
d:\PROGRA~1\INFORM~1\pmrep\pmrep connect -r <Informatica Repository Name> -n <Repository User Name> -x <Repository Password> -t <Database Type> -u <Database User Name> -p <Database Password> -c <Database Connection String>
echo Begin Updating Connection Information for <Database Connection Name>...
d:\PROGRA~1\INFORM~1\pmrep\pmrep updatedbconfig -d <Database Connection Name> -u <New Database Username> -p <New Database Password> -c <New Database Connection String> -t <Database Type>
echo Clearing Connection...
d:\PROGRA~1\INFORM~1\pmrep cleanup
echo Completed Updating Connection Information for <Database Connection Name>

Export and Import Registry

The Repository Manager saves repository connection information in the registry. To simplify the process of setting up client machines, you can export the connection information and then import it to a different client machine (as long as both machines use the same operating system). The section of the registry that you can import and export contains the following repository connection information:

• Repository name
• Database username and password (must be in US-ASCII)
• Repository username and password (must be in US-ASCII)
• ODBC data source name (DSN)

The registry does not include the ODBC data source itself. For each imported DSN, be sure to have the appropriate data source configured under the exact same name as the registry you are going to import. If you import a registry containing a DSN that does not exist on that client system, the connection fails.

High Availability

Challenge

In a highly available environment, load schedules cannot be impacted by the failure of physical hardware. The PowerCenter Server must be running at all times; if the machine hosting the PowerCenter Server goes down, another machine must recognize this, start another Server, and assume responsibility for running the sessions and batches. This is best accomplished in a clustered environment.

Description

While there are many types of hardware and many ways to configure a clustered environment, this example is based on the following hardware and software characteristics:

• 2 Sun 4500s, running the Solaris OS
• Sun High-Availability Clustering Software
• External EMC storage, with each server owning specific disks
• PowerCenter installed on a separate disk that is accessible by both servers in the cluster, but only by one server at a time

One of the Sun 4500s serves as the primary data integration server, while the other server in the cluster is the secondary server. Under normal operations, the PowerCenter Server 'thinks' it is physically hosted by the primary server and uses the resources of the primary server, although it is physically located on its own disk. To facilitate this, a logical IP address can be created specifically for the PowerCenter Server. This logical IP address is specified in the pmserver.cfg file instead of the physical IP addresses of the servers; thus, only one pmserver.cfg file is needed.

When the primary server goes down, the Sun high-availability software changes the ownership of the disk where the PowerCenter Server is installed from the primary server to the secondary server. In addition, the Sun high-availability software automatically starts the PowerCenter Server on the secondary server, using the basic auto start/stop scripts that are used in many UNIX environments to automatically start the PowerCenter Server whenever a host is rebooted.


Recommended Performance Tuning Procedures

Challenge

Efficient and effective performance tuning for PowerCenter products.

Description

Performance tuning procedures consist of the following steps, performed in a pre-determined order, to pinpoint where tuning efforts should be focused.

1. Perform benchmarking. Benchmark the sessions to set a baseline to measure improvements against.

2. Monitor the server. By running a session and monitoring the server, it should immediately be apparent if the system is paging memory or if the CPU load is too high for the number of available processors. If the system is paging, correcting the system to prevent paging (e.g., increasing the physical memory available on the machine) can greatly improve performance.

3. Use the performance details. Run the session and review the performance details.

4. Tune the source system and target system based on the performance details. When the source and target are optimized, re-run the session to determine the impact of the changes.

5. Re-run the session and monitor the performance details. This time, look at the details and watch the Buffer Input and Outputs for the sources and targets.

6. When the source and target are optimized, the DTM should be the slowest portion of the session details. This indicates that the source data is arriving quickly, the target is inserting the data quickly, and the actual application of the business rules is the slowest portion. This is the optimum desired performance. Only after the server, source, and target have been tuned to their peak performance should the mapping be analyzed for tuning. Only minor tuning of the session can be conducted at this point, and it usually has only a minor effect.

7. Finally, re-run the sessions that have been identified as the benchmark, comparing the new performance with the old performance. In some cases, optimizing one or two sessions to run quickly can have a disastrous effect on another mapping, and care should be taken to ensure that this does not occur.

Performance Tuning Databases

Challenge
Database tuning can result in tremendous improvement in loading performance. This Best Practice covers tips on tuning several databases: Oracle, SQL Server, and Teradata.

Oracle

Performance Tuning Tools
Oracle offers many tools for tuning an Oracle instance. Most DBAs are already familiar with these tools, so only a short description of some of the major ones is included here.

V$ Views
V$ views are dynamic performance views that provide real-time information on database activity, enabling the DBA to draw conclusions about database performance. Keep in mind that querying these views impacts database performance, so carefully consider which users should be granted the privilege to query them. Because SYS is the owner of these views, only SYS can query them by default. You can grant viewing privileges with either the 'SELECT' privilege, which allows a user to view individual V$ views, or the 'SELECT ANY TABLE' privilege, which allows the user to view all V$ views. Using the SELECT ANY TABLE option requires the 'O7_DICTIONARY_ACCESSIBILITY' parameter to be set to 'TRUE', which allows the 'ANY' keyword to apply to SYS-owned objects.

Explain Plan
Explain Plan, SQL Trace, and TKPROF are powerful tools for revealing bottlenecks and developing a strategy to avoid them. Explain Plan allows the DBA or developer to determine the execution path of a block of SQL code. The SQL in a source qualifier or in a lookup that is running for a long time should be generated, copied to SQL*Plus or another SQL tool, and tested to avoid inefficient execution of these statements.
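As a rough sketch of how such a statement can be checked (the statement ID, table, and column names below are illustrative, and a PLAN_TABLE must already exist, e.g. created by running utlxplan.sql):

EXPLAIN PLAN SET STATEMENT_ID = 'src_qual_test' FOR
SELECT ITEM_ID, ITEM_NAME
FROM ITEMS
WHERE STATUS = 'ACTIVE';

SELECT LPAD(' ', 2 * LEVEL) || OPERATION || ' ' || OPTIONS || ' ' || OBJECT_NAME AS plan_step
FROM PLAN_TABLE
WHERE STATEMENT_ID = 'src_qual_test'
START WITH ID = 0
CONNECT BY PRIOR ID = PARENT_ID;

The resulting rows show the execution path (full table scans, index lookups, and so on) that Oracle will use for the statement.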

Review the PowerCenter session log for long initialization times (an indicator that the source qualifier may need tuning) and for the time it takes to build a lookup cache, to determine whether the SQL for these transformations should be tested.

SQL Trace
SQL Trace extends the functionality of Explain Plan by providing statistical information about the SQL statements executed in a session that has tracing enabled. This utility is run for a session with the 'ALTER SESSION SET SQL_TRACE = TRUE' statement.

TKPROF
The output of SQL Trace is provided in a dump file that is difficult to read. TKPROF formats this dump file into a more understandable report.

UTLBSTAT & UTLESTAT
Executing 'UTLBSTAT' creates tables to store dynamic performance statistics and begins the statistics collection process. Run this utility after the database has been up and running (for hours or days). Accumulating statistics may take time, so you need to run this utility for a long while and through several operations (i.e., both loading and querying). 'UTLESTAT' ends the statistics collection process and generates an output file called 'report.txt.' This report should give the DBA a fairly complete idea of the level of usage the database experiences and reveal areas that should be addressed.

Disk I/O
Disk I/O at the database level provides the highest level of performance gain in most systems. Database files should be separated and identified. Separate indexes from tables so that queries reading indexes and tables are not fighting for the same resource. Rollback files should be separated onto their own disks because they have significant disk I/O. Co-locate tables that are heavily used with tables that are rarely used to help minimize disk contention. Also be sure to implement disk striping; this, or RAID technology, can help immensely in reducing disk contention. While this type of planning is time consuming, the payoff is well worth the effort in terms of performance gains.

Memory and Processing
Memory and processing configuration is done in the init.ora file. Because each database is different and requires an experienced DBA to analyze and tune it for optimal performance, a standard set of parameters to optimize PowerCenter is not practical and will probably never exist.

The settings presented here are those used on a 4-CPU AIX server running Oracle 7.3.4, set to make use of the parallel query option to facilitate parallel processing of queries and indexes. The descriptions and documentation from Oracle are included for each setting to help DBAs of other (non-Oracle) systems determine what the commands do in the Oracle environment, so that they can set their native database commands and settings in a similar fashion.

TIP: Changes made in the init.ora file take effect only after a restart of the instance. Use svrmgr to issue the "shutdown" and "startup" (or "shutdown immediate") commands to the instance.

• HASH_AREA_SIZE = 16777216
o Default value: 2 times the value of SORT_AREA_SIZE
o Range of values: any integer
o This parameter specifies the maximum amount of memory, in bytes, to be used for the hash join. If this parameter is not set, its value defaults to twice the value of the SORT_AREA_SIZE parameter.
o The value of this parameter can be changed without shutting down the Oracle instance by using the ALTER SESSION command. (Note: ALTER SESSION refers to the database administration command issued at the svrmgr command prompt.)

• OPTIMIZER_PERCENT_PARALLEL = 33
o This parameter defines the amount of parallelism that the optimizer uses in its cost functions. The default of 0 means that the optimizer chooses the best serial plan. A value of 100 means that the optimizer uses each object's degree of parallelism in computing the cost of a full table scan operation. Low values favor indexes, while high values favor table scans.
o Cost-based optimization is always used for queries that reference an object with a nonzero degree of parallelism. For such queries, a RULE hint or optimizer mode or goal is ignored. Use of a FIRST_ROWS hint or optimizer mode overrides a nonzero setting of OPTIMIZER_PERCENT_PARALLEL.
o The value of this parameter can be changed without shutting down the Oracle instance by using the ALTER SESSION command.

• PARALLEL_MAX_SERVERS = 40
o Used to enable parallel query.
o Initially not set on install.
o Maximum number of query servers or parallel recovery processes for an instance.

• PARALLEL_MIN_SERVERS = 8
o Used to enable parallel query.

o Initially not set on install.
o Minimum number of query server processes for an instance. This is also the number of query server processes Oracle creates when the instance is started.

• SORT_AREA_SIZE = 8388608
o Default value: operating system-dependent
o Minimum value: the value equivalent to two database blocks
o This parameter specifies the maximum amount, in bytes, of Program Global Area (PGA) memory to use for a sort. After the sort is complete and all that remains is to fetch the rows out, the memory is released down to the size specified by SORT_AREA_RETAINED_SIZE. After the last row is fetched out, all memory is freed. The memory is released back to the PGA, not to the operating system.
o Increasing SORT_AREA_SIZE improves the efficiency of large sorts. Multiple allocations never exist; there is only one memory area of SORT_AREA_SIZE for each user process at any time.
o The default is usually adequate for most database operations. However, if very large indexes are created, this parameter may need to be adjusted. For example, if one process is doing all database access, as in a full database import, then an increased value for this parameter may speed the import, particularly the CREATE INDEX statements.

IPC as an Alternative to TCP/IP on UNIX
On an HP/UX server with Oracle as a target (i.e., PMServer and the Oracle target on the same box), using an IPC connection instead of a TCP/IP connection can significantly reduce the time it takes to build a lookup cache. In one case, a fact mapping that was using a lookup to get five columns (including a foreign key) and about 500,000 rows from a table was taking 19 minutes; changing the connection type to IPC reduced this to 45 seconds. In another mapping, the total time decreased from 24 minutes to 8 minutes for a 500,000-row write (array inserts) of ~120-130 bytes/row, primary key with unique index in place. Performance went from about 2MB/min (280 rows/sec) to about 10MB/min (1360 rows/sec).

A normal TCP (network TCP/IP) connection in tnsnames.ora would look like this:

DW.armafix =
 (DESCRIPTION =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP) (HOST = armafix) (PORT = 1526))
  )
  (CONNECT_DATA = (SID = DW))
 )

Make a new entry in tnsnames.ora like this, and use it for the connection to the local Oracle instance:

DWIPC.armafix =
 (DESCRIPTION =
  (ADDRESS = (PROTOCOL = ipc) (KEY = DW))
  (CONNECT_DATA = (SID = DW))
 )

Improving Data Load Performance

Alternative to Dropping and Reloading Indexes
Dropping and reloading indexes during very large loads to a data warehouse is often recommended, but there is seldom an easy way to do this. Writing a SQL statement to drop each index, then writing another SQL statement to rebuild it, can be a very tedious process. Oracle 7 (and above) offers an alternative to dropping and rebuilding indexes by allowing you to disable and re-enable existing constraints and indexes. Oracle stores the name of each index in a table that can be queried. With this in mind, it is an easy matter to write a SQL statement that queries this table, then generates SQL statements as output to disable and enable these indexes.

Run the following to generate output to disable the foreign keys in the data warehouse:

SELECT 'ALTER TABLE ' || OWNER || '.' || TABLE_NAME ||
       ' DISABLE CONSTRAINT ' || CONSTRAINT_NAME || ' ;'
FROM USER_CONSTRAINTS
WHERE (TABLE_NAME LIKE '%DIM' OR TABLE_NAME LIKE '%FACT')
AND CONSTRAINT_TYPE = 'R';

This produces output that looks like:

ALTER TABLE MDDB_DEV.CUSTOMER_DIM DISABLE CONSTRAINT SYS_C0011060 ;
ALTER TABLE MDDB_DEV.AGREEMENT_DIM DISABLE CONSTRAINT SYS_C0011075 ;
ALTER TABLE MDDB_DEV.AGREEMENT_DIM DISABLE CONSTRAINT SYS_C0011077 ;

ALTER TABLE MDDB_DEV.CUSTOMER_DIM DISABLE CONSTRAINT SYS_C0011059 ;
ALTER TABLE MDDB_DEV.CUSTOMER_DIM DISABLE CONSTRAINT SYS_C0011070 ;
ALTER TABLE MDDB_DEV.CUSTOMER_SALES_FACT DISABLE CONSTRAINT SYS_C0011071 ;
ALTER TABLE MDDB_DEV.CUSTOMER_SALES_FACT DISABLE CONSTRAINT SYS_C0011131 ;
ALTER TABLE MDDB_DEV.CUSTOMER_SALES_FACT DISABLE CONSTRAINT SYS_C0011133 ;
ALTER TABLE MDDB_DEV.CUSTOMER_SALES_FACT DISABLE CONSTRAINT SYS_C0011134 ;

Dropping or disabling primary keys will also speed loads. Run the results of this SQL statement after disabling the foreign key constraints:

SELECT 'ALTER TABLE ' || OWNER || '.' || TABLE_NAME || ' DISABLE PRIMARY KEY ;'
FROM USER_CONSTRAINTS
WHERE (TABLE_NAME LIKE '%DIM' OR TABLE_NAME LIKE '%FACT')
AND CONSTRAINT_TYPE = 'P';

This produces output that looks like:

ALTER TABLE MDDB_DEV.AGREEMENT_DIM DISABLE PRIMARY KEY ;
ALTER TABLE MDDB_DEV.CUSTOMER_DIM DISABLE PRIMARY KEY ;
ALTER TABLE MDDB_DEV.CUSTOMER_SALES_FACT DISABLE PRIMARY KEY ;

Finally, disable any unique constraints with the following (this query generates DISABLE CONSTRAINT statements for the named unique constraints):

SELECT 'ALTER TABLE ' || OWNER || '.' || TABLE_NAME ||
       ' DISABLE CONSTRAINT ' || CONSTRAINT_NAME || ' ;'
FROM USER_CONSTRAINTS
WHERE (TABLE_NAME LIKE '%DIM' OR TABLE_NAME LIKE '%FACT')
AND CONSTRAINT_TYPE = 'U';

Save the results in a single file and name it something like 'DISABLE.SQL'.
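One simple way to capture the generated statements into that file is to run the three generating queries above from SQL*Plus with spooling turned on; the settings and file path below are only a sketch:

SET PAGESIZE 0
SET HEADING OFF
SET FEEDBACK OFF
SPOOL /informatica/powercenter/Scripts/DISABLE.SQL
-- run the three generating SELECT statements shown above here
SPOOL OFF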

To re-enable the indexes, rerun these queries after replacing 'DISABLE' with 'ENABLE.' Save the results in another file with a name such as 'ENABLE.SQL' and run it as a post-session command. Re-enable constraints in the reverse order that you disabled them: re-enable the unique constraints first, and re-enable primary keys before foreign keys.

TIP: Dropping or disabling foreign keys will often boost loading, but this also slows queries (such as lookups) and updates. If you use lookups and updates (especially on large tables), you can exclude the index that will be used for the lookup from your script. If you do not use lookups or updates on your target tables, you should get a boost by using this SQL statement to generate scripts. You may want to experiment to determine which method is faster.

SQL*Loader

Loader Options
SQL*Loader is a bulk loader utility used for moving data from external files into the Oracle database. SQL*Loader has several options that can improve data loading performance and are easy to implement. These options are:

• DIRECT
• PARALLEL
• SKIP_INDEX_MAINTENANCE
• UNRECOVERABLE

To use the Oracle bulk loader, you need a control file, which specifies how data should be loaded into the database. A control file normally has the following format:

LOAD DATA
INFILE <dataFile>
APPEND INTO TABLE <tableName>
FIELDS TERMINATED BY '<separator>'
(<list of all attribute names to load>)

To use any of these options, merely add 'OPTIONS (OPTION = TRUE)', such as 'OPTIONS (DIRECT = TRUE)', to the beginning of the control file.

The CONVENTIONAL path is the default method for SQL*Loader. This performs like a typical INSERT statement that updates indexes, fires triggers, and evaluates constraints. The DIRECT path obtains an exclusive lock on the table being loaded and writes the data blocks directly to the database files, bypassing all SQL processing.
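For illustration only (the file, table, and column names are assumptions), a control file that uses the DIRECT path might look like this:

OPTIONS (DIRECT = TRUE)
LOAD DATA
INFILE 'customer_dim.dat'
APPEND INTO TABLE CUSTOMER_DIM
FIELDS TERMINATED BY ','
(CUSTOMER_KEY, CUSTOMER_NAME, CUSTOMER_CITY)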

Note that no other users can write to the loading table due to this exclusive lock, and no SQL transformations can be made in the control file during the load. The DIRECT option automatically disables CHECK and foreign key REFERENCES constraints, but not PRIMARY KEY, UNIQUE KEY, and NOT NULL constraints. Disabling these constraints with the SQL scripts described earlier will benefit performance when loading data into a target warehouse.

The PARALLEL option can be used with the DIRECT option when loading multiple partitions of the same table.

If the CONVENTIONAL path must be used (i.e., transformations are performed during the load), then you can bypass index updates by using the SKIP_INDEX_MAINTENANCE option. You will have to rebuild the indexes after the load, but overall performance may improve significantly.

The UNRECOVERABLE option in the control file allows you to skip redo log writes during the load. Recoverability should not be an issue since the data file still exists.

Loading Partitioned Sessions
To improve performance when loading data to an Oracle database using a partitioned session, create the Oracle target table with the same number of partitions as the session. If the partitions are located on separate disks, the load time can be reduced to that of loading a single partition.

Optimizing Query Performance

Oracle Bitmap Indexing
With version 7.3.x, Oracle added bitmap indexing to supplement the traditional b-tree index. A b-tree index can greatly improve query performance on data that has high cardinality or contains mostly unique values, but it is not much help for low-cardinality, highly duplicated data and may even increase query time. A typical example of a low-cardinality field is gender: it is either male or female (or possibly unknown). This kind of data is an excellent candidate for a bitmap index, which can significantly improve query performance.

Keep in mind, however, that b-tree indexing is still the Oracle default; if you don't specify an index type when creating an index, Oracle will default to b-tree. Bitmap indexes are suited to data warehousing because of their performance, size, and ability to be created and dropped very quickly. Also note that for certain columns, bitmaps will be smaller and faster to create than a b-tree index on the same column. Since most dimension tables in a warehouse have nearly every column indexed, the space savings can be dramatic. But it is important to note that when a bitmap-indexed column is updated, every row associated with that bitmap entry is locked, making bitmap indexing a poor choice for OLTP database tables with constant insert and update traffic.

Also, bitmap indexes are rebuilt after each DML statement (e.g., inserts and updates), which can make loads very slow. For this reason, it is a good idea to drop or disable bitmap indexes prior to the load and recreate or re-enable them after the load.

The relationship between Fact and Dimension keys is another example of low cardinality. With a b-tree index on the Fact table, a query processes by joining all the Dimension tables in a Cartesian product based on the WHERE clause, then joins back to the Fact table. With a bitmapped index on the Fact table, a 'star query' may be created that accesses the Fact table first, followed by the Dimension table joins, avoiding a Cartesian product of all possible Dimension attributes. This 'star query' access method is only used if the STAR_TRANSFORMATION_ENABLED parameter is set to TRUE in the init.ora file and if there are single-column bitmapped indexes on the Fact table foreign keys.

Creating bitmap indexes is similar to creating b-tree indexes. To specify a bitmap index, add the word 'bitmap' between 'create' and 'index'. All other syntax is identical.

• B-tree indexes:
drop index emp_active;
drop index emp_gender;
create index emp_active on emp (active_flag);
create index emp_gender on emp (gender);

• Bitmap indexes:
drop index emp_active_bit;
drop index emp_gender_bit;
create bitmap index emp_active_bit on emp (active_flag);
create bitmap index emp_gender_bit on emp (gender);

Bitmap indexes cannot be unique. Information for bitmap indexes is stored in the data dictionary in dba_indexes, all_indexes, and user_indexes with the word 'BITMAP' in the Uniqueness column rather than the word 'UNIQUE.'

To enable bitmap indexes, you must set the following items in the instance initialization file:

compatible = 7.3.2.0.0 # or higher
event = "10111 trace name context forever"
event = "10112 trace name context forever"

event = "10114 trace name context forever"

Also note that the parallel query option must be installed in order to create bitmap indexes. If you try to create bitmap indexes without the parallel query option, a syntax error will appear in your SQL statement; the keyword 'bitmap' won't be recognized.

TIP: To check whether the parallel query option is installed, start and log into SQL*Plus. If the parallel query option is installed, the word 'parallel' appears in the banner text.

Index Statistics

Table Method
Index statistics are used by Oracle to determine the best method to access tables and should be updated periodically as part of normal DBA procedures. The following will improve query results on Fact and Dimension tables (including appending and updating records) by updating the table and index statistics for the data warehouse.

The following SQL statement can be used to analyze the tables in the database:

SELECT 'ANALYZE TABLE ' || TABLE_NAME || ' COMPUTE STATISTICS;'
FROM USER_TABLES
WHERE (TABLE_NAME LIKE '%DIM' OR TABLE_NAME LIKE '%FACT');

This generates the following results:

ANALYZE TABLE CUSTOMER_DIM COMPUTE STATISTICS;
ANALYZE TABLE MARKET_DIM COMPUTE STATISTICS;
ANALYZE TABLE VENDOR_DIM COMPUTE STATISTICS;

The following SQL statement can be used to analyze the indexes in the database:

SELECT 'ANALYZE INDEX ' || INDEX_NAME || ' COMPUTE STATISTICS;'
FROM USER_INDEXES
WHERE (TABLE_NAME LIKE '%DIM' OR TABLE_NAME LIKE '%FACT');

This generates the following results:

ANALYZE INDEX SYS_C0011125 COMPUTE STATISTICS;

ANALYZE INDEX SYS_C0011119 COMPUTE STATISTICS;
ANALYZE INDEX SYS_C0011105 COMPUTE STATISTICS;

Save these results as a SQL script to be executed before or after a load.

TIP: These SQL statements can be very resource intensive, especially for very large tables. For this reason, we recommend running them at off-peak times, when no other process is using the database. If you find that the exact computation of the statistics consumes too much time, it is often acceptable to estimate the statistics rather than compute them: use 'estimate' instead of 'compute' in the above examples.

Schema Method
Another way to update index statistics is to compute indexes by schema rather than by table. If the data warehouse indexes are the only indexes located in a single schema, then you can use the following command to update the statistics:

EXECUTE SYS.DBMS_UTILITY.Analyze_Schema ('BDB', 'compute');

In this example, BDB is the schema for which the statistics should be updated. Note that the DBA must grant the execution privilege for dbms_utility to the database user executing this command.

Parallelism
Parallel execution can be implemented at the SQL statement, database object, or instance level for many SQL operations. The degree of parallelism should be identified based on the number of processors and disk drives on the server, with the number of processors being the minimum degree.

SQL Level Parallelism
Hints are used to define parallelism at the SQL statement level. The following examples demonstrate how to utilize four processors:

SELECT /*+ PARALLEL(order_fact, 4) */ … ;
SELECT /*+ PARALLEL_INDEX(order_fact, order_fact_ixl, 4) */ … ;

TIP: When using a table alias in the SQL statement, be sure to use this alias in the hint. Otherwise, the hint will not be used, and you will not receive an error message. Example of improper use of an alias:

SELECT /*+ PARALLEL(EMP, 4) */ EMPNO, ENAME
FROM EMP A

Here, the parallel hint will not be used because of the alias "A" for table EMP. The correct way is:

SELECT /*+ PARALLEL(A, 4) */ EMPNO, ENAME
FROM EMP A

Table Level Parallelism
Parallelism can also be defined at the table and index level. The following example demonstrates how to set a table's degree of parallelism to four for all eligible SQL statements on this table:

ALTER TABLE order_fact PARALLEL 4;

Ensure that Oracle is not contending with other processes for these resources, or you may end up with degraded performance due to resource contention.

Additional Tips

Executing Oracle SQL Scripts as Pre- and Post-Session Commands on UNIX
You can execute queries as both pre- and post-session commands. For a UNIX environment, the format of the command is:

sqlplus -s user_id/password@database @ script_name.sql

For example, to execute the ENABLE.SQL file created earlier (assuming the data warehouse is on a database named 'infadb'), you would execute the following as a post-session command:

sqlplus -s pmuser/pmuser@infadb @ /informatica/powercenter/Scripts/ENABLE.SQL

In some environments, this may be a security issue since both username and password are hard-coded and unencrypted. To avoid this, use the operating system's authentication to log onto the database instance. In the following example, the Informatica id "pmuser" is used to log onto the Oracle database. Create the Oracle user "pmuser" with the following SQL statement:

CREATE USER PMUSER IDENTIFIED EXTERNALLY
DEFAULT TABLESPACE . . .
TEMPORARY TABLESPACE . . .

In the following pre-session command, "pmuser" (the id under which Informatica is logged onto the operating system) is automatically passed from the operating system to the database and used to execute the script:

sqlplus -s /@infadb @/informatica/powercenter/Scripts/ENABLE.SQL

You may want to use the init.ora parameter "os_authent_prefix" to distinguish between "normal" Oracle users and "externally identified" ones.

DRIVING_SITE 'Hint'
If the source and target are on separate instances, the SQL generated by the Source Qualifier transformation may end up being processed on the target instance. For example, suppose you want to join two source tables (A and B) together, which may reduce the number of selected rows. Oracle fetches all of the data from both tables, moves the data across the network to the target instance, and then processes everything on the target instance. If either data source is large, this causes a great deal of network traffic. To force the Oracle optimizer to process the join on the source instance, use the 'Generate SQL' option in the source qualifier and include the 'driving_site' hint in the SQL statement as:

SELECT /*+ DRIVING_SITE */ ….
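As a sketch only (the table aliases, database link name, and columns are assumptions), the hint is normally given the alias of a table at the site where the join should execute, so an override joining two tables reached through a database link might look like:

SELECT /*+ DRIVING_SITE(b) */ a.ORDER_ID, a.ORDER_DATE, b.CUSTOMER_NAME
FROM ORDERS@SOURCE_DB a,
     CUSTOMERS@SOURCE_DB b
WHERE a.CUSTOMER_ID = b.CUSTOMER_ID;

Here the join is processed on the instance referenced by the SOURCE_DB link, and only the joined result set travels across the network.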

SQL Server

Description
Proper tuning of the source and target database is a very important consideration in the scalability and usability of a business analytical environment. Managing performance on a SQL Server encompasses the following points:

• Manage system memory usage (RAM caching)
• Create and maintain good indexes
• Partition large data sets and indexes
• Monitor disk I/O subsystem performance
• Tune applications and queries
• Optimize active data

Manage RAM Caching
Managing random access memory (RAM) buffer cache is a major consideration in any database server environment. Accessing data in RAM cache is much faster than accessing the same information from disk, but too much unneeded data and index information flowing into buffer cache quickly pushes out valuable pages. If database I/O (input/output operations to the physical disk subsystem) can be reduced to the minimal required set of data and index pages, these pages will stay in RAM longer. The primary goal of performance tuning is to reduce I/O so that buffer cache is best utilized.

Several settings in SQL Server can be adjusted to take advantage of SQL Server RAM usage:

• Max Async I/O is used to specify the number of simultaneous disk I/O operations that SQL Server can submit to the operating system. Note that this setting is automated in SQL Server 2000.
• Set Working Set Size reserves physical memory space for SQL Server that is equal to the server memory setting. The server memory setting is configured automatically by SQL Server based on workload and available resources; it varies dynamically between min server memory and max server memory. Setting 'set working set size' means the operating system will not attempt to swap out SQL Server pages, even if they can be used more readily by another process when SQL Server is idle.

SQL Server allows several selectable models for database recovery; these include:

• Full Recovery
• Bulk-Logged Recovery
• Simple Recovery

Cost Threshold for Parallelism Option
Use this option to specify the threshold at which SQL Server creates and executes parallel plans. SQL Server creates and executes a parallel plan for a query only when the estimated cost to execute a serial plan for the same query is higher than the value set in cost threshold for parallelism. The cost refers to an estimated elapsed time, in seconds, required to execute the serial plan on a specific hardware configuration. Only set cost threshold for parallelism on symmetric multiprocessors (SMP).

Max Degree of Parallelism Option
Use this option to limit the number of processors (a maximum of 32) to use in parallel plan execution. The default value is 0, which uses the actual number of available CPUs. Set this option to 1 to suppress parallel plan generation; set the value to a number greater than 1 to restrict the maximum number of processors used by a single query execution.

Priority Boost Option
Use this option to specify whether SQL Server should run at a higher scheduling priority than other processes on the same computer. If you set this option to 1, SQL Server runs at a priority base of 13; the default is 0, which is a priority base of seven.
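A minimal sketch of adjusting two of these options with sp_configure follows; the values are illustrative, and these advanced options must first be exposed with 'show advanced options':

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- restrict parallel plans to 4 processors and raise the parallelism cost threshold
EXEC sp_configure 'max degree of parallelism', 4;
EXEC sp_configure 'cost threshold for parallelism', 15;
RECONFIGURE;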

Optimizing Disk I/O Performance
When configuring a SQL Server that will contain only a few gigabytes of data and will not sustain heavy read or write activity, you need not be particularly concerned with disk I/O and balancing SQL Server I/O activity across hard drives for maximum performance. To build larger SQL Server databases, however, which will contain hundreds of gigabytes or even terabytes of data and/or sustain heavy read/write activity (as in a DSS application), it is necessary to drive the configuration around maximizing SQL Server disk I/O performance by load-balancing across multiple hard drives.

Partitioning for Performance
For SQL Server databases that are stored on multiple disk drives, performance can be improved by partitioning the data to increase the amount of disk I/O parallelism. Partitioning can be done using a variety of techniques. Methods for creating and managing partitions include configuring your storage subsystem (i.e., disk, RAID partitioning) and applying various data configuration mechanisms in SQL Server such as files, file groups, tables, and views. Some possible candidates for partitioning include:

• Transaction log
• Tempdb
• Database
• Tables
• Non-clustered indexes

Using bcp and BULK INSERT
Two mechanisms exist inside SQL Server to address the need for bulk movement of data. The first mechanism is the bcp utility; the second is the BULK INSERT statement.

• bcp is a command prompt utility that copies data into or out of SQL Server.
• BULK INSERT is a Transact-SQL statement that can be executed from within the database environment. Unlike bcp, BULK INSERT can only pull data into SQL Server. An advantage of using BULK INSERT is that it can copy data into instances of SQL Server using a Transact-SQL statement, rather than having to shell out to the command prompt.

TIP: Both of these mechanisms enable you to exercise control over the batch size. Unless you are working with small volumes of data, it is good to get in the habit of specifying a batch size for recoverability reasons. If none is specified, SQL Server commits all rows to be loaded as a single batch. For example, suppose you attempt to load 1,000,000 rows of new data into a table and the server suddenly loses power just as it finishes processing row number 999,999. When the server recovers, those 999,999 rows will need to be rolled back out of the database before you attempt to reload the data. By specifying a batch size of 10,000, you could have saved significant recovery time, because SQL Server would have had to roll back only 9,999 rows instead of 999,999.
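For illustration only (the table, file path, and delimiters are assumptions), a BULK INSERT with an explicit batch size might look like:

BULK INSERT dbo.CUSTOMER_SALES_FACT
FROM 'D:\loads\customer_sales.dat'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    BATCHSIZE = 10000,
    TABLOCK
);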

General Guidelines for Initial Data Loads

While loading data:
• Remove indexes
• Use BULK INSERT or bcp
• Parallel load using partitioned data files into partitioned tables
• Run one load stream for each available CPU
• Set the Bulk-Logged or Simple Recovery model
• Use the TABLOCK option

After loading data:
• Create indexes
• Switch to the appropriate recovery model
• Perform backups

General Guidelines for Incremental Data Loads
• Load data with indexes in place.
• Performance and concurrency requirements should determine locking granularity (sp_indexoption).
• Change from Full to Bulk-Logged Recovery mode unless there is an overriding need to preserve a point-in-time recovery, such as online users modifying the database during bulk loads. Read operations should not affect bulk loads.

Teradata

Description
Teradata offers several bulk load utilities, including FastLoad, MultiLoad, and TPump. FastLoad is used for loading inserts into an empty table. One of TPump's advantages is that it does not lock the table that is being loaded. MultiLoad supports inserts, updates, deletes, and "upserts" to any table.

Tuning MultiLoad
There are many aspects to tuning a Teradata database. With PowerCenter 5.x, several aspects of tuning can be controlled by setting MultiLoad parameters to maximize write throughput. Other areas to analyze when performing a MultiLoad job include estimating space requirements and monitoring MultiLoad performance.

This Best Practice focuses on MultiLoad, since PowerCenter 5.x can auto-generate MultiLoad scripts and invoke the MultiLoad utility per PowerCenter target. Note: In PowerCenter 5.1, the Informatica server transfers data via a UNIX named pipe to MultiLoad, whereas in PowerCenter 5.0 the data is first written to a file.

MultiLoad Parameters

With PowerCenter 5.1, you can auto-generate MultiLoad scripts. This not only enhances development, but also allows you to set performance options. Here are the MultiLoad-specific parameters that are available in PowerCenter:

• TDPID. A client-based operand that is part of the logon string.
• Date Format. Available only in PowerCenter 5.1. Ensure that the date format used in your target flat file is equivalent to the date format parameter in your MultiLoad script. Also validate that your date format is compatible with the date format specified in the Teradata database.
• Checkpoint. A checkpoint interval is similar to a commit interval for other databases. When you set the checkpoint value to less than 60, it represents the interval in minutes between checkpoint operations. If the checkpoint is set to a value greater than 60, it represents the number of records to write before performing a checkpoint operation. To maximize write speed to the database, try to limit the number of checkpoint operations that are performed.
• Tenacity. Interval in hours between MultiLoad attempts to log on to the database when the maximum number of sessions are already running.
• Load Mode. Available load methods include Insert, Update, Delete, and Upsert. Consider creating separate external loader connections for each method, selecting the one that will be most efficient for each target table.
• Drop Error Tables. Allows you to specify whether to drop or retain the three error tables for a MultiLoad session. Set this parameter to 1 to drop error tables or 0 to retain them.
• Max Sessions. Available only in PowerCenter 5.x. This parameter specifies the maximum number of sessions that are allowed to log on to the database. This value should not exceed one per working AMP (Access Module Processor).
• Sleep. Available only in PowerCenter 5.1. This parameter specifies the number of minutes that MultiLoad waits before retrying a logon operation.

Estimating Space Requirements for MultiLoad Jobs
Always estimate the final size of your MultiLoad target tables and make sure the destination has enough space to complete your MultiLoad job. Also remember to account for the size of error tables, since error tables are generated for each target table. In addition to the space that may be required by target tables, each MultiLoad job needs permanent space for:

• Work tables
• Error tables
• Restart Log table

Note: Spool space cannot be used for MultiLoad work tables, error tables, or the restart log table. Work tables, in particular, require a lot of extra permanent space. Spool space is freed at each restart; by using permanent space for the MultiLoad tables, data is preserved for restart operations after a system failure.

Use the following formula to prepare the preliminary space estimate for one target table, assuming no fallback protection, no journals, and no non-unique secondary indexes:

PERM = (using data size + 38) x (number of rows processed) x (number of apply conditions satisfied) x (number of Teradata SQL statements within the applied DML)

Make adjustments to your preliminary space estimates according to the requirements and expectations of your MultiLoad job.

Monitoring MultiLoad Performance
Here are some tips for analyzing MultiLoad performance:

1. Determine which phase of the MultiLoad job is causing poor performance.
• If the performance bottleneck is during the acquisition phase, as data is acquired from the client system, then the issue may be with the client system. If it is during the application phase, as data is applied to the target tables, then the issue is not likely to be with the client system.
• The MultiLoad job output lists the job phases and other useful information. Save these listings for evaluation.
2. Use the Teradata RDBMS Query Session utility to monitor the progress of the MultiLoad job.
3. Check the DBC.Resusage table for problem areas, such as data bus or CPU capacities at or near 100 percent for one or more processors.
4. Determine whether the target tables have non-unique secondary indexes (NUSIs). NUSIs degrade MultiLoad performance because the utility builds a separate NUSI change row to be applied to each NUSI sub-table after all of the rows have been applied to the primary table.
5. Check the size of the error tables. Write operations to the fallback error tables are performed at normal SQL speed, which is much slower than normal MultiLoad tasks.
6. Verify that the primary index is unique. Non-unique primary indexes can cause severe MultiLoad performance problems.
7. Check for locks on the MultiLoad target tables and error tables.

Performance Tuning UNIX Systems

Challenge
The following tips have proven useful in performance tuning UNIX-based machines. While some of these tips will be more helpful than others in a particular environment, all are worthy of consideration.

Description

Running ps -axu
Run ps -axu to check for the following items:

• Are there any processes waiting for disk access or for paging? If so, check the I/O and memory subsystems.
• What processes are using most of the CPU? This may help you distribute the workload better.
• What processes are using most of the memory? This may help you distribute the workload better. Does ps show that your system is running many memory-intensive jobs? Look for jobs with a large set (RSS) or a high storage integral.

Identifying and Resolving Memory Issues
Use vmstat or sar to check swapping actions. Check the system to ensure that swapping does not occur at any time during session processing. Swapping, on any database system, causes a major performance decrease and increased I/O. On a memory-starved and I/O-bound server, this can effectively shut down the PowerCenter process and any databases running on the server. If page swapping does occur at any time, increase memory to prevent swapping.

Some swapping will normally occur regardless of the tuning settings, because some processes use the swap space by design. To check swap space availability, use pstat and swap. If the swap space is too small for the intended applications, it should be increased. By using sar 5 10 or vmstat 1 10, you can get a snapshot of page swapping.

Run vmstat 5 (sar -wpgr on SunOS) or vmstat -S 5 to detect and confirm memory problems, and check for the following:

• Are page-outs occurring consistently? If so, you are short of memory.
• Are swap-outs occurring consistently? If so, you are extremely short of memory. Occasional swap-outs are normal; BSD systems swap out inactive jobs. Long bursts of swap-outs mean that active jobs are probably falling victim and indicate extreme memory shortage. If you don't have vmstat -S, look at the w and de fields of vmstat; these should ALWAYS be zero.
• Are there a high number of address translation faults? (System V only) This suggests a memory shortage.

If memory seems to be the bottleneck of the system, try the following remedial steps:

• Reduce the size of the buffer cache, if your system has one, by decreasing BUFPAGES. Making the buffer cache smaller will hurt disk I/O performance. (The buffer cache is not used in System V.4 and SunOS 4.X systems.)
• If you have statically allocated STREAMS buffers, reduce the number of large (2048- and 4096-byte) buffers. This may reduce network performance, but netstat -m should give you an idea of how many buffers you really need.
• Reduce the size of your kernel's tables. This may limit the system's capacity (number of files, number of processes, etc.).
• Try running jobs requiring a lot of memory at night. This may not help the memory problems, but you may not care about them as much.
• Try running jobs requiring a lot of memory in a batch queue. If only one memory-intensive job is running at a time, your system may perform satisfactorily.
• Try to limit the time spent running sendmail.
• If you don't see any significant improvement, add more memory.

Identifying and Resolving Disk I/O Issues
Use iostat to check I/O load and utilization. Iostat can be used to monitor the I/O load on the disks on the UNIX server and permits monitoring the load on specific disks. Take notice of how fairly disk activity is distributed among the system disks. If it is not, are the most active disks also the fastest disks? Run sadp to get a seek histogram of disk activity. Is activity concentrated in one area of the disk (good), spread evenly across the disk (tolerable), or in two well-defined peaks at opposite ends (bad)?

• Reorganize your file systems and disks to distribute I/O activity as evenly as possible. Using symbolic links helps to keep the directory structure the same throughout while still moving the data files that are causing I/O contention.
• Use your fastest disk drive and controller for your root filesystem; this will almost certainly have the heaviest activity. Alternatively, if single-file throughput is important, put performance-critical files into one filesystem and use the fastest drive for that filesystem.
• Put performance-critical files on a filesystem with a large block size: 16KB or 32KB (BSD).

• Increase the size of the buffer cache by increasing BUFPAGES (BSD). This may hurt your system's memory performance, but any savings will help.
• Rebuild your file systems periodically to eliminate fragmentation (backup, build a new filesystem, and restore).
• Use a smaller block size on file systems that are mostly small files (e.g., source code files, object modules, and small data files).

Check the memory statistics again by running vmstat 5 (sar -rwpg). If your system is paging or swapping consistently, you have memory problems; fix the memory problem first, because swapping makes performance worse. If you are using NFS and remote files, you don't have local disk I/O problems; look at your network situation instead.

If your system has a disk capacity problem and is constantly running out of disk space, try the following actions:

• Write a find script that detects old core dumps, editor backup and auto-save files, and other trash, and deletes it automatically. Run the script through cron.
• Use the disk quota system, if your version of UNIX has one, to prevent individual users from gathering too much storage.

Identifying and Resolving CPU Overload Issues
Use sar -u to check for CPU loading. This provides the %usr (user), %sys (system), %wio (waiting on I/O), and %idle (percent idle time) figures. A target goal should be %usr + %sys = 80 and %wio = 10, leaving %idle at 10. If %wio is higher, the disk and I/O contention should be investigated to eliminate the I/O bottleneck on the UNIX server. If the system shows a heavy load of %sys while %usr has a high %idle, this is indicative of memory and swapping/paging contention; in this case, it is necessary to make memory changes to reduce the load on the system server.

When you run iostat 5 as described above, also observe the CPU idle time. Is the idle time always 0, without letup? It is good for the CPU to be busy, but if it is always busy 100 percent of the time, work must be piling up somewhere. This points to CPU overload.

• Eliminate unnecessary daemon processes. rwhod and routed are particularly likely to be performance problems.
• Get users to run jobs at night with at or any queuing system that's available. You may not care if the CPU (or the memory or I/O system) is overloaded at night, provided the work is done in the morning.
• Using nice to lower the priority of CPU-bound jobs will improve interactive performance, while using nice to raise the priority of CPU-bound jobs will expedite them but will hurt interactive performance. In general, though, using nice is really only a temporary solution. If your workload grows, it will soon become insufficient. Consider upgrading your system, replacing it, or buying another system to share the load.

Identifying and Resolving Network Issues

You can suspect problems with network capacity or with data integrity if users experience slow performance when they are using rlogin or when they are accessing files via NFS.

Look at netstat -i. If the number of collisions is large, suspect an overloaded network. If the number of input or output errors is large, suspect hardware problems. A large number of input errors indicates problems somewhere on the network; a large number of output errors suggests problems with your system and its interface to the network.

If collisions and network hardware are not a problem, figure out which system appears to be slow. Use spray to send a large burst of packets to the slow system. If the number of dropped packets is large, the remote system most likely cannot respond to incoming data fast enough. Look to see if there are CPU, memory, or disk I/O problems on the remote system. If not, the system may just not be able to tolerate heavy network workloads. Try to reorganize the network so that this system isn't a file server.

A large number of dropped packets may also indicate data corruption. Run netstat -s on the remote system, then spray the remote system from the local system and run netstat -s again. If the increase of UDP socket full drops (as indicated by netstat) is equal to or greater than the number of dropped packets that spray reports, the remote system is a slow network server. If the increase of socket full drops is less than the number of dropped packets, look for network errors.

Run nfsstat and look at the client RPC data. If the retrans field is more than 5 percent of calls, the network or an NFS server is overloaded. If timeout is high, at least one NFS server is overloaded, the network may be faulty, or one or more servers may have crashed. If badxid is roughly equal to timeout, at least one NFS server is overloaded. If timeout and retrans are high, but badxid is low, some part of the network between the NFS client and server is overloaded and dropping packets.

Try to prevent users from running I/O-intensive programs across the network; the grep utility is a good example of an I/O-intensive program. Instead, have users log into the remote system to do their work. Reorganize the computers and disks on your network so that as many users as possible can do as much work as possible on a local system. Use systems with good network performance as file servers.

If you are short of STREAMS data buffers and are running SunOS 4.0 or System V.3 (or earlier), reconfigure the kernel with more buffers.

General Tips and Summary of Other Useful Commands
• Use dirs instead of pwd.
• Avoid ps.
• If you use sh, avoid long search paths.
• Use vi or a native window editor rather than emacs, which is a memory hog.
• Minimize the number of files per directory.

• Use egrep rather than grep: it's faster.
• Don't run grep or other I/O-intensive applications across NFS. Use rlogin rather than NFS to access files on remote systems.
• Avoid raw devices.

On most UNIX environments, lsattr -E -l sys0 is used to determine some current settings. Of particular attention is maxuproc, the setting that determines the maximum number of user background processes. On most UNIX environments this defaults to 40, but it should be increased to 250 on most systems.

Be sure to check the database vendor documentation to determine the best file system for the specific machine. Typical choices include: s5, the UNIX System V file system; ufs, the "UNIX File System" derived from Berkeley (BSD); vxfs, the Veritas File System; and lastly raw devices, which in reality are not a file system at all. In general, proprietary file systems from the UNIX vendor are most efficient and well suited for database work when tuned properly.

Use the pmprocs utility (a PowerCenter utility) to view the current Informatica processes. For example:

harmon 125: pmprocs

<----------- Current PowerMart processes --------------->
UID      PID   PPID  C          STIME     TTY  TIME  CMD
powermar 1421  1     1          08:39:19  ?    1:30  pmserver
powermar 2711  1421  289406976  18:13:11  ?    0:07  dtm pmserver.cfg 1 202
powermar 2712  2711  289406976  18:13:17  ?    0:04  dtm pmserver.cfg 0 202
powermar 2713  2711  289406976  18:13:17  ?    0:05  dtm pmserver.cfg 0 202
powermar 2714  1421  289406976  18:13:20  ?    0:08  dtm pmserver.cfg 1 202
powermar 2721  2714  289406976  18:13:27  ?    0:04  dtm pmserver.cfg 0 202
powermar 2722  2714  289406976  18:13:27  ?    0:02  dtm pmserver.cfg 1 202

<----------- Current Shared Memory Resources --------------->
IPC status from <running system> as of Tue Feb 16 18:13:55 1999
T  ID    KEY         MODE         OWNER     GROUP  SEGSZ     CPID  LPID
Shared Memory:
m  0     0x094e64a5  --rw-rw----  oracle    dba    20979712  1254  1273
m  1     0x0927e9b2  --rw-rw----  oradba    dba    21749760  1331  2478
m  202   00000000    --rw-------  powermar  pm4     5000000  1421  2714
m  4     00000000    --rw-------  powermar  pm4    25000000  2711  2711
m  8003  00000000    --rw-------  powermar  pm4    25000000  2714  2714

<----------- Current Semaphore Resources --------------->

There are 19 Semaphores held by PowerMart processes

• Pmprocs is a script that combines the ps and ipcs commands.
• It is only available for UNIX.
• CPID - Creator PID
• LPID - Last PID that accessed the resource
• Semaphores - used to sync the reader and writer
• 0 or 1 - shows the slot in LM shared memory

Finally, when tuning UNIX environments, the general rule of thumb is to tune the server for a major database system. Most database systems provide a special tuning supplement for each specific version of UNIX; for example, there is a specific IBM Redbook for Oracle 7.3 running on AIX 4.3. Because PowerCenter processes data in a similar fashion as SMP databases, by tuning the server to support the database, you also tune the system for PowerCenter.

References: System Performance Tuning (from O'Reilly Publishing) by Mike Loukides is the main reference book for this Best Practice. For detailed information on each of the parameters discussed here, and for much more on performance tuning of applications running on UNIX-based systems, refer to this book.

Performance Tuning Windows NT/2000 Systems

Challenge
The Microsoft Windows NT/2000 environment is easier to tune than UNIX environments, but offers limited performance options. The following tips have proven useful in performance tuning NT-based machines. While some are likely to be more helpful than others in any particular environment, all are worthy of consideration. Note: Tuning is essentially the same for both NT- and 2000-based systems, with differences for Windows 2000 noted in the last section.

Description
NT is considered a "self-tuning" operating system because it attempts to configure and tune memory to the best of its ability. However, this does not mean that the NT system administrator is entirely free from performance improvement responsibilities. Unfortunately, NT scalability is quite limited, especially in comparison with UNIX environments. Also keep in mind NT's inability to split processes across multiple CPUs: one CPU may be at 100 percent utilization while the other CPUs are at 0 percent. There is currently no solution for optimizing this situation, although Microsoft is working on the problem. If the system is "maxed out" (i.e., running at 100 percent for all CPUs), it may be necessary to add processing power to the server.

The two places to begin when tuning an NT server are:

• The Performance Monitor. For SMP environments, you need to add one monitor for each CPU.
• The Performance tab (hit ctrl+alt+del, choose Task Manager, and click on the Performance tab).

When using the Performance Monitor, look for these performance indicators:

Processor: percent processor time.

Memory: pages/second. A value of five pages per second or less is acceptable. If the number is much higher, there is a need to tune the memory to make better use of physical hardware rather than virtual memory.

Server: bytes total/second. This monitors the server network connections. It is a very nebulous performance indicator because it bundles multiple network connections together: some connections may be fast while others are slow, making it difficult to identify real problems and very possibly resulting in a false sense of security. Careful analysis of the network card (or cards) and their settings, and of the hubstacks, is critical for optimal server performance when moving data across the network. Intimate knowledge of the network card, combined with the use of a network analyzer, can eliminate bottlenecks and improve throughput of network traffic by a magnitude of 10 to 1000 times, depending on the hardware.

Physical disks: percent time. High values indicate possible contention for I/O. If necessary, level the load across the disk devices by moving files.

Physical disks: queue length. This setting is used to determine the number of users sitting idle, waiting for access to the same disk device. If this number is greater than two, moving files to less frequently used disk devices should level the load of the disk device. Remember that this is only a guideline, and the recommended setting may be too high for some systems.

Resolving Typical NT Problems
The following paragraphs describe some common performance problems in an NT environment and suggest tuning solutions.

Device Drivers. The device drivers for some types of hardware are notorious for wasting CPU clock cycles. Be sure to get the latest drivers from the hardware vendor to minimize this problem.

I/O Optimization. This is, by far, the best tuning option for database applications in the NT environment and the best place to tune database performance within NT. By analyzing the disk I/O, the load on the database can be leveled across multiple disks. In situations where there are multiple controllers, be sure to level the load across the controllers too.

Memory and services. Before adding memory, check the Services in Control Panel, because many background applications do not uninstall the old service when installing a new update or version; both the unused old service and the new service may be using valuable CPU and memory resources. Although adding memory to NT is always a good solution, it is also expensive and usually must be planned to support the bank system for EISA and PCI architectures.

Load reasonableness. Assume that some software will not be well coded, and that some background processes, such as a mail server or web server running on the same machine, can potentially starve the CPUs on the machine. Off-loading CPU hogs may be the only recourse.

Using electrostatic devices and fast-wide SCSI can also help to increase performance. Also increase the priority of the disk devices on the NT server; NT, by default, sets the disk device priority low. Change the disk priority setting in the Registry at service\lanman\server\parameters by adding a key for ThreadPriority of type DWORD with a value of 2.

Finally, be sure to implement disk striping to split single data files across multiple disk drives and take advantage of RAID (Redundant Arrays of Inexpensive Disks) technology. Fragmentation can be eliminated by using a Windows NT/2000 disk defragmentation product; using this type of product is a good idea whether the disk is formatted for FAT or NTFS.

Monitoring System Performance in Windows 2000
In Windows 2000, the Informatica server uses system resources to process transformations, session execution, and the reading and writing of data. The Informatica server also uses system memory for other data such as aggregate, joiner, rank, and cached lookup tables. With Windows 2000, you can use the System Monitor in the Performance Console of the administrative tools, or the system tools in Task Manager, to monitor the amount of system resources used by the Informatica server and to identify system bottlenecks.

Windows 2000 provides the following tools (accessible under Control Panel/Administrative Tools/Performance) for monitoring resource usage on your computer:

• System Monitor
• Performance Logs and Alerts

These Windows 2000 monitoring tools enable you to analyze usage and detect bottlenecks at the disk, memory, processor, and network level.

The System Monitor displays a graph which is flexible and configurable. You can copy counter paths and settings from the System Monitor display to the Clipboard and paste counter paths from Web pages or other sources into the System Monitor display. The System Monitor is also portable, which is useful in monitoring other systems that require administration. Typing perfmon.exe at the command prompt causes the system to start System Monitor, not Performance Monitor.

The Performance Logs and Alerts tool provides two types of performance-related logs (counter logs and trace logs) and an alerting function. Counter logs record sampled data about hardware resources and system services based on performance objects and counters, in the same manner as System Monitor, so they can be viewed in System Monitor. Data in counter logs can be saved as comma-separated or tab-separated files that are easily viewed with Excel. Trace logs collect event traces that measure performance statistics associated with events such as disk and file I/O, page faults, or thread activity.

The alerting function allows you to define a counter value that will trigger actions such as sending a network message, running a program, or starting a log. Alerts are useful if you are not actively monitoring a particular counter threshold value but want to be notified when it exceeds or falls below a specified value, so that you can investigate and determine the cause of the change.

but want to be notified when it exceeds or falls below a specified value so that you can investigate and determine the cause of the change. and Processor(_Total)\ % Processor Time. PAGE BP-160 BEST PRACTICES INFORMATICA CONFIDENTIAL . are configured to create a binary log that. You might want to set alerts based on established performance baseline values for your system. (The subkey is HKEY_CURRENT_MACHINE\SYSTEM\CurrentControlSet\Services\SysmonLog\Log_Qu eries. If you start logging with these settings. PhysicalDisk(_Total)\Avg. data is saved to the Perflogs folder on the root directory and includes the counters: Memory\ Pages/sec. after manual start-up. Some other useful counters include Physical Disk: Reads/sec and Writes/sec and Memory: Available Bytes and Cache Bytes.) The predefined log settings under Counter Logs named System Overview.useful if you are not actively monitoring a particular counter threshold value. updates every 15 seconds and logs continuously until it achieves a maximum size. If you want to create your own log setting press the right mouse on one of the log types. Disk Queue Length. Note:You must have Full Control access to a subkey in the registry in order to create or modify a log configuration.
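Because counter logs can be saved in comma-separated format, they can also be examined outside of Excel. The short Python sketch below is only a rough illustration of that idea; the file path and the exact counter column header are assumptions (perfmon typically prefixes headers with the machine name), so adjust them to match an actual export.

    import csv

    LOG_FILE = "C:/Perflogs/System_Overview.csv"        # hypothetical export path
    COUNTER = r"\Processor(_Total)\% Processor Time"    # column header as assumed here

    peak = 0.0
    with open(LOG_FILE, newline="") as f:
        reader = csv.DictReader(f)
        for row in reader:
            value = (row.get(COUNTER) or "").strip()
            if value:                                   # skip empty samples
                peak = max(peak, float(value))

    print("Peak processor utilization in log:", peak)

A scan like this makes it easy to spot sustained CPU or disk pressure across a long logging window before digging into individual samples.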

Tuning Mappings for Better Performance

Challenge

In general, a PowerCenter mapping is the biggest 'bottleneck' in the load process, as business rules determine the number and complexity of transformations in a mapping. This Best Practice offers some guidelines for tuning mappings.

Description

Analyze mappings for tuning only after you have tuned the system, source, and target for peak performance.

Consider Single-Pass Reading

If several mappings use the same data source, consider a single-pass reading. Consolidate separate mappings into one mapping with either a single Source Qualifier Transformation or one set of Source Qualifier Transformations as the data source for the separate data flows. Similarly, if a function is used in several mappings, a single-pass reading will reduce the number of times that function will be called in the session.

Optimize SQL Overrides

When SQL overrides are required in a Source Qualifier, Lookup Transformation, or in the update override of a target object, be sure the SQL statement is tuned. The extent to which and how SQL can be tuned depends on the underlying source or target database system.

Scrutinize Datatype Conversions

PowerCenter Server automatically makes conversions between compatible datatypes. When these conversions are performed unnecessarily, performance slows. For example, if a mapping moves data from an Integer port to a Decimal port, then back to an Integer port, the conversion may be unnecessary.

In some instances, however, datatype conversions can help improve performance. This is especially true when integer values are used in place of other datatypes for performing comparisons using Lookup and Filter transformations.

Eliminate Transformation Errors

Large numbers of evaluation errors significantly slow performance of the PowerCenter Server. During transformation errors, the PowerCenter Server engine pauses to determine the cause of the error, removes the row causing the error from the data flow, and logs the error in the session log. Transformation errors can be caused by many things, including conversion errors, conflicting mapping logic, any condition that is specifically set up as an error, and so on. The session log can help point out the cause of these errors. If errors recur consistently for certain transformations, re-evaluate the constraints for these transformations. Any source of errors should be traced and eliminated.

Optimize Lookup Transformations

There are a number of ways to optimize lookup transformations that are set up in a mapping.

NOTE: All the tuning options mentioned in this Best Practice assume that memory and cache sizing for lookups are sufficient to ensure that caches will not page to disk. Practices regarding memory and cache sizing for Lookup transformations are covered in Best Practice: Tuning Sessions for Better Performance.

When to Cache Lookups

When caching is enabled, the PowerCenter Server caches the lookup table and queries the lookup cache during the session. When this option is not enabled, the PowerCenter Server queries the lookup table on a row-by-row basis. In general, if the lookup table needs less than 300MB of memory, lookup caching should be enabled. A better rule of thumb than memory size, however, is to determine the 'size' of the potential lookup cache with regard to the number of rows expected to be processed. To illustrate, consider the following example. In Mapping X, the source and lookup contain the following number of records:

ITEMS (source):   5000 records
MANUFACTURER:     200 records
DIM_ITEMS:        100000 records

Number of Disk Reads

                          Cached Lookup    Un-cached Lookup
LKP_Manufacturer
  Build Cache             200              0
  Read Source Records     5000             5000
  Execute Lookup          0                5000
  Total # of Disk Reads   5200             10000
LKP_DIM_ITEMS
  Build Cache             100000           0
  Read Source Records     5000             5000
  Execute Lookup          0                5000
  Total # of Disk Reads   105000           10000

Consider the case where MANUFACTURER is the lookup table. The number of records in the lookup table is small in comparison with the number of times the lookup is executed; this is the more likely scenario. If the lookup table is cached, it will take a total of 5,200 disk reads to build the cache and execute the lookup. If the lookup table is not cached, the disk reads would total 10,000. So this lookup should be cached.

Consider the case where DIM_ITEMS is the lookup table. In this case, the number of records in the lookup table is not small in comparison with the number of times the lookup will be executed. If the lookup table is cached, it will result in 105,000 total disk reads to build and execute the lookup. If the lookup table is not cached, it will take a total of 10,000 disk reads to execute the lookup. Thus the lookup should not be cached.

Use the following eight-step method to determine if a lookup should be cached:

1. Code the lookup into the mapping.
2. Select a standard set of data from the source. For example, add a where clause on a relational source to load a sample 10,000 rows.
3. Run the mapping with caching turned off and save the log.
4. Run the mapping with caching turned on and save the log to a different name than the log created in step 3.
5. Look in the cached lookup log and determine how long it takes to cache the lookup object. Note this time in seconds: LOOKUP TIME IN SECONDS = LS.
6. In the non-cached log, take the time from the last lookup cache to the end of the load in seconds and divide it into the number of rows being processed: NON-CACHED ROWS PER SECOND = NRS.
7. In the cached log, take the time from the last lookup cache to the end of the load in seconds and divide it into the number of rows being processed: CACHED ROWS PER SECOND = CRS.
8. Use the following formula to find the breakeven row point:

   (LS*NRS*CRS)/(CRS-NRS) = X

   where X is the breakeven point. If the expected number of source records is less than X, it is better not to cache the lookup. If the expected number of source records is more than X, it is better to cache the lookup.
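The arithmetic in step 8 is easy to script as a sanity check. The minimal Python sketch below uses the same LS, NRS, and CRS measurements described above; the sample values are taken from the worked example that follows.

    def lookup_cache_breakeven(ls_seconds, non_cached_rows_per_sec, cached_rows_per_sec):
        """Return X, the source row count at which caching the lookup starts to pay off."""
        nrs, crs = non_cached_rows_per_sec, cached_rows_per_sec
        return (ls_seconds * nrs * crs) / (crs - nrs)

    # Values from the worked example: LS=166, NRS=147, CRS=232
    x = lookup_cache_breakeven(166, 147, 232)
    print(round(x))   # roughly 66,603 rows; cache the lookup only above this volume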

For example, assume the lookup takes 166 seconds to cache (LS=166), the load with a non-cached lookup is 147 rows per second (NRS=147), and the load with a cached lookup is 232 rows per second (CRS=232). The formula would result in:

(166*147*232)/(232-147) = 66,603

Thus, if the source has less than 66,603 records, the lookup should not be cached. If it has more than 66,603 records, then the lookup should be cached.

Sharing Lookup Caches

There are a number of methods for sharing lookup caches:

• Within a specific session run for a mapping, if the same lookup is used multiple times in the mapping, the PowerCenter Server will re-use the cache for the multiple instances of the lookup. Using the same lookup multiple times in the mapping will be more resource intensive with each successive instance. If multiple cached lookups are from the same table but are expected to return different columns of data, it may be better to set up the multiple lookups to bring back the same columns even though not all return ports are used in all lookups. Bringing back a common set of columns may reduce the number of disk reads.
• Across sessions of the same mapping, the use of an unnamed persistent cache allows multiple runs to use an existing cache file stored on the PowerCenter Server. If the option of creating a persistent cache is set in the lookup properties, the memory cache created for the lookup during the initial run is saved to the PowerCenter Server. This can improve performance because the Server builds the memory cache from cache files instead of the database. This feature should only be used when the lookup table is not expected to change between session runs.
• Across different mappings and sessions, the use of a named persistent cache allows sharing of an existing cache file.

Reducing the Number of Cached Rows

There is an option to use a SQL override in the creation of a lookup cache. Options can be added to the WHERE clause to reduce the set of records included in the resulting cache.

NOTE: If you use a SQL override in a lookup, the lookup must be cached.

Optimizing the Lookup Condition

In the case where a lookup uses more than one lookup condition, set the conditions with an equal sign first in order to optimize lookup performance.

Indexing the Lookup Table

The PowerCenter Server must query, sort, and compare values in the lookup condition columns. As a result, indexes on the database table should include every column used in a lookup condition. This can improve performance for both cached and un-cached lookups.

• In the case of a cached lookup, an ORDER BY condition is issued in the SQL statement used to create the cache. Columns used in the ORDER BY condition should be indexed. The session log will contain the ORDER BY statement.
• In the case of an un-cached lookup, since a SQL statement is created for each row passing into the lookup transformation, performance can be helped by indexing columns in the lookup condition.

Optimize Filter and Router Transformations

Filtering data as early as possible in the data flow improves the efficiency of a mapping. Instead of using a Filter Transformation to remove a sizeable number of rows in the middle or end of a mapping, use a filter on the Source Qualifier or a Filter Transformation immediately after the source qualifier to improve performance. Avoid complex expressions when creating the filter condition; Filter transformations are most effective when a simple integer or TRUE/FALSE expression is used in the filter condition. Filters or routers should also be used to drop rejected rows from an Update Strategy transformation if rejected rows do not need to be saved. Replace multiple filter transformations with a router transformation. This reduces the number of transformations in the mapping and makes the mapping easier to follow.

Optimize Aggregator Transformations

Aggregator Transformations often slow performance because they must group data before processing it. Use simple columns in the group by condition to make the Aggregator Transformation more efficient. When possible, use numbers instead of strings or dates in the GROUP BY columns. Also avoid complex expressions in the Aggregator expressions, especially in GROUP BY ports.

Use the Sorted Input option in the aggregator. This option requires that data sent to the aggregator be sorted in the order in which the ports are used in the aggregator's group by. The Sorted Input option decreases the use of aggregate caches. When it is used, the PowerCenter Server assumes all data is sorted by group and, as a group is passed through the aggregator, calculations can be performed and information passed on to the next transformation. Without sorted input, the Server must wait for all rows of data before processing aggregate calculations. Use of the Sorted Input option is usually accompanied by a Source Qualifier which uses the Number of Sorted Ports option.

Use an Expression and Update Strategy instead of an Aggregator Transformation. This technique can only be used if the source data can be sorted. Further, using this option assumes that a mapping is using an Aggregator with the Sorted Input option. In the Expression Transformation, the use of variable ports is required to hold data from the previous row of data processed. The premise is to use the previous row of data to determine whether the current row is a part of the current group or is the beginning of a new group. Thus, if the row is a part of the current group, its data would be used to continue calculating the current group function. An Update Strategy Transformation would follow the Expression Transformation and set the first row of a new group to insert and the following rows to update.
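The group-change detection that the Expression/Update Strategy approach relies on can be sketched generically. The Python sketch below is only an illustration of that logic — it is not PowerCenter expression syntax, and the field names are hypothetical: rows arrive pre-sorted by the group key, the first row of each group is flagged as an insert, and subsequent rows update the running aggregate.

    def flag_rows(sorted_rows):
        """sorted_rows: iterable of (group_key, amount) tuples, pre-sorted by group_key."""
        prev_key = None
        running_total = 0.0
        for key, amount in sorted_rows:
            if key != prev_key:               # first row of a new group
                running_total = amount
                yield key, running_total, "insert"
            else:                             # same group: keep accumulating
                running_total += amount
                yield key, running_total, "update"
            prev_key = key

    for row in flag_rows([("A", 10), ("A", 5), ("B", 7)]):
        print(row)   # ('A', 10, 'insert'), ('A', 15, 'update'), ('B', 7, 'insert')

In the mapping itself, the "prev_key" role is played by variable ports, and the insert/update flag is applied by the Update Strategy transformation.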

Optimize Joiner Transformations

Joiner transformations can slow performance because they need additional space in memory at run time to hold intermediate results. Define the rows from the smaller set of data in the joiner as the Master rows. The Master rows are cached to memory and the detail records are then compared to rows in the cache of the Master rows. In order to minimize memory requirements, the smaller set of data should be cached and thus set as Master. Use Normal joins whenever possible; Normal joins are faster than outer joins and the resulting set of data is also smaller.

Use the database to do the join when sourcing data from the same database schema. Database systems usually can perform the join more quickly than the Informatica Server, so a SQL override or a join condition should be used when joining multiple tables from the same database schema.

Optimize Sequence Generator Transformations

Sequence Generator transformations need to determine the next available sequence number; thus, increasing the Number of Cached Values property can increase performance. This property determines the number of values the Informatica Server caches at one time. If it is set to cache no values, then the Informatica Server must query the Informatica repository each time to determine the next number that can be used. Configuring the Number of Cached Values to a value greater than 1000 should be considered. It should be noted that any cached values not used in the course of a session are 'lost', since the sequence generator value in the repository is set, when it is called the next time, to give the next set of cache values.

Avoid External Procedure Transformations

For the most part, making calls to external procedures slows down a session. If possible, avoid the use of these Transformations, which include Stored Procedures, External Procedures, and Advanced External Procedures.

Field Level Transformation Optimization

As a final step in the tuning process, expressions used in transformations can be tuned. Processing field level transformations takes time. If the transformation expressions are complex, then processing will be slower. It is often possible to get a 10-20% performance improvement by optimizing complex field level transformations. Use the target table mapping reports or the Metadata Reporter to examine the transformations. Likely candidates for optimization are the fields with the most complex expressions. Keep in mind that there may be more than one field causing performance problems.

To help isolate slow expressions, do the following:

1. Time the session with the original expression.
2. Copy the mapping and replace half the complex expressions with a constant.
3. Run and time the edited session.
4. Make another copy of the mapping and replace the other half of the complex expressions with a constant.
5. Run and time the edited session.

When examining expressions, focus on complex expressions for possible simplification.

Factoring Out Common Logic

This can reduce the number of times a mapping performs the same logic. If a mapping performs the same logic multiple times, moving the task upstream in the mapping may allow the logic to be done just once. For example, a mapping has five target tables, and each target requires a Social Security Number lookup. Instead of performing the lookup right before each target, move the lookup to a position before the data flow splits.

Minimize Function Calls

Anytime a function is called it takes resources to process, and there are several common examples where function calls can be reduced or eliminated. Aggregate function calls can sometimes be reduced; in the case of each aggregate function call, the Informatica Server must search and group the data. Thus the following expression:

SUM(Column A) + SUM(Column B)

can be optimized to:

SUM(Column A + Column B)

In general, operators are faster than functions, so operators should be used whenever possible. For example, if you have an expression which involves a CONCAT function such as:

CONCAT(CONCAT(FIRST_NAME, ' '), LAST_NAME)

it can be optimized to:

FIRST_NAME || ' ' || LAST_NAME

Remember that IIF() is a function that returns a value, not just a logical test. This allows many logical statements to be written in a more compact fashion. For example:

IIF(FLG_A='Y' and FLG_B='Y' and FLG_C='Y', VAL_A+VAL_B+VAL_C,
IIF(FLG_A='Y' and FLG_B='Y' and FLG_C='N', VAL_A+VAL_B,
IIF(FLG_A='Y' and FLG_B='N' and FLG_C='Y', VAL_A+VAL_C,
IIF(FLG_A='Y' and FLG_B='N' and FLG_C='N', VAL_A,
IIF(FLG_A='N' and FLG_B='Y' and FLG_C='Y', VAL_B+VAL_C,
IIF(FLG_A='N' and FLG_B='Y' and FLG_C='N', VAL_B,
IIF(FLG_A='N' and FLG_B='N' and FLG_C='Y', VAL_C,
IIF(FLG_A='N' and FLG_B='N' and FLG_C='N', 0.0))))))))

can be optimized to:

IIF(FLG_A='Y', VAL_A, 0.0) + IIF(FLG_B='Y', VAL_B, 0.0) + IIF(FLG_C='Y', VAL_C, 0.0)

The original expression has 8 IIFs, 16 ANDs, and 24 comparisons. The optimized expression results in 3 IIFs, 3 comparisons, and two additions.

Be creative in making expressions more efficient. The following is an example of reworking an expression to reduce three comparisons to one:

IIF(X=1 OR X=5 OR X=9, 'yes', 'no')

can be optimized to:

IIF(MOD(X, 4) = 1, 'yes', 'no')

Use DECODE instead of LOOKUP

When a LOOKUP function is used, the Informatica Server must look up a table in the database. When a DECODE function is used, the lookup values are incorporated into the expression itself, so the Informatica Server does not need to look up a separate table. Thus, when looking up a small set of unchanging values, using DECODE may improve performance.

Calculate Once, Use Many Times

Avoid calculating or testing the same value multiple times. If the same subexpression is used several times in a transformation, consider making the subexpression a local variable. The local variable can be used only within the transformation, but calculating the variable only once can speed performance.

Choose Numeric versus String Operations

The Informatica Server processes numeric operations faster than string operations. For example, if a lookup is done on a large amount of data on two columns, EMPLOYEE_NAME and EMPLOYEE_ID, configuring the lookup around EMPLOYEE_ID improves performance.

Optimizing Char-Char and Char-Varchar Comparisons

When the Informatica Server performs comparisons between CHAR and VARCHAR columns, it slows each time it finds trailing blank spaces in the row. The Treat CHAR as CHAR On Read option can be set in the Informatica Server setup so that the Informatica Server does not trim trailing spaces from the end of CHAR source fields.

Reduce the Number of Transformations in a Mapping

Whenever possible, the number of transformations should be reduced, as there is always overhead involved in moving data between transformations. Along the same lines, unnecessary links between transformations should be removed to minimize the amount of data moved. This is especially important with data being pulled from the Source Qualifier Transformation.

Tuning Sessions for Better Performance

Challenge

Running sessions is where 'the pedal hits the metal'. A common misconception is that this is the area where most tuning should occur. While it is true that various specific session options can be modified to improve performance, this should not be the major or only area of focus when implementing performance tuning.

Description

When you have finished optimizing the sources, target database, and mappings, you should review the sessions for performance optimization.

Caches

The greatest area for improvement at the session level usually involves tweaking memory cache settings. The Aggregator, Joiner, Rank, and Lookup Transformations use caches. Because index and data caches are created for each of these transformations, both the index cache and data cache sizes may affect performance, depending on the factors discussed in the following paragraphs. Review the memory cache settings for sessions where the mappings contain any of these transformations.

When performance details are collected for a session, information about readfromdisk and writetodisk counters for Aggregator, Joiner, Rank, and/or Lookup transformations can point to a session bottleneck. Any value other than zero for these counters may indicate a bottleneck.

When the PowerCenter Server creates memory caches, it may also create cache files. Both index and data cache files can be created for the following transformations in a mapping:

• Aggregator transformation (without sorted ports)
• Joiner transformation
• Rank transformation
• Lookup transformation (with caching enabled)

The PowerCenter Server creates the index and data cache files by default in the PowerCenter Server variable directory, $PMCacheDir. The naming convention used by the PowerCenter Server for these files is PM [type of widget] [generated number].dat or .idx; for example, an aggregate data cache file would be named PMAGG31_19.dat. The cache directory may be changed, however, if disk space is a constraint. Informatica recommends that the cache directory be local to the PowerCenter Server; you may encounter performance or reliability problems when you cache large quantities of data on a mapped or mounted drive. The session fails if the local directory runs out of disk space.

If the PowerCenter Server requires more memory than the configured cache size, it stores the overflow values in these cache files. Since paging to disk can slow session performance, try to configure the index and data cache sizes to store the appropriate amount of data in memory. The PowerCenter Server writes to the index and data cache files during a session in the following cases:

• The mapping contains one or more Aggregator transformations, and the session is configured for incremental aggregation.
• The mapping contains a Lookup transformation that is configured to use a persistent lookup cache, and the Informatica Server runs the session for the first time.
• The mapping contains a Lookup transformation that is configured to initialize the persistent lookup cache.
• The DTM runs out of cache memory and pages to the local cache files. The DTM may create multiple files when processing large amounts of data.

When a session completes, the DTM generally deletes the overflow index and data cache files. However, index and data files may exist in the cache directory if the session is configured for either incremental aggregation or to use a persistent lookup cache. Cache files may also remain if the session does not complete successfully.

If a cache file handles more than 2 gigabytes of data, the PowerCenter Server creates multiple index and data files. When creating these files, the PowerCenter Server appends a number to the end of the filename, such as PMAGG*.idx1 and PMAGG*.idx2. The number of index and data files is limited only by the amount of disk space available in the cache directory. When a session is run, the PowerCenter Server writes a message in the session log indicating the cache file name and the transformation name. Refer to Chapter 9: Session Caches in the Informatica Session and Server Guide for detailed information on determining cache sizes.

Aggregator Caches

Keep the following items in mind when configuring the aggregate memory cache sizes:

• Allocate at least enough space to hold at least one row in each aggregate group.
• Remember that you only need to configure cache memory for an Aggregator transformation that does NOT use sorted ports. The PowerCenter Server uses memory to process an Aggregator transformation with sorted ports, not cache memory.

• Incremental aggregation can improve session performance. When it is used, the PowerCenter Server saves index and data cache information to disk at the end of the session. The next time the session runs, the PowerCenter Server uses this historical information to perform the incremental aggregation. The PowerCenter Server names these files PMAGG*.dat and PMAGG*.idx and saves them to the cache directory. Mappings that have sessions which use incremental aggregation should be set up so that only new detail records are read with each subsequent run.

Joiner Caches

When a session is run with a Joiner transformation, the PowerCenter Server reads all the rows from the master source and builds memory caches based on the master rows. After the memory caches are built, the PowerCenter Server reads the rows from the detail source and performs the joins. The source with fewer records should be specified as the master source because only the master source records are read into cache. Also, the PowerCenter Server automatically aligns all data for joiner caches on an eight-byte boundary, which helps increase the performance of the join.

Lookup Caches

Several options can be explored when dealing with lookup transformation caches:

• Persistent caches should be used when lookup data is not expected to change often. Lookup cache files are saved after a session with a lookup that uses a persistent cache is run for the first time. These files are reused for subsequent runs, bypassing the querying of the database for the lookup. If the lookup table changes, you must be sure to set the Recache from Database option to ensure that the lookup cache files will be rebuilt.
• Using a lookup cache can sometimes increase session performance. Lookup caching should be enabled for relatively small tables. Refer to Best Practice: Tuning Mappings for Better Performance to determine when lookups should be cached. Just like for a joiner, the PowerCenter Server aligns all data for lookup caches on an eight-byte boundary, which helps increase the performance of the lookup.
• When the Lookup transformation is not configured for caching, the PowerCenter Server queries the lookup table for each input row. The result of the Lookup query and processing is the same, regardless of whether the lookup table is cached or not. However, when the transformation is configured not to cache, the PowerCenter Server queries the lookup table instead of the lookup cache.

Allocating Buffer Memory

When the PowerCenter Server initializes a session, it allocates blocks of memory to hold source and target data. The PowerCenter Server uses DTM buffer memory to create the internal data structures and buffer blocks used to bring data into and out of the Server. Sessions that use a large number of sources and targets may require additional memory blocks. If there are XML sources and targets in the mappings, use the number of groups in the XML source or target in the total calculation for the total number of sources and targets.

You can tweak session properties to increase the number of available memory blocks by adjusting:

• DTM Buffer Pool Size – the default setting is 12,000,000 bytes
• Default Buffer Block Size – the default size is 64,000 bytes

To configure these settings, first determine the number of memory blocks the PowerCenter Server requires to initialize the session. Then you can calculate the buffer pool size and/or the buffer block size, based on the default settings, to create the required number of session blocks.

Increasing the DTM Buffer Pool Size

The DTM Buffer Pool Size setting specifies the amount of memory the PowerCenter Server uses as DTM buffer memory. If a session's performance details show low numbers for your source and target BufferInput_efficiency and BufferOutput_efficiency counters, increasing the DTM buffer pool size may improve performance. When the DTM buffer memory is increased, the PowerCenter Server creates more buffer blocks, which can improve performance during momentary slowdowns. Increasing DTM buffer memory allocation generally causes performance to improve initially and then level off. If you don't see a significant performance increase after increasing DTM buffer memory, then it was not a factor in session performance. If a session is part of a concurrent batch, the combined DTM buffer memory allocated for the sessions or batches must not exceed the total memory for the PowerCenter Server system.

Optimizing the Buffer Block Size

Within a session, you may modify the buffer block size by changing it in the Advanced Parameters section. This specifies the size of a memory block that is used to move data throughout the pipeline; the default is 64K. Each source, each transformation, and each target may have a different row size, which results in different numbers of rows that can fit into one memory block. Row size is determined in the server, based on the number of ports, their datatypes, and precisions. Ideally, block size should be configured so that it can hold roughly 100 rows, plus or minus a factor of ten; when calculating this, use the source or target with the largest row size. The buffer block size does not become a factor in session performance until the number of rows falls below 10 or goes above 1000. Also, Informatica recommends that the size of the shared memory (which determines the number of buffers available to the session) should not be increased at all unless the mapping is "complex" (i.e., more than 20 transformations).
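The "roughly 100 rows per block" guideline can be turned into a back-of-envelope calculation. The sketch below is only an illustration of that rule of thumb; the 1,200-byte row width is a made-up input, and the real figure should come from the largest source or target row as described above.

    DEFAULT_BLOCK_SIZE = 64_000     # bytes, the default Buffer Block Size

    def suggested_block_size(largest_row_bytes, rows_per_block=100):
        """Rule-of-thumb block size: room for ~100 of the widest rows,
        never smaller than the 64K default."""
        return max(DEFAULT_BLOCK_SIZE, largest_row_bytes * rows_per_block)

    # Example: a widest row of ~1,200 bytes suggests a block of ~120,000 bytes.
    print(suggested_block_size(1_200))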

Also. you must remember to increase the size of the database rollback segments to accommodate this larger PAGE BP-174 BEST PRACTICES INFORMATICA CONFIDENTIAL . based on number of ports. disk. Partitioning allows you to break a single source into multiple sources and to run each in parallel. and the slower the overall performance. processing. plus or minus a factor of ten. If you increase the commit interval. Therefore. use the source or target with the largest row size. When increasing the commit interval at the session level. Increasing the Target Commit Interval One method of resolving target database bottlenecks is to increase the commit interval. Informatica recommends that the size of the shared memory (which determines the number of buffers available to the session) should not be increased at all unless the mapping is “complex” (i. When calculating this. and writing. the more often the PowerCenter Server writes to the target database. so it may need to be increased for optimal performance. Partitioning Sessions If large amounts of data are being processed with PowerCenter 5.. Also. thus allowing for simultaneous reading. The PowerCenter Server will spawn a Read and Write thread for each partition. it has been noted that simple mappings (i. Running Concurrent Batches Performance can sometimes be improved by creating a concurrent batch to run several sessions in parallel on one PowerCenter Server. Keep in mind that each partition will compete for the same resources (i.4 CPUs for the first session. If there is a complex mapping with multiple sources. more than 20 transformations).. This technique should only be employed on servers with multiple CPUs available. The buffer block size does not become a factor in session performance until the number of rows falls below 10 or goes above 1000.x. performance slows.e. mappings with only a few transformations) do not make the engine “CPU bound” .is determined in the server. data can be processed in parallel with a single session by partitioning the source via the source qualifier. block size should be configured so that it can hold roughly 100 rows. The default is 64K. the DTM buffer pool size is split among all partitions. Each concurrent session will use a maximum of 1. If there are independent sessions that use separate sources and mappings to populate different targets. the number of times the PowerCenter Server commits decreases and performance may improve. the smaller the commit interval. memory. so make sure that the hardware and memory are sufficient to support a parallel session. their datatypes and precisions.e. This enables you to place the sessions for each of the mappings in a concurrent batch to be run in parallel. Each time the PowerCenter Server commits. Ideally. they can be placed in a concurrent batch and run at the same time.. and a maximum of 1 CPU for each additional session.e. you can separate it into several simpler mappings with separate sources. and CPU). and therefore use a lot less processing power than a full CPU.

One of the major reasons that Informatica has set the default commit interval to 10,000 is to accommodate the default rollback segment / extent size of most databases. If you increase both the commit interval and the database rollback segments, you should see an increase in performance. In some cases, though, just increasing the commit interval without making the appropriate database changes may cause the session to fail part way through (you may get a database error like "unable to extend rollback segments" in Oracle).

Disabling Decimal Arithmetic

If a session runs with decimal arithmetic enabled, disabling decimal arithmetic may improve session performance. The Decimal datatype is a numeric datatype with a maximum precision of 28. To use a high-precision Decimal datatype in a session, it must be configured so that the PowerCenter Server recognizes this datatype by selecting Enable Decimal Arithmetic in the session property sheet. However, since reading and manipulating a high-precision datatype (i.e., one with a precision of greater than 28) can slow the PowerCenter Server, session performance may be improved by disabling decimal arithmetic.

Disabling Session Recovery

You can improve performance by turning off session recovery. The PowerCenter Server setup can be set to disable session recovery. The PowerCenter Server writes recovery information in the OPB_SRVR_RECOVERY table during each commit, which can decrease performance. Be sure to weigh the importance of improved session performance against the ability to recover an incomplete session when considering this option.

Reducing Error Tracing

If a session contains a large number of transformation errors, you may be able to improve performance by reducing the amount of data the PowerCenter Server writes to the session log. To reduce the amount of time spent writing to the session log file, set the tracing level to Terse. At this tracing level, the PowerCenter Server does not write error messages or row-level information for reject data. Terse tracing should only be set if the sessions run without problems and session details are not required. Note that the tracing level must be set to Normal in order to use the reject loading utility. However, if Terse is not an acceptable level of detail, you may want to consider leaving the tracing level at Normal and focus your efforts on reducing the number of transformation errors.

As an additional debug option (beyond the PowerCenter Debugger), you may set the tracing level to Verbose to see the flow of data between transformations. However, this will significantly affect session performance. Do not use Verbose tracing except when testing sessions.

The session tracing level overrides any transformation-specific tracing levels within the mapping. Always remember to switch tracing back to Normal after testing is complete. Informatica does not recommend reducing error tracing as a long-term response to high levels of transformation errors. Because there are only a handful of reasons why transformation errors occur, it makes sense to fix and prevent any recurring transformation errors.

Determining Bottlenecks

Challenge

Because there are many variables involved in identifying and rectifying performance bottlenecks, an efficient method for determining where bottlenecks exist is crucial to good data warehouse management.

Description

The first step in performance tuning is to identify performance bottlenecks. Carefully consider the following five areas to determine where bottlenecks exist, investigating each area in the order indicated:

1. Write
2. Read
3. Mapping
4. Session
5. System

Before you begin, you should establish an approach for identifying performance bottlenecks. To begin, use a process of elimination and attempt to isolate the problem by running test sessions. You should be able to compare the session's original performance with that of the tuned session's performance; the actual execution time may be used as a performance metric.

The swap method is very useful for determining the most common bottlenecks. It involves the following five steps:

1. Make a temporary copy of the mapping and/or session that is to be tuned.
2. Make appropriate tuning changes to the copied mappings and/or sessions.
3. Implement only one change at a time and test for any performance improvements to gauge which tuning methods work most effectively in the environment.
4. Document the change made to the mapping and/or session and the performance metrics achieved as a result of the change.
5. Delete the temporary sessions upon completion of performance tuning.

Write Bottlenecks

Relational Targets
The most common performance bottleneck occurs when the PowerCenter Server writes to a target database. This type of bottleneck can easily be identified with the following procedure:

1. Make a copy of the original session.
2. Configure the test session to write to a flat file.

If the session performance is significantly increased when writing to a flat file, you have a write bottleneck.

Flat File Targets
If the session targets a flat file, you probably do not have a write bottleneck. You can optimize session performance by writing to a flat file target local to the PowerCenter Server. If the local flat file is very large, you can optimize the write process by dividing it among several physical drives.

Read Bottlenecks

Relational Sources
If the session reads from a relational source, you should first use a read test session with a flat file as the source in the test session. You may also use a database query to indicate if a read bottleneck exists.

Using a Test Session with a Flat File Source
1. Create a mapping and session that writes the source table data to a flat file.
2. Create a test mapping that contains only the flat file source, the source qualifier, and the target table.
3. Create a session for the test mapping.

If the test session's performance increases significantly, you have a read bottleneck.

Using a Database Query
To identify a source bottleneck by executing a read query directly against the source database, follow these steps:

1. Copy the read query directly from the session log.
2. Run the query against the source database with a query tool such as SQL Plus.
3. Measure the query execution time and the time it takes for the query to return the first row. A long delay between the two time measurements points to a source bottleneck in the query itself.
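The two measurements in step 3 can also be captured programmatically rather than with a stopwatch. The sketch below is a generic illustration using any Python DB-API connection; the connect call is a placeholder for whatever driver and credentials apply in your environment, and it simply times the first row separately from the full fetch.

    import time

    def time_read_query(connection, query):
        """Return (seconds_to_first_row, seconds_total) for a read query."""
        cursor = connection.cursor()
        start = time.perf_counter()
        cursor.execute(query)
        cursor.fetchone()                      # first row back from the source
        first_row = time.perf_counter() - start
        cursor.fetchall()                      # drain the remaining rows
        total = time.perf_counter() - start
        cursor.close()
        return first_row, total

    # Usage (placeholder connection):
    # conn = some_dbapi_driver.connect(...)
    # print(time_read_query(conn, "SELECT ... copied from the session log"))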

Flat File Sources
If your session reads from a flat file source, you probably do not have a read bottleneck. Ensure the flat file source is local to the PowerCenter Server. Tuning the Line Sequential Buffer Length to a size large enough to hold approximately four to eight rows of data at a time (for flat files) may also help when reading flat file sources.

Mapping Bottlenecks

If you have eliminated the reading and writing of data as bottlenecks, you may have a mapping bottleneck. Use the swap method to determine if the bottleneck is in the mapping. Follow these steps to identify mapping bottlenecks:

Using a Test Mapping Without Transformations
1. Make a copy of the original mapping.
2. In the copied mapping, retain only the sources, source qualifiers, and any custom joins or queries.
3. Remove all transformations.
4. Connect the source qualifiers to the target.

After using the swap method, you can use the session's performance details to determine if mapping bottlenecks exist. High Rowsinlookupcache and Errorrows counters indicate mapping bottlenecks:

• High Rowsinlookupcache counters: Multiple lookups can slow the session. You may improve session performance by locating the largest lookup tables and tuning those lookup expressions.
• High Errorrows counters: Transformation errors affect session performance. If a session has large numbers in any of the Transformation_errorrows counters, you may improve performance by eliminating the errors.

For further details on eliminating mapping bottlenecks, refer to the Best Practice: Tuning Mappings for Better Performance.

Session Bottlenecks

Session performance details can be used to flag other problem areas in the session Advanced Options parameters or in the mapping.

Low Buffer Input and Buffer Output Counters

If the BufferInput_efficiency and BufferOutput_efficiency counters are low for all sources and targets, increasing the session DTM buffer pool size may improve performance.

Aggregator, Rank, and Joiner Readfromdisk and Writetodisk Counters
If a session contains Aggregator, Rank, or Joiner transformations, examine each Transformation_readfromdisk and Transformation_writetodisk counter. If these counters display any number other than zero, you can improve session performance by increasing the index and data cache sizes. For further details on eliminating session bottlenecks, refer to the Best Practice: Tuning Sessions for Better Performance.

System Bottlenecks

After tuning the source, target, mapping, and session, you may also consider tuning the system hosting the PowerCenter Server.

Windows NT/2000: Use system tools such as the Performance tab in the Task Manager or the Performance Monitor to view CPU usage and total memory usage.

UNIX: Use system tools like vmstat and iostat to monitor such items as system performance and disk swapping actions.

For further information regarding system tuning, refer to the Best Practices: Performance Tuning UNIX-Based Systems and Performance Tuning NT/2000-Based Systems.

The following table details the Performance Counters that can be used to flag session and mapping bottlenecks. Note that these can only be found in the Session Performance Details file.

Transformation: Source Qualifier and Normalizer Transformations
  BufferInput_Efficiency – Percentage reflecting how seldom the reader waited for a free buffer when passing data to the DTM.
  BufferOutput_Efficiency – Percentage reflecting how seldom the DTM waited for a full buffer of data from the reader.

Transformation: Target
  BufferInput_Efficiency – Percentage reflecting how seldom the DTM waited for a free buffer when passing data to the writer.
  BufferOutput_Efficiency – Percentage reflecting how seldom the Informatica Server (writer) waited for a full buffer of data from the DTM.

Transformation: Aggregator and Rank Transformations
  Aggregator/Rank_readfromdisk – Number of times the Informatica Server read from the index or data file on the local disk, instead of using cached data.
  Aggregator/Rank_writetodisk – Number of times the Informatica Server wrote to the index or data file on the local disk, instead of using cached data.

Transformation: Joiner Transformation (see Note below)
  Joiner_readfromdisk – Number of times the Informatica Server read from the index or data file on the local disk, instead of using cached data.
  Joiner_writetodisk – Number of times the Informatica Server wrote to the index or data file on the local disk, instead of using cached data.

Transformation: Lookup Transformation
  Lookup_rowsinlookupcache – Number of rows stored in the lookup cache.

Transformation: All Transformations
  Transformation_errorrows – Number of rows in which the Informatica Server encountered an error.

Note: The PowerCenter Server generates two sets of performance counters for a Joiner transformation. The first set of counters refers to the master source; the second set refers to the detail source. The Joiner transformation does not generate output row counters associated with the master source.

Advanced Client Configuration Options

Challenge

Setting the Registry in order to ensure consistent client installations, resolve potential missing or invalid license key issues, and change the Server Manager Session Log Editor to your preferred editor.

Description

Ensuring Consistent Data Source Names
To ensure the use of consistent data source names for the same data sources across the domain, the Administrator can create a single "official" set of data sources, then use the Repository Manager to export that connection information to a file. You can then distribute this file and import the connection information for each client machine.

Solution
• From Repository Manager, choose Export Registry from the Tools drop-down menu.
• For all subsequent client installs, simply choose Import Registry from the Tools drop-down menu.

Resolving the Missing or Invalid License Key Issue
The "missing or invalid license key" error occurs when attempting to install PowerCenter Client tools on NT 4.0 or Windows 2000 with a userid other than 'Administrator.' This problem also occurs when the client software tools are installed under the Administrator account and a user with a non-administrator ID subsequently attempts to run the tools. The user who attempts to log in using the normal 'non-administrator' userid will be unable to start the PowerCenter Client tools; instead, the software will display the message indicating that the license key is missing or invalid.

Solution
• While logged in as the installation user with administrator authority, use regedt32 to go into the registry.
• Under HKEY_LOCAL_MACHINE, open Software/Informatica/PowerMart Client Tools/.
• From the menu bar, select Security/Permissions, and grant read access to the users that should be permitted to use the PowerMart Client.

(Note that the registry entries for both PowerMart and PowerCenter server and client tools are stored as PowerMart Server and PowerMart Client tools.)

Changing the Server Manager Session Log Editor
The session log editor is not automatically determined when the PowerCenter Client tools are installed. A window appears the first time a session log is viewed from the PowerCenter Server Manager, prompting the user to enter the full path name of the editor to be used to view the logs. Users often set this parameter incorrectly and must access the registry to change it.

Solution
• While logged in as the installation user with administrator authority, use regedt32 to edit the registry.
• Move to registry path location: HKEY_CURRENT_USER\Software\Informatica\PowerMart Client Tools\[CLIENT VERSION]\Server Manager\Session Files.
• From the menu bar, select View Tree and Data.
• Select the Log File Editor entry by double-clicking on it.
• Replace the entry with the appropriate editor entry (typically WordPad.exe or Write.exe).
• Select Registry --> Exit from the menu bar to save the entry.

Advanced Server Configuration Options

Challenge

Configuring the Throttle Reader and File Debugging options, adjusting semaphore settings in the UNIX environment, and configuring server variables.

Description

Configuring the Throttle Reader
If problems occur when running sessions, some adjustments at the Server level can help to alleviate issues or isolate problems. One technique that often helps resolve "hanging" sessions is to limit the number of reader buffers using Throttle Reader. This parameter closely manages buffer blocks in memory by restricting the number of blocks that can be utilized by the Reader. This is particularly effective if your mapping contains many target tables, or if the session employs constraint-based loading.

Note for PowerCenter 5.x and above ONLY: If a session is hanging and it is partitioned, it is best to remove the partitions before adjusting the throttle reader. When a session is partitioned, the server makes separate connections to the source and target for every partition, which causes the server to manage many buffer blocks. If the session still hangs, try adjusting the throttle reader.

Solution: To limit the number of reader buffers using Throttle Reader in NT/2000:
• Access hkey_local_machine\system\currentcontrolset\services\powermart\parameters\miscinfo.
• Create a new String value with a value name of 'ThrottleReader' and value data of '10'.

To do the same thing in UNIX:
• Add this line to the .cfg file:
  ThrottleReader=10

Configuring File Debugging Options
If problems occur when running sessions, or if the PowerCenter Server has a stability issue, help technical support to resolve the issue by supplying them with Debug files.

To set the debug options on for NT/2000:
1. Select Start, Run, and type "regedit".
2. Go to hkey_local_machine, system, current_control_set, services, powermart, miscInfo.
3. Select Edit, then Add Value.
4. Place "DebugScrubber" as the value, then hit OK.
5. Insert "4" as the value.
6. Repeat steps 4 and 5, but use "DebugWriter", "DebugReader", and "DebugDTM", with all three set to "1".

To do the same in UNIX, insert the following entries in the pmserver.cfg file:
• DebugScrubber=4
• DebugWriter=1
• DebugReader=1
• DebugDTM=1

Adjusting Semaphore Settings
The UNIX version of the PowerCenter Server uses operating system semaphores for synchronization. You may need to increase these semaphore settings before installing the server. The number of semaphores required to run a session is 7. Most installations require between 64 and 128 available semaphores, depending on the number of sessions the server runs concurrently. This is in addition to any semaphores required by other software, such as database servers. The total number of available operating system semaphores is an operating system configuration parameter, with a limit per user and per system. The method used to change the parameter depends on the operating system:
• HP/UX: Use sam (1M) to change the parameters.
• Solaris: Use admintool or edit /etc/system to change the parameters.
• AIX: Use smit to change the parameters.
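The 7-semaphores-per-session figure makes it easy to estimate a starting point before touching kernel settings. The sketch below is simple arithmetic based on that figure; the allowance for other software is an assumption that should be replaced with real numbers for your database and other services.

    SEMAPHORES_PER_SESSION = 7

    def estimate_semaphores(max_concurrent_sessions, other_software_allowance=64):
        """Rough count of OS semaphores needed; stay within the 64-128 installation norm."""
        needed = max_concurrent_sessions * SEMAPHORES_PER_SESSION + other_software_allowance
        return max(needed, 64)

    # Example: 10 concurrent sessions -> 70 for PowerCenter plus headroom for the database, etc.
    print(estimate_semaphores(10))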

Must be equal to the maximum number of processes. SEMMNI determines the number of semaphores that can be created at any one time. Number of semaphore set identifiers in the system.Parameter SHMMAX SHMMIN SHMMNI SHMSEG Recommended Value for Solaris 4294967295 1 100 10 Description Maximum size in bytes of a shared memory segment. Number of semaphores in the system. Ease of switching sessions from one server machine to another without manually editing all the sessions to change directory paths. SEMMNS SEMMNI 200 70 SEMMSL equal to or greater than the value of the PROCESSES initialization parameter For example. Maximum number of semaphores in one semaphore set. PAGE BP-186 BEST PRACTICES INFORMATICA CONFIDENTIAL . Configuring Server Variables One configuration best practice is to properly configure and leverage Server variables. Maximum number of shared memory segments that can be attached by a process. Benefits of using server variables: • • Ease of deployment from development environment to production environment. Minimum size in bytes of a shared memory segment. you might add the following lines to the Solaris /etc/system file to configure the UNIX kernel: set shmsys:shminfo_shmmax = 4294967295 set shmsys:shminfo_shmmin = 1 set shmsys:shminfo_shmmni = 100 set shmsys:shminfo_shmseg = 10 set semsys:shminfo_semmns = 200 set semsys:shminfo_semmni = 70 Always reboot the system after configuring the UNIX kernel. Number of shared memory identifiers.

• All the variables are related to directory paths used by the server. The list is fixed, not user-extensible. Each registered server has its own set of variables.

Approach
In Server Manager, edit the server configuration to set or change the variables.

Server Variable              Value
$PMRootDir                   (no default – user must insert a path)
$PMSessionLogDir             $PMRootDir/SessLogs
$PMBadFileDir                $PMRootDir/BadFiles
$PMCacheDir                  $PMRootDir/Cache
$PMTargetFileDir             $PMRootDir/TargetFiles
$PMSourceFileDir             $PMRootDir/SourceFiles
$PMExtProcDir                $PMRootDir/ExtProc
$PMSuccessEmailUser          (no default – user must insert a path)
$PMFailureEmailUser          (no default – user must insert a path)
$PMSessionLogCount           0
$PMSessionErrorThreshold     0

Where are these variables referenced?
• Server Manager session editor: anywhere in the fields for session log directory, bad file directory, etc.
• Designer: Aggregator/Rank/Joiner attribute for 'Cache Directory'; External Procedure attribute for 'Location'.

Does every session and mapping have to use these variables (are they mandatory)?
• No. If the session log directory is specified as $PMSessionLogDir, then the logs are put in that location. Note that this location may be different on every server; this is in fact a primary purpose for utilizing variables. But if the session log directory field is changed to designate a specific location, e.g. '/home/john/logs', then the session logs will instead be placed in the directory location as designated. (The variable $PMSessionLogDir will be unused, so it does not matter what the value of the variable is set to.)

What if a variable is not referenced in the session or mapping?
• The variable is just a convenience; the user can choose to use it or not. The variable will be expanded only if it is explicitly referenced from another location. If you remove any variable reference from the session or the widget attributes, then the server does not use that variable.
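The way the defaults in the table resolve can be pictured as simple string substitution. The sketch below is only an illustration of that expansion behavior, not how the PowerCenter Server actually implements it; the $PMRootDir path shown is a made-up example.

    server_variables = {
        "$PMRootDir": "/export/home/powercenter",     # site-specific; no default
        "$PMSessionLogDir": "$PMRootDir/SessLogs",
        "$PMBadFileDir": "$PMRootDir/BadFiles",
    }

    def expand(value, variables):
        """Repeatedly substitute $PM... variables until none remain."""
        while "$PM" in value:
            for name, val in variables.items():
                if name in value:
                    value = value.replace(name, val)
                    break
            else:
                break        # unknown variable; leave as-is
        return value

    print(expand(server_variables["$PMSessionLogDir"], server_variables))
    # -> /export/home/powercenter/SessLogs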


Platform Sizing

Challenge

Determining the appropriate platform size to support PowerCenter, considering specific environmental and processing requirements.

Description

This Best Practice provides general guidance for sizing computing environments. It also discusses some potential questions and pitfalls that may arise when migrating to Production. Environmental configurations may vary greatly with regard to hardware and software sizing, and sizing may not be an easy task because it may be necessary to configure a single server to support numerous applications. In addition to requirements for PowerCenter, other applications may share the server. Be sure to consider all mandatory server software components, including the operating system and all of its components, the database engine, front-end engines, etc. Regardless of whether or not the server is shared, it will be necessary to research the requirements of these additional software components when estimating the size of the overall environment, which must also provide a level of performance that meets your needs. Certain terms used within this Best Practice are specific to Informatica's PowerCenter; please consult the appropriate PowerCenter manuals for explanation of these terms where necessary.

Technical Information

Before delving into key sizing questions, let us review the PowerCenter engine and its associated resource needs. Each session:

• Represents an active task that performs data loading.
• Uses up to 140% of CPU resources.
• Requires 20-30 MB of memory per session if there are no aggregations, lookups, or heterogeneous data joins contained within the mapping.
• Requires additional memory when caching for aggregation, lookups, or heterogeneous joins.

The amount of memory can be calculated per session; refer to the Session and Server guide to determine the exact amount of memory necessary per session. PowerCenter also provides session parameters that can be set to specify the amount of required shared memory per session. This shared memory setting is important, as it will dictate the amount of RAM required when running concurrent sessions. This is important to remember if sessions will be executed concurrently.

Consider the following questions when estimating the required number of sessions. and disk space to achieve the required performance to meet the load window. disk space will need to be carefully considered: • • • • Data is staged to flat files on the PowerCenter server. when cached in full. and the caching requirements for the session’s lookup tables. and heterogeneous joins. The space consumed is about the size of the data aggregated. and offers general guidance for estimating session resources. Refer to the Session and Server guide to determine the exact amount of memory necessary per session. Key Questions The goal of this analysis is to size the machine so that the ETL processes can complete within the specified load window. unless the cache requires it after filling system memory. Aggregate caches store the individual groups. PAGE BP-190 BEST PRACTICES INFORMATICA CONFIDENTIAL . or joins. However. Use these estimates along with recommendations in the preceding Technical Information section to determine the required number of processors. memory. In a join. The Performance Tuning section provides additional information on factors that typically affect session performance. The PowerCenter engine: Requires 20-30 MB of memory for the main server engine for session coordination. because: • • • Lookup tables. more memory is used if there are more groups. lookups. Note: It may be helpful to refer to the Performance Tuning section in Phase 4 of the Informatica Methodology when determining memory settings. aggregation. if the following conditions exist. Note: Sorting the input to aggregations will greatly reduce the need for memory. or heterogeneous data joins contained within the mapping. Data is stored in incremental aggregation files for adding data to aggregates. result in memory consumption commensurate with the size of the tables involved. This includes all types of data such as flat files and database tables.• • Requires 20-30 MB of memory per session if there are no aggregations. lookups. Temporary space is not used like a database on disk. Requires additional memory when caching for aggregation. The amount of memory can be calculated per session. May require additional memory for the caching of aggregations. Disk space is not a factor if the machine is dedicated exclusively to the server engine. lookups. or joins. cache the master table. Data does not need to be stripped to prevent head contention. memory consumed depends on the size of the master. the volume of data moved per session.
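The memory arithmetic above can be approximated with a simple calculation before consulting the Session and Server guide for exact figures. The following Python sketch is illustrative only: the 30 MB engine and per-session figures come from the upper end of the 20-30 MB guidance above, while the session names, cache sizes, and concurrency are hypothetical values to be replaced with your own estimates.

    # Rough RAM estimate for a set of concurrently running PowerCenter sessions.
    # Guidance above: ~20-30 MB for the server engine, ~20-30 MB per simple
    # session, plus the cache sizes for lookups, aggregates, and joins.

    ENGINE_MB = 30        # main server engine (session coordination)
    BASE_SESSION_MB = 30  # per session with no caches

    # Hypothetical concurrent session mix: (session name, cache MB required)
    sessions = [
        ("s_load_customers", 0),     # simple pass-through, no caches
        ("s_load_sales_fact", 400),  # large lookup caches
        ("s_agg_daily_sales", 150),  # aggregate cache sized to the groups
    ]

    total_mb = ENGINE_MB + sum(BASE_SESSION_MB + cache_mb for _, cache_mb in sessions)
    print(f"Estimated RAM for this concurrent mix: ~{total_mb} MB")  # ~670 MB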

Key Questions

The goal of this analysis is to size the machine so that the ETL processes can complete within the specified load window. Consider the following questions when estimating the required number of sessions, the volume of data moved per session, and the caching requirements for the sessions' lookups, aggregations, and heterogeneous joins. Use these estimates, along with the recommendations in the preceding Technical Information section, to determine the required number of processors, memory, and disk space to achieve the required performance to meet the load window. The Performance Tuning section provides additional information on factors that typically affect session performance, and offers general guidance for estimating session resources.

Please note that the hardware sizing analysis is highly dependent on the environment in which the server is deployed, and it is very important to understand the performance characteristics of the environment before making any sizing conclusions. It is vitally important to remember that, in addition to PowerCenter, other applications may be vying for server resources. PowerCenter commonly runs on a server that also hosts a database engine plus query/analysis tools. In an environment where PowerCenter runs in parallel with all of these tools, the query/analysis tool often drives the hardware requirements. However, the query/analysis tool requirements may not impose a sizing limitation if the ETL processing is performed after business hours. With these additional processing requirements in mind, consider platform size in light of the following questions:

• What sources are accessed by the mappings? How do you currently access those sources?
• Do the sources reside locally, or will they be accessed via a network connection? What kind of network connection exists?
• Have you decided on the target environment (database/hardware/operating system)? If so, what is it?
• Have you decided on the PowerCenter server environment (hardware/operating system)? Is it possible for the PowerCenter server to be on the same machine as the target?
• How will information be accessed for reporting purposes (e.g., ad-hoc query tool, cube, etc.) and what tools will you use to implement this access?
• What other applications or services, if any, run on the PowerCenter server?
• Has the database table space been distributed across controllers, where possible, to maximize throughput by reading and writing data in parallel?

When considering the server engine size, answer the following questions:

• Are there currently extract, transform, and load processes in place? If so, what are the processes, and how long do they take?
• What is the total volume of data that must be moved, in bytes? This includes all types of data such as flat files and database tables.
• What is the largest table (bytes and rows)? Is there any key on this table that could be used to partition load sessions, if necessary?
• How will the data be moved: via flat file processing or relational tables?
• What is the load strategy: is the data updated, incrementally loaded, or will all tables be truncated and reloaded?
• Will the data processing require staging areas?
• What is the load plan? Are there dependencies between facts and dimensions?
• How often will the data be refreshed? Will the refresh be scheduled at a certain time, or driven by external events?
• Is there a "modified" timestamp on the source table rows, enabling incremental load strategies?
• What is the size of the batch window that is available for the load?
• Does the load process populate detail data, aggregations, or both? If data is being aggregated, what is the ratio of source/target rows for the largest result set? How large is the result set (bytes and rows)?

The answers to these questions will provide insight into the factors that impact PowerCenter's resource requirements. To simplify the analysis, focus on large, "critical path" jobs that drive the resource requirement.

Links

The following link may prove helpful when determining the platform size: www.tpc.org. This website contains benchmarking reports that will help you fine tune your environment and may assist in determining the processing power required.

A Sample Performance Result

The following is a testimonial from a customer configuration. The performance tests were performed on a 4-processor Sun E4500 with 2GB of memory. The source and target database used in the tests was Oracle, and the source and target were both hosted locally on the ETL server.

In this test scenario, 22 sessions ran in parallel, populating a large product sales table. Four sessions ran after the set of 22, populating various summarization tables based on the product sales table. All of the mappings were complex, joining several sources and utilizing several Expression, Lookup and Aggregation transformations. This processor handled just under 20.5 million rows, and more than 2.8GB of data, in less than 54 minutes.

Please note that these performance tests were run on a previous version of PowerCenter, which did not include the performance and functional enhancements in release 5.1. These results are offered as one example of throughput. However, results will definitely vary by installation because each environment has a unique architecture and unique data characteristics.
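For rough planning purposes, that testimonial works out to approximately 20.5 million rows in 54 minutes, or about 380,000 rows per minute (roughly 6,300 rows per second), and about 2.8GB in 54 minutes, or roughly 53MB of data per minute, across the 22 concurrent sessions on 4 processors. These derived rates are simply arithmetic on the figures quoted above and, as noted, should not be treated as a benchmark for other environments.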

Running Sessions in Recovery Mode

Challenge

Use PowerCenter standard functionality to recover data that is committed to a session's targets.

Description

When a network or other problem causes a session whose source contains a million rows to fail after only half of the rows are committed to the target, one option is to truncate the target and run the session again from the beginning. But that is not the only option. Rather than processing the first half of the source again, you can tell the server to keep the data already committed to the target database, even if the session did not complete, and process only the rest of the source. This technique is called performing recovery.

When you run a session in recovery mode, the server notes the row id of the last row committed to the target database. The server then reads all sources again, but only processes from the subsequent row id. For example, if the server commits 1000 rows before the session fails, then when you run the session in recovery mode, the server reads all source tables and passes data to the Data Transformation Manager (DTM) starting from row 1001. This results in accurate and complete target data, as if the session completed successfully with one run.

When necessary, the server can recover the same session more than once. That is, if a session fails while running in recovery mode, you can re-run the session in recovery mode until the session completes successfully. This is called nested recovery.
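The row id bookkeeping described above can be pictured with a small, purely conceptual sketch. This is not PowerCenter code and does not reflect its internal implementation; the Python function below simply mimics the effect of "read all sources again, but only pass rows after the last committed row id to the DTM."

    # Conceptual illustration only: resume processing from the first
    # uncommitted row after a failure, mirroring the 1000/1001 example above.

    def rows_to_recover(source_rows, last_committed_row_id):
        """Yield (row_id, row) pairs that still need to be processed."""
        for row_id, row in enumerate(source_rows, start=1):
            if row_id <= last_committed_row_id:
                continue          # already committed in the failed run; skip
            yield row_id, row     # first yielded row is last_committed_row_id + 1

    source = [f"line_item_{n}" for n in range(1, 2001)]
    remaining = list(rows_to_recover(source, last_committed_row_id=1000))
    print(remaining[0])   # -> (1001, 'line_item_1001')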

The server can recover committed target data if the following three criteria are met:

• All session targets are relational. The server can only perform recovery on relational tables; if the session has file targets, the server cannot perform recovery. If a session writing to file targets fails, delete the files, and run the session again.
• The session is configured for a normal (not bulk) target load. The server uses database logging to perform recovery. Since bulk loading bypasses database logging, the server cannot recover sessions configured to bulk load targets. When you configure a session to load in bulk, the server logs a message in the session log stating that recovery is not supported. Although recovering a large session can be more efficient than running the session again, bulk loading increases general session performance. When configuring session properties for sessions processing large amounts of data, weigh the importance of performing recovery when choosing a target load type.
• The mapping used in the session does not use a Sequence Generator or Normalizer. Both the Sequence Generator and the Normalizer transformations generate source values: the Sequence Generator generates sequences, and the Normalizer generates primary keys. Therefore, sessions using these transformations are not guaranteed to return the same values when performing recovery.

In addition, to ensure accurate results from the recovery, the following must be true:

• Source data does not change before performing recovery. This includes inserting, updating, and deleting source data. Changes in source files or tables can result in inaccurate data.
• The server configuration parameter Disable Recovery is not selected. When the Disable Recovery option is checked, the server does not create the OPB_SRVR_RECOVERY table in the target database to store recovery-related information, and if the table already exists, the server does not write information to that table.

Session Logs

If a session is configured to archive session logs, the server creates a new session log for the recovery session. If you perform nested recovery, the server creates a new log for each session run. If the session is not configured to archive session logs, the server overwrites the existing log when you recover the session.

Reject Files

When performing recovery, the server appends rejected rows from the recovery session (or sessions) to the session reject file; the server creates a single reject file. This allows you to correct and load all rejected rows from the completed session.

Example

Session "s_recovery" reads from a Sybase source and writes to a target table in "production_target", a Microsoft SQL Server database. This session is configured for a normal load and to save 5 session logs. The mapping consists of:

• Source Qualifier: SQ_LINEITEM
• Expression transformation: EXP_TRANS
• Target: T_LINEITEM

First Run

The first time the session runs, the server creates a session log named s_recovery.log. (If the session is configured to save logs by timestamp, the server appends the date and time to the log file name.) The server also creates a reject file for the target table, named t_lineitem.bad.

The following section of the session log shows the server preparing to load normally to the production_target database. Since the server cannot find OPB_SRVR_RECOVERY, it creates the table:

CMN_1022 Database driver error...
CMN_1022 [Function Name : Execute SqlStmt : SELECT SESSION_ID FROM OPB_SRVR_RECOVERY]
Thu Jan 14 18:42:44 1999
CMN_1039 SQL Server Event
CMN_1039 [01/14/99 18:42:44 SQL Server Message 208 : Invalid object name 'OPB_SRVR_RECOVERY'.]
Thu Jan 14 18:42:44 1999
CMN_1040 SQL Server Event
CMN_1040 [01/14/99 18:42:44 DB-Library Error 10007 : General SQL Server error: Check messages from the SQL Server.]
WRT_8017 Created OPB_SRVR_RECOVERY table in target database.
...
TM_6095 Starting Transformation Engine...
CMN_1053 Writer: Target is database [TOMDB@PRODUCTION_TARGET], user [lchen], bulk mode [OFF]
...
Start loading table [T_LINEITEM] at: Thu Jan 14 18:42:50 1999

As the following session log entries show, the server performs six target-based commits before the session fails:

TARGET BASED COMMIT POINT Thu Jan 14 18:43:59 1999
=============================================
Table: T_LINEITEM
Rows Output: 10125

Rows Applied: 10125
Rows Rejected: 0

TARGET BASED COMMIT POINT Thu Jan 14 18:45:09 1999
=============================================
Table: T_LINEITEM
Rows Output: 20250
Rows Applied: 20250
Rows Rejected: 0

TARGET BASED COMMIT POINT Thu Jan 14 18:46:25 1999
=============================================
Table: T_LINEITEM
Rows Output: 30375
Rows Applied: 30375
Rows Rejected: 0

TARGET BASED COMMIT POINT Thu Jan 14 18:47:31 1999
=============================================
Table: T_LINEITEM
Rows Output: 40500
Rows Applied: 40500
Rows Rejected: 0

TARGET BASED COMMIT POINT Thu Jan 14 18:48:35 1999
=============================================
Table: T_LINEITEM
Rows Output: 50625
Rows Applied: 50625
Rows Rejected: 0

TARGET BASED COMMIT POINT Thu Jan 14 18:49:41 1999
=============================================

Table: T_LINEITEM
Rows Output: 60750
Rows Applied: 60750
Rows Rejected: 0

When a session fails, you can truncate the target and run the entire session again. However, since the server committed more than 60,000 rows to the target, you can configure the session to recover the committed rows rather than running the whole session again.

Running a Recovery Session

To run a recovery session, check the Perform Recovery option on the Log Files tab of the session property sheet. To archive the existing session log, either increase the number of session logs saved, or choose the Save Session Log By Timestamp option on the Log Files tab. Start the session, or if necessary, edit the session schedule and reschedule the session.

Second Run (Recovery Session)

When you run the session in recovery mode, the server creates a new session log. Since the session is configured to save multiple logs, the server renames the existing log s_recovery.log.0 and writes all new session information in s_recovery.log.

When performing recovery, the server reads the source, and then passes data to the DTM beginning with the first uncommitted row. It reopens the existing reject file (t_lineitem.bad) and appends any rejected rows to that file. In the session log below, the server notes that the session is in recovery mode and states the row at which it will begin recovery (i.e., row 60751); processing then continues with row 60752 and the remaining source rows.

TM_6098 Session [s_recovery] running in recovery mode.
...
TM_6026 Recovering from row [60751] for target instance [T_LINEITEM].

When running the session with the Verbose Data tracing level, the server provides more detailed information about the session. As seen below, the server sets row 60751 as the row from which to recover. Note: Setting the tracing level to Verbose Data slows the server's performance and is not recommended for most production sessions.

CMN_1053 SetRecoveryInfo for transform(T_LINEITEM): Rows To Recover From = [60751]:
CMN_1053 Current Transform [SQ_lineitem]: Rows To Consume From = [60751]:
CMN_1053 Output Transform [EXPTRANS]: Rows To Produce From = [60751]:
CMN_1053 Current Transform [EXPTRANS]: Rows To Consume From = [60751]:
CMN_1053 Output Transform [T_LINEITEM]: Rows To Produce From = [60751]:
CMN_1053 Writer: Opened bad (reject) file [C:\winnt\system32\BadFiles\t_lineitem.bad]

Third Run (Nested Recovery)

If the recovery session fails before completing, you can run the session in recovery mode again. You can run the session in recovery mode as many times as necessary to complete the session's target tables. The server runs the session as it did the earlier recovery sessions, creating a new session log and appending bad data to the reject file. When the server completes loading the target tables, it performs any configured post-session stored procedures or commands normally, as if the session completed in a single run.

Returning to Normal Session

After successfully recovering a session, you must edit the session properties to clear the Perform Recovery option. If necessary, return the session to its normal schedule and reschedule the session.

Things to Consider

In PowerCenter 5.1, the DisableRecovery server initialization flag defaults to Yes. This means the OPB_SRVR_RECOVERY table will not be created, and 'Perform Recovery' will not be possible unless this flag is changed to No during server configuration. You will need to have "create table" permissions in the target database in order to create this table.

Developing the Business Case

Challenge

Identifying the departments and individuals that are likely to benefit directly from the project implementation. Understanding these individuals, their business roles and project participation, and their business information requirements, is key to defining and scoping the project.

Description

The following four steps summarize business case development and lay a good foundation for proceeding into detailed business requirements for the project.

1. One of the first steps in establishing the business scope is identifying the project beneficiaries and understanding their business roles and project participation. In many cases, the Project Sponsor can help to identify the beneficiaries and the various departments they represent. This information can then be summarized in an organization chart that is useful for ensuring that all project team members understand the corporate/business organization.

• Activity - Interview Project Sponsor to identify beneficiaries, define their business roles and project participation.
• Deliverable - Organization chart of corporate beneficiaries and participants.

2. The next step in establishing the business scope is to understand the business problem or need that the project addresses. For example, the problem may be expressed as "a lack of information" rather than "a lack of technology" and should detail the business decisions or analysis that is required to resolve the lack of information. This information should be clearly defined in a Problem/Needs Statement, using business terms to describe the problem. The best way to gather this type of information is by interviewing the Project Sponsor and/or the project beneficiaries.

• Activity - Interview (individually or in forum) Project Sponsor and/or beneficiaries regarding problems and needs related to the project.
• Deliverable - Problem/Need Statement

3. The next step in creating the project scope is defining the business goals and objectives for the project and detailing them in a comprehensive Statement of Project Goals and Objectives. This statement should be a high-level expression of the desired business solution (e.g., what strategic or tactical benefits does the business expect to gain from the project) and should avoid any technical considerations at this point. Again, the Project Sponsor and beneficiaries are the best sources for this type of information. It may be practical to combine information gathering for the needs assessment and goals definition, using individual interviews or general meetings to elicit the information.

• Activity - Interview (individually or in forum) Project Sponsor and/or beneficiaries regarding business goals and objectives for the project.

• Deliverable - Statement of Project Goals and Objectives

4. The final step is creating a Project Scope and Assumptions statement that clearly defines the boundaries of the project based on the Statement of Project Goals and Objectives and the associated project assumptions. This statement should focus on the type of information or analysis that will be included in the project rather than what will not. The assumptions statements are optional and may include qualifiers on the scope, such as assumptions of feasibility, specific roles and responsibilities, or availability of resources or data.

• Activity - Business Analyst develops Project Scope and Assumptions statement for presentation to the Project Sponsor.
• Deliverable - Project Scope and Assumptions statement


Assessing the Business Case

Challenge
Developing a solid business case for the project that includes both the tangible and intangible potential benefits of the project.

Description
The Business Case should include both qualitative and quantitative assessments of the project. The Qualitative Assessment portion of the Business Case is based on the Statement of Problem/Need and the Statement of Project Goals and Objectives (both generated in Subtask 1.1.1) and focuses on discussions with the project beneficiaries of expected benefits in terms of problem alleviation, cost savings or controls, and increased efficiencies and opportunities. The Quantitative Assessment portion of the Business Case provides specific measurable details of the proposed project, such as the estimated ROI, which may involve the following calculations:

• Cash flow analysis - Projects positive and negative cash flows for the anticipated life of the project. Typically, ROI measurements use the cash flow formula to depict results.
• Net present value - Evaluates cash flow according to the long-term value of current investment. Net present value shows how much capital needs to be invested currently, at an assumed interest rate, in order to create a stream of payments over time. For instance, to generate an income stream of $500 per month over six months at an interest rate of eight percent would require an investment (a net present value) of $2,311.44.
• Return on investment - Calculates net present value of total incremental cost savings and revenue divided by the net present value of total costs multiplied by 100. This type of ROI calculation is frequently referred to as return on equity or return on capital employed.
• Payback - Determines how much time will pass before an initial capital investment is recovered.
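As a quick check on the net present value example above, the following Python sketch reproduces the $2,311.44 figure. Note that the published number only works out if the eight percent is applied per monthly period; that per-period interpretation is an assumption of this sketch rather than something stated explicitly in the text.

    # NPV of a $500-per-month stream over six months, discounted at 8% per period.

    def npv(rate, cash_flows):
        """Discount a list of end-of-period cash flows back to today."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

    stream = [500.0] * 6
    print(round(npv(0.08, stream), 2))   # -> 2311.44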

The following are steps to calculate the quantitative business case or ROI:


Step 1. Develop Enterprise Deployment Map. This is a model of the project phases over a timeline, estimating as specifically as possible customer participation (e.g., by department and location), subject area and type of information/analysis, numbers of users, numbers of data marts and data sources, types of sources, and size of data set.

Step 2. Analyze Potential Benefits. Discussions with representative managers and users or the Project Sponsor should reveal the tangible and intangible benefits of the project. The most effective format for presenting this analysis is often a "before" and "after" format that compares the current situation to the project expectations.

Step 3. Calculate Net Present Value for all Benefits. Information gathered in this step should help the customer representatives to understand how the expected benefits will be allocated throughout the organization over time, using the enterprise deployment map as a guide.

Step 4. Define Overall Costs. Customers need specific cost information in order to assess the dollar impact of the project. Cost estimates should address the following fundamental cost components:

• Hardware
• Networks
• RDBMS software
• Back-end tools
• Query/reporting tools
• Internal labor
• External labor
• Ongoing support
• Training

Step 5. Calculate Net Present Value for all Costs. Use either actual cost estimates or percentage-of-cost values (based on cost allocation assumptions) to calculate costs for each cost component, projected over the timeline of the enterprise deployment map. Actual cost estimates are more accurate than percentage-of-cost allocations, but much more time-consuming. The percentage-of-cost allocation process may be valuable for initial ROI snapshots until costs can be more clearly predicted.

Step 6. Assess Risk, Adjust Costs and Benefits Accordingly. Review potential risks to the project and make corresponding adjustments to the costs and/or benefits. Some of the major risks to consider are:

• Scope creep, which can be mitigated by thorough planning and tight project scope
• Integration complexity, which can be reduced by standardizing on vendors with integrated product sets or open architectures
• Architectural strategy that is inappropriate
• Other miscellaneous risks from management or end users who may withhold project support; from the entanglements of internal politics; and from technologies that don't function as promised

Step 7. Determine Overall ROI. When all other portions of the business case are complete, calculate the project's "bottom line". Determining the overall ROI is simply a matter of subtracting net present value of total costs from net present value of (total incremental revenue plus cost savings). For more detail on these steps, refer to the Informatica White Paper: 7 Steps to Calculating Data Warehousing ROI.
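To make the bottom-line arithmetic concrete, here is a small sketch with entirely hypothetical cost and benefit figures. The net benefit line follows the Step 7 subtraction, and the percentage line follows the return-on-investment definition given earlier (net present value of total incremental cost savings and revenue divided by the net present value of total costs, multiplied by 100).

    # Hypothetical figures, already expressed as net present values (Steps 3 and 5).
    npv_cost_savings    = 1_200_000.0
    npv_incremental_rev =   800_000.0
    npv_total_costs     = 1_500_000.0

    npv_total_benefits = npv_cost_savings + npv_incremental_rev

    net_benefit = npv_total_benefits - npv_total_costs               # Step 7 "bottom line"
    roi_ratio_percent = npv_total_benefits / npv_total_costs * 100   # earlier ROI definition

    print(f"Net benefit: ${net_benefit:,.0f}")   # -> Net benefit: $500,000
    print(f"ROI: {roi_ratio_percent:.0f}%")      # -> ROI: 133%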


Defining and Prioritizing Requirements

Challenge
Defining and prioritizing business and functional requirements is often accomplished through a combination of interviews and facilitated meetings (i.e., workshops) between the Project Sponsor and beneficiaries and the Project Manager and Business Analyst.

Description
The following three steps are key for successfully defining and prioritizing requirements:

Step 1: Discovery
During individual (or small group) interviews with high-level management, there is often focus and clarity of vision that, for some, may be hindered in large meetings or not available from lower-level management. On the other hand, detailed review of existing reports and current analysis from the company's "information providers" can fill in helpful details. As part of the initial "discovery" process, Informatica generally recommends several interviews at the Project Sponsor and/or upper management level and a few with those acquainted with current reporting and analysis processes. A few peer group forums can also be valuable. However, this part of the process must be focused and brief, or it can become unwieldy as much time can be expended trying to coordinate calendars between worthy forum participants. Set a time period and target list of participants with the Project Sponsor, but avoid lengthening the process if some participants aren't available. Questioning during these sessions should include the following:

• What are the target business functions, roles, and responsibilities?
• What are the key relevant business strategies, decisions, and processes (in brief)?
• What information is important to drive, support, and measure success for those strategies/processes? What key metrics? What dimensions for those metrics?
• What current reporting and analysis is applicable? Who provides it? How is it presented? How is it used?

Step 2: Validation and Prioritization
The Business Analyst, with the help of the Project Architect, documents the findings of the discovery process. The resulting Business Requirements Specification includes a matrix linking the specific business requirements to their functional requirements.


At this time also, the Architect begins the Functional Requirements Specification, providing details on the technical requirements for the project. Based on the business requirements findings, the Architect also develops the Information Requirements Specification in order to clearly represent the structure of the information requirements. This document will facilitate discussion of informational details and provide the starting point for the target model definition.

The detailed business requirements and information requirements should be reviewed with the project beneficiaries and prioritized based on business need and the stated project objectives and scope.

Step 3: The Incremental Roadmap

Concurrent with the validation of the business requirements, the Project Manager, Business Analyst, and Architect develop consensus on a project "phasing" approach. As general technical feasibility is compared to the prioritization from Step 2, they develop a phased, or incremental, "roadmap" for the project (Project Roadmap). Thus, items of secondary priority and those with poor near-term feasibility are relegated to subsequent phases of the project. The roadmap is presented to the Project Sponsor for approval and becomes the first "Increment" or starting point for the Project Plan.

Developing a WBS

Challenge

Developing a comprehensive work breakdown structure (WBS) that clearly depicts all of the various tasks and subtasks required to complete the project.

Description

A WBS is a tool for identifying and organizing the tasks that need to be completed in a project. Because project time and resource estimates are typically based on the Work Breakdown Structure, it is critical to develop a thorough, accurate WBS. The WBS serves as a starting point for both the project estimate and the project plan.

One challenge in developing a good WBS is obtaining the correct balance between enough detail and too much detail. The WBS shouldn't be a 'grocery list' of every minor detail in the project, but it does need to break the tasks down to a manageable level of detail. One general guideline is to keep task detail to a duration of at least a day. At this stage of project planning, it is not necessary to determine the critical path for completing these tasks; the goal is to list every task that must be completed.

It is also important to remember that the WBS is not necessarily a sequential document. Tasks in the hierarchy are often completed in parallel. For example, we may have multiple subtasks under a task (e.g., subtasks 4.3.1 through 4.3.7 under task 4.3). Although subtasks 4.3.1 through 4.3.4 may have sequential requirements that force us to complete them in order, subtasks 4.3.5 through 4.3.7 can (and should) be completed in parallel if they do not have sequential requirements. However, it is important to remember that a task is not complete until all of its corresponding subtasks are completed, whether sequentially or in parallel. So, the BUILD phase is not complete until tasks 4.1 through 4.7 are complete, but some work can (and should) begin for the DEPLOY phase long before the BUILD phase is complete.

The Project Plan provides a starting point for further development of the project WBS. This sample is a Microsoft Project file that has been "pre-loaded" with the Phases, Tasks, and Subtasks that make up the Informatica Methodology. The Project Manager can use this WBS as a starting point, but should review it carefully to ensure that it corresponds to the specific development effort, removing any steps that aren't relevant or adding steps as necessary. Many projects will require the addition of detailed steps to accurately represent the development effort. If the Project Manager chooses not to use Microsoft Project, an Excel version of the Work Breakdown Structure is available; the phases, tasks, and subtasks can be exported from Excel into many other project management tools, simplifying the effort to develop the WBS.

After the WBS has been loaded into the selected project management tool and refined for the specific project needs, the Project Manager can begin to estimate the level of effort involved in completing each of the steps. When the estimate is complete, individual resources can be assigned and scheduled. The end result is the Project Plan. Refer to Developing and Maintaining the Project Plan for further information about the project plan.

Developing and Maintaining the Project Plan

Challenge
Developing the first-pass of a project plan that incorporates all of the necessary components but which is sufficiently flexible to accept the inevitable changes.

Description
Use the following steps as a guide for developing the initial project plan:

• Define the project's major milestones based on the Project Scope.
• Break the milestones down into major tasks and activities. The Project Plan should be helpful as a starting point or for recommending tasks for inclusion.
• Continue the detail breakdown, if possible, to a level at which tasks are of about one to three days' duration. This level provides satisfactory detail to facilitate estimation and tracking. If the detail tasks are too broad in scope, estimates are much less likely to be accurate.
• Confer with technical personnel to review the task definitions and effort estimates (or even to help define them, if applicable).
• Establish the dependencies among tasks, where one task cannot be started until another is completed (or must start or complete concurrently with another).
• Define the resources based on the role definitions and estimated number of resources needed for each role.
• Assign resources to each task. If a resource will only be part-time on a task, indicate this in the plan.

At this point, especially when using Microsoft Project, it is advisable to create dependencies (i.e., predecessor relationships) between tasks assigned to the same resource in order to indicate the sequence of that person's activities.

The initial definition of tasks and effort and the resulting schedule should be an exercise in pragmatic feasibility unfettered by concerns about ideal completion dates. In other words, be as realistic as possible in your initial estimations, even if the resulting schedule is likely to be a hard sell to company management. This initial schedule becomes a starting point. Expect to review and rework it, perhaps several times. Look for opportunities for parallel activities, perhaps adding resources, if necessary, to improve the schedule.


When a satisfactory initial plan is complete, review it with the Project Sponsor and discuss the assumptions, dependencies, assignments, milestone dates, and such. Expect to modify the plan as a result of this review.

Reviewing and Revising the Project Plan
Once the Project Sponsor and company managers agree to the initial plan, it becomes the basis for assigning tasks to individuals on the project team and for setting expectations regarding delivery dates. The planning activity then shifts to tracking tasks against the schedule and updating the plan based on status and changes to assumptions.

One approach is to establish a baseline schedule (and budget, if applicable) and then track changes against it. With Microsoft Project, this involves creating a "Baseline" that remains static as changes are applied to the schedule. If company and project management do not require tracking against a baseline, simply maintain the plan through updates without a baseline.

Regular status reporting should include any changes to the schedule, beginning with team members' notification that dates for task completions are likely to change or have already been exceeded. These status report updates should trigger a regular plan update so that project management can track the effect on the overall schedule and budget. Be sure to evaluate any changes to scope (see 1.2.4 Manage Project and Scope Change Assessment), or changes in priority or approach, as they arise to determine if they impact the plan. It may be necessary to modify the plan if changes in scope or priority require rearranging task assignments or delivery sequences, or if they add new tasks or postpone existing ones.


Managing the Project Lifecycle

Challenge
Providing a structure for on-going management throughout the project lifecycle.

Description
It is important to remember that the quality of a project can be directly correlated to the amount of review that occurs during its lifecycle.

Project Status and Plan Reviews
In addition to the initial project plan review with the Project Sponsor, schedule regular status meetings with the sponsor and project team to review status, issues, scope changes and schedule updates. Gather status, issues and schedule update information from the team one day before the status meeting in order to compile and distribute the Status Report.

Project Content Reviews
The Project Manager should coordinate, if not facilitate, reviews of requirements, plans and deliverables with company management, including business requirements reviews with business personnel and technical reviews with project technical personnel. Set a process in place beforehand to ensure appropriate personnel are invited, any relevant documents are distributed at least 24 hours in advance, and that reviews focus on questions and issues (rather than a laborious "reading of the code"). Reviews may include:

• Project scope and business case review
• Business requirements review
• Source analysis and business rules reviews
• Data architecture review
• Technical infrastructure review (hardware and software capacity and configuration planning)
• Data integration logic review (source to target mappings, cleansing and transformation logic, etc.)
• Source extraction process review
• Operations review (operations and maintenance of load sessions, etc.)
• Reviews of operations plan, QA plan, deployment and support plan


Change Management
Directly address and evaluate any changes to the planned project activities, priorities, or staffing as they arise, or are proposed, in terms of their impact on the project plan.

• Use the Scope Change Assessment to record the background problem or requirement and the recommended resolution that constitutes the potential scope change.
• Review each potential change with the technical team to assess its impact on the project, evaluating the effect in terms of schedule, budget, staffing requirements, and so forth.
• Present the Scope Change Assessment to the Project Sponsor for acceptance (with formal sign-off, if applicable). Discuss the assumptions involved in the impact estimate and any potential risks to the project.

The Project Manager should institute this type of change management process in response to any issue or request that appears to add or alter expected activities and has the potential to affect the plan. Even if there is no evident effect on the schedule, it is important to document these changes because they may affect project direction and it may become necessary, later in the project cycle, to justify these changes to management.

Issues Management
Any questions, problems, or issues that arise and are not immediately resolved should be tracked to ensure that someone is accountable for resolving them so that their effect can also be visible. Use the Issues Tracking template, or something similar, to track issues, their owner, and dates of entry and resolution as well as the details of the issue and of its solution. Significant or "showstopper" issues should also be mentioned on the status report.

Project Acceptance and Close
Rather than simply walking away from a project when it seems complete, there should be an explicit close procedure. For most projects this involves a meeting where the Project Sponsor and/or department managers acknowledge completion or sign a statement of satisfactory completion.

• Even for relatively short projects, use the Project Close Report to finalize the project with a final status report detailing:
  o What was accomplished
  o Any justification for tasks expected but not completed
  o Recommendations
• Prepare for the close by considering what the project team has learned about the environments, procedures, data integration design, data architecture, and other project plans. Formulate the recommendations based on issues or problems that need to be addressed. Succinctly describe each problem or recommendation and, if applicable, briefly describe a recommended approach.


Configuring Security

Challenge

Configuring a PowerCenter security scheme to prevent unauthorized access to mappings, folders, sessions, batches, repositories, and data, in order to ensure system integrity and data confidentiality.

Description

Configuring security is one of the most important components of building a Data Warehouse. Security should be implemented with the goals of easy maintenance and scalability. Determining an optimal security configuration for a PowerCenter environment requires a thorough understanding of business requirements, data content, and end users' access requirements. Knowledge of PowerCenter's security facilities is also a prerequisite to security design.

Before implementing security measures, it is imperative to answer the following basic questions:

• Who needs access to the Repository? What do they need the ability to do?
• Is a central administrator required? What permissions are appropriate for him/her?
• Is the central administrator responsible for designing and configuring the repository security? If not, has a security administrator been identified?
• What levels of permissions are appropriate for the developers? Do they need access to all the folders?
• Who needs to start sessions manually?
• Who is allowed to start and stop the Informatica Server?
• How will PowerCenter security be administered? Will it be the same as the database security scheme?
• Do we need to restrict access to Global Objects?

The following pages offer some answers to these questions and some suggestions for assigning user groups and access privileges.

In most implementations, the administrator takes care of maintaining the Repository, and there should be a limit to the number of administrator accounts for PowerCenter. While this is less important in a development/unit test environment, it is critical for protecting the production environment.

PowerCenter's security approach is similar to database security environments. These are PowerCenter users, not database users, and all password information is encrypted and stored in the repository. The internal security enables multi-user development through management of users, groups, privileges, and folders. All security management is performed through the Repository Manager. Every user ID must be assigned to one or more groups, and any user can belong to more than one group. Although privileges can be assigned to users or groups, privileges are commonly assigned to groups, with users then added to each group. This approach is simpler than assigning privileges on a user-by-user basis, since there are generally few groups and many users.

The Repository may be connected to sources/targets that contain sensitive information. The Server Manager provides another level of security for this purpose: it is used to assign read, write, and execute permissions for Global Objects. Global Objects include Database Connections, FTP Connections and External Loader Connections. Global Object permissions, in addition to privileges and permissions assigned using the Repository Manager, affect the ability to perform tasks in the Server Manager and the command line program, pmcmd. Only the owner of the Object or a Super User can manage permissions for a Global Object.

The Server Manager also offers an enhanced security option that allows you to specify a default set of privileges that applies restricted access controls for Global Objects. Choosing the Enable Security option activates the following set of default privileges:

User           Default Global Object Permissions
Owner          Read, Write and Execute
Owner Group    Read and Execute
World          No Permissions

Enabling Enhanced Security does not lock the restricted access settings for Global Objects. This means that the permissions for Global Objects can be changed after enabling Enhanced Security.

The following table summarizes some possible privileges that may be granted:

Privilege                      Description
Session Operator               Can run any sessions or batches, regardless of folder level permissions.
Use Designer                   Can edit metadata in the Designer.
Browse Repository              Can browse repository contents through the Repository Manager.
Create Sessions and Batches    Can create, modify, and delete sessions and batches in Server Manager.
Administer Repository          Can create and modify folders.
Administer Server              Can configure connections on the server and stop the server through the Server Manager or the command-line interface.
Super User                     Can perform all tasks with the repository and the server.

The next table suggests a common set of initial groups and the privileges that may be associated with them:

Group          Description                                                                   Privileges
Developer      PowerCenter developers who are creating the mappings.                         Session Operator, Use Designer, Browse Repository, Create Sessions and Batches
Administrator  Data warehouse administrators who maintain the entire warehouse environment.  Super User
End User       Business end users who run reports off of the data warehouse.                 Browse Repository
Operator       Operations department that runs and maintains the environment in production.  Session Operator, Browse Repository

Users with Administer Repository or Super User privileges may edit folder properties, which must identify a folder owner and group. For each folder, privileges are set for the owner, group, and repository (i.e., any user), and the properties also determine whether the folder is shareable, meaning that shortcuts can be created pointing to objects within the folder. After a folder is flagged as shareable, this property cannot be changed. Allowing shortcuts enables other folders in the same repository to share objects such as source/target tables, transformations, and mappings, thereby enabling object reuse. When other folders create a shortcut from a shareable folder, that folder inherits the properties of the object, so changes to common logic or elements can be managed more efficiently. A recommended practice is to create only one shareable folder per repository, and to place all reusable objects within that shareable folder.

The following table details the three folder level privileges: Read, Write, and Execute.

Privilege    Description
Read         Can read, copy, and create shortcuts to repository objects in the folder. Users without read permissions cannot see the folder.
Write        Can edit metadata in the folder.
Execute      Can run sessions using mappings in the folder.

Users who own a folder or have Administer Repository or Super User privileges can edit folder properties to change the owner, the group assigned to the folder, the three levels of privileges, and the Allow Shortcuts option. A folder owner should be allowed all three folder level permissions. However, members within the folder's group may be given only Read/Write, or possibly all three levels, depending on the desired level of security. Repository privileges should be restricted to Read permissions only, if any at all. Note that users with the Session Operator privilege can run sessions or batches, regardless of folder level permissions.

You might also wish to add a group specific to each application if there are many application development tasks being performed within the same repository. For example, if you have two projects, ABC and XYZ, it may be appropriate to create a group for ABC developers and another for XYZ developers. This enables you to assign folder level security to the group and keep the two projects from accidentally working in folders that belong to the other project team. In this example, you may assign group level security for all of the ABC folders to the ABC group. In this way, only members of the ABC group can make changes to those folders.

Informatica recommends creating individual User IDs for all developers and administrators on the system rather than using a single shared ID. One of the most important reasons for this is session level locking. When a session is in use by a developer, it cannot be opened and modified by anyone but that user. Locks thus prevent repository corruption by preventing simultaneous uncoordinated updates. Also, if multiple individuals share a common login ID, it is difficult to identify which developer is making (or has made) changes to an object.

Tight security is recommended in the production environment to ensure that the developers and other users do not accidentally make changes to production. Only a few people should have Administer Repository or Super User privileges, while everyone else should have the appropriate privileges within the folders they use.
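As an illustration of the arrangement described above, a folder-level scheme for the hypothetical ABC and XYZ projects might look like the following; the folder names, user IDs, and exact permission mix are invented for the example and are not prescribed values:

    Folder              Owner        Group      Owner perms   Group perms   Repository perms
    ABC_SHARED          abc_lead     ABC_DEV    R/W/X         R             R   (shareable)
    ABC_SALES_MART      abc_lead     ABC_DEV    R/W/X         R/W           none
    XYZ_FINANCE_MART    xyz_lead     XYZ_DEV    R/W/X         R/W           none
    PROD_SALES_MART     prod_admin   OPERATOR   R/W/X         R/X           none

Here R, W, and X refer to the Read, Write, and Execute folder privileges described earlier. The shareable ABC_SHARED folder would hold the project's reusable objects, per the recommendation of one shareable folder per repository.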
