Informatica® PowerCenter® Data Analyzer Administrator Guide
Version 8.6.1
December 2008
Copyright © 2001-2008 Informatica Corporation. All rights reserved. Printed in the USA.
This software and documentation contain proprietary information of Informatica Corporation and are provided under a license agreement containing restrictions on use and disclosure and are also
protected by copyright law. Reverse engineering of the software is prohibited. No part of this document may be reproduced or transmitted in any form, by any means (electronic, photocopying,
recording or otherwise) without prior consent of Informatica Corporation. This Software may be protected by U.S. and/or international Patents and other Patents Pending.
Use, duplication, or disclosure of the Software by the U.S. Government is subject to the restrictions set forth in the applicable software license agreement and as provided in DFARS 227.7202-1(a) and
227.7202-3(a) (1995), DFARS 252.227-7013(c)(1)(ii) (OCT 1988), FAR 12.212(a) (1995), FAR 52.227-19, or FAR 52.227-14 (ALT III), as applicable.
The information in this product or documentation is subject to change without notice. If you find any problems in this product or documentation, please report them to us in writing.
Informatica, PowerCenter, PowerCenterRT, PowerCenter Connect, PowerCenter Data Analyzer, PowerExchange, PowerMart, Metadata Manager, Informatica Data Quality, Informatica Data
Explorer, Informatica B2B Data Exchange and Informatica On Demand are trademarks or registered trademarks of Informatica Corporation in the United States and in jurisdictions throughout the
world. All other company and product names may be trade names or trademarks of their respective owners.
Portions of this software and/or documentation are subject to copyright held by third parties, including without limitation: Copyright DataDirect Technologies. All rights reserved. Copyright © Sun
Microsystems. All rights reserved. Copyright © Aandacht c.v. All rights reserved. Copyright 2007 Isomorphic Software. All rights reserved.
This product includes software developed by the Apache Software Foundation (http://www.apache.org/) and other software which is licensed under the Apache License, Version 2.0 (the "License"). You
may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the
License.
This product includes software which was developed by Mozilla (http://www.mozilla.org/), software copyright The JBoss Group, LLC, all rights reserved; software copyright, Red Hat Middleware,
LLC, all rights reserved; software copyright © 1999-2006 by Bruno Lowagie and Paulo Soares and other software which is licensed under the GNU Lesser General Public License Agreement, which may
be found at http://www.gnu.org/licenses/lgpl.html. The materials are provided free of charge by Informatica, “as-is”, without warranty of any kind, either express or implied, including but not limited
to the implied warranties of merchantability and fitness for a particular purpose.
This product includes software copyright (C) 1996-2006 Per Bothner. All rights reserved. Your right to use such materials is set forth in the license which may be found at http://www.gnu.org/software/
kawa/Software-License.html.
This product includes software developed by the Indiana University Extreme! Lab. For further information please visit http://www.extreme.indiana.edu/.
This product includes software licensed under the Academic Free License (http://www.opensource.org/licenses/afl-3.0.php).
This Software is protected by Patents including US Patents Numbers 6,640,226; 6,789,096; 6,820,077; and 6,823,373 and other Patents Pending.
DISCLAIMER: Informatica Corporation provides this documentation “as is” without warranty of any kind, either express or implied, including, but not limited to, the implied warranties of non-
infringement, merchantability, or use for a particular purpose. Informatica Corporation does not warrant that this software or documentation is error free. The information provided in this software or
documentation may include technical inaccuracies or typographical errors. The information in this software and documentation is subject to change at any time without notice.
Table of Contents
Setting Access Permissions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Restricting Data Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Using Global Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Understanding Data Restrictions for Multiple Groups . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Restricting Data Access by Object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Restricting Data Access by User or Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Viewing Task Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Viewing or Clearing a Report History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Removing a Report from an Event-Based Schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Attaching Imported Cached Reports to an Event-Based Schedule . . . . . . . . . . . . . . . . . . 37
Chapter 9: Managing System Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Managing Color Schemes and Logos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Using a Predefined Color Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Editing a Predefined Color Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Creating a Color Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Selecting a Default Color Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Assigning a Color Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Managing Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Viewing the User Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Configuring and Viewing the Activity Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Configuring the System Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Configuring the JDBC Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Managing LDAP Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Managing Delivery Settings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Configuring the Mail Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Configuring the External URL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Configuring SMS/Text Messaging and Mobile Carriers . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Specifying Contact Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Viewing System Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Setting Rules for Queries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Setting Query Rules at the System Level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Setting Query Rules at the Group Level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Setting Query Rules at the User Level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Setting Query Rules at the Report Level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Configuring Report Table Scroll Bars. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Configuring Report Headers and Footers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Configuring Departments and Categories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Configuring Display Settings for Groups and Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
IBM DB2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Microsoft SQL Server 2000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Operating System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
HP-UX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Solaris . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
AIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
Application Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
Servlet/JSP Container . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
JSP Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
EJB Container . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Data Analyzer Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Aggregation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Ranked Reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Datatype of Table Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Date Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
JavaScript on the Analyze Tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Interactive Charts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Number of Charts in a Report. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Scheduler and User-Based Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Frequency of Schedule Runs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Row Limit for SQL Queries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Indicators in Dashboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Purging of Activity Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Recommendations for Dashboard Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Chart Legends . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Connection Pool Size for the Data Source . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Server Location . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Informatica Resources
Informatica Customer Portal
As an Informatica customer, you can access the Informatica Customer Portal site at http://my.informatica.com.
The site contains product information, user group information, newsletters, access to the Informatica customer
support case management system (ATLAS), the Informatica How-To Library, the Informatica Knowledge Base,
Informatica Documentation Center, and access to the Informatica user community.
Informatica Documentation
The Informatica Documentation team makes every effort to create accurate, usable documentation. If you have
questions, comments, or ideas about this documentation, contact the Informatica Documentation team
through email at infa_documentation@informatica.com. We will use your feedback to improve our
documentation. Let us know if we can contact you regarding your comments.
Informatica Knowledge Base
As an Informatica customer, you can access the Informatica Knowledge Base at http://my.informatica.com. Use
the Knowledge Base to search for documented solutions to known technical issues about Informatica products.
You can also find answers to frequently asked questions, technical white papers, and technical tips.
CHAPTER 1
Introduction
PowerCenter Data Analyzer provides a framework for performing business analytics on corporate data. With
Data Analyzer, you can extract, filter, format, and analyze corporate information from data stored in a data
warehouse, operational data store, or other data storage models. Data Analyzer uses the familiar web browser
interface to make it easy for a user to view and analyze business information at any level.
You can use Data Analyzer to run PowerCenter Repository Reports, Metadata Manager Reports, Data Profiling
Reports, or create and run custom reports. You can create a Reporting Service in the PowerCenter
Administration Console. The Reporting Service is the application service that runs the Data Analyzer
application in a PowerCenter domain. For more information about the Reporting Service, see the PowerCenter
Administrator Guide.
Data Analyzer works with the following data models:
- Analytic schema. Based on a dimensional data warehouse in a relational database. Data Analyzer uses the characteristics of a dimensional data warehouse model to help you analyze data. When you set up an analytic schema in Data Analyzer, you define the fact and dimension tables and the metrics and attributes in the star schema.
- Operational schema. Based on an operational data store in a relational database. When you set up an operational schema in Data Analyzer, you define the tables in the schema, identify which tables contain the metrics and attributes for the schema, and define the relationships among the tables. Use an operational schema to analyze data in relational database tables that do not conform to the dimensional data model.
- Hierarchical schema. Based on data in an XML document. A hierarchical schema contains attributes and metrics from an XML document on a web server or an XML document returned by a web service operation.
Each schema must contain all the metrics and attributes that you want to analyze together.
Data Analyzer supports the Java Message Service (JMS) protocol to access real-time messages as data sources.
To display real-time data in a Data Analyzer real-time report, you create a Data Analyzer real-time message stream with the details of the metrics and attributes to include in the report. Data Analyzer updates the report when it reads JMS messages.
Data Analyzer stores metadata for schemas, metrics and attributes, queries, reports, user profiles, and other
objects in the Data Analyzer repository. When you create a Reporting Service, you need to specify the Data
Analyzer repository details. The Reporting Service configures the Data Analyzer repository with the metadata
corresponding to the selected data source. When you run reports for any data source, Data Analyzer uses the
metadata in the Data Analyzer repository to determine the location from which to retrieve the data for the
report, and how to present the report.
The Data Analyzer repository must reside in a relational database. The data for an analytic or operational
schema must also reside in a relational database. The data for a hierarchical schema resides in a web service or
XML document.
Note: If you create a Reporting Service for another reporting source, you need to import the metadata for the
data source manually.
Main Components
Data Analyzer is built on JBoss Application Server and uses related technology and application programming
interfaces (API) to accomplish its tasks. JBoss Application Server is a Java 2 Enterprise Edition (J2EE)-
compliant application server. Data Analyzer uses the application server to handle requests from the web
browser. It generates the requested contents and uses the application server to transmit the content back to the
web browser. Data Analyzer stores metadata in a repository database to keep track of the processes and objects it
needs to handle web browser requests.
Application Server
JBoss Application Server helps Data Analyzer manage its processes efficiently. The Java application server provides services such as database access and server load balancing to Data Analyzer. It also provides an environment that uses Java technology to manage application, network, and system resources.
Web Server
Data Analyzer uses an HTTP server to fetch and transmit Data Analyzer pages to web browsers. If the
application server contains a web server, you do not need to install a separate web server. You need a separate
web server to set up a proxy server to enable external users to access Data Analyzer through a firewall.
Data Analyzer
Data Analyzer is a Java application that provides a web-based platform for the development and delivery of
business analytics. In Data Analyzer, you can read data from a data source, create reports, and view the results
on the web browser.
Data Analyzer uses the following Java technology:
- Java Servlet API
- JavaServer Pages (JSP)
- Enterprise JavaBeans (EJB)
Data Source
For analytic and operational schemas, Data Analyzer reads data from a relational database. It connects to the
database through JDBC drivers.
For hierarchical schemas, Data Analyzer reads data from an XML document. The XML document can reside on
a web server, or it can be generated by a web service operation. Data Analyzer connects to the XML document
or web service through an HTTP connection.
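As a minimal sketch of such a JDBC connection, assuming the standard Oracle thin-driver URL format (the class and method names here are illustrative and are not part of Data Analyzer):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

class DataSourceConnection {
    // Build a JDBC URL in the standard Oracle thin-driver format:
    // jdbc:oracle:thin:@<host>:<port>:<sid>
    static String oracleThinUrl(String host, int port, String sid) {
        return "jdbc:oracle:thin:@" + host + ":" + port + ":" + sid;
    }

    // Open a connection through the JDBC DriverManager, as a reporting
    // tool does when it reads from a relational data source. Requires
    // the vendor's JDBC driver on the classpath.
    static Connection open(String url, String user, String password) throws SQLException {
        return DriverManager.getConnection(url, user, password);
    }
}
```

Other databases use their own URL formats (for example, DB2 and Microsoft SQL Server drivers each define their own `jdbc:` prefix); consult the driver documentation for the data source you configure.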
Supporting Components
Data Analyzer has other components to support its processes, including an API that allows you to integrate
Data Analyzer features into other web applications and security adapters that allow you to use an LDAP server
for authentication. Although you can use Data Analyzer without these components, you can extend the power
of Data Analyzer when you set it up to work with these additional components.
Authentication Server
You use PowerCenter authentication methods to authenticate users logging in to Data Analyzer. You launch
Data Analyzer from the Administration Console, PowerCenter Client tools, or Metadata Manager, or by
accessing the Data Analyzer URL from a browser. For more information about authentication methods, see the
PowerCenter Administrator Guide.
When you use the Administration Console to create native users and groups, the Service Manager stores the
users and groups in the domain configuration database and notifies the Reporting Service. The Reporting
Service copies the users and groups to the Data Analyzer repository.
Note: You cannot create or delete users and groups, or change user passwords in Data Analyzer. You can only
modify the user settings such as the user name or the contact details in Data Analyzer.
PowerCenter
You create and enable a Reporting Service on the Domain page of the PowerCenter Administration Console.
When you enable the Reporting Service, the Administration Console starts Data Analyzer.
You log in to Data Analyzer to create and run reports on data in a relational database or to run PowerCenter
Repository Reports, Data Analyzer Data Profiling Reports, or Metadata Manager Reports.
Mail Server
Data Analyzer uses Simple Mail Transfer Protocol (SMTP) to provide access to the enterprise mail server and
facilitate the following services:
- Send report alert notifications and SMS/text messages to alert devices.
- Forward reports through email.
Data Lineage
You can access data lineage for Data Analyzer reports, attributes, and metrics. Data lineage is displayed by Metadata Manager, the PowerCenter metadata management and analysis tool. Use data lineage to understand the origin of the data, how it is transformed, and where it is used.
Use the PowerCenter Administration Console to configure data lineage for a Reporting Service.
When you access data lineage from Data Analyzer, Data Analyzer connects to a Metadata Manager server. The
Metadata Manager server displays the data lineage in an Internet Explorer browser window. You cannot use
data lineage with the Mozilla Firefox browser.
You can access data lineage for metrics, attributes, and reports from several areas in Data Analyzer.
Data lineage for a Data Analyzer report, metric, or attribute displays one or more of the following objects:
- Data Analyzer repositories. You can load objects from multiple Data Analyzer repositories into Metadata Manager. In the data lineage, Metadata Manager displays metadata objects for each repository.
- Data structures. Data structures group metadata into categories. For a Data Analyzer data lineage, the data structures include the following:
  - Reports
  - Fact tables
  - Dimension tables
  - Table definitions
- Fields. Fields are objects within data structures that store the metadata. For a Data Analyzer data lineage, fields include the following:
  - Metrics in reports
  - Attributes in reports
  - Columns in tables
Note: The Metadata Manager server must be running when you access data lineage from Data Analyzer. You can display data lineage in the Internet Explorer browser only. You cannot display data lineage in the Mozilla Firefox browser.
After you access the data lineage, you can view details about each object in the data lineage. You can export the
data lineage to an HTML, Excel, or PDF file. You can also email the data lineage to other users. For more
information, see the Metadata Manager User Guide.
In Figure 1-1, PA5X_RICH_SRC is the repository that contains metadata about the report. In this example, the
following data structures display in the data lineage:
- Data Analyzer report: Sales report
- Data Analyzer dimension tables: Countries, Regions
- Data Analyzer fact table: Costs Data
- Data Analyzer table definitions: COUNTRIES, REGIONS, COSTS_DATA
In Figure 1-1, fields are the metrics and attributes in the report. Each field contains the following information:
- Parent. Data structure that populates the field. For example, the parent for the Country Name field is the Countries dimension table.
- Field Name. Name of the field.
- Repository. Name of the Data Analyzer repository that contains metadata for the report.
The direction of the arrows in the data lineage shows the direction of the data flow. In Figure 1-1, the data
lineage shows that the COUNTRIES table definition populates the Countries dimension table, which provides
the Country Name attribute for the Sales report.
[Figure callouts: The attribute name is the only field that appears in the data lineage. The attribute name (Brand) appears within the data structure for the report. Data structures for reports that use the attribute.]
Localization
Data Analyzer uses UTF-8 character encoding to display data in different languages. UTF-8 is an ASCII-compatible, multi-byte encoding method for Unicode and the Universal Character Set (UCS).
Language Settings
When you store data in multiple languages in a database, enable UTF-8 character encoding in the Data
Analyzer repository and data warehouse. For more information about how to enable UTF-8 character encoding,
see the database documentation.
A language setting is a superset of another language setting when it contains all characters encoded in the other
language. To avoid data errors, you must ensure that the language settings are correct when you complete the
following tasks in Data Analyzer:
- Back up and restore Data Analyzer repositories. The repositories you back up and restore must have the same language type and locale setting, or the repository you restore to must be a superset of the repository you back up. For example, if the repository you back up contains Japanese data, the repository you restore to must also support Japanese.
- Import and export repository objects. When you import an exported repository object, the repositories must have the same language type and locale setting, or the destination repository must be a superset of the source repository.
- Import table definitions from the data source. When you import data warehouse table definitions into the Data Analyzer repository, the language type and locale settings of the data warehouse and the Data Analyzer repository must be the same, or the repository must be a superset of the data source.
Overview
You create users, groups, and roles in the PowerCenter domain configuration database. Use the Security page of the PowerCenter Administration Console to create users, groups, and roles for Data Analyzer. For more information about creating users, groups, and roles, see the PowerCenter Administrator Guide.
To secure information in the repository and data sources, Data Analyzer allows login access only to individuals
with user accounts in Data Analyzer. A user must have an active account to perform tasks and access data in
Data Analyzer. Users can perform different tasks based on their privileges.
You can edit some user and group properties in Data Analyzer.
Setting Permissions
You can set permissions to determine the tasks that users can perform on a repository object. You set access
permissions in Data Analyzer.
Authentication Methods
The way you manage users and groups depends on the authentication method you use:
- Native. You create and manage users, groups, and roles in the PowerCenter Administration Console. PowerCenter stores the users, groups, and roles in the domain configuration database. You can modify some user and group properties in Data Analyzer.
- LDAP authentication. You manage the users and groups in the LDAP server, but you create and manage the roles and privileges in the PowerCenter Administration Console.
For more information about authentication methods, see the PowerCenter Administrator Guide.
User Synchronization
You manage users, groups, privileges, and roles on the Security page of the Administration Console. The
Service Manager stores users and groups in the domain configuration database and copies the list of users and
groups to the Data Analyzer repository. The Service Manager periodically synchronizes the list of users and
groups in the repository with the users and groups in the domain configuration database.
Note: If you edit any property of a user other than roles or privileges, the Service Manager does not synchronize
the changes to the Data Analyzer repository. Similarly, if you edit any property of a user in Data Analyzer, the
Service Manager does not synchronize the domain configuration database with the modification.
When you assign privileges and roles to users and groups for the Reporting Service in the Administration
Console or when you assign permissions to users and groups in Data Analyzer, the Service Manager stores the
privilege, role, and permission assignments with the list of users and groups in the Data Analyzer repository.
The Service Manager periodically synchronizes users in the LDAP server with the users in the domain
configuration database. In addition, the Service Manager synchronizes the users in the Data Analyzer repository
with the updated LDAP users in the domain configuration database. For more information, see the
PowerCenter Administrator Guide.
Managing Groups
Groups allow you to organize users according to their roles in the organization. For example, you might
organize users into groups based on their departments or management level. You manage users and groups,
their organization, and which privileges and roles are assigned to them in the PowerCenter Administration
Console. You can restrict data access by group.
Editing a Group
You can see groups with privileges on a Reporting Service when you launch the Data Analyzer instance created
by that Reporting Service. In Data Analyzer, you can edit some group properties such as department, color
schemes, or query governing settings. You cannot add users or roles to the group, or assign a primary group to
users in Data Analyzer.
1. Connect to Data Analyzer from the PowerCenter Administration Console, PowerCenter Client tools,
Metadata Manager, or by accessing the Data Analyzer URL from a browser.
2. Click Administration > Access Management > Groups.
The Groups page appears.
3. Select the group you want to edit and click Edit.
The properties of the group appear.
You can edit the following properties:
- Color Scheme Assignment. Assign a color scheme for the group. For more information, see “Managing Color Schemes and Logos” on page 74.
- Query Governing. Query governing settings on the Groups page apply to reports that users in the group can run. If a user belongs to one or more groups in the same hierarchy level, Data Analyzer uses the largest query governing settings from each group. For more information, see “Setting Rules for Queries” on page 85.
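The "largest setting wins" rule for a user in multiple groups can be illustrated with a short sketch (the method and setting names are illustrative; they do not describe Data Analyzer internals):

```java
class QueryGoverning {
    // When a user belongs to several groups at the same hierarchy level,
    // the effective value of a numeric query governing setting (such as a
    // row limit or query time limit) is the largest value from any group.
    static int effectiveSetting(int... groupSettings) {
        int max = 0;
        for (int setting : groupSettings) {
            if (setting > max) {
                max = setting;
            }
        }
        return max;
    }
}
```

For example, a user in groups with row limits of 1000, 5000, and 2000 would be governed by the 5000-row limit.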
Managing Users
Each user must have a user account to access Data Analyzer. To perform Data Analyzer tasks, a user must have
the appropriate privileges for the Reporting Service. You assign privileges to a user, add the user to one or more
groups, and assign roles to the user in the PowerCenter Administration Console.
1. Connect to Data Analyzer from the PowerCenter Administration Console, PowerCenter Client tools,
Metadata Manager, or by accessing the Data Analyzer URL from a browser.
2. Click Administration > Access Management > Users.
The Users page appears.
3. Enter a search string for the user in the Search field and click Find.
Data Analyzer displays the list of users that match the search string you specify.
4. Select the user record you want to edit and click on it.
The properties of the user appear.
5. Edit any of the following properties:
- Title. Describes the function of the user within the organization or within Data Analyzer. Titles do not affect roles or Data Analyzer privileges.
- Email Address. Data Analyzer uses this address as the sender address when the user emails reports from Data Analyzer, and sends alert notifications to this address. You cannot edit this information.
- Department. Department for the user. You can associate the user with a department to organize users and simplify the process of searching for users. For more information, see “Configuring Departments and Categories” on page 89.
- Color Scheme Assignment. Select the color scheme to use when the user logs in to Data Analyzer. If no color scheme is selected, Data Analyzer uses the default color scheme when the user logs in. Color schemes assigned at the user level take precedence over color schemes assigned at the group level. Unless users have administrator privileges, they cannot change the color scheme assigned to them. For more information, see “Managing Color Schemes and Logos” on page 74.
Note: Users can edit some of the properties of their own accounts in the Manage Account tab.
2. If the full name contains a comma, the full name has the following syntax:
<LastName>, <FirstName> [<MiddleName>]
Any full name that contains a comma is converted to use the syntax without a comma:
[<FirstName>] [<MiddleName>] <LastName>
3. After the conversion, the full name is separated into first, middle, and last names based on the number of
text strings separated by a space:
If the full name has two text strings, there is no middle name.
If the full name has more than three text strings, any string after the third string is included in the last
name.
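As a sketch, the conversion and splitting rules above might be implemented as follows in shell. The sample name and variable names are illustrative only; this is not code from Data Analyzer.

```shell
# Illustrative sketch of the full-name rules; "Smith, John Robert" is a sample value.
full_name="Smith, John Robert"

# Rule 2: "<LastName>, <FirstName> [<MiddleName>]" becomes
#         "[FirstName] [MiddleName] <LastName>"
case "$full_name" in
  *,*)
    last_part="${full_name%%,*}"   # text before the comma
    rest="${full_name#*,}"         # text after the comma
    rest="${rest# }"               # trim the leading space
    full_name="$rest $last_part"
    ;;
esac

# Rule 3: split on spaces; two strings means no middle name,
# and any string after the third joins the last name.
set -- $full_name
first="$1"; shift
if [ "$#" -eq 1 ]; then
  middle=""
  last="$1"
else
  middle="$1"; shift
  last="$*"
fi
echo "First: $first  Middle: $middle  Last: $last"
```

For the sample value, the comma form is first converted to "John Robert Smith" and then split into first, middle, and last names.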
Overview
You can customize Data Analyzer user access with the following security options:
Access permissions. Restrict user and group access to folders, reports, dashboards, attributes, metrics,
template dimensions, and schedules. Use access permissions to restrict access to a particular folder or object
in the repository.
Data restrictions. Restrict access to data in fact tables and operational schemas using associated attributes.
Use data restrictions to restrict users or groups from accessing specific data when they view reports.
When you create an object in the repository, every user has default Read and Write permission on that object.
By customizing access permissions on an object, you determine which users and groups can Read, Write,
Delete, or Change Access permission on that object.
When you create data restrictions, you determine which users and groups can access particular attribute values.
When a user with a data restriction runs a report, Data Analyzer does not display restricted data associated with
those values.
Delete. Allows you to delete a folder or an object from the repository.
Change permission. Allows you to change the access permissions on a folder or object.
By default, Data Analyzer grants Read permission to every user in the repository. Use the General Permissions
area to modify default access permissions for an object.
When you modify the access permissions on a folder, you can override existing access permissions on all objects
in the folder, including subfolders.
Use the following methods to set access permissions:
Inclusive. Permit access to the users and groups that you select. You can also permit additional access
permissions to selected users and groups.
Exclusive. Restrict access from the users and groups that you select. You can completely restrict the selected
users and groups or restrict them to fewer access permissions.
To grant more extensive access to a user or group, use inclusive access permissions. For example, you can grant
the Analysts group inclusive access permissions to delete a report.
To restrict the access of specific users or groups, use exclusive access permissions. For example, you can use
exclusive access permissions to restrict the Vendors group from viewing sensitive reports.
You can use a combination of inclusive, exclusive, and default access permissions to create comprehensive access
permissions for an object. For example, you can select Read as the default access permission for a folder, grant
the Sales group inclusive write permission to edit objects in the folder, and use an exclusive Read permission to
deny an individual in the Sales group access to the folder.
To grant access permissions to users, search for the user name, then set the access permissions for the user you
select.
Setting access permissions for a composite report determines whether the composite report itself is visible but
does not affect the existing security of subreports. Users or groups must also have permissions to view individual
subreports. Therefore, a composite report might contain some subreports that do not display for all users.
Note: Any user with the System Administrator role has access to all Public Folders and to their Personal Folder
in the repository and can override any access permissions you set. If you have reports and shared documents
that you do not want to share, save them to your Personal Folder or your personal dashboard.
Content folder in Public Folders: Find > Public Folders > folder name
Content folder in Personal Folder: Find > Personal Folder > folder name
Report in Public Folders: Find > Public Folders > report name
Report in Personal Folder: Find > Personal Folder > report name
Composite Report in Public Folders: Find > Public Folders > composite report name
Composite Report in Personal Folder: Find > Personal Folder > composite report name
Metric Folder: Administration > Schema Design > Schema Directory > Metrics folder > metric folder name
Attribute Folder: Administration > Schema Design > Schema Directory > Attributes folder > attribute folder name
Template Dimensions Folder: Administration > Schema Design > Schema Directory > Template Dimensions folder > template dimensions folder name
Metric: Administration > Schema Design > Schema Directory > Metrics folder > metric folder name > metric name
Time-Based Schedule: Administration > Scheduling > Time-Based Schedules > time-based schedule name
Event-Based Schedule: Administration > Scheduling > Event-Based Schedules > event-based schedule name
Filterset: Administration > Schema Directory > Filtersets > filterset name
In this example, Data Analyzer allows users to view data that is not in the North region and that is
either in the Footware category or has the BigShoes brand.
When a user belongs to more than one group, Data Analyzer handles data restrictions differently depending on
the relationship between the two groups.
The following table describes how Data Analyzer handles multiple group situations:
Both a group and its subgroup. Data Analyzer joins the data restrictions with the AND operator.
For example, if Group A has the restriction Region IN 'East' and Subgroup B has the restriction
Category IN 'Women', Data Analyzer joins the restrictions with AND:
Region IN 'East' AND Category IN 'Women'
Two groups that belong to the same parent group. Data Analyzer joins the data restrictions with
the OR operator. For example, if Group A has the restriction Region IN 'East' and Group B has
the restriction Category IN 'Women', Data Analyzer joins the restrictions with OR:
Region IN 'East' OR Category IN 'Women'
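For illustration, a combined restriction reaches the database as part of the WHERE clause of the report query. The following hypothetical fragment shows the effect for a user who belongs to both a group and its subgroup; the table and column names are assumptions, not objects defined in this guide.

```sql
-- Hypothetical fact table and columns; only the WHERE clause
-- reflects the documented AND behavior.
SELECT region, category, SUM(sales)
FROM sales_fact
WHERE region IN ('East')
  AND category IN ('Women')
GROUP BY region, category;
```

For a user in two sibling groups under the same parent group, the AND in the WHERE clause becomes OR.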
Fact Table: Administration > Schema Design > Analytic Schemas > Show Fact Tables
2. Click the Data Restrictions button for the object you want to restrict.
The Data Restrictions page appears.
3. Click Select a Group/User.
The Select Group or User window appears.
4. To create a data restriction for a group, select Group. To create a data restriction for a user, select User.
If you select Group and the number of groups is less than 30, a list of available groups appears. If the
number of groups is 30 or more, the group search option appears. If you select User and you know the user
name you want to restrict, enter it in the User field.
Alternatively, search for a user or group. Use the asterisk (*) or percent (%) symbol as a wildcard character.
5. Click Find.
6. Select the user or group you want to restrict and click OK.
7. In the Create Restriction task area, select an attribute from the attribute list.
Recently-used attributes appear in the list. To browse or find other attributes, click Select Other Attributes.
The Attribute Selection window appears. Data Analyzer displays the attributes for the object in the
Attribute Selection window. Navigate to the attribute you want and select an attribute. CLOB attributes
are not available for use in data restrictions.
8. From the condition list, select an operator.
9. Enter attribute values.
You can select attribute values from a list, or you can search for specific values and Ctrl-click to select more
than one. If a global variable contains the attribute values you want to use, you can select a global variable.
You can also manually enter attribute values.
10. To view the SQL query for the restriction, click Advanced.
Data Analyzer displays the SQL query for the restriction in advanced mode.
In advanced mode, you can edit the SQL query for a restriction. Data Analyzer displays buttons for adding
numbers and operators to the SQL query for the data restriction. Click within the SQL query, and then
click the buttons to add numbers or operators to the SQL query.
11. Click Add.
The data restriction appears in the Created Restrictions task area.
Use the Basic or Advanced mode, described in steps 7 to 11, to create more restrictions for the same user or
group.
If you create more than one data restriction, you can adjust the order of the restrictions and the
operators to use between restrictions.
12. To adjust the restrictions, click Advanced in the Created Restrictions task area.
In advanced mode, click the buttons to add a left parenthesis, add operators, add a right
parenthesis, and change the order of the restrictions.
1. To create data restrictions for users, click Administration > Access Management > Users.
-or-
To create data restrictions for groups, click Administration > Access Management > Groups. Then click
Groups to display all groups.
2. Click the Data Restrictions button for the user or group profile you want to edit.
The Data Restrictions page appears.
3. Select a schema from a list of available schemas.
The page shows a list of fact tables and operational schemas tables. Hierarchical schemas are not available
for use in data restrictions.
To select all schemas, select All Schemas. This applies the data restriction to all data in the repository
associated with the attribute you choose.
4. In the Create Restriction task area, select an attribute from the attribute list.
Recently-used attributes appear in the list. To browse or find an attribute, click Select Other Attributes.
The Attribute Selection window appears. Data Analyzer displays all attribute folders for the object in the
Attribute Selection window. Navigate to the attribute you want and select an attribute. CLOB attributes
are not available for use in data restrictions.
5. From the condition list, select an operator.
Overview
A time-based schedule updates reports based on a configured schedule. When Data Analyzer runs a time-based
schedule, it runs each report attached to the schedule. You can attach any cached report to a time-based
schedule.
To use a time-based schedule, complete the following steps:
1. Create a time-based schedule.
Configure the start time, date, and repeating option of the schedule when you create or edit a time-based
schedule.
2. Attach reports to the time-based schedule as tasks.
Attach a report to the time-based schedule when you create or edit the report. Attach imported cached
reports to tasks from the time-based schedule.
You can configure the following types of time-based schedules:
Single-event schedule. Updates report data only on the configured date. Create a single-event schedule for a
one-time update of the report data. For example, if you know that the database administrator will update the
data warehouse on December 1, but do not know when other updates occur, create a single-event schedule
for December 2.
Recurring schedule. Updates report data on a regular cycle, such as once a week or on the first Monday of
each month. Create a recurring schedule to update report data regularly. You might use a recurring schedule
to run reports after a regularly scheduled update of the data warehouse. For example, if you know that the
data warehouse is updated the first Friday of every month, create a time-based schedule to update reports on
the second Monday of every month.
After you attach reports to a time-based schedule, you can create indicators and alerts for the reports.
Monitor existing schedules with the Calendar or the Schedule Monitor. The Calendar provides daily, weekly,
or monthly views of all the time-based schedules in the repository. You can set up business days and holidays for
the Data Analyzer Calendar. The Schedule Monitor provides a list of the schedules currently running reports.
If you want to update reports when a PowerCenter session or batch completes, you can create an event-based
schedule.
Name. Name of the time-based schedule. The name can include any character except a space, tab,
newline character, and the following special characters:
\ / : * ? " < > | ' & [ ]
Business Day Only. When selected, the schedule runs reports on business days only. If a scheduled
run falls on a non-business day, a weekend or configured holiday, Data Analyzer waits until the
next scheduled run to run attached reports.
Start Date. Date the schedule initiates. Default is the current date on Data Analyzer.
Start Time. Time the schedule initiates. Default is 12:00 p.m. (noon).
Repeat Every (Monday/Tuesday/Wednesday/Thursday/Friday/Saturday/Sunday). Repeats each
week on the specified day(s). Use this setting to schedule weekly updates of report data.
Always. Schedule repeats until disabled or deleted from the repository. Default is Always.
Until (Month) (Day) (Year). Schedule repeats until the date you specify. Default is the current
date on Data Analyzer.
5. Click OK.
You must attach any cached reports that you import to a schedule. You can attach each imported report
individually or attach multiple imported reports from a list to a single schedule. To attach multiple reports
from the list, you must attach the reports during the same Data Analyzer session. If the session expires or you
log out before attaching multiple reports from the import list, you cannot attach multiple reports. You must
attach the imported reports individually.
You can attach imported cached reports to time-based or event-based schedules.
Defining a Holiday
You can define holidays for the Data Analyzer Calendar. Data Analyzer treats holidays as non-business days.
Time-based schedules configured to run reports only on business days do not run on holidays. When a schedule
falls on a holiday, Data Analyzer runs the reports on the next scheduled day. Time-based schedules that are not
configured to run only on business days still run on configured holidays.
View all configured holidays on the Holidays page. By default, there are no configured holidays.
To define a holiday:
Monitoring a Schedule
The Schedule Monitor provides a list of all schedules that are currently running in the repository. You might
check the Schedule Monitor before you restart Data Analyzer to make sure no schedules are running. You
might also use the Schedule Monitor to verify whether Data Analyzer runs reports at the scheduled time.
Stopping a Schedule
You can stop a running schedule and all attached reports through the Schedule Monitor. You might stop a
schedule when you need to restart the server or when a problem arises with source data.
Overview
PowerCenter Data Analyzer provides event-based schedules and the PowerCenter Integration utility so you can
update reports in Data Analyzer based on the completion of PowerCenter sessions.
To update reports in Data Analyzer when a session completes in PowerCenter, complete the following steps:
1. Create an event-based schedule and attach cached reports to the schedule. For more information, see “Step
1. Create an Event-Based Schedule” on page 32.
2. Configure a PowerCenter session to call the PowerCenter Integration utility as a post-session command
and pass the event-based schedule name as a parameter. For more information, see “Step 2. Use the
PowerCenter Integration Utility in PowerCenter” on page 33.
If the PowerCenter Integration utility is set up correctly, Data Analyzer runs each report attached to the event-
based schedule when a PowerCenter session completes.
You can create indicators and alerts for the reports in an event-based schedule.
You can monitor event-based schedules with the Schedule Monitor. The Schedule Monitor provides a list of
the schedules currently running reports.
You cannot use the PowerCenter Integration utility with a time-based schedule.
PowerCenter installs a separate PowerCenter Integration utility for every Reporting Service that you create. You
can find the PowerCenter Integration utility in the following folder:
<PowerCenter_InstallationDirectory>\server\tomcat\jboss\notifyias-<Reporting Service Name>
PowerCenter appends the Reporting Service name to the notifyias folder name. For example, if you
create a Reporting Service named DA_Test, the folder is notifyias-DA_Test.
Before you run the PowerCenter Integration utility, complete the following steps:
1. Open the notifyias.properties file in the notifyias-<Reporting Service Name> folder and set the
logfile.location property to the location and the name of the PowerCenter Integration utility log file.
The PowerCenter Integration utility creates a log file when it runs after the PowerCenter session
completes. The logfile.location property determines the location and the name of the log file.
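A minimal sketch of the relevant line in notifyias.properties follows. The logfile.location property name comes from this section; the path is a hypothetical example.

```properties
# Location and name of the PowerCenter Integration utility log file.
# The path below is an example; use a directory the utility can write to.
logfile.location=/opt/Informatica/logs/notifyias-DA_Test.log
```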
2. Open the notifyias file in a text editor:
UNIX: notifyias.sh
Windows: notifyias.bat
Back up the notifyias file before you modify it.
3. Set the JAVA_HOME environment variable to the location of the JVM.
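For example, on UNIX the JAVA_HOME setting in notifyias.sh might look like the following. The JVM path is an assumption; point it at your own Java installation.

```shell
# Hypothetical JVM location; replace with the path to your own JVM.
JAVA_HOME=/usr/java/jdk
export JAVA_HOME
```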
Run the PowerCenter Integration utility to update reports in Data Analyzer when a session completes in
PowerCenter.
The PowerCenter Integration utility uses the settings in the notifyias.properties file to update
reports in Data Analyzer. The notifyias.properties file contains information about the Reporting
Service URL and the schedule queue name.
When you create a Reporting Service, PowerCenter sets the properties in the notifyias.properties file to point to
the correct instance of the Reporting Service.
Use the following shell command syntax for PowerCenter installed on UNIX:
notifyias.sh Event-BasedScheduleName
Event-BasedScheduleName is the name of the Data Analyzer event-based schedule that contains the tasks you
want to run when the PowerCenter session completes. If the system path does not include the path of the
PowerCenter Integration utility, you need to prefix the utility file name with the file path.
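For example, a post-session command for the DA_Test Reporting Service mentioned earlier might look like the following. The installation directory and schedule name are assumptions for illustration only.

```shell
# Hypothetical installation path and event-based schedule name.
/opt/Informatica/PowerCenter/server/tomcat/jboss/notifyias-DA_Test/notifyias.sh Nightly_Load_Complete
```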
You can also run the PowerCenter Integration utility as a command task in a PowerCenter workflow. If you
want to run the PowerCenter Integration utility after all other tasks in a workflow complete, you can run it as
the last task in the workflow.
For more information about configuring post-session commands, PowerCenter workflows, or the PowerCenter
Integration utility, see the PowerCenter Workflow Basics Guide.
You must attach each imported cached report to a schedule. You can attach imported reports individually or
attach multiple imported reports from a list to a single schedule. To attach multiple reports from the list, you
must attach them during the same Data Analyzer session. If the session expires or you log out before attaching
the reports from the import list, you cannot attach multiple reports. You must attach the imported reports
individually.
You can attach imported cached reports to time-based or event-based schedules.
4. Click Add.
The Add button appears only when you have unscheduled imported reports in the repository.
The Imported Scheduled Reports window appears.
5. Select the reports that you want to add to the schedule.
If you want to add all available imported reports to the schedule, click the All check box.
6. Click Apply.
The report appears as an item on the task list.
Overview
You can export repository objects to XML files and import repository objects from XML files. You might want
to export objects to archive the repository. You might also want to export and import objects to move Data
Analyzer objects from development to production.
You can export the following repository objects:
Schemas
Time Dimensions
Reports
Global Variables
Dashboards
Security profiles
Schedules
When you export the repository objects, Data Analyzer creates an XML file that contains information about the
exported objects. Use this file to import the repository objects into a Data Analyzer repository. You can view
the XML files with any text editor. However, do not modify the XML file created when you export
objects. Any change might invalidate the XML file and prevent you from using it to import objects
into a Data Analyzer repository.
When you save the XML file on a Windows machine, verify that the Windows temp directory,
usually on the C: drive, has enough space for the temporary files typically created when a file is saved.
Schedule exporting and importing tasks so that you do not disrupt Data Analyzer users. Exporting and
importing repository objects uses considerable system resources. If you perform these tasks while users are
logged in to Data Analyzer, users might experience slow response or timeout errors.
You can also export repository objects using the ImportExport command line utility. For more information, see
“Using the Import Export Utility” on page 65.
Exporting a Schema
You can export analytic and operational schemas. When you export a schema from the Data Analyzer
repository, you can select individual metrics within a schema to export or you can select a folder that contains
metrics. You can also choose whether to export only metric definitions or to export all metrics, attributes,
tables, and other schema objects associated with the metric.
4. Click Export as XML.
The File Download window appears.
5. Click Save.
The Save As window appears.
6. Navigate to the directory where you want to save the file.
7. Enter a name for the XML file and click Save.
Data Analyzer exports the schema to an XML file.
Exporting a Report
You can export reports from public and personal folders. You can export multiple reports at once. When you
export a folder, Data Analyzer exports all reports in the folder and its subfolders.
You can export cached and on-demand reports. When you export cached reports, Data Analyzer
exports the report data and the schedule.
When you export a report, Data Analyzer always exports the following report components:
Report table
Report charts
Filters
Calculations
Custom attributes
To export a report:
Exporting a Global Variable
You can export any global variables defined in the repository. When you export multiple global variables, Data
Analyzer creates one XML file for the global variables and their default values.
Exporting a Dashboard
When you export a dashboard, Data Analyzer exports the following objects associated with the dashboard:
Reports
Indicators
Shared documents
Dashboard filters
Discussion comments
Feedback
Data Analyzer does not export the following objects associated with the dashboard:
Access permissions
Attributes and metrics in the report
Real-time objects
When you export a dashboard, the Export Options button is unavailable. Therefore, you cannot select specific
components to export.
You can export any of the public dashboards defined in the repository. You can export more than one
dashboard at a time.
To export a dashboard:
Exporting a Schedule
You can export a time-based or event-based schedule to an XML file. Data Analyzer runs a report with a time-
based schedule on a configured schedule. Data Analyzer runs a report with an event-based schedule when a
PowerCenter session completes.
When you export a schedule, Data Analyzer does not export the history of the schedule.
To export a schedule:
If you double-click the XML file, the operating system tries to open the file with a web browser. The web
browser cannot locate the DTD file Data Analyzer uses for exported objects.
Use a text editor to open the XML file. However, do not edit the file. Changes might invalidate the file.
CHAPTER 7
Overview
You can import objects into the Data Analyzer repository from a valid XML file of exported repository objects.
You can import the following repository objects from XML files:
Schemas
Time dimensions
Reports
Global variables
Dashboards
Security profiles
Schedules
Data Analyzer imports objects based on the following constraints:
You can import objects into the same repository or a different repository. When you import a repository
object that was exported from a different repository, both repositories must have the same language type and
locale settings, or the destination repository must be a superset of the source repository. For more
information, see “Localization” on page 7.
You can import objects from Data Analyzer 5.0 repositories or later. For more information, see “Importing
Objects from a Previous Version” on page 50.
Except for global variables, if you import objects that already exist in the repository, you can choose to
overwrite the existing objects. You cannot overwrite global variables that already exist in the repository.
You might want to back up the target repository before you import repository objects into it. You can back up
a Data Analyzer repository in the PowerCenter Administration Console. For more information, see the
PowerCenter Administrator Guide.
Exporting and importing repository objects use considerable system resources. If you perform these tasks while
users are logged in to Data Analyzer, users might experience slow response or timeout errors. Make sure that
you schedule exporting and importing tasks so that you do not disrupt Data Analyzer users.
You can also import repository objects using the ImportExport command line utility.
XML Validation
When you import objects, you can validate the XML file against the DTD provided by Data Analyzer.
Ordinarily, you do not need to validate an XML file that you create by exporting from Data Analyzer.
However, if you are not sure of the validity of an XML file, you can validate it against the Data Analyzer DTD
file when you start the import process.
You must ensure that you do not modify an XML file of exported objects. If you modify the XML file, you
might not be able to use it to import objects into a Data Analyzer repository. If you try to import an invalid
XML file, Data Analyzer stops the import process and displays the following message:
Error occurred when trying to parse the XML file.
Object Permissions
When you import a repository object, Data Analyzer grants you the same permissions to the object as the owner
of the object. Data Analyzer system administrators can access all imported repository objects. When you import
a report, you can limit access to the report for users who are not system administrators by clearing the Publish
to Everyone option. If you publish an imported report to everyone, all users in Data Analyzer have read and
write access to the report. You can then change the access permissions to the report to restrict specific users or
groups from accessing it.
Importing a Schema
You can import schemas from an XML file. A valid XML file can contain definitions of the following schema
objects:
Tables. The schema tables associated with the exported metrics in the XML file. The file might include the
following tables:
Fact table associated with the metric
Dimension tables associated with the fact table
Aggregate tables associated with the dimension and fact tables
Snowflake dimensions associated with the dimension tables
Template dimensions associated with the dimension tables or exported separately
Schema joins. The relationships between tables associated with the exported metrics in the XML file. The
file can include the following relationships:
To import a schema:
Name. Name of the fact or dimension tables associated with the metric to be imported.
Last Modified Date. Date when the table was last modified.
Last Modified By. User name of the Data Analyzer user who last modified the table.
Table 7-2 shows the information that Data Analyzer displays for the schema joins:
Table1 Name. Name of the fact table that contains foreign keys joined to the primary keys in the
dimension tables. Can also be the name of a dimension table that joins to a snowflake dimension.
Table2 Name. Name of the dimension table that contains the primary key joined to the foreign
keys in the fact table. Can also be the name of a snowflake dimension table associated with a
dimension table.
Join Expression. Foreign key and primary key columns that join a fact and dimension table or a
dimension table and a snowflake dimension in the following format:
Table.ForeignKey = Table.PrimaryKey
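For example, a join expression between a hypothetical fact table and dimension table might look like:

```sql
-- Hypothetical table and key names for illustration.
SALES_FACT.STORE_KEY = STORE_DIM.STORE_KEY
```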
Table 7-3 shows the information that Data Analyzer displays for the metrics:
Last Modified Date. Date when the metric was last modified.
Last Modified By. User name of the person who last modified the metric.
Analyzer Table Locations. Fact table that contains the metric. If the metric is a calculated metric,
square brackets ([]) display in place of a fact table.
Table 7-4 shows the information that Data Analyzer displays for the attributes:
Name. Name of the attributes found in the fact or dimension tables associated with the metric to
be imported.
Last Modified Date. Date when the attribute was last modified.
Last Modified By. User name of the person who last modified the attribute.
Table 7-5 shows the information that Data Analyzer displays for the drill paths:
Name. Name of the drill path that includes attributes in the fact or dimension tables associated
with the metric to be imported.
Last Modified Date. Date when the drill path was last modified.
Last Modified By. User name of the person who last modified the drill path.
Paths. List of attributes in the drill path that are found in the fact or dimension tables associated
with the metric to be imported.
Name. Name of the time key associated with the fact table.
Table 7-7 shows the information that Data Analyzer displays for the operational schemas:
Last Modified Date. Date when the operational schema was last modified.
Last Modified By. User name of the person who last modified the operational schema.
Table 7-8 shows the information that Data Analyzer displays for the hierarchical schemas:
Last Modified Date. Date when the hierarchical schema was last modified.
Last Modified By. User name of the person who last modified the hierarchical schema.
6. Click Continue.
If objects in the XML file are already defined in the repository, a list of the duplicate objects appears.
To overwrite all the schema objects, select Overwrite All. To overwrite the schema objects of a certain type,
select Overwrite at the top of each section. To overwrite only specific schema objects, select the object.
7. Click Apply.
If you choose to overwrite schema objects, confirm that you want to overwrite them.
Data Analyzer imports the definitions of all selected schema objects.
Last Modified Date. Date when the time dimension table was last modified.
Last Modified By. User name of the Data Analyzer user who last modified the time dimension table.
6. Click Continue.
If the import succeeds, Data Analyzer displays a message that you have successfully imported the
time dimensions.
If objects in the XML file are already defined in the repository, a list of the duplicate objects appears.
7. Select the objects you want to overwrite.
8. Click Continue.
Data Analyzer imports the definitions of all selected time dimensions.
Importing a Report
You can import reports from an XML file. Depending on the reports included in the file and the options
selected when exporting the reports, the XML file might not contain all supported metadata. When available,
Data Analyzer imports the following components of a report:
Report table
Report chart
Indicators
Alerts
Filters
Filtersets
Highlighting
Calculations
Custom attributes
All reports in an analytic workflow
Permissions
Report links
Schedules
Data Analyzer imports all data for each component, with the following exceptions:
Gauge indicators. Imported gauge indicators do not keep their original owner. The user who imports the
report becomes the owner of the gauge indicator. If the gauge indicator is personal, it becomes personal to
the user who imports the report.
Alerts. Imported personal and public alerts use the state set for all report subscribers as the default alert state.
Highlighting. Data Analyzer does not export any personal highlighting. Imported public highlighting uses
the state set for all users as the default highlighting state.
To view the data for the report, you first must run the report. You can run imported cached reports in the
background immediately after you import them. Running reports in the background can be a long process, and
the data may not be available immediately. You can also edit the report and save it before you view it to make
sure that Data Analyzer runs the report before displaying the results.
If you import a report and its corresponding analytic workflow, the XML file contains all workflow reports. If you choose to overwrite the report, Data Analyzer also overwrites the workflow reports. When importing multiple workflows, Data Analyzer does not import analytic workflows that contain duplicate workflow report names. Therefore, before you export analytic workflows, make sure that their report names are unique.
If you import a composite report, the XML file contains all the subreports. You can choose to overwrite the
subreports or composite report if they are already in the repository.
Table 7-10 shows the properties that Data Analyzer displays for the reports:
Last Modified Date. Date when the report was last modified.
Last Modified By. User name of the Data Analyzer user who last modified the report.
6. To allow all users to have access to the reports, select Publish to Everyone.
To immediately update the data for all the cached reports in the list, select Run Cached Reports after
Import. After you import the reports, Data Analyzer runs the cached reports in the background.
For more information about attaching the imported cached reports to a schedule immediately, see
“Attaching Imported Cached Reports to a Time-Based Schedule” on page 26 and “Attaching Imported
Cached Reports to an Event-Based Schedule” on page 37.
7. Click Continue.
If you successfully import the reports, Data Analyzer displays a message that you have successfully
imported them. When necessary, Data Analyzer lists any folders created for the reports. If you import
cached reports, it displays a message that you need to assign the cached reports to a schedule in the target
repository.
If attributes or metrics associated with the report are not defined in the repository, Data Analyzer displays
a list of the undefined objects. If you import the report, you might not be able to run it successfully. To
cancel the import process, click Cancel. Create the required objects in the target repository before
attempting to import the report again.
If reports in the XML file are already defined in the repository, a list of the duplicate reports appears. To
overwrite any of the reports, select Overwrite next to the report name. To overwrite all reports, select
Overwrite at the top of the list.
8. Click Continue.
Data Analyzer imports the definitions of all selected reports.
6. Click Continue.
Data Analyzer does not import global variables whose names exist in the repository, even if the values are
different.
If the XML file includes global variables already in the repository, Data Analyzer displays a warning. If you
continue the import process, Data Analyzer imports only the variables that are not in the repository. To
continue the import process, click Continue.
Importing a Dashboard
Dashboards display links to reports, shared documents, and indicators. When you import a dashboard from an
XML file, Data Analyzer imports the following objects associated with the dashboard:
Reports
Indicators
Shared documents
Dashboard filters
Discussion comments
Feedback
Data Analyzer does not import the following objects associated with the dashboard:
Access permissions
Attributes and metrics in the report
Real-time objects
Dashboards are associated with the folder hierarchy. When you import a dashboard, Data Analyzer stores the
imported dashboard in the following manner:
Dashboards exported from a public folder. Data Analyzer imports the dashboards to the corresponding
public folder in the target repository. When Data Analyzer imports a dashboard to a repository that does not
have the same folder as the originating repository, Data Analyzer creates a new folder of that name for the
dashboard.
Dashboards exported from a personal folder. Data Analyzer imports the dashboards to a new Public Folders
> Personal Dashboards (Imported MMDDYY) > Owner folder.
Personal dashboard. Data Analyzer imports a personal dashboard to the Public Folders folder.
Dashboards exported from an earlier version of Data Analyzer. Data Analyzer imports the dashboards to
the Public Folders > Dashboards folder. If the Dashboards folder already exists at the time of import, then
Data Analyzer creates a new Public Folders > Dashboards_n folder to store the dashboards (for example,
Dashboards_1 or Dashboards_2).
When you import a dashboard, Data Analyzer imports all indicators for the originating report and workflow
reports in a workflow. However, indicators for workflow reports do not display on the dashboard after you
import it. You must add those indicators to the dashboard manually.
If an object exists in the repository, Data Analyzer provides an option to overwrite the object.
When you import a dashboard, make sure all the metrics and attributes used in reports associated with the
dashboard are defined in the target repository. If the attributes or metrics in a report associated with the
dashboard do not exist, the report does not display on the imported dashboard.
Data Analyzer does not automatically display imported dashboards in your subscription list on the View tab.
You must manually subscribe to imported dashboards to display them in the Subscription menu.
To import a dashboard:
Last Modified Date. Date when the dashboard was last modified.
Last Modified By. User name of the Data Analyzer user who last modified the dashboard.
6. Click Continue.
Data Analyzer displays a list of the metrics and attributes in the reports associated with the dashboard that
are not in the repository.
Data Analyzer does not import the attributes and metrics in the reports associated with the dashboard. If
the attributes or metrics in a report associated with the dashboard do not exist, the report does not display
on the imported dashboard.
To cancel the import process, click Cancel.
7. To continue the import process, click Apply.
Data Analyzer displays a list of the dashboards, reports, and shared documents already defined in the
repository.
To overwrite a dashboard, report, or shared document, select Overwrite next to the item name. To
overwrite all dashboards, reports, or shared documents, select Overwrite at the top of the list.
8. Click Apply.
Data Analyzer imports the definitions of all selected dashboards and the objects associated with the
dashboard.
Object Name. Indicates the Schema Directory path of the restricted schema object if the restricted object is a folder. Indicates the fact or dimension table and attribute name if the object is an attribute. Indicates the fact table and metric name if the object is a metric.
Table 7-14 shows the information that Data Analyzer displays for the data restrictions:
Schema Table Name. Name of the restricted table found in the security profile.
Security Condition. Description of the data access restrictions for the table.
Importing a Schedule
You can import a time-based or event-based schedule from an XML file. When you import a schedule, Data
Analyzer does not attach the schedule to any reports.
When you import a schedule from an XML file, you do not import the task history or schedule history.
To import a schedule:
Last Modified Date. Date when the schedule was last modified.
Last Modified By. User name of the person who last modified the schedule.
6. Click Continue.
If the schedules in the XML file are already defined in the repository, a list of the duplicate schedules
appears.
To overwrite a schedule, click the Overwrite check box next to the schedule. To overwrite all schedules,
click the Overwrite check box at the top of the list.
7. Click Continue.
Data Analyzer imports the schedules. You can then attach reports to the imported schedule.
Troubleshooting
When I import my schemas into Data Analyzer, I run out of time. Is there a way to raise the transaction time out
period?
The default transaction time out for Data Analyzer is 3600 seconds (1 hour). If you are importing large
amounts of data from XML and the transaction time is not enough, you can change the default transaction time
out value. To change the default transaction time out for Data Analyzer, edit the value of the
import.transaction.timeout.seconds property in the DataAnalyzer.properties file. For more information about
editing the DataAnalyzer.properties file, see “Configuration Files” on page 127.
After you change this value, you must restart the application server. You can now run large import processes
without timing out.
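For example, the following DataAnalyzer.properties entry sketches how the timeout might be raised to two hours. The property name comes from this guide; the value shown is only illustrative:

```properties
# Raise the import transaction timeout from the default 3600 seconds (illustrative value)
import.transaction.timeout.seconds=7200
```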
I have an IBM DB2 8.x repository. When I import large XML files, Data Analyzer generates different errors. How can I
import large XML files?
The Data Analyzer installer installs a JDBC driver for IBM DB2 8.x. If you use this driver to connect to a DB2
8.x repository database, Data Analyzer might display error messages when you import large XML files. You can
modify the settings of the application server, the database, or the JDBC driver to solve the problem. You might
need to contact your database system administrator to change some of these settings.
Depending on the error that Data Analyzer generates, you might want to modify the following parameters:
DynamicSections value of the JDBC driver
Page size of the temporary table space
Heap size for the application
The error occurs when the default value of the DynamicSections property of the JDBC driver is too small to
handle large XML imports. The default value of the DynamicSections connection property is 200. You must
increase the default value of DynamicSections connection property to at least 500.
Use the DataDirect Connect for JDBC utility to increase the default value of the DynamicSections connection
property and recreate the JDBC driver package. Download the utility from the Product Downloads page of
DataDirect Technologies web site:
http://www.datadirect.com/download/index.ssp
1. On the Product Downloads page, click the DataDirect Connect for JDBC Any Java Platform link and
complete the registration information to download the file.
The name of the download file is connectjdbc.jar.
2. Extract the contents of the connectjdbc.jar file in a temporary directory and install the DataDirect Connect
for JDBC utility.
Follow the instructions in the DataDirect Connect for JDBC Installation Guide.
3. On the command line, run the following file extracted from the connectjdbc.jar file:
Windows: Installer.bat
UNIX: Installer.sh
ServerName is the name of the machine hosting the repository database. PortNumber is the port number of
the database. DatabaseName is the name of the repository database.
11. In the User Name and Password fields, enter the user name and password you use to connect to the
repository database from Data Analyzer.
12. Click Connect, and then close the window.
13. Restart the application server.
If you continue getting the same error message when you import large XML files, you can run the Test for
JDBC Tool again and increase the value of DynamicSections to 750 or 1000.
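As a rough sketch of the change involved, drivers in the DataDirect Connect for JDBC family accept DynamicSections as a connection property, and the package is typically rebuilt with the CreateDefaultPackage and ReplacePackage properties. A connection URL with the property raised might look like the following; the host, port, and database name are placeholders, and the exact URL format for your driver version may differ:

```
jdbc:datadirect:db2://repo-host:50000;DatabaseName=IASDB;CreateDefaultPackage=TRUE;ReplacePackage=TRUE;DynamicSections=500
```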
This problem occurs when the row length or number of columns of the system temporary table exceeds the
limit of the largest temporary table space in the database.
To resolve the error, create a new system temporary table space with the page size of 32KB. For more
information, see the IBM DB2 documentation.
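A hypothetical DB2 SQL sketch of such a table space follows. The buffer pool name, table space name, and container path are invented for illustration; a 32 KB page size requires a matching buffer pool:

```sql
-- Create a 32 KB buffer pool, then a system temporary table space that uses it
-- (names and the container path are placeholders)
CREATE BUFFERPOOL BP32K SIZE 1000 PAGESIZE 32K;
CREATE SYSTEM TEMPORARY TABLESPACE TMPSYS32K PAGESIZE 32K
  MANAGED BY SYSTEM USING ('/db2/tmp32k') BUFFERPOOL BP32K;
```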
This problem occurs when there is not enough storage available in the database application heap to process the
import request.
To resolve the problem, log out of Data Analyzer and stop the application server. On the repository database,
increase the value of the application heap size configuration parameter (APPLHEAPSZ) to 512. Restart the
application server.
For more information, see the IBM DB2 documentation.
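For illustration, the parameter can be raised from the DB2 command line processor; the database name below is a placeholder:

```sql
-- Raise the application heap for the repository database (name is a placeholder)
UPDATE DATABASE CONFIGURATION FOR IASDB USING APPLHEAPSZ 512;
```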
64 Chapter 7: Importing Objects to the Repository
CHAPTER 8
Overview
The Import Export utility lets you import and export Data Analyzer repository objects from the command line.
Use the Import Export utility to migrate repository objects from one repository to another. For example, you
can use the utility to quickly migrate Data Analyzer repository objects from a development repository into a
production repository.
You can use the Import Export utility to import objects from Data Analyzer 5.0 repositories or later. You can
also use the utility to archive your repository without using a browser.
When you run the Import Export utility, Data Analyzer imports or exports all objects of a specified type. For
example, you can run the utility to import all reports from an XML file or export all dashboards to an XML file.
You must run the utility multiple times to import or export different types of objects.
Use the utility to import or export the security profile of an individual user or group. You cannot use the utility
to import or export other individual objects. For example, you cannot use the utility to export a specific user or
report to an XML file.
To import or export individual objects, use the Data Analyzer Administration tab. You can also use the Administration tab to import or export all objects of a specified type. When you use the Import Export utility, the same rules apply as when you import or export objects from the Administration tab. For example, with either tool you can import only those global variables that do not already exist in the repository.
If Data Analyzer is installed with the LDAP authentication method, you cannot use the Import Export utility to
import users, groups, or roles. With the LDAP authentication method, Data Analyzer does not store user
passwords in the Data Analyzer repository. Data Analyzer authenticates the passwords directly in the LDAP
directory.
Running the Import Export Utility
Before you run the Import Export utility to import or export repository objects, you must meet the following
requirements:
To run the utility, you must have the System Administrator role or the Export/Import XML Files privilege.
To import or export users, groups, or roles, you must also have the Manage User Access privilege.
Data Analyzer must be running.
You can import Data Analyzer objects from XML files that were created when you exported repository objects
from Data Analyzer. You can use files exported from Data Analyzer 5.0 or later.
The default transaction time out for Data Analyzer is 3,600 seconds (1 hour). If you are importing large
amounts of data from XML files and the transaction time is not enough, you can change the default transaction
time out value. To change the default transaction time out for Data Analyzer, edit the value of the
import.transaction.timeout.seconds property in DataAnalyzer.properties. After you change this value, you must
restart the application server.
When you run the Import Export utility, you specify options and arguments to import or export different types
of objects. Specify an option by entering a hyphen (-) followed by a letter. The first word after the option letter
is the argument.
To specify the options and arguments, use the following rules:
Specify the options in any order.
Utility name, options, and argument names are case sensitive.
If the option requires an argument, the argument must follow the option letter.
If any argument contains more than one word, enclose the argument in double quotes.
To run the utility on Windows, open a command line window. On UNIX, run the utility as a shell command.
Note: Back up the target repository before you import repository objects into it. You can back up a Data
Analyzer repository with the Repository Backup utility.
UNIX:
ImportExport.sh [-option_1] argument_1 [-option_2] argument_2 ...
Table 8-1 lists the options and arguments you can specify:
Table 8-1. Options and Arguments for the Import Export Utility
-i repository object type. Import a repository object type. For more information about repository object types, see Table 8-2 on page 68. Use the -i or -e option, but not both.
-e repository object type. Export a repository object type. For more information about repository object types, see Table 8-2 on page 68. Use the -i or -e option, but not both.
-f XML file name. Name of the XML file to import from or export to. The XML file must follow the naming conventions for the operating system where you run the utility.
-h (no argument). Displays a list of all options and their descriptions, and a list of valid repository objects.
-n user name or group name. Use to import or export the security profile of a user or group. For more information, see Table 8-2 on page 68.
The following entries list each repository object type, its description, and an example command:

schema. Schemas. To import schemas from the PASchemas.xml file into the repository, use the following command:
ImportExport -i schema -f c:\PASchemas.xml -u jdoe -p doe -l http://localhost:16080/<ReportingServiceName>

timedim. Time dimension tables. To import time dimension tables from the TD.xml file into the repository, use the following command:
ImportExport -i timedim -f TD.xml -u jdoe -p doe -l http://localhost:16080/<ReportingServiceName>

report. Reports. To import reports from the Reports.xml file into the repository, use the following command:
ImportExport -i report -f c:\Reports.xml -u jdoe -p doe -l http://localhost:16080/<ReportingServiceName>

variable. Global variables. You can import global variables that do not already exist in the repository. To export global variables to the GV.xml file, use the following command:
ImportExport -e variable -f c:\xml\GV.xml -u jdoe -p doe -l http://server:16080/<ReportingServiceName>

dashboard. Dashboards. To export dashboards to the Dash.xml file, use the following command:
ImportExport -e dashboard -f c:\Dash.xml -u jdoe -p doe -l http://localhost:16080/<ReportingServiceName>

usersecurity. Security profile of a user. You must specify the security profile option -n <user name>. To export the security profile of user jdoe to the JDsecurity.xml file, use the following command:
ImportExport -e usersecurity -n jdoe -f JDsecurity.xml -u admin -p admin -l http://localhost:16080/<ReportingServiceName>

groupsecurity. Security profile of a group. You must specify the security profile option -n <group name>. To export the security profile of group Managers to the Profiles.xml file, use the following command:
ImportExport -e groupsecurity -n Managers -f Profiles.xml -u admin -p admin -l http://localhost:16080/<ReportingServiceName>

schedule. Schedules. To export all schedules to the Schedules.xml file, use the following command:
ImportExport -e schedule -f c:\Schedules.xml -u jdoe -p doe -l http://localhost:16080/<ReportingServiceName>
The Import Export utility runs according to the specified options. If the utility successfully completes the
requested operation, a message indicates that the process is successful. If the utility fails to complete the
requested operation, an error message displays.
Error Messages

Unknown error.
Cause: Utility failed to run for unknown reasons.
Action: Contact the system administrator or Informatica Global Customer Support.
Unknown option.
Cause: You entered an incorrect option letter. For example, you entered -x or -E to export a file.
Action: Check the validity and case sensitivity of the option letters. Check the XML file name.
The import file contains a different repository object type than the repository object type given for the
option -i.
Cause: The XML file specified for the import (-i) option does not contain the correct object type.
Action: Use the correct object type or a different XML file.
A communication error has occurred with Data Analyzer. The root cause is: <error message>.
Cause: See the root cause message.
Action: The action depends on the root cause. Check that the URL is correct and try to run the utility
again. Check that Data Analyzer is running and try to run the utility again. If error still occurs,
contact Informatica Global Customer Support.
The configured security realm does not support the import of users, groups and roles.
Cause: Data Analyzer is installed with the LDAP authentication method. You cannot use the Import
Export utility to import users, groups, or roles.
Action: Contact the Data Analyzer system administrator.
Troubleshooting
Importing a Large Number of Reports
If you use the Import Export utility to import a large number of reports (import file size of 16MB or more), the
Java process for the Import Export utility might run out of memory and the utility might display an exception
message. If the Java process for the Import Export utility runs out of memory, increase the memory allocation
for the process. To increase the memory allocation for the Java process, increase the value for the -mx option in
the script file that starts the utility.
Note: Back up the script file before you modify it.
1. Locate the Import Export utility script file in the Data Analyzer utilities directory.
The default directory is <PowerCenter_InstallationDirectory>/DataAnalyzer/import-exportutil/.
2. Open the script file with a text editor:
Windows: ImportExport.bat
UNIX: ImportExport.sh
3. Locate the -mx option in the Java command:
java -ms128m -mx256m -jar repositoryImportExport.jar $*
4. Increase the value for the -mx option from 256 to a higher number depending on the size of the import file.
Tip: Increase the value to 512. If the utility still displays an exception, increase the value to 1024.
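After the edit, the Java command in ImportExport.sh might read as follows, using the 512 MB value suggested above:

```
java -ms128m -mx512m -jar repositoryImportExport.jar $*
```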
If Data Analyzer uses a certificate signed by a CA defined in the default cacerts file, such as Verisign, you do not
need to specify the location of the trusted CA keystore when you run the Import Export utility.
Note: Back up the Import Export script file before you modify it.
1. Locate the Import Export utility script in the Data Analyzer utilities directory:
<PowerCenter_InstallationDirectory>/DataAnalyzer/import-exportutil
72 Chapter 8: Using the Import Export Utility
CHAPTER 9
Overview
You can configure the following administrative settings:
Color schemes, images, and logos. Modify the color schemes, images, and logos of Data Analyzer to match
those of your organization.
Log files. View Data Analyzer log files for information on user and system activity.
LDAP settings. Register LDAP servers to enable users to access LDAP directory lists from Data Analyzer.
Delivery settings. Register an outbound mail server to allow users to email reports and shared documents,
and receive alerts. You can also configure alert delivery devices.
Contact information. Provide the name, email address, and phone number of the Data Analyzer system
administrator. Users might find the administrator contact information useful in the event of a system
problem.
System information. View the configuration information of the machine hosting Data Analyzer.
Query governing. Define upper limits on query time, report processing time, and number of table rows
displayed.
Report settings. Determine whether scroll bars appear in report tables.
Report header and footer. Create the headers and footers printed in Data Analyzer reports.
Metadata configuration. Create department and category names for your organization. You can associate
repository objects with a department or category to help you organize the objects. When you associate
repository objects with a department or category, you can search for these objects by department or category
on the Find tab.
Display Settings. Control display settings for users and groups.
The URL can point to a logo file in the Data Analyzer machine or in another web server. If you specify a URL,
use the forward slash (/) as a separator.
Data Analyzer uses all the colors and images of the selected predefined color scheme with your logo or login
page image. If you modify a predefined color scheme, you might lose your changes when you upgrade to future
versions of Data Analyzer.
1. Click Administration > System Management > Color Schemes and Logos.
The Color Schemes and Logos page displays the list of available color schemes.
2. To edit the settings of a color scheme, click the name of the color scheme.
The Color Scheme page displays the settings of the color scheme. It also displays the directory for the
images and the URL for the background, login, and logo image files.
3. Optionally, enter file and directory information for color scheme images:
Images Directory. Name of the color scheme directory where you plan to store the color and image files.
If blank, Data Analyzer looks for the images in the default image directory.
Background Image URL. Name of a background image file in the color scheme directory or the URL to
a background image on a web server.
Logo Image URL. Name of a logo file image in the color scheme directory or the URL to a logo image
on a web server.
Login Page Image URL. Name of the login page image file in the color scheme directory or the URL to
a login image on a web server. To display the login page properly, the width of your login page image
must be approximately 1600 pixels, or the width of your monitor setting. The height of your login page
image must be approximately 240 pixels.
All file names are case sensitive. If you specify a URL, use the forward slash (/) as a separator.
4. Enter hexadecimal color codes to represent the colors you want to use.
The color scheme uses the hexadecimal color codes for each display item. For more information about
hexadecimal color codes, see “HTML Hexadecimal Color Codes” on page 119.
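A hexadecimal color code is simply the red, green, and blue components of a color written as two hexadecimal digits each. The following shell one-liner, provided only as an illustration and not part of Data Analyzer, converts an RGB triple to that form:

```shell
# 255, 102, 0 in decimal becomes FF6600 in hexadecimal
printf '%02X%02X%02X\n' 255 102 0
```

The resulting six-character code (here, FF6600) is what you enter for a display item on the Color Scheme page.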
Table 9-1 shows the display items you can modify in the Color Scheme page:
Heading. Section heading such as the container heading on the View tab.
Selected Rows. Rows you select in the report table or on tabs such as the Find tab.
Primary Navigation Tab Colors. Alerts, View, Find, Analyze, Administration, Create, and Manage Account tabs.
Secondary Navigation Colors. Menu items on the Administration tab, including Schema Design, XML Export/Import, System Management, Real-time Configuration, Scheduling, and Access Management.
Tab Colors. Tabs under the Primary Navigation tab. Tabs include items such as the Define Report Properties tab in Step 5 of the Create Report wizard and the toolbar on the Analyze tab. Use the same color in Section for the Selected field in Tab Colors so that color flows evenly for each tab under the Primary Navigation tab.
1. Click Administration > System Management > Color Schemes and Logos.
The Color Schemes and Logos page displays the list of available color schemes.
2. Click Add.
The Color Scheme page appears.
3. Enter the name and description of the new color scheme.
4. In the Images Directory field, enter the name of the color scheme folder you created.
5. In the Background Image URL field, enter the file name of the background image you want to use.
All file names are case sensitive. Make sure the image file is saved in the color scheme folder you created
earlier.
6. In the Logo Image URL field, enter the file name of the logo image to use.
7. In the Login Page Image URL field, enter the file name of the login page image to use.
8. Enter the hexadecimal codes for the colors you want to use in the new color scheme.
If you do not set up new colors for the color scheme, Data Analyzer uses a default set of colors that may not
match the colors of your image files. For more information about display items on the Color Scheme page,
see Table 9-1 on page 75. For more information about hexadecimal color codes, see “HTML Hexadecimal
Color Codes” on page 119.
9. Click Preview to preview the new color scheme colors.
10. Click OK to save the new color scheme.
1. Click Administration > System Management > Color Schemes and Logos.
The Color Schemes and Logos page appears.
2. To set the default color scheme for Data Analyzer, select Default next to the color scheme name.
3. Click Apply.
Data Analyzer uses the selected color scheme as the default for the repository.
1. Click Administration > System Management > Color Schemes and Logos.
2. Click the name of the color scheme you want to assign.
3. To assign the color scheme to a user or group, click Edit.
The Assign Color Scheme window appears.
4. Use the search options to produce a list of users or groups.
5. In the Query Results area, select the users or groups you want to assign to the color scheme, and click Add.
To assign additional users or groups, repeat steps 3 to 5.
6. Click OK to close the dialog box.
7. Click OK to save the color scheme.
Managing Logs
Data Analyzer provides the following logs to track events and information:
User log. Lists the location and login and logout times for each user.
Activity log. Lists Data Analyzer activity, including the success or failure of the activity, activity type, the
user requesting the activity, the objects used for the activity, and the duration of the request and activity.
You can also configure it to log report queries.
System log. Lists error, warning, informational, and debugging messages.
Global cache log. Lists error, warning, informational, and debugging messages about the size of the Data
Analyzer global cache.
JDBC log. Lists all repository connection activities.
To view the activity log, click Administration > System Management > Activity Log.
By default, Data Analyzer displays up to 1,000 rows in the activity log. You can change the number of rows by
editing the value of the logging.activity.maxRowsToDisplay property in the DataAnalyzer.properties file.
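A sketch of the corresponding DataAnalyzer.properties entry; the property name is from this guide, and the value shown is illustrative:

```properties
# Show up to 5000 rows in the activity log instead of the default 1000
logging.activity.maxRowsToDisplay=5000
```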
If you sort the activity log by a column, Data Analyzer sorts on all activity log data, not just the currently
displayed rows.
You can configure the activity log to provide the query used to perform the activity and the database tables
accessed to complete the activity. This additional information appears in the XML file generated when you save
the activity log.
By default, the System log displays error and warning messages. You can choose to display the following
messages in the system log:
Errors
Warnings
Information
Debug
The above folder is available after you enable the Reporting Service and the Data Analyzer instance is
started.
2. Open the file with a text editor and locate the following lines:
<appender name="IAS_LOG" class="org.jboss.logging.appender.DailyRollingFileAppender">
<param name="File" value="${jboss.server.home.dir}/log/<Reporting Service Name>/ias.log"/>
3. Modify the value of the File parameter to specify the name and location for the log file.
If you specify a path, use the forward slash (/) or two backslashes (\\) in the path as the file separator. Data
Analyzer does not support a single backslash as a file separator.
For example, if you want to save the Data Analyzer system logs to a file named mysystem.log in a folder
called Log_Files in the D: drive, modify the File parameter to include the path and file name:
<param name="File" value="d:/Log_Files/mysystem.log"/>
You can change the name of the file and the directory where it is saved by editing the jdbc.log.file property in
the DataAnalyzer.properties file. You can also determine whether Data Analyzer appends data to the file or
overwrites the existing JDBC log file by editing the jdbc.log.append property in DataAnalyzer.properties.
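For example, the JDBC log settings in DataAnalyzer.properties might look like the following sketch (property names are from the text above; the values are illustrative):

```properties
# DataAnalyzer.properties: JDBC log file location and append behavior (illustrative values)
jdbc.log.file=d:/Log_Files/myjdbc.log
jdbc.log.append=true
```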
The following example lists the values you need to enter on the LDAP Settings page for an LDAP server
running a directory service other than Microsoft Active Directory:
Name: Test
URL: ldap://machine.company.com
BaseDN: dc=company_name,dc=com
Authentication: Anonymous
Setting Description
BaseDN Base distinguished name entry identifies the type of information stored in the LDAP directory. If you do not know the BaseDN, contact your LDAP system administrator.
Authentication Authentication method your LDAP server uses. Select Anonymous if the LDAP server allows anonymous authentication. If your LDAP server requires system authentication, select System. Select System if you use Microsoft Active Directory as an LDAP directory.
System Name System name of the LDAP server. Required when using System authentication.
System Password System password for the LDAP server. Required when using System authentication.
Setting Description
Query Time Limit Maximum amount of time for each SQL query. Default is 240 seconds.
Report Processing Time Limit Maximum amount of time allowed for the application server to run the report. You may have more than one SQL query for the report. Report processing time includes time to run all queries for the report. Default is 600 seconds.
Row Limit Maximum number of rows SQL returns for each query. If a query returns more rows than the row limit, Data Analyzer displays a warning message and drops the excess rows. Default is 20,000 rows.
3. Click Apply.
Data Analyzer does not consider Group 2 in determining the group query governing settings to use for the user
reports. For the row limit, Data Analyzer uses the setting for Group 1 since it is the least restrictive setting. For
query time limit, Data Analyzer uses the setting for Group 3 since it is the least restrictive setting.
The image files you display in the left header or the right footer of a report can be any image type supported by
your browser. By default, Data Analyzer looks for the header and footer image files in the image file directory
for the current Data Analyzer color scheme.
The report header and footer image files are stored with the color scheme files in the EAR directory. If you want
to modify or use a new image for the left header or right footer, you must update the images in the EAR
directory.
If you want to use an image file in a different location, enter the complete URL for the image when you
configure the header or footer. For example, if the host name of the web server where you saved the
Header_Logo.gif image file is monet.PaintersInc.com, port 16080, enter the following URL:
http://monet.PaintersInc.com:16080/Header_Logo.gif
If Data Analyzer cannot find the header or footer image in the color scheme directory or the URL, Data
Analyzer does not display any image for the report header or footer.
You can use the PDF.HeaderFooter.ShrinktoWidth property in the DataAnalyzer.properties file to determine
how Data Analyzer handles long headers and footers. When you enter a large amount of text in a header or
footer, Data Analyzer shrinks the font to fit the text in the allotted space by default. You can also configure
Data Analyzer to keep header and footer text the configured font size, allowing Data Analyzer to display only
the text that fits in the header or footer.
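A sketch of the corresponding entry in DataAnalyzer.properties (the property name is from the text above; the assumption here is that a value of true selects the default shrink-to-fit behavior):

```properties
# DataAnalyzer.properties: shrink long header/footer text to fit (assumed default)
PDF.HeaderFooter.ShrinktoWidth=true
```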
1. Open the /custom/properties/web.xml file with a text editor and locate the line containing the following
property:
showSearchThreshold
The value of the showSearchThreshold property is the number of groups or users Data Analyzer displays
without providing the Search box.
2. Change the value of the showSearchThreshold property according to your requirements.
<init-param>
<param-name>
InfUserAdminUIConfigurationStartup.com.informatica.ias.
useradmin.showSearchThreshold
</param-name>
<param-value>100</param-value>
</init-param>
The value of the searchLimit property is the maximum number of groups or users in the search result
before you must refine the search criteria.
4. Change the value of the searchLimit property according to your requirements.
<init-param>
<param-name>
InfUserAdminUIConfigurationStartup.com.informatica.ias.
useradmin.searchLimit
</param-name>
<param-value>1000</param-value>
</init-param>
Overview
Data Analyzer provides a set of administrative reports that enable system administrators to track user activities
and monitor processes. The reports provide a view into the information stored in the Data Analyzer repository.
They include details on Data Analyzer usage and report schedules and errors.
The Data Analyzer administrative reports use an operational schema based on tables in the Data Analyzer
repository. They require a data source that points to the Data Analyzer repository. They also require a data
connector that includes the Data Analyzer administrative reports data source and operational schema.
After you set up the Data Analyzer administrative reports, you can view and use the reports just like any other
set of reports in Data Analyzer. If you need additional information in a report, you can modify it to add metrics
or attributes. You can add charts or indicators, or change the format of any report. You can enhance the reports
to suit your needs and help you manage the users and processes in Data Analyzer more efficiently.
You can view the administrative reports in two areas:
Administrator’s Dashboard. On the Administrator’s Dashboard, you can quickly see how well Data
Analyzer is working and how often users log in.
Data Analyzer Administrative Reports folder. You can access all administrative reports in the Data Analyzer
Administrative Reports public folder under the Find tab.
Administrator’s Dashboard
The Administrator’s Dashboard displays the indicators associated with the administrative reports. The
Administrator’s Dashboard has the following containers:
Today’s Usage. Provides information on the number of users who logged in for the day, the number of
reports accessed in each hour for the day, and any errors encountered when Data Analyzer runs cached
reports.
Historical Usage. Displays the users who logged in the most number of times during the month, the longest
running on-demand reports, and the longest running cached reports for the current month.
Future Usage. Lists the cached reports in Data Analyzer and when they are scheduled to run next.
Admin Reports. Provides a report on the Data Analyzer users who have never logged in. Also provides
reports on the most and least accessed reports for the year.
To add the administrative reports data source to the system data connector:
Report Schedule
The Hourly Refresh schedule is one of the schedules installed by the PowerCenter Reports installer. The
Midnight Daily schedule is one of the schedules created when you install Data Analyzer.
After you complete the steps to add the reports to the schedules, you might want to review the list of
reports in the Data Analyzer Administrative Reports folder to make sure that the cached reports have been
added to the correct schedule.
10. To review the schedule for a report in the Data Analyzer Administrative Reports folder, select a report and
look at the Report Properties section.
After you schedule the administrative reports, you need to create a data source for the repository.
Performance Tuning
This chapter includes the following topics:
Overview, 97
Database, 97
Operating System, 99
Application Server, 104
Data Analyzer Processes, 109
Overview
Data Analyzer requires the interaction of several components and services, including those that may already
exist in the enterprise infrastructure, such as the enterprise data warehouse and authentication server.
Data Analyzer is built on JBoss Application Server and uses related technology and application programming
interfaces (APIs) to accomplish its tasks. JBoss Application Server is a Java 2 Enterprise Edition (J2EE)-compliant application server. Data Analyzer uses the application server to handle requests from the web
browser. It generates the requested contents and uses the application server to transmit the content back to the
web browser. Data Analyzer stores metadata in a repository database to keep track of the processes and objects it
needs to handle web browser requests.
You can tune the following components to optimize the performance of Data Analyzer:
Database
Operating system
Application server
Data Analyzer
Database
Data Analyzer has the following database components:
Data Analyzer repository
Data warehouse
The repository database contains the metadata that Data Analyzer uses to construct reports. The data
warehouse contains the data for the Data Analyzer reports.
The data warehouse is where the report SQL queries are executed. Typically, it has a very high volume of data.
The execution time of the reports depends on how well tuned the database and the report queries are. Consult
the database documentation on how to tune a high volume database for optimal SQL execution.
The Data Analyzer repository database contains a smaller amount of data than the data warehouse. However,
since Data Analyzer executes many SQL transactions against the repository, the repository database must also
be properly tuned to optimize the database performance. This section provides recommendations for tuning the
Data Analyzer repository database for best performance.
Note: Host the Data Analyzer repository and the data warehouse in separate database servers. The following
repository database tuning recommendations are valid only for a repository that resides on a database server
separate from the data warehouse. If you have the Data Analyzer repository database and the data warehouse in
the same database server, you may need to use different values for the parameters than those recommended
here.
Oracle
This section provides recommendations for tuning the Oracle database for best performance.
Statistics
To ensure that the repository database tables have up-to-date statistics, periodically run the following command
for the repository schema:
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => '<RepositorySchemaName>',
cascade => true, estimate_percent => 100);
For more information about tuning an Oracle database, see the Oracle documentation.
User Connection
For an Oracle repository database running on HP-UX, you may need to increase the number of user
connections allowed for the repository database so that Data Analyzer can maintain continuous connection to
the repository.
To enable more connections to the Oracle repository, complete the following steps:
1. At the HP-UX operating system level, raise the maximum user process (maxuprc) limit from the default of
75 to at least 300.
Use the System Administration Manager tool (SAM) to raise the maxuprc limit. Raising the maxuprc limit
requires root privileges. You need to restart the machine hosting the Oracle repository for the changes to
take effect.
2. In Oracle, raise the values for the following database parameters in the init.ora file:
Raise the value of the processes parameter from 150 to 300.
Raise the value of the pga_aggregate_target parameter from 32 MB to 64 MB (67108864).
Updating the database parameters requires database administrator privileges. You need to restart Oracle for the
changes to take effect.
If the Data Analyzer instance has a high volume of usage, you may need to set higher limits to ensure that Data
Analyzer has enough resources to connect to the repository database and complete all database processes.
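The init.ora changes in step 2 above can be sketched as the following fragment (the values are the ones given in the text; the exact file layout varies by installation):

```conf
# init.ora fragment for the Data Analyzer repository database
processes = 300                  # raised from 150
pga_aggregate_target = 67108864  # 64 MB, raised from 32 MB
```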
DB2
Analysis of table statistics is important in DB2. If you do not update table statistics periodically, you may
encounter transaction deadlocks during times of high concurrency.
For optimal performance, set the following parameter values for the Data Analyzer repository database:
LOCKLIST = 600
MAXLOCKS = 40
DBHEAP = 4000
LOGPRIMARY = 100
LOGFILSIZ = 2000
For more information about DB2 performance tuning, refer to the following IBM Redbook:
http://publib-b.boulder.ibm.com/Redbooks.nsf/RedbookAbstracts/sg246432.html?Open
Operating System
For all UNIX operating systems, make sure the file descriptor limit for the shell running the application server
process is set to at least 2048. Use the ulimit command to set the file descriptor limit.
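For example, a sketch of checking and raising the soft limit in the shell that will launch the application server:

```shell
# Check the current file descriptor limit; raise it if below the recommended 2048.
current=$(ulimit -n)
echo "current file descriptor limit: $current"
if [ "$current" -lt 2048 ]; then
  # Raises the soft limit only; the hard limit may need to be raised by root first.
  ulimit -S -n 2048
fi
```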
The following recommendations for tuning the operating system are based on information compiled from
various application server vendor web sites.
Linux
To optimize Data Analyzer on Linux, you need to make several changes to your Linux environment. You must
modify basic system and kernel settings to allow the Java component better access to the resources of your
system:
Enlarge the shared memory and shared memory segments.
Enlarge the maximum open file descriptors.
Enlarge the maximum per-process open file descriptors.
To change these settings for the running system, enter the following commands as root:
# echo "2147483648" > /proc/sys/kernel/shmmax
# echo "250 32000 100 128" > /proc/sys/kernel/sem
These changes only affect the system as it is currently running. Enter the following commands to make them
permanent:
# echo '#Tuning kernel parameters' >> /etc/rc.d/rc.local
# echo 'echo "2147483648" > /proc/sys/kernel/shmmax' >> /etc/rc.d/rc.local
# echo 'echo "250 32000 100 128" > /proc/sys/kernel/sem' >> /etc/rc.d/rc.local
To enlarge the maximum open file descriptors for the running system, enter the following command:
# echo "65536" > /proc/sys/fs/file-max
Enter the following command to make the change permanent:
# echo 'echo "65536" > /proc/sys/fs/file-max' >> /etc/rc.d/rc.local
The following kernel parameter values are also recommended:
kernel.msgmni 1024
net.ipv4.tcp_max_syn_backlog 8192
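On distributions that read /etc/sysctl.conf at boot, the same settings can also be made permanent there (a sketch using the values above; verify the file location and parameter support on your distribution):

```conf
# /etc/sysctl.conf fragment (sketch)
kernel.shmmax = 2147483648
kernel.sem = 250 32000 100 128
fs.file-max = 65536
kernel.msgmni = 1024
net.ipv4.tcp_max_syn_backlog = 8192
```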
HP-UX
You can tune the following areas in the HP-UX operating system to improve overall Data Analyzer
performance:
Kernel
Java Process
Network
Kernel Tuning
HP-UX has a Java-based configuration utility called HPjconfig, which shows the basic kernel parameters that
need to be tuned and the patches required for the operating system to function properly. You can
download the configuration utility from the following HP web site:
http://h21007.www2.hp.com/dspp/tech/tech_TechDocumentDetailPage_IDX/1,1701,1620,00.html
The HPjconfig recommendations for a Java-based application server running on HP-UX 11 include the
following parameter values:
Max_thread_proc = 3000
Maxdsiz = 2063835136
Maxfiles = 2048
Maxfiles_lim = 2048
Maxusers = 512
Note: For Java processes to function properly, it is important that the HP-UX operating system is on the proper
patch level as recommended by the HPjconfig tool.
For more information about kernel parameters affecting Java performance, see the HP documentation. For
more information about tuning the HP-UX kernel, see the document titled “Tunable Kernel Parameters” on
the following HP web site:
http://docs.hp.com/hpux/onlinedocs/TKP-90203/TKP-90203.html
Java Process
You can set the JVM virtual page size to improve the performance of a Java process running on an HP-UX
machine. The default value for the Java virtual machine instruction and data page sizes is 4 MB. Increase the
value to 64 MB to optimize the performance of the application server that Data Analyzer runs on.
To set the JVM virtual page size, use the following command:
chatr +pi64M +pd64M <JavaHomeDir>/bin/PA_RISC2.0/native_threads/java
Network Tuning
For network performance tuning, use the ndd command to view and set the network parameters.
Table 11-2 provides guidelines for ndd settings:
tcp_conn_request_max 16384
tcp_xmit_hiwater_def 1048576
tcp_time_wait_interval 60000
tcp_recv_hiwater_def 1048576
tcp_fin_wait_2_timeout 90000
For example, to set the tcp_conn_request_max parameter, use the following command:
ndd -set /dev/tcp tcp_conn_request_max 1024
Solaris
You can tune the Solaris operating system to optimize network and TCP/IP operations in the following ways:
Use the ndd command.
Set parameters in the /etc/system file.
Set parameters on the network card.
Table 11-3 lists recommended ndd settings:
/dev/tcp tcp_naglim_def 1
/dev/ce instance 0
/dev/ce rx_intr_time 32
Note: Prior to Solaris 2.7, the tcp_time_wait_interval parameter was called tcp_close_wait_interval. This parameter determines the time interval that a TCP socket is kept alive after issuing a close
call. The default value of this parameter on Solaris is four minutes. When many clients connect for a short
period of time, holding these socket resources can have a significant negative impact on performance. Setting
this parameter to a value of 60000 (60 seconds) has shown a significant throughput enhancement when running
benchmark JSP tests on Solaris. You might want to decrease this setting if the server is backed up with a queue
of half-opened connections.
Table 11-4 lists the /etc/system parameters that you can tune and the recommended values:
rlim_fd_cur 8192
rlim_fd_max 8192
tcp:tcp_conn_hash_size 32768
semsys:seminfo_semume 1024
semsys:seminfo_semopm 200
shmsys:shminfo_shmmax 4294967295
autoup 900
tune_t_fsflushr 1
ce:ce_bcopy_thresh 256
ce:ce_dvma_thresh 256
ce:ce_taskq_disable 1
ce:ce_ring_size 256
ce:ce_comp_ring_size 1024
ce:ce_tx_ring_size 4096
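In /etc/system, these parameters use the set keyword; for example, a sketch showing the syntax for a few of the values above:

```conf
* /etc/system fragment (comment lines in this file begin with *)
set rlim_fd_cur=8192
set rlim_fd_max=8192
set semsys:seminfo_semopm=200
set shmsys:shminfo_shmmax=4294967295
```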
For more information about Solaris tuning options, see the Solaris Tunable Parameters Reference Manual.
AIX
If an application on an AIX machine transfers large amounts of data, you can increase the TCP/IP or UDP
buffer sizes. Use the no and nfso commands to set the buffer sizes.
For example, to set the tcp_sendspace parameter, use the following command:
/usr/sbin/no -o tcp_sendspace=262144
Table 11-6 lists the no parameters that you can set and their recommended values:
Table 11-6. Recommended Buffer Size Settings for no Command for AIX
tcp_sendspace 262144
tcp_recvspace 262144
rfc1323 1
tcp_keepidle 600
Table 11-7 lists the nfso parameters that you can set and their recommended values:
Table 11-7. Recommended Buffer Size Settings for nfso Command for AIX
nfs_socketsize 200000
nfs_tcp_socketsize 200000
To permanently set the values when the system restarts, add the commands to the /etc/rc.net file.
For more information about AIX tuning options, see the Performance Management Guide on the IBM web
site:
http://publib16.boulder.ibm.com/pseries/en_US/aixbman/prftungd/prftungd.htm
Application Server
JBoss Application Server consists of several components, each of which has a different set of configuration files
and parameters that can be tuned. The following are some of the JBoss Application Server components and
recommendations for tuning parameters to improve the performance of Data Analyzer running on JBoss
Application Server.
Servlet/JSP Container
JBoss Application Server uses the Apache Tomcat 5.5 Servlet/JSP container. You can tune the Servlet/JSP
container to make an optimal number of threads available to accept and process HTTP requests.
To tune the Servlet/JSP container, modify the following configuration file:
<PowerCenter_InstallationDirectory>/server/tomcat/jboss/server/informatica/deploy/jbossweb-tomcat55.sar/server.xml
Although the Servlet/JSP container configuration file contains additional properties, Data Analyzer may
generate unexpected results if you modify properties that are not documented in this section. For additional
information about configuring the Servlet/JSP container, see the Apache Tomcat Configuration Reference on
the Apache Tomcat website:
http://tomcat.apache.org/tomcat-5.5-doc/config/index.html
JSP Optimization
Data Analyzer uses JavaServer Pages (JSP) scripts to generate content for the web pages used in Data Analyzer.
Typically, the JSP scripts must be compiled when they are executed for the first time. To avoid having the
application server compile JSP scripts when they are executed for the first time, Informatica ships Data Analyzer
with pre-compiled JSPs.
If you find that you need to compile the JSP files either because of customizations or while patching, you can
modify the following configuration file to optimize the JSP compilation:
<PowerCenter_InstallationDirectory>/server/tomcat/jboss/server/informatica/deploy/jbossweb-tomcat55.sar/conf/web.xml
Note: Make sure that the checkInterval is not too low. In a production environment, set it to 600 seconds.
EJB Container
Data Analyzer uses Enterprise JavaBeans extensively. It has over 50 stateless session beans (SLSB) and over 60
entity beans (EB). There are also six message-driven beans (MDBs) used for scheduling and real-time processes.
Aggregation
Data Analyzer can run more efficiently if the data warehouse has a good schema design that takes advantage of
aggregate tables to optimize query execution. Data Analyzer performance improves if the data warehouse
contains good indexes and is properly tuned.
Ranked Reports
Data Analyzer supports two-level ranking. If the report has one level of ranking, Data Analyzer delegates the
ranking task to the database by running a multi-pass query that first gets the ranked items and then runs the
actual query with ranking filters. If the ranking is defined on a calculation that is performed in the middle tier,
Data Analyzer has to pull all the data before it can evaluate the calculation expression, rank the data, and filter it.
If you have a data warehouse with a large volume of data, avoid creating reports with ranking defined on
custom attributes or custom metrics. These types of reports consume resources and may slow down other Data
Analyzer processes.
A report with second level ranking, such as the top 10 products and the top five customers for each product,
requires a multi-pass SQL query to first get the data to generate the top 10 products and then get the data for
each product and corresponding top five customers. If the report is defined to show Total Others at End of
Table, Data Analyzer runs another SQL query to get the aggregated values for the rows not shown in the report.
For optimal performance, create reports with two levels of ranking based on smaller schemas or on schemas that
have good aggregate tables and indexes. Also, consider making the report cached so that it can run in the
background.
Date Columns
By default, Data Analyzer performs date manipulation on any column with a datatype of Date. If a report
includes a column that contains date and time information but the report requires a daily granularity, Data
Analyzer includes conversion functions in the WHERE clause and SELECT clause to get the proper
aggregation and filtering by date only, not including time. However, conversion functions in a query prevent
the use of database indexes and make the SQL query inefficient.
Use the Data Source is Timestamp property for an attribute to control whether Data Analyzer includes
conversion functions in the SQL query. If a column contains date and time information, set the Data Source is
Timestamp attribute property so that Data Analyzer includes conversion functions in the SQL query for any
report that uses the column. If a column contains date information only, clear the Data Source is Timestamp
attribute property so that Data Analyzer does not include conversion functions in the SQL query for any report
that uses the column.
Interactive Charts
An interactive chart uses less application server resources than a regular chart. On the machine hosting the
application server, an interactive chart can use up to 25% less CPU resources than a regular chart. On a typical
workstation with a CPU speed greater than 2.5 GHz, interactive charts display at about the same speed as
regular charts. Use interactive charts whenever possible to improve performance.
For more information about editing your general preferences to enable interactive charts, see the Data Analyzer
User Guide.
ProviderContext.maxInMemory
When a user runs a report, Data Analyzer saves the dataset returned by the report query in the user session until
the user terminates the session. If there are a large number of concurrent users on Data Analyzer and each runs
multiple reports, the memory requirements can be considerable. By default, Data Analyzer keeps two reports in
the user session at a time. It uses a first in first out (FIFO) algorithm to overwrite reports in memory with more
recent reports.
You can edit the providerContext.maxInMemory property in DataAnalyzer.properties to set the number of
reports that Data Analyzer keeps in memory. Set the value as low as possible to conserve memory. The value
must be greater than or equal to 2. Typically, the default value of 2 is sufficient.
Data Analyzer retains report results that are part of a workflow or drill path in memory irrespective of the value
set in this property. Data Analyzer keeps the datasets for all reports in a workflow in the user session. Include
only reports that have small datasets in a workflow.
Note: A user must log out of Data Analyzer to release the user session memory. Closing a browser window does
not release the memory immediately. When a user closes a browser window without logging out, Data Analyzer
releases the memory after the expiration of session-timeout, which, by default, is 30 minutes.
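A sketch of the corresponding entry in DataAnalyzer.properties (the property name is from the text above; 2 is the stated default):

```properties
# DataAnalyzer.properties: reports kept in each user session (default value)
providerContext.maxInMemory=2
```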
ProviderContext.abortThreshold
When a user runs a report that involves calculation or building large result sets, Data Analyzer might run out of
memory, which results in users getting a blank page. Before Data Analyzer starts calculating the report or
building the tabular result set, it checks the amount of available memory. If the amount of free memory does
not meet a pre-defined percentage, Data Analyzer displays an error and stops processing the report request.
You can edit the providerContext.abortThreshold property in the DataAnalyzer.properties file to set the
maximum percentage of memory that is in use before Data Analyzer stops building report result sets and
executing report queries.
To calculate the percentage, divide the used memory by the total memory configured for the JVM. For
example, if the used memory is 1,000 KB, and the total memory configured for the JVM is 2,000 KB, the
percentage of memory that is in use is 50%. If the percentage is below the threshold, Data Analyzer continues
with the requested operation. If the percentage is above the threshold, then Data Analyzer displays an error.
Typically, you can set a threshold value between 50% and 99%. The default value is 95%.
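The check described above can be sketched as shell arithmetic, using the example figures from the text (the variable names are illustrative, not Data Analyzer internals):

```shell
# Sketch of the abortThreshold check using the example values above.
used_kb=1000        # memory currently in use
total_kb=2000       # total memory configured for the JVM
threshold=95        # default providerContext.abortThreshold (percent)

pct=$(( used_kb * 100 / total_kb ))   # 50% in this example
if [ "$pct" -lt "$threshold" ]; then
  echo "continue: ${pct}% of memory in use"
else
  echo "abort report: ${pct}% of memory in use"
fi
```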
Indicators in Dashboard
Data Analyzer uses two parallel threads to load indicators in the dashboards. These parallel threads are default
threads spawned by the browser.
Data Analyzer has been optimized to handle the way multiple indicators are queued up for loading:
In a dashboard with indicators based on cached and on-demand reports, Data Analyzer loads all indicators
based on cached reports before it loads indicators based on on-demand reports.
Gauges based on cached reports load the fastest because gauges have only one data value and they are cached
in the database along with the report model. Data Analyzer obtains the report model and the datapoint for
the gauge at the same time and can immediately create the gauge.
When there are multiple indicators based on a single report, Data Analyzer runs the underlying report once.
All indicators on a dashboard based on the same report use the same result set, for both cached and
on-demand reports.
Chart Legends
When Data Analyzer displays charts with legends, the Data Analyzer charting engine must perform many
complex calculations to fit the legends in the limited space available on the chart. Depending on the number of
legends in a chart, it might take Data Analyzer from 10% to 50% longer to render a chart with legends. If
legends are not essential in a chart, consider displaying the chart without legends to improve Data Analyzer
performance.
Server Location
Data Analyzer runs on an application server and reads data from a database server. For optimal performance,
these servers must have enough CPU power and RAM. There should also be minimal network latency between
these servers.
Overview
You can customize the Data Analyzer user interface so that it meets the requirements for web applications in
your organization. Data Analyzer provides several ways to allow you to modify the look and feel of Data
Analyzer.
You can use the following techniques to customize Data Analyzer:
Use the URL API to display Data Analyzer web pages on a portal.
Use the Data Analyzer API single sign-on (SSO) scheme to access Data Analyzer web pages without a user
login.
Set up custom color schemes and logos on the Data Analyzer Administration tab.
Set the user interface (UI) configuration properties in the DataAnalyzer.properties file to display or hide the
Data Analyzer header or navigation bar.
Using the Data Analyzer API Single Sign-On
When you access Data Analyzer, the login page appears. You must enter a user name and password.
Ordinarily, if you display Data Analyzer web pages in another web application or portal, the Data Analyzer
login appears even if you have already logged in to the portal where the Data Analyzer pages are displayed. To
avoid multiple logins, you can set up an SSO mechanism that allows you to log in once and be authenticated in
all subsequent web applications that you access.
The Data Analyzer API provides an SSO mechanism that you can use when you display Data Analyzer pages in
another web application or portal. You can configure Data Analyzer to accept the portal authentication and
bypass the Data Analyzer login page. For more information about the Data Analyzer API SSO, see the Data
Analyzer SDK Guide.
The UI configuration properties determine what appears in the header section of the Data Analyzer user
interface, which includes the logo, the logout and help links, and the navigation bar:
Default UI Configuration
By default, when a user logs in to Data Analyzer through the Login page, the logo, logout and help links, and
navigation bar display on all the Data Analyzer pages. To hide the navigation bar or the header section on the
Data Analyzer pages, you can add a UI configuration named default to DataAnalyzer.properties and set the
properties to false.
To hide the whole header section, add the following property:
uiconfig.default.ShowHeader=false
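To hide only the navigation bar while keeping the rest of the header, a sketch of the corresponding entries (ShowNav is the property named later in this section; the values are illustrative):

```properties
# DataAnalyzer.properties: default UI configuration (sketch)
uiconfig.default.ShowHeader=true
uiconfig.default.ShowNav=false
```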
2. Include the parameter <UICONFIG> and the configuration name in the URL when you call the Data
Analyzer Administration page from the portal:
http://HostName:PortNumber/InstanceName/jsp/api/ShowAdministration.jsp?<UICONFIG>=Fred
For more information about the Data Analyzer URL API, see the Data Analyzer SDK Guide.
The default settings determine what Data Analyzer displays after the Login page. If you access a Data Analyzer
page with a specific configuration through the URL API and the session expires, the Login page appears. After
you log in, Data Analyzer displays the Data Analyzer pages based on the default configuration, not the
configuration passed through the URL. To avoid this, complete one of the following tasks:
Change the values of the default configuration instead of adding a new configuration.
Set the default configuration to the same values as your customized configuration.
Customize the Data Analyzer login page to use your customized configuration after user login.
Configuration Settings
Use the following guidelines when you set up a configuration in DataAnalyzer.properties:
- The default configuration properties are not required in DataAnalyzer.properties. Add them only if you want to modify the default configuration settings or create new UI configurations.
- The configuration name can be any length and is case sensitive. It can include only alphanumeric characters, not special characters.
- Setting the ShowHeader property to false implicitly sets the ShowNav property to false.
For more information about modifying the settings in DataAnalyzer.properties, see “Configuration Files” on
page 127.
The following examples show what appears on the Data Analyzer header when the UI configuration properties
are set to different values:
ShowHeader=true and ShowNav=true (default setting)
Note: Data Analyzer stores DataAnalyzer.properties in the Data Analyzer EAR file.
Table A-1. HTML Color Codes for Color Schemes (excerpt): yellow = FFFF00
Configuration Files
This appendix includes the following topics:
- Overview, 127
- Modifying the Configuration Files, 127
- Properties in DataAnalyzer.properties, 128
- Properties in infa-cache-service.xml, 135
- Properties in web.xml, 139
Overview
To customize Data Analyzer for your organization, you can modify the Data Analyzer configuration files. The
configuration files define the appearance and operational parameters of Data Analyzer.
You can modify the following configuration files:
- DataAnalyzer.properties. Contains the configuration settings for an instance of Data Analyzer. It is stored in the Data Analyzer EAR directory.
- infa-cache-service.xml. Contains the global cache configuration settings for Data Analyzer. Although infa-cache-service.xml contains many settings, you need to modify only specific settings. It is stored in the Data Analyzer EAR directory.
- web.xml. Contains additional configuration settings for an instance of Data Analyzer. Although web.xml contains many settings, you need to modify only specific settings. It is stored in the Data Analyzer EAR directory.
To change the settings in the configuration files stored in the Data Analyzer EAR directory, complete the
following steps:
1. With a text editor, open the configuration file you want to modify and search for the setting you want to
customize.
2. Change the settings and save the configuration file.
3. Restart Data Analyzer.
Properties in DataAnalyzer.properties
The DataAnalyzer.properties file contains the configuration settings for an instance of Data Analyzer. You can
modify DataAnalyzer.properties to customize the operation of an instance of Data Analyzer.
You must customize some properties in DataAnalyzer.properties together to achieve a specific result. In the
following groups of properties, you may need to modify more than one property to effectively customize Data
Analyzer operations:
Dynamic Data Source Pool Properties. Data Analyzer internally maintains a pool of JDBC connections to
the data source. Several properties in DataAnalyzer.properties control the processes within the connection
pool. To optimize the database connection pool for a data source, modify the following properties:
- dynapool.minCapacity
- dynapool.maxCapacity
- dynapool.evictionPeriodMins
- dynapool.waitForConnectionSeconds
- dynapool.connectionIdleTimeMins
- datamart.defaultRowPrefetch
For more information, see “Connection Pool Size for the Data Source” on page 112.
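As an illustration, a pool tuned for a moderately busy data source might combine these properties as follows. The values are examples only, not recommendations:

```properties
# Example values only -- tune for your own data source load
dynapool.minCapacity=4
dynapool.maxCapacity=20
dynapool.evictionPeriodMins=5
dynapool.waitForConnectionSeconds=30
dynapool.connectionIdleTimeMins=10
datamart.defaultRowPrefetch=50
```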
Security Adapter Properties. If you use LDAP authentication, Data Analyzer periodically updates the list of
users and groups in the repository with the list of users and groups in the LDAP directory service. Data
Analyzer provides a synchronization scheduler that you can customize to set the schedule for these updates
based on the requirements of your organization. To customize the synchronization scheduler, you can
modify the following properties:
- securityadapter.frequency
- securityadapter.syncOnSystemStart
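For example, the synchronization scheduler might be configured as follows. The values shown are illustrative; check the property reference for the exact value format each property expects:

```properties
# Illustrative values -- verify the expected value format for each property
securityadapter.frequency=120
securityadapter.syncOnSystemStart=true
```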
Property Description
alert.fromaddress From address used for alerts sent by Data Analyzer. If you
use an SMTP mail server, you must enter an email address
that includes a domain.
Default is alert@informatica.com. Leaving the default value
does not affect alert functionality. However, you need to
enter a valid email address for your organization.
Chart.Fontsize Maximum font size to use on the chart axis labels and
legend. Data Analyzer determines the actual font size, but
will not use a font size larger than the value of this property.
Default is 10.
Chart.Minfontsize Minimum font size to use on the chart axis labels and
legend. The value must be smaller than the value of
Chart.Fontsize. Data Analyzer determines the actual font
size, but will not use a font size smaller than the value of this
property.
Default is 7.
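For example, to allow larger chart labels while keeping the default minimum (illustrative values; Chart.Minfontsize must remain smaller than Chart.Fontsize):

```properties
Chart.Fontsize=12
Chart.Minfontsize=7
```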
compression.alwaysCompressMimeTypes MIME types for dynamic content that Data Analyzer always
compresses, without verifying that the browser can support
compressed files of this MIME type. Some MIME types are
handled by plug-ins that decompress natively. These MIME
types may work with compression regardless of whether the
browser supports compression or if an intervening proxy
would otherwise break compression. Enter a comma-
separated list of MIME types. Using this property may result
in marginally better performance than using
compressionFilter.compressableMimeTypes. However, if
Data Analyzer compresses a MIME type not supported by
the browser, the browser might display an error.
By default, no MIME types are listed. Data Analyzer
compresses only the MIME types listed in
compressionFilter.compressableMimeTypes after verifying
browser support.
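For example, to always compress content of plug-in-handled MIME types (illustrative values), enter a comma-separated list:

```properties
compression.alwaysCompressMimeTypes=application/pdf,application/x-shockwave-flash
```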
datamart.transactionIsolationLevel.DataSourceName
Transaction isolation level for each data source used in your Data Analyzer instance. Add a property for each data source and then enter the appropriate value for that data source. Supported values are:
- NONE. Transactions are not supported.
- READ_COMMITTED. Dirty reads cannot occur. Non-
repeatable reads and phantom reads can occur.
- READ_UNCOMMITTED. Dirty reads, non-repeatable
reads, and phantom reads can occur.
- REPEATABLE_READ. Dirty reads and non-repeatable
reads cannot occur. Phantom reads can occur.
- SERIALIZABLE. Dirty reads, non-repeatable reads, and
phantom reads cannot occur.
If no property is set for a data source, Data Analyzer uses
the default transaction level of the database.
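For example, for a hypothetical data source named salesDS:

```properties
datamart.transactionIsolationLevel.salesDS=READ_COMMITTED
```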
dynapool.poolNamePrefix String to use as a prefix for the dynamic JDBC pool name.
Default is IAS_.
help.files.url URL for the location of Data Analyzer online help files. By
default, the installation process installs online help files on
the same machine as Data Analyzer and sets the value of
this property.
host.url URL for the Data Analyzer instance. By default, the Data
Analyzer installation sets the value of this property in the
following format:
http://Hostname:PortNumber/InstanceName/
Maps.Directory Directory where the XML files that represent maps for the
Data Analyzer geographic charts are located. The directory
must be located on the machine where Data Analyzer is
installed. The default location is in the following directory:
<PowerCenter_InstallationDirectory>/DataAnalyzer/maps
Properties in infa-cache-service.xml
A cache is a memory area where frequently accessed data can be stored for rapid access. The
Cache.GlobalCaching property in DataAnalyzer.properties determines whether global caching is enabled for
Data Analyzer. For more information about enabling global caching, see “Properties in
DataAnalyzer.properties” on page 128.
Attribute Description
wakeUpIntervalSeconds Frequency in seconds that Data Analyzer checks for objects to remove from
the global cache. You can decrease this value to have Data Analyzer run
the eviction policy more frequently.
Default is 60 seconds.
maxNodes Maximum number of objects stored in the specified region of the global
cache. Set the value to 0 to have Data Analyzer cache an infinite number of
objects. Data Analyzer writes informational messages to a global cache log
file when a region approaches its maxNodes limit.
Default varies for each region.
timeToLiveSeconds Maximum number of seconds an object can remain idle in the global cache.
Defined for each region of the global cache. Set the value to 0 to define no
time limit. Default varies for each region.
By default, infa-cache-service.xml defines an idle time limit only for regions
that contain user-specific data. For example, the /Users region has a
timeToLiveSeconds value of 1,800 seconds (30 minutes). Data Analyzer
removes cached user data if it has not been accessed for 30 minutes. If
Data Analyzer runs on a machine with limited memory, you can define idle
time limits for the other regions so that Data Analyzer removes objects from
the cache before the maxNodes limit is reached.
maxAgeSeconds Maximum number of seconds an object can remain in the global cache.
Defined for each region of the global cache. Set the value to 0 to define no
time limit. Default varies for each region.
By default, infa-cache-service.xml defines a maximum age limit for only the
/_default_ region. If Data Analyzer runs on a machine with limited memory,
you can define maximum age limits for the other regions so that Data
Analyzer removes objects from the cache before the maxNodes limit is
reached.
Data Analyzer checks for objects to remove from the global cache at the following times:
- The wakeUpIntervalSeconds time period ends. Data Analyzer removes objects that have reached the timeToLiveSeconds or maxAgeSeconds limits.
- A global cache region reaches its maxNodes limit. Data Analyzer removes the least recently used object from the region. Data Analyzer also removes objects in any region that have reached the timeToLiveSeconds or maxAgeSeconds limits.
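The interaction of these eviction checks can be sketched in Python. This is an illustration of the policy described above, not Data Analyzer's implementation; timestamps are passed in explicitly for clarity:

```python
# Sketch of one infa-cache-service.xml region's eviction policy.
# A value of 0 means "no limit" for all three attributes, as described above.
class Region:
    def __init__(self, max_nodes=0, ttl_seconds=0, max_age_seconds=0):
        self.max_nodes = max_nodes      # maxNodes: 0 = unlimited objects
        self.ttl = ttl_seconds          # timeToLiveSeconds: idle limit
        self.max_age = max_age_seconds  # maxAgeSeconds: absolute age limit
        self.store = {}                 # key -> [created, last_accessed, value]

    def put(self, key, value, now):
        self.store[key] = [now, now, value]
        if self.max_nodes and len(self.store) > self.max_nodes:
            # Region reached its maxNodes limit: evict the least recently used object.
            lru_key = min(self.store, key=lambda k: self.store[k][1])
            del self.store[lru_key]

    def get(self, key, now):
        entry = self.store.get(key)
        if entry is None:
            return None
        entry[1] = now  # reading an object resets its idle time
        return entry[2]

    def evict_expired(self, now):
        # Runs once per wakeUpIntervalSeconds: remove idle and aged-out objects.
        expired = [key for key, (created, accessed, _) in self.store.items()
                   if (self.ttl and now - accessed >= self.ttl)
                   or (self.max_age and now - created >= self.max_age)]
        for key in expired:
            del self.store[key]
        return expired
```

For instance, with maxNodes=2 and timeToLiveSeconds=30, adding a third object evicts the least recently used one immediately, while an object left unread for 30 seconds is removed on the next wake-up.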
6. Change the attribute values for the region according to your requirements.
For example, to change the attribute values for the /Dashboards region, modify the following lines:
<region name="/Dashboards">
<attribute name="maxNodes">200</attribute>
<attribute name="timeToLiveSeconds">0</attribute>
<attribute name="maxAgeSeconds">0</attribute>
</region>
7. Repeat steps 5 to 6 for each of the global cache regions whose eviction policy you want to modify.
8. Save and close infa-cache-service.xml.
Property Description
enableGroupSynchronization If you use LDAP authentication, this property determines whether Data
Analyzer updates the groups in the repository when it synchronizes the list of
users and groups in the repository with the LDAP directory service. By default,
during synchronization, Data Analyzer deletes the users and groups in the
repository that are not found in the LDAP directory service. If you want to synchronize user accounts with the LDAP directory service but maintain the groups in the Data Analyzer repository, set this property to false so that Data Analyzer does not delete or add groups in the repository during synchronization.
When this property is set to false, Data Analyzer synchronizes only user
accounts, not groups. You must maintain the group information within Data
Analyzer.
Default is true.
login-session-timeout Session timeout, in minutes, for an inactive session on the Login page. If the
user does not successfully log in and the session remains inactive for the
specified time period, the session expires. After the user successfully logs in,
Data Analyzer resets the session timeout to the value of the session-timeout
property.
Default is 5.
searchLimit Maximum number of groups or users Data Analyzer displays in the search
results before requiring you to refine your search criteria.
Default is 1000.
session-timeout Session timeout, in minutes, for an inactive session. Data Analyzer terminates
sessions that are inactive for the specified time period.
Default is 30.
showSearchThreshold Maximum number of groups or users Data Analyzer displays before displaying
the Search box so you can find a group or user.
Default is 100.
TemporaryDir Directory where Data Analyzer stores temporary files. The directory must be a
shared file system that all servers in the cluster can access. If you specify a
new directory, Data Analyzer creates the directory in the following default
directory:
<PowerCenter_InstallationDirectory>/server/tomcat/jboss/bin/
To specify a path, use the forward slash (/) or two backslashes (\\) as the file
separator. Data Analyzer does not support a single backslash as a file
separator. You can specify a full directory path such as D:/temp/DA.
Default is tmp_ias_dir.
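The session-timeout property above follows the standard servlet deployment-descriptor form, so the corresponding web.xml fragment looks like the following sketch:

```xml
<!-- Session timeout, in minutes, for an inactive Data Analyzer session -->
<session-config>
  <session-timeout>30</session-timeout>
</session-config>
```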
NOTICES
This Informatica product (the “Software”) includes certain drivers (the “DataDirect Drivers”) from DataDirect Technologies, an operating company of Progress Software Corporation (“DataDirect”)
which are subject to the following terms and conditions:
1. THE DATADIRECT DRIVERS ARE PROVIDED “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT.
2. IN NO EVENT WILL DATADIRECT OR ITS THIRD PARTY SUPPLIERS BE LIABLE TO THE END-USER CUSTOMER FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
CONSEQUENTIAL OR OTHER DAMAGES ARISING OUT OF THE USE OF THE ODBC DRIVERS, WHETHER OR NOT INFORMED OF THE POSSIBILITIES OF DAMAGES IN
ADVANCE. THESE LIMITATIONS APPLY TO ALL CAUSES OF ACTION, INCLUDING, WITHOUT LIMITATION, BREACH OF CONTRACT, BREACH OF WARRANTY,
NEGLIGENCE, STRICT LIABILITY, MISREPRESENTATION AND OTHER TORTS.