Data Analyzer Administrator Guide

Informatica® PowerCenter®
(Version 8.6)

Informatica PowerCenter Data Analyzer Administrator Guide
Version 8.6
June 2008

Copyright © 2001-2008 Informatica Corporation. All rights reserved. Printed in the USA.

This software and documentation contain proprietary information of Informatica Corporation and are provided under a license agreement containing restrictions on use and disclosure and are also protected by copyright law. Reverse engineering of the software is prohibited. No part of this document may be reproduced or transmitted in any form, by any means (electronic, photocopying, recording or otherwise) without prior consent of Informatica Corporation. This Software may be protected by U.S. and/or international Patents and other Patents Pending.

Use, duplication, or disclosure of the Software by the U.S. Government is subject to the restrictions set forth in the applicable software license agreement and as provided in DFARS 227.7202-1(a) and 227.7202-3(a) (1995), DFARS 252.227-7013(c)(1)(ii) (OCT 1988), FAR 12.212(a) (1995), FAR 52.227-19, or FAR 52.227-14 (ALT III), as applicable.

The information in this product or documentation is subject to change without notice. If you find any problems in this product or documentation, please report them to us in writing.

Informatica, PowerCenter, PowerCenterRT, PowerCenter Connect, PowerCenter Data Analyzer, PowerExchange, PowerMart, Metadata Manager, Informatica Data Quality, Informatica Data Explorer, Informatica Complex Data Exchange and Informatica On Demand Data Replicator are trademarks or registered trademarks of Informatica Corporation in the United States and in jurisdictions throughout the world. All other company and product names may be trade names or trademarks of their respective owners.

Portions of this software and/or documentation are subject to copyright held by third parties, including without limitation: Copyright © DataDirect Technologies. All rights reserved. Copyright © Sun Microsystems. All rights reserved. Copyright © Aandacht c.v. All rights reserved. Copyright © 2007 Isomorphic Software. All rights reserved.

This product includes software developed by the Apache Software Foundation (http://www.apache.org/) and other software which is licensed under the Apache License, Version 2.0 (the "License"). You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

This product includes software which was developed by Mozilla (http://www.mozilla.org/); software copyright The JBoss Group, LLC, all rights reserved; software copyright Red Hat Middleware, LLC, all rights reserved; software copyright © 1999-2006 by Bruno Lowagie and Paulo Soares; and other software which is licensed under the GNU Lesser General Public License Agreement, which may be found at http://www.gnu.org/licenses/lgpl.html. The materials are provided free of charge by Informatica, "as-is", without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability and fitness for a particular purpose.

This product includes software copyright © 1996-2006 Per Bothner. All rights reserved. Your right to use such materials is set forth in the license which may be found at http://www.gnu.org/software/kawa/Software-License.html.

This product includes software licensed under the terms at http://www.bosrup.com/web/overlib/?License.

This product includes software developed by the Indiana University Extreme! Lab. For further information, please visit http://www.extreme.indiana.edu/.

This product includes software licensed under the Academic Free License (http://www.opensource.org/licenses/afl-3.0.php).

This Software is protected by Patents including US Patents Numbers 6,640,226; 6,789,096; 6,820,077; and 6,823,373 and other Patents Pending.

DISCLAIMER: Informatica Corporation provides this documentation "as is" without warranty of any kind, either express or implied, including, but not limited to, the implied warranties of noninfringement, merchantability, or use for a particular purpose. Informatica Corporation does not warrant that this software or documentation is error free. The information provided in this software or documentation may include technical inaccuracies or typographical errors. The information in this software and documentation is subject to change at any time without notice.

Part Number: DA-ADG-86000-0001

Table of Contents
Preface
    Informatica Resources
        Informatica Customer Portal
        Informatica Documentation
        Informatica Web Site
        Informatica Knowledge Base
        Informatica Global Customer Support

Chapter 1: Data Analyzer Overview
    Introduction
    Data Analyzer Framework
        Main Components
        Supporting Components
    Data Analyzer Basics
        Using Data Analyzer
        Configuring Session Timeout
    Security
    Data Lineage
        Data Lineage for a Report
        Data Lineage for a Metric or Attribute
    Localization
        Data Analyzer Display Language
        Language Settings
        Setting the Default Expression for Metrics and Attributes
        Date and Number Formats
        Exporting Reports with Japanese Fonts to PDF Files

Chapter 2: Managing Users and Groups
    Overview
        Restricting User Access
        Setting Permissions
        Authentication Methods
        User Synchronization
    Managing Groups
        Editing a Group
    Managing Users
        Editing a User Account
        Adding Data Restrictions to a User Account

Chapter 3: Setting Permissions and Restrictions
    Overview
    Setting Access Permissions
        Using Global Variables
    Restricting Data Access
        Restricting Data Access by User or Group
        Restricting Data Access by Object
        Understanding Data Restrictions for Multiple Groups

Chapter 4: Managing Time-Based Schedules
    Overview
    Creating a Time-Based Schedule
    Managing Time-Based Schedules
        Editing a Time-Based Schedule
        Editing Access Permissions for a Time-Based Schedule
        Starting a Time-Based Schedule Immediately
        Stopping a Time-Based Schedule Immediately
        Viewing or Clearing a Time-Based Schedule History
        Removing a Time-Based Schedule
        Disabling a Time-Based Schedule
    Managing Reports in a Time-Based Schedule
        Attaching Reports to a Time-Based Schedule
        Viewing Attached Reports
        Viewing Task Properties
        Viewing or Clearing a Task History
        Removing a Report from a Time-Based Schedule
    Using the Calendar
        Navigating the Calendar
        Defining a Business Day
        Defining a Holiday
    Monitoring a Schedule
    Stopping a Schedule

Chapter 5: Managing Event-Based Schedules
    Overview
    Updating Reports When a PowerCenter Session Completes
        Step 1. Use the PowerCenter Integration Utility in PowerCenter
        Step 2. Create an Event-Based Schedule
    Managing Event-Based Schedules
        Editing an Event-Based Schedule
        Editing Access Permissions for an Event-Based Schedule
        Starting an Event-Based Schedule Immediately
        Stopping an Event-Based Schedule Immediately
        Viewing or Clearing an Event-Based Schedule History
        Removing an Event-Based Schedule
        Disabling an Event-Based Schedule
    Managing Reports in an Event-Based Schedule
        Viewing Attached Reports
        Viewing Task Properties
        Viewing or Clearing a Report History
        Removing a Report from an Event-Based Schedule
        Attaching Imported Cached Reports to an Event-Based Schedule

Chapter 6: Exporting Objects from the Repository
    Overview
    Exporting a Schema
        Exporting Metrics and Associated Schema Objects
        Exporting Metric Definitions Only
    Exporting a Time Dimension
    Exporting a Report
    Exporting a Global Variable
    Exporting a Dashboard
    Exporting a Security Profile
        Exporting a User Security Profile
        Exporting a Group Security Profile
    Exporting a Schedule
    Troubleshooting

Chapter 7: Importing Objects to the Repository
    Overview
        XML Validation
        Object Permissions
        Importing Objects from a Previous Version
    Importing a Schema
    Importing a Time Dimension
    Importing a Report
        Importing Reports from Public or Personal Folders
        Steps for Importing a Report
    Importing a Global Variable
    Importing a Dashboard
    Importing a Security Profile
        Importing a User Security Profile
        Importing a Group Security Profile
    Importing a Schedule
    Troubleshooting

Chapter 8: Using the Import Export Utility
    Overview
    Running the Import Export Utility
    Error Messages
    Troubleshooting
    Importing a Large Number of Reports
    Using SSL with the Import Export Utility

Chapter 9: Managing System Settings
    Overview
    Managing Color Schemes and Logos
        Using a Predefined Color Scheme
        Editing a Predefined Color Scheme
        Creating a Color Scheme
        Selecting a Default Color Scheme
        Assigning a Color Scheme
    Managing Logs
        Viewing the User Log
        Configuring and Viewing the Activity Log
        Configuring the System Log
        Configuring the JDBC Log
    Managing LDAP Settings
    Managing Delivery Settings
        Configuring the Mail Server
        Configuring the External URL
        Configuring SMS/Text Messaging and Mobile Carriers
    Specifying Contact Information
    Viewing System Information
    Setting Rules for Queries
        Setting Query Rules at the System Level
        Setting Query Rules at the Group Level
        Setting Query Rules at the User Level
        Setting Query Rules at the Report Level
    Configuring Report Table Scroll Bars
    Configuring Report Headers and Footers
    Configuring Departments and Categories
    Configuring Display Settings for Groups and Users

Chapter 10: Working with Data Analyzer Administrative Reports
    Overview
        Administrator's Dashboard
        Data Analyzer Administrative Reports Folder
    Setting Up the Data Analyzer Administrative Reports
        Step 1. Set Up a Data Source for the Data Analyzer Repository
        Step 2. Import the Data Analyzer Administrative Reports
        Step 3. Add the Data Source to a Data Connector
        Step 4. Add the Administrative Reports to Schedules
    Using the Data Analyzer Administrative Reports

Chapter 11: Performance Tuning
    Overview
    Database
        Oracle
        Microsoft SQL Server 2000
        IBM DB2
    Operating System
        Linux
        HP-UX
        Solaris
        AIX
        Windows
    Application Server
        JSP Optimization
        Servlet/JSP Container
        EJB Container
        Repository Database Connection
    Data Analyzer Processes
        Ranked Reports
        Datatype of Table Columns
        Date Columns
        Aggregation
        JavaScript on the Analyze Tab
        Interactive Charts
        Number of Charts in a Report
        Scheduler and User-Based Security
        Row Limit for SQL Queries
        Frequency of Schedule Runs
        Indicators in Dashboard
        Purging of Activity Log
        Recommendations for Dashboard Design
        Chart Legends
        Connection Pool Size for the Data Source
        Server Location

Chapter 12: Customizing the Data Analyzer Interface
    Overview
    Using the Data Analyzer URL API
    Using the Data Analyzer API Single Sign-On
    Setting Up Color Schemes and Logos
    Setting the UI Configuration Properties
        Default UI Configuration
        UI Configuration Parameter in Data Analyzer URL
        Configuration Settings

Appendix A: Hexadecimal Color Codes
    HTML Hexadecimal Color Codes

Appendix B: Configuration Files
    Overview
    Modifying the Configuration Files
    Properties in DataAnalyzer.properties
    Properties in infa-cache-service.xml
        Configuring the Lock Acquisition Timeout
        Configuring the Eviction Policy
    Properties in web.xml

Index

Preface

The Data Analyzer Administrator Guide provides information on administering Data Analyzer, including managing user access and report schedules and exporting and importing objects in a Data Analyzer repository. It also discusses performance tuning and server clusters.

The Data Analyzer Administrator Guide is written for system administrators. It assumes that you have knowledge of relational databases, SQL, and web technology.

Informatica Resources

Informatica Customer Portal

As an Informatica customer, you can access the Informatica Customer Portal site at http://my.informatica.com. The site contains product information, user group information, newsletters, access to the Informatica customer support case management system (ATLAS), the Informatica Knowledge Base, Informatica Documentation Center, and access to the Informatica user community.

Informatica Documentation

The Informatica Documentation team takes every effort to create accurate, usable documentation. If you have questions, comments, or ideas about this documentation, contact the Informatica Documentation team through email at infa_documentation@informatica.com. We will use your feedback to improve our documentation. Let us know if we can contact you regarding your comments.

Informatica Web Site

You can access the Informatica corporate web site at http://www.informatica.com. The site contains information about Informatica, its background, upcoming events, and sales offices. You will also find product and partner information. The services area of the site includes important information about technical support, training and education, and implementation services.

Informatica Knowledge Base

As an Informatica customer, you can access the Informatica Knowledge Base at http://my.informatica.com. Use the Knowledge Base to search for documented solutions to known technical issues about Informatica products. You can also find answers to frequently asked questions, technical white papers, and technical tips.

Informatica Global Customer Support

There are many ways to access Informatica Global Customer Support. You can contact a Customer Support Center through telephone, email, or the WebSupport Service. WebSupport requires a user name and password. You can request a user name and password at http://my.informatica.com.

Use the following email addresses to contact Informatica Global Customer Support:
♦ support@informatica.com for technical inquiries
♦ support_admin@informatica.com for general customer service requests

Use the following telephone numbers to contact Informatica Global Customer Support:

North America / South America
Informatica Corporation Headquarters
100 Cardinal Way
Redwood City, California 94063
United States
Toll Free: +1 877 463 2435
Standard Rate: United States: +1 650 385 5800; Brazil: +55 11 3523 7761; Mexico: +52 55 1168 9763

Europe / Middle East / Africa
Informatica Software Ltd.
6 Waltham Park
Waltham Road, White Waltham
Maidenhead, Berkshire SL6 3TN
United Kingdom
Toll Free: 00 800 4632 4357
Standard Rate: Belgium: +32 15 281 702; France: +33 1 41 38 92 26; Germany: +49 1805 702 702; Netherlands: +31 306 022 797; United Kingdom: +44 1628 511 445

Asia / Australia
Informatica Business Solutions Pvt. Ltd.
Diamond District Tower B, 3rd Floor
150 Airport Road
Bangalore 560 008
India
Toll Free: Australia: 1 800 151 830; Singapore: 001 800 4632 4357
Standard Rate: India: +91 80 4112 5738

CHAPTER 1
Data Analyzer Overview

This chapter includes the following topics:
♦ Introduction, 1
♦ Data Analyzer Framework, 2
♦ Data Analyzer Basics, 4
♦ Security, 5
♦ Data Lineage, 5
♦ Localization, 7

Introduction

PowerCenter Data Analyzer provides a framework for performing business analytics on corporate data. With Data Analyzer, you can extract, filter, format, and analyze corporate information from data stored in a data warehouse, operational data store, or other data storage models. Data Analyzer uses the characteristics of a dimensional data warehouse model to assist you to analyze data. Data Analyzer uses the familiar web browser interface to make it easy for a user to view and analyze business information at any level.

The Reporting Service is the application service that runs the Data Analyzer application in a PowerCenter domain. You can create a Reporting Service in the PowerCenter Administration Console. You can use Data Analyzer to run PowerCenter Repository Reports, Data Profiling Reports, Metadata Manager Reports, or create and run custom reports. For more information about the Reporting Service, see the PowerCenter Administrator Guide.

Data Analyzer works with the following data models:
♦ Analytic schema. Based on a dimensional data warehouse in a relational database. When you set up an analytic schema in Data Analyzer, you define the fact and dimension tables and the metrics and attributes in the star schema.
♦ Operational schema. Based on an operational data store in a relational database. Use the operational schema to analyze data in relational database tables that do not conform to the dimensional data model. When you set up an operational schema in Data Analyzer, you define the tables in the schema, identify which tables contain the metrics and attributes for the schema, and define the relationship among the tables.
♦ Hierarchical schema. Based on data in an XML document. A hierarchical schema contains attributes and metrics from an XML document on a web server or an XML document returned by a web service operation.

Each schema must contain all the metrics and attributes that you want to analyze together.

Data Analyzer supports the Java Message Service (JMS) protocol to access real-time messages as data sources. To display real-time data in a Data Analyzer real-time report, you create a Data Analyzer real-time message stream with the details of the metrics and attributes to include in the report and how to present the report. Data Analyzer updates the report when it reads JMS messages.

Data Analyzer Framework

Data Analyzer works within a web-based framework that requires the interaction of several components. It includes components and services that may already exist in an enterprise infrastructure, such as an enterprise data warehouse and authentication server.

Main Components

Data Analyzer is built on JBoss Application Server and uses related technology and application programming interfaces (API) to accomplish its tasks. JBoss Application Server is a Java 2 Enterprise Edition (J2EE)-compliant application server.

Data Analyzer

Data Analyzer is a Java application that provides a web-based platform for the development and delivery of business analytics. In Data Analyzer, you can read data from a data source, create reports, and view the results on the web browser. Data Analyzer stores metadata in a repository database to keep track of the processes and objects it needs to handle web browser requests. Data Analyzer stores metadata for schemas, metrics and attributes, queries, reports, user profiles, and other objects in the Data Analyzer repository.

When you create a Reporting Service, you need to specify the Data Analyzer repository details. The Reporting Service configures the Data Analyzer repository with the metadata corresponding to the selected data source. When you run reports for any data source, Data Analyzer uses the metadata in the Data Analyzer repository to determine the location from which to retrieve the data for the report. The data for an analytic or operational schema must also reside in a relational database.

Note: If you create a Reporting Service for another reporting source, you need to import the metadata for the data source manually.

Application Server

JBoss Application Server helps Data Analyzer manage its processes efficiently. The Java application server provides services such as database access and server load balancing to Data Analyzer. It also provides an environment that uses Java technology to manage application, network, and system resources. Data Analyzer uses the application server to handle requests from the web browser. It generates the requested contents and uses the application server to transmit the content back to the web browser.

Web Server

Data Analyzer uses an HTTP server to fetch and transmit Data Analyzer pages to web browsers. If the application server contains a web server, you do not need to install a separate web server. You need a separate web server to set up a proxy server to enable external users to access Data Analyzer through a firewall.

Data Analyzer uses the following Java technology:
♦ Java Servlet API
♦ JavaServer Pages (JSP)
♦ Enterprise Java Beans (EJB)

♦ Java Database Connectivity (JDBC)
♦ Java Message Service (JMS)
♦ Java Naming and Directory Interface (JNDI)

Data Analyzer Repository

The repository stores the metadata necessary for Data Analyzer to track the objects and processes that it requires to effectively handle user requests. The metadata includes information on schemas, user profiles, personalization, reports and report delivery, and other objects and processes. The Data Analyzer repository must reside in a relational database. Data Analyzer connects to the repository with JDBC drivers. You can create reports based on the schemas without accessing the data warehouse directly.

Data Source

For analytic and operational schemas, Data Analyzer reads data from a relational database. It connects to the database through JDBC drivers. For hierarchical schemas, the data resides in a web service or XML document. Data Analyzer reads data from an XML document that can reside on a web server, or that is generated by a web service operation. Data Analyzer connects to the XML document or web service through an HTTP connection.

Supporting Components

Data Analyzer has other components to support its processes, including an API that allows you to integrate Data Analyzer features into other web applications and security adapters that allow you to use an LDAP server for authentication. Although you can use Data Analyzer without these components, you can extend the power of Data Analyzer when you set it up to work with these additional components.

Authentication Server

You use PowerCenter authentication methods to authenticate users logging in to Data Analyzer. When you use the Administration Console to create native users and groups, the Service Manager stores the users and groups in the domain configuration database and notifies the Reporting Service. The Reporting Service copies the users and groups to the Data Analyzer repository.

Note: You cannot create or delete users and groups, or change user passwords in Data Analyzer. You can only modify the user settings such as the user name or the contact details in Data Analyzer. For more information about authentication methods, see the PowerCenter Administrator Guide.

Mail Server

Data Analyzer uses Simple Mail Transfer Protocol (SMTP) to provide access to the enterprise mail server and facilitate the following services:
♦ Send report alert notification and SMS/Text Messages to alert devices.
♦ Forward reports through email.

PowerCenter

You create and enable a Reporting Service on the Domain page of the PowerCenter Administration Console. When you enable the Reporting Service, the Administration Console starts Data Analyzer. You launch Data Analyzer from the Administration Console, PowerCenter Client tools, or Metadata Manager, or by accessing the Data Analyzer URL from a browser. You log in to Data Analyzer to create and run reports on data in a relational database or to run PowerCenter Repository Reports, Data Analyzer Data Profiling Reports, or Metadata Manager Reports.
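As a rough illustration of the SMTP-based alert notifications described above, the sketch below composes and delivers an alert message with the Python standard library. All names and addresses are hypothetical; this is not Data Analyzer code, only a generic example of the protocol it relies on.

```python
import smtplib
from email.message import EmailMessage

def build_alert_message(sender, recipient, report_name, threshold):
    # Compose an alert notification similar in spirit to the ones
    # Data Analyzer sends when a report crosses a threshold value.
    msg = EmailMessage()
    msg["Subject"] = f"Alert: {report_name} crossed threshold {threshold}"
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(f"The report '{report_name}' triggered an alert.")
    return msg

def send_alert(smtp_host, msg):
    # Deliver the message through the enterprise mail server over SMTP.
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)
```

A caller would build the message first, then hand it to `send_alert` with the mail server host configured for the environment.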

Web Portal

The Data Analyzer API enables you to integrate Data Analyzer into other web applications and portals. The API specifies the functions available to developers to access Data Analyzer dashboards, reports, and other objects and display them in any web application or portal.

Data Analyzer has many more features you can use to analyze and get the most useful information from your corporate data. This book presents the tasks that a system administrator typically performs in Data Analyzer.

Data Analyzer Basics

This section lists the steps you need to complete to access analytic data in Data Analyzer. When you use Data Analyzer to analyze business information, complete the following steps:
1. Define the data source. Set up the connectivity information so that Data Analyzer can access the data warehouse. You need system administrator privileges to define data sources.
2. Import the table definitions from JDBC data sources or set up rowsets and columns from XML sources. Import table definitions from the data warehouse or operational data store into the Data Analyzer repository. Define the rowsets and columns for web services or XML data sources. You need system administrator privileges to import table definitions or define rowsets.
3. Define an analytic, operational, or hierarchical schema. Define the fact and dimension tables for an analytic schema, set up the tables for an operational schema, or define a hierarchical schema. Define the metrics and attributes in the schemas. If you set up an analytic schema, set up a time dimension. You need system administrator privileges to define the schemas in Data Analyzer.
4. Set up the data connector. Create a data connector to identify which data source to use when you run reports. You can configure Data Analyzer to access more than one data source. You need system administrator privileges to set up data connectors.
5. Create and run reports. Create reports based on the metrics and attributes you define. Create analytic workflows to analyze the data.
6. Create dashboards. Create a dashboard and customize it to display the indicators and links to reports and shared documents to which you want immediate access.
7. Set up alerts and schedules. Create indicators and alerts for the report. Set up alerts on reports based on events and threshold values that you define. Set up schedules to run reports regularly.

Configuring Session Timeout

By default, if you log in to Data Analyzer but you do not use it for 30 minutes, the session terminates or times out. To preserve application resources, Data Analyzer terminates a user session if it does not have any activity for a length of time. You can set the session timeout period according to the Data Analyzer usage in your organization. The system administrator can change the session timeout period by editing the value of the session-timeout property in the web.xml file. For more information, see "Configuration Files" on page 129.
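The session-timeout property follows the standard Java servlet deployment descriptor format. A minimal fragment is sketched below; the value is in minutes, and the surrounding content of your web.xml file will differ:

```xml
<web-app>
  <!-- Session timeout in minutes. Data Analyzer ends the user
       session after this period of inactivity. -->
  <session-config>
    <session-timeout>30</session-timeout>
  </session-config>
</web-app>
```

Increasing the value keeps idle sessions alive longer at the cost of holding application resources; decreasing it frees resources sooner but logs users out more aggressively.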

Security

Data Analyzer provides a secure environment in which to perform business analytics. It supports standard security protocols like Secure Sockets Layer (SSL). It also provides system administrators a way to control access to Data Analyzer tasks and data based on privileges and roles granted to users and groups. You manage users and groups in the PowerCenter Administration Console. Data Analyzer uses the PowerCenter authentication methods to authenticate users set up in the PowerCenter domain configuration database. For more information about the PowerCenter authentication methods, see the PowerCenter Administrator Guide.

Data Analyzer reads data from the data warehouse and stores data in a repository to support its different components. Data Analyzer depends on database servers to provide their own security and data integrity facilities. Security and data integrity in the database servers that contain the data warehouse and the repository are essential for a reliable system environment.

Data Lineage

You can access the data lineage for Data Analyzer reports, attributes, and metrics. Use data lineage to understand the origin of the data, how it transforms, and where it is used. Use the PowerCenter Administration Console to configure data lineage for a Reporting Service. When you access data lineage from Data Analyzer, Data Analyzer connects to a Metadata Manager server. Metadata Manager is the PowerCenter metadata management and analysis tool. You can load objects from multiple Data Analyzer repositories into Metadata Manager. Metadata Manager displays metadata objects for each repository.

The Metadata Manager server displays the data lineage in an Internet Explorer browser window. You can display data lineage on the Internet Explorer browser. You cannot use data lineage with the Mozilla Firefox browser.

You can access data lineage for metrics, attributes, and reports from the following areas in Data Analyzer:

Data Analyzer Object    Access Data Lineage From...
♦ Report: Find tab; Analyze tab
♦ Metric: Schema Directory > Metrics page; Create > Report > Select Metrics page; Analyze tab
♦ Attribute: Schema Directory > Attributes page; Create > Report > Select Attributes page; Analyze tab

Data lineage for a Data Analyzer report, metric, or attribute displays one or more of the following objects:
♦ Data Analyzer repositories.
♦ Data structures. Data structures group metadata into categories. For a Data Analyzer data lineage, the data structures include the following:
− Reports
− Fact tables
− Dimension tables
− Table definitions
♦ Fields. Fields are objects within data structures that store the metadata. For a Data Analyzer data lineage, fields include the following:
− Metrics in reports
− Attributes in reports
− Columns in tables

Note: The Metadata Manager server must be running when you access data lineage from Data Analyzer.

Data Lineage for a Report

Data lineage shows the flow of the data displayed in a report. You can access data lineage for cached and on-demand reports. For a report, fields are the metrics and attributes in the report.

Figure 1-1 shows the data lineage for a Data Analyzer report:

Figure 1-1. Data Lineage for a Report

In Figure 1-1, the following data structures display in the data lineage:
♦ Data Analyzer report: Sales report
♦ Data Analyzer dimension tables: Countries, Regions
♦ Data Analyzer fact table: Costs Data
♦ Data Analyzer table definitions: COUNTRIES, REGIONS, COSTS_DATA

The direction of the arrows in the data lineage shows the direction of the data flow. For example, in Figure 1-1, the data lineage shows that the COUNTRIES table definition populates the Countries dimension table, which provides the Country Name attribute for the Sales report.

Each field contains the following information:
♦ Field Name. Name of the field.
♦ Parent. Data structure that populates the field. In Figure 1-1, the parent for the Country Name field is the Countries dimension table.
♦ Repository. Name of the Data Analyzer repository that contains metadata for the report. In this example, PA5X_RICH_SRC is the repository that contains metadata about the report.

After you access the data lineage, you can view details about each object in the data lineage. You can export the data lineage to an HTML, Excel, or PDF file. You can also email the data lineage to other users. For more information, see the Metadata Manager User Guide.
Data Lineage for a Metric or Attribute

The data lineage for a metric or attribute is similar to the data lineage for a report. For a metric or attribute, Metadata Manager displays the data flow for that metric or attribute only. The data lineage also shows data structures for reports that use the metric or attribute.

Figure 1-2 shows the data lineage for an attribute:

Figure 1-2. Data Lineage for an Attribute

The attribute name is the only field that appears in the data lineage. The attribute name (Brand) appears within the data structure for the report. The data lineage also shows data structures for reports that use the attribute. For more information, see the Data Analyzer User Guide.

Localization

Data Analyzer uses UTF-8 character encoding for displaying in different languages. UTF-8 character encoding is an ASCII-compatible multi-byte Unicode and Universal Character Set (UCS) encoding method. To avoid data errors, enable UTF-8 character encoding in the Data Analyzer repository and data warehouse. For more information about how to enable UTF-8 character encoding, see the database documentation.

Language Settings

When you store data in multiple languages in a database, you must ensure that the language settings are correct when you complete the following tasks in Data Analyzer:
♦ Back up and restore Data Analyzer repositories. The repositories you back up and restore must have the same language type and locale setting, or the repository you restore must be a superset of the repository you back up. A language setting is a superset of another language setting when it contains all characters encoded in the other language. For example, if the repository you back up contains Japanese data, the repository you restore to must also support Japanese.
♦ Import table definitions from the data source. When you import data warehouse table definitions into the Data Analyzer repository, the language type and locale settings of the data warehouse and the Data Analyzer repository must be the same or the repository must be a superset of the data source.
♦ Import and export repository objects. When you import an exported repository object, the repositories must have the same language type and locale setting or the destination repository must be a superset of the source repository.

Data Analyzer Display Language

You can change the display language for the Data Analyzer client regardless of the locale of the Data Analyzer server. You change the display language for Data Analyzer in the Manage Accounts tab in Data Analyzer. You must change the display language for the Data Analyzer login page separately in the browser.

Date and Number Formats

The language setting for your Data Analyzer user account determines the numeric, date, and time formats Data Analyzer displays. When Data Analyzer performs date calculations in calculated or custom metrics, Data Analyzer uses the format for the repository database language setting. When you enter a date in an SQL expression or define a date value for a global variable, enter the date in the same format used in the data warehouse.

Setting the Default Expression for Metrics and Attributes

When you set the default expression for metrics and attributes, Data Analyzer uses the same expression regardless of the locale of the Data Analyzer server. If you want to use a different default expression for a different locale, you must change the default expression in the metric or attribute property. For more information, see the Data Analyzer Schema Designer Guide.

Exporting Reports with Japanese Fonts to PDF Files

If a report contains Japanese fonts and you export the report to a PDF file, you must download the Asian Font Package from the Adobe Acrobat web site to view the PDF file. Save the Asian Font Package on the machine where you want to view the PDF file. You can find the Asian Font Package from the following web site:

http://www.adobe.com/products/acrobat/acrrasianfontpack.html
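The "ASCII-compatible multi-byte" property of UTF-8 mentioned above can be demonstrated with a few lines of generic Python (an illustration only, not Data Analyzer code):

```python
# ASCII characters keep their one-byte values under UTF-8,
# so English metadata is unaffected by enabling UTF-8.
assert "Sales".encode("utf-8") == b"Sales"

# Characters outside ASCII, such as Japanese, take multiple bytes each.
japanese = "日本語"
assert len(japanese) == 3                   # three characters
assert len(japanese.encode("utf-8")) == 9   # nine bytes in the database
```

This is why a repository that must hold Japanese data needs a language setting that is a superset of the backup's: every character in the source must be representable in the destination encoding.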

CHAPTER 2
Managing Users and Groups

This chapter includes the following topics:
♦ Overview, 9
♦ Managing Groups, 10
♦ Managing Users, 11

Overview

You create users, groups, and roles in the Security page of the PowerCenter Administration Console. PowerCenter stores the users, groups, and roles in the domain configuration database. You assign privileges to users, groups, and roles. Users can perform different tasks based on their privileges. You can modify some user and group properties in Data Analyzer.

Restricting User Access

You can limit user access to Data Analyzer to secure information in the repository and data sources. Users in Data Analyzer need their own accounts to perform tasks and access data. A user must have an active account to perform tasks and access data in Data Analyzer. Data Analyzer allows login access only to individuals with user accounts in Data Analyzer.

Setting Permissions

You can set permissions to determine the tasks that users can perform on a repository object. You set access permissions in Data Analyzer.

Authentication Methods

The way you manage users and groups depends on the authentication method you are using:
♦ Native. You create and manage users, groups, and roles in the Security page of the PowerCenter Administration Console. The Service Manager stores the users and groups in the domain configuration database. You can edit some user and group properties in Data Analyzer.
♦ LDAP authentication. You manage the users and groups in the LDAP server, but you create and manage the roles and privileges in the PowerCenter Administration Console. The Service Manager periodically synchronizes users in the LDAP server with the users in the domain configuration database. Similarly, the Service Manager synchronizes the users in the Data Analyzer repository with the updated LDAP users in the domain configuration database.

For more information about authentication methods, see the PowerCenter Administrator Guide.

User Synchronization

You manage users, groups, privileges, and roles on the Security page of the Administration Console. The Service Manager stores users and groups in the domain configuration database and copies the list of users and groups to the Data Analyzer repository. The Service Manager periodically synchronizes the list of users and groups in the repository with the users and groups in the domain configuration database. In addition, when you assign privileges and roles to users and groups for the Reporting Service in the Administration Console, or when you assign permissions to users and groups in Data Analyzer, the Service Manager stores the privilege, role, and permission assignments with the list of users and groups in the Data Analyzer repository.

Note: If you edit any property of a user other than roles or privileges, the Service Manager does not synchronize the changes to the Data Analyzer repository. Similarly, if you edit any property of a user in Data Analyzer, the Service Manager does not synchronize the domain configuration database with the modification.

Managing Groups

Groups allow you to organize users according to their roles in the organization. For example, you might organize users into groups based on their departments or management level. You manage users and groups, their organization, and which privileges and roles are assigned to them in the PowerCenter Administration Console. You can restrict data access by group. For more information, see the PowerCenter Administrator Guide.

In Data Analyzer, you can edit some group properties such as department, color schemes, or query governing settings. You cannot add users or roles to the group, or assign a primary group to users in Data Analyzer.

Editing a Group

You can see groups with privileges on a Reporting Service when you launch the Data Analyzer instance created by that Reporting Service.

To edit a group in Data Analyzer:
1. Connect to Data Analyzer from the PowerCenter Administration Console, PowerCenter Client tools, or Metadata Manager, or by accessing the Data Analyzer URL from a browser.
2. Click Administration > Access Management > Groups. The Groups page appears.
3. Select the group you want to edit and click Edit. The properties of the group appear.

4. Edit any of the following properties:
♦ Department. Choose the department for the group. For more information, see "Configuring Departments and Categories" on page 90.
♦ Color Scheme Assignment. Assign a color scheme for the group. For more information, see "Managing Color Schemes and Logos" on page 74.
♦ Query Governing. Query governing settings on the Groups page apply to reports that users in the group can run. If a user belongs to one or more groups in the same hierarchy level, Data Analyzer uses the largest query governing settings from each group. For more information, see "Setting Rules for Queries" on page 85.
5. Click OK to return to the Groups page.

Managing Users

Each user must have a user account to access Data Analyzer. To perform Data Analyzer tasks, a user must have the appropriate privileges for the Reporting Service. You assign privileges to a user, add the user to one or more groups, and assign roles to the user in the PowerCenter Administration Console.

Editing a User Account

You can see users with privileges on a Reporting Service when you launch the Data Analyzer instance created by that Reporting Service. You can edit a user account in Data Analyzer to change the color scheme or modify other properties of the account. You cannot assign a group to the user or define a primary group for a user in Data Analyzer. If you edit the first name, middle name, or last name of the user, Data Analyzer saves the modification in the Data Analyzer repository. When the Service Manager synchronizes with the Data Analyzer repository, it does not update the records in the domain configuration database.

To edit a user account in Data Analyzer:
1. Connect to Data Analyzer from the PowerCenter Administration Console, PowerCenter Client tools, Metadata Manager, or by accessing the Data Analyzer URL from a browser.
2. Click Administration > Access Management > Users. The Users page appears.
3. Enter a search string for the user in the Search field and click Find. Data Analyzer displays the list of users that match the search string you specify.
4. Select the user record you want to edit and click on it. The properties of the user appear.
5. Edit any of the following properties:
♦ First Name. First name of the user. For more information, see "Full Name for Data Analyzer Users" on page 12.
♦ Middle Name. Middle name of the user.
♦ Last Name. Last name of the user.

♦ Title. Describes the function of the user within the organization or within Data Analyzer. Titles do not affect roles or Data Analyzer privileges.
♦ Email Address. Data Analyzer sends the email to this address when it sends an alert notification to the user. Data Analyzer uses this as the email for the sender when the user emails reports from Data Analyzer.
♦ Department. Department for the user. You can associate the user with a department to organize users and simplify the process of searching for users. For more information, see "Configuring Departments and Categories" on page 90.
♦ Color Scheme Assignment. Select the color scheme to use when the user logs in to Data Analyzer. If no color scheme is selected, Data Analyzer uses the default color scheme when the user logs in. Color schemes assigned at user level take precedence over color schemes assigned at group level. Unless users have administrator privileges, they cannot change the color scheme assigned to them. For more information, see "Managing Color Schemes and Logos" on page 74.
♦ Query Governing. Specify query governing settings for the user. The query governing settings on the User page apply to all reports that the user can run. For more information, see "Setting Rules for Queries" on page 85.

Note: Users can edit some of the properties of their own accounts in the Manage Account tab.

Full Name for Data Analyzer Users

Data Analyzer displays the full name property in the PowerCenter domain as the following user account properties:
♦ First name
♦ Middle name
♦ Last name

You cannot edit this information. Data Analyzer determines the full name as first, middle, and last name based on the following rules:
1. If the full name does not contain a comma, the full name has the following syntax: [<FirstName>] [<MiddleName>] <LastName>. Data Analyzer determines the first, middle, and last names based on the number of text strings separated by a space. If the full name has two text strings, there is no middle name. If the full name has more than three text strings, any string after the third string is included in the last name.
2. If the full name contains a comma, the full name has the following syntax: <LastName>, <FirstName> [<MiddleName>]. Any full name that contains a comma is converted to use the syntax without a comma: [<FirstName>] [<MiddleName>] <LastName>.
3. After the conversion, the full name is separated into first, middle, and last name.

Adding Data Restrictions to a User Account

You can restrict access to data based on user accounts. To add data restrictions to a user account, click Administration > Access Management > Users, and then click the Data Restrictions button for the data for which you want to restrict user access. For more information, see "Restricting Data Access" on page 16.
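The full-name rules above are mechanical enough to express as a short sketch. The function below is hypothetical (not part of Data Analyzer); it only restates the documented splitting rules:

```python
def split_full_name(full_name):
    """Split a PowerCenter full name into (first, middle, last)
    following the documented Data Analyzer rules."""
    # Rule 2: "Last, First [Middle]" is first converted to the
    # comma-free syntax "[First] [Middle] Last".
    if "," in full_name:
        last, _, rest = full_name.partition(",")
        full_name = (rest.strip() + " " + last.strip()).strip()
    parts = full_name.split()
    if not parts:
        return ("", "", "")
    if len(parts) == 1:
        return ("", "", parts[0])
    # Rule 1: two strings means there is no middle name.
    if len(parts) == 2:
        return (parts[0], "", parts[1])
    # More than three strings: anything after the third string
    # is included in the last name.
    return (parts[0], parts[1], " ".join(parts[2:]))
```

For example, "Smith, John Paul" converts to "John Paul Smith" and then splits into first name John, middle name Paul, and last name Smith.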

CHAPTER 3
Setting Permissions and Restrictions

This chapter includes the following topics:
♦ Overview, 13
♦ Setting Access Permissions, 13
♦ Restricting Data Access, 16

Overview

You can customize Data Analyzer user access with the following security options:
♦ Access permissions. Restrict user and group access to folders, reports, dashboards, attributes, metrics, template dimensions, and schedules. Use access permissions to restrict access to a particular folder or object in the repository. When you set access permissions, you determine which users and groups have access to folders and repository objects.
♦ Data restrictions. Restrict access to data in fact tables and operational schemas using associated attributes. Use data restrictions to restrict users or groups from accessing specific data when they view reports. When you create data restrictions, you determine which users and groups can access particular attribute values. When a user with a data restriction runs a report, Data Analyzer does not display restricted data associated with those values.

Setting Access Permissions

Access permissions determine the tasks you can perform for a specific repository object. By customizing access permissions on an object, you determine which users and groups can Read, Write, Delete, or Change Access permission on that object.

You can assign the following types of access permissions to repository objects:
♦ Read. Allows you to view a folder or object.
♦ Write. Allows you to edit an object. Also allows you to create and edit folders and objects within a folder.

♦ Delete. Allows you to delete a folder or an object from the repository.
♦ Change permission. Allows you to change the access permissions on a folder or object.

By default, Data Analyzer grants Read permission to every user in the repository. If you have reports and shared documents that you do not want to share, save them to your Personal Folder or your personal dashboard.

Note: Any user with the System Administrator role has access to all Public Folders and to their Personal Folder in the repository and can override any access permissions you set.

Use the General Permissions area to modify default access permissions for an object. You can also permit additional access permissions to selected users and groups. To grant more extensive access to a user or group, use inclusive access permissions. To restrict the access of specific users or groups, use exclusive access permissions. You can completely restrict the selected users and groups or restrict them to fewer access permissions.

Use the following methods to set access permissions:
♦ Inclusive. Permit access to the users and groups that you select. For example, you can grant the Analysts group inclusive access permissions to delete a report.
♦ Exclusive. Restrict access from the users and groups that you select. For example, you can use exclusive access permissions to restrict the Vendors group from viewing sensitive reports.

You can use a combination of inclusive, exclusive, and default access permissions to create comprehensive access permissions for an object. For example, you can select Read as the default access permission for a folder, grant the Sales group inclusive Write permission to edit objects in the folder, and use an exclusive Read permission to deny an individual in the Sales group access to the folder.

Setting access permissions for a composite report determines whether the composite report itself is visible but does not affect the existing security of subreports. Users or groups must also have permissions to view individual subreports. Therefore, a composite report might contain some subreports that do not display for all users. Only those subreports where a user or group has access permissions display in a composite report.

Note: Permissions set on composite reports do not affect permissions on the subreports.

When you modify the access permissions on a folder, you can override existing access permissions on all objects in the folder, including subfolders. Select Replace Permissions on Subfolders to apply access permission changes to all subfolders. You can also select Replace Permissions on All Items in Folder to apply access permission changes to the reports and shared documents in the folder.

To set access permissions:
1. Navigate to the repository object you want to modify.
   ♦ Content folder in Public Folders: Find > Public Folders > folder name
   ♦ Content folder in Personal Folder: Find > Personal Folder > folder name
   ♦ Report in Public Folders: Find > Public Folders > report name
   ♦ Report in Personal Folder: Find > Personal Folder > report name
   ♦ Composite Report in Public Folders: Find > Public Folders > composite report name
   ♦ Composite Report in Personal Folder: Find > Personal Folder > composite report name
   ♦ Public Dashboard: Find > Public Folders > dashboard name
   ♦ Personal Dashboard: Find > Personal Folder > dashboard name
   ♦ Metric Folder: Administration > Schema Design > Schema Directory > Metrics folder > metric folder name
   ♦ Attribute Folder: Administration > Schema Design > Schema Directory > Attributes folder > attribute folder name
   ♦ Template Dimensions Folder: Administration > Schema Design > Schema Directory > Template Dimensions folder > template dimensions folder name
   ♦ Metric: Administration > Schema Design > Schema Directory > Metrics folder > metric folder name > metric name
   ♦ Attribute: Administration > Schema Design > Schema Directory > Attributes folder > attribute folder name > attribute name
   ♦ Template Dimension: Administration > Schema Design > Schema Directory > Template Dimensions folder > template dimension folder name > template dimension name
   ♦ Time-Based Schedule: Administration > Scheduling > Time-Based Schedules > time-based schedule name
   ♦ Event-Based Schedule: Administration > Scheduling > Event-Based Schedules > event-based schedule name
   ♦ Filterset: Administration > Schema Directory > Filtersets > filterset name
2. Click the Permissions button ( ) for the repository object.
   The Access Permissions page appears. The object name appears in quotes.
3. From the General Permissions area, set the default access permissions. Click Yes to allow all users to receive the default access permissions you select, or click No to prevent all repository users from receiving default access permissions.
   If you are editing access permissions on a folder, you can select Replace Permissions on Subfolders or Replace Permissions on All Items in Folder. If you are editing access permissions on an item, such as a report or shared document, skip to step 4.
4. Click Make a Selection to search for a group or user.
5. Refine the selection by choosing the search criteria for the group or user. You can select groups or users by criteria such as name or department.
6. Click Find.
   The Query Results field displays groups or users that match the search criteria.
7. Select the group or user in the Query Results field.
8. Click Include to include the user or group in the access permissions you select.
   -or-
   Click Exclude to exclude the user or group from the access permissions you select.
9. Select the access permissions you want to include or exclude.

   Data Analyzer displays a minus sign (-) next to users or groups you exclude. For example, everyone might have Read permission on the Sales folder unless restricted below, with the Corporate Sales group granted additional Write permission; red text and a minus sign would indicate that the user Hansen is not permitted to read the Sales folder.
10. Click OK to save the access permissions settings.

Restricting Data Access

You can restrict access to data associated with specific attribute values. Create data restrictions to keep sensitive data from appearing in reports. When you create a data restriction, you specify the users or groups to be restricted. You can apply the data restriction to any user or group in the repository. You can apply the data restriction to a single fact table or operational schema or to all related data in the repository.

For example, you can create a data restriction that restricts the Northeast Sales group to sales data for stores in their region. When users in the Northeast Sales group run reports that include the SALES fact table and Region attribute, they view sales data for their region only. They cannot see sales data for western or southern regions. When a report contains restricted data, a Data Restrictions button appears in the report.

You can create data restrictions using one of the following methods:
♦ Create data restrictions by object. Select the fact table or operational schema that contains the metric data you want to restrict and specify the associated attributes for which to restrict the metric data. Use this method to apply the same data restriction to more than one user or group.
♦ Create data restrictions by user or group. Access the user or group you want to restrict, then access the fact table or operational schema that contains the metric data you want to restrict and specify the associated attributes for which to restrict the metric data. Use this method to apply multiple data restrictions to the same user or group or to restrict all data associated with specified attribute values.

If you have multiple data restrictions, Data Analyzer applies the data restrictions in the order in which they appear in the Created Restrictions task area. By default, Data Analyzer displays the data restrictions in simple grouping mode and uses the AND operator to apply all restrictions. In the advanced grouping mode, you can use the OR or AND operator to group the data restrictions and create a complex expression with nested conditions. This allows you to make the data restriction as specific as required.

For example, the following condition allows users to view data from every March and from the entire year of 2007:

IN March OR IN 2007

You can also use parentheses to create more complex groups of restrictions. For example, you can group three data restrictions:

Region NOT IN 'North' AND (Category IN 'Footware' OR Brand IN 'BigShoes')

In the above example, Data Analyzer allows users to view data which is not included in the North region and which is in either the Footware category or has the BigShoes brand.

Understanding Data Restrictions for Multiple Groups

A restricted user assigned to a restricted group is subject to both individual and group restrictions. Data Analyzer joins the restrictions with the AND operator. For example, if the user has the restriction Region IN 'West' and the user's group has the restriction Region NOT IN 'West', Data Analyzer joins the two restrictions and returns no data:

Region IN 'West' AND Region NOT IN 'West'

When a user belongs to more than one group, Data Analyzer handles data restrictions differently depending on the relationship between the two groups. The following table describes how Data Analyzer handles multiple group situations:

A user who belongs to both a group and its subgroup: Data Analyzer joins data restrictions with the AND operator.
Example: If Group A has the restriction Region IN 'East' and Subgroup B has the restriction Category IN 'Women', Data Analyzer joins the restrictions with AND: Region IN 'East' AND Category IN 'Women'

A user who belongs to two groups that belong to the same parent group: Data Analyzer joins data restrictions with the OR operator.
Example: If Group A has the restriction Region IN 'East' and Group B has the restriction Category IN 'Women', Data Analyzer joins the restrictions with OR: Region IN 'East' OR Category IN 'Women'

Restricting Data Access by Object

Create data restrictions by object when you want to apply the restriction to more than one user or group or to create more than one data restriction for the object. You can restrict access to data in the following objects:
♦ Fact tables
♦ Operational schemas

You cannot create data restrictions for hierarchical schemas. Also, you cannot create data restrictions on fact tables or operational schemas using CLOB attributes.

Using Global Variables

You can use global variables when you define data restrictions. When you use a global variable in a data restriction, Data Analyzer updates the data restriction when you update the global variable value.
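The multiple-group combination rules described above can be expressed as two small helpers. This is a minimal sketch of the documented behavior; the function names are invented and the predicates are plain strings, not Data Analyzer's internal representation.

```python
# Hypothetical sketch of the group-combination rules for data restrictions.
# These helpers only mirror the documented AND/OR behavior.

def combine_with_subgroup(group_restriction, subgroup_restriction):
    # A user in both a group and its subgroup is subject to both
    # restrictions, so they are joined with AND.
    return "%s AND %s" % (group_restriction, subgroup_restriction)

def combine_sibling_groups(restriction_a, restriction_b):
    # A user in two groups under the same parent group may see data
    # allowed by either group, so the restrictions are joined with OR.
    return "%s OR %s" % (restriction_a, restriction_b)
```

For the examples in the table, `combine_with_subgroup("Region IN 'East'", "Category IN 'Women'")` yields the AND-joined predicate, while `combine_sibling_groups` yields the OR-joined one.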

To create data restrictions by object:
1. Navigate to the object you want to restrict.
   ♦ Fact Table: Administration > Schema Design > Analytic Schemas > Show Fact Tables
   ♦ Operational Schema: Administration > Schema Design > Operational Schemas
2. Click the Data Restrictions button ( ) of the object you want to restrict.
   The Data Restrictions page appears.
3. Click Select a Group/User.
   The Select Group or User window appears.
4. To create a data restriction for a group, select Group.
   If you select Group and the number of groups is less than 30, a list of available groups appears. If the number of groups is 30 or more, the group search option appears. Click Find to search for a group.
   To create a data restriction for a user, select User.
   If you select User and you know the user name you want to restrict, enter it in the User field. Or, click Find to search for a user.
5. Select the user or group you want to restrict and click OK.
6. In the Create Restriction task area, select an attribute from the attribute list.
   Recently-used attributes appear in the list. To browse or find other attributes, click Select Other Attributes. The Attribute Selection window appears. Data Analyzer displays the attributes for the object in the Attribute Selection window. Navigate to the attribute you want and select an attribute. CLOB attributes are not available for use in data restrictions.
7. From the condition list, select an operator.
8. Enter attribute values. You can select attribute values from a list, or you can search for specific values and Ctrl-click to select more than one. You can also manually enter attribute values. Use the asterisk or percent symbols as wildcard characters. If a global variable contains the attribute values you want to use, you can select a global variable.
9. Click Add.
   The data restriction appears in the Created Restrictions task area.
10. Repeat steps 6 to 9 to create more restrictions for the same user or group.
11. If you create more than one data restriction, you can adjust the order of the restrictions and the operators to use between restrictions. To adjust the restrictions, click Advanced in the Created Restrictions task area.
12. To view the SQL query for the restriction, click Advanced.
    Data Analyzer displays the SQL query for the restriction in advanced mode. In advanced mode, you can edit the SQL query for a restriction. Data Analyzer displays buttons for adding numbers and operators to the SQL query for the data restriction. Click within the SQL query, and then click the buttons to add numbers or operators to the SQL query.
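When a restriction uses a global variable, the variable's current values are substituted into the restriction each time it is applied, which is why updating the variable updates the restriction. The sketch below illustrates that idea; the variable name, catalog, and helper are invented for illustration and are not Data Analyzer APIs.

```python
# Hypothetical sketch: expanding a global variable reference inside a data
# restriction into a literal IN-list at run time. Names are illustrative.

global_variables = {"$CurrentRegions": ["Northeast", "Mid-Atlantic"]}

def resolve_restriction(attribute, operator, value):
    """Expand a global variable reference into a literal value list."""
    values = global_variables.get(value, [value])
    quoted = ", ".join("'%s'" % v for v in values)
    return "%s %s (%s)" % (attribute, operator, quoted)

print(resolve_restriction("REGION", "IN", "$CurrentRegions"))
# REGION IN ('Northeast', 'Mid-Atlantic')
```

Changing the list stored under `$CurrentRegions` changes every restriction that references it, without editing the restrictions themselves.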

    In advanced mode, the Created Restrictions task area displays lists for adding parentheses and operators.
13. Click the appropriate lists to add a left parenthesis, a right parenthesis, or an operator, to change the order of the restrictions, and to group the restrictions.
14. When you have completed adding data restrictions, click Apply Restrictions.
    Applied restrictions appear in the Current Data Restrictions area. To remove a data restriction, click the Remove button. To remove all data restrictions, click Cancel.
15. Click OK to save the changes.

Data restrictions limit the data that appears in the reports.

Restricting Data Access by User or Group

Edit a user or group to restrict data when you want to create more than one restriction for the user or group. When you edit a user or group, you can restrict data in a single fact table or operational schema for an associated attribute, or you can restrict all data related to the attribute values you select. You can create data restrictions for metrics in any fact table or operational schema. You cannot create data restrictions for hierarchical schemas. Also, you cannot create data restrictions on fact tables or operational schemas using CLOB attributes.

To create data restrictions for users or groups:
1. To create data restrictions for users, click Administration > Access Management > Users.
   -or-
   To create data restrictions for groups, click Administration > Access Management > Groups. Then click Groups to display all groups.
2. Click the Data Restrictions button ( ) of the user or group profile you want to edit.
   The Data Restrictions page appears. The page shows a list of fact tables and operational schemas tables.
3. Select a schema from the list of available schemas. To select all schemas, select All Schemas.
   This applies the data restriction to all data in the repository associated with the attribute you choose. When the attribute is associated with other fact tables or operational schemas in the repository, you can create a single data restriction to restrict all related data. For example, if the Region attribute is associated with both the Sales fact table and Salary fact table, you can create a single data restriction to restrict all sales and salary information from Europe.
4. In the Create Restriction task area, select an attribute from the attribute list.
   Recently-used attributes appear in the list. To browse or find an attribute, click Select Other Attributes. The Attribute Selection window appears. Data Analyzer displays all attribute folders for the object in the Attribute Selection window. Navigate to the attribute you want and select an attribute. CLOB attributes are not available for use in data restrictions.
5. From the condition list, select an operator.

6. Enter attribute values. You can select attribute values from a list, or you can search for specific values and Ctrl-click to select more than one. You can also manually enter attribute values. If a global variable contains the attribute values you want to use, you can select a global variable.
7. Click Add.
   The data restriction appears in the Created Restrictions task area.
8. Repeat steps 3 to 7 to create more restrictions for the same user or group.
9. If you create more than one data restriction, you can adjust the order of the restrictions and the operators to use between restrictions. To adjust the restrictions, click Advanced in the Created Restrictions task area.
   In advanced mode, the Created Restrictions task area displays lists for adding parentheses and operators. Click the appropriate list to add a left parenthesis, a right parenthesis, or an operator, to change the order of the restrictions, or to group the restrictions.
10. To view the SQL query for the restriction, click Advanced.
    Data Analyzer displays the SQL query for the restriction in advanced mode. In advanced mode, you can edit the SQL query for a restriction. Data Analyzer displays buttons for adding numbers and operators to the SQL query for the data restriction. Click within the SQL query, and then click the buttons to add numbers or operators to the SQL query.
11. When you have completed adding data restrictions for the user or group, click Apply Restrictions.
    Applied restrictions appear in the Current Data Restrictions area. To remove a data restriction, click the Remove button. To remove all data restrictions, click Cancel.
12. Click OK to save the changes.
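The "All Schemas" option described above fans one attribute-level restriction out to every fact table or operational schema associated with that attribute. The sketch below illustrates the idea; the attribute-to-table catalog and the helper are invented for illustration, not Data Analyzer internals.

```python
# Hedged sketch of the "All Schemas" option: one attribute-level restriction
# applies to every table associated with that attribute. Catalog is invented.

attribute_to_tables = {
    "Region": ["SALES", "SALARY"],  # Region is associated with both tables
}

def restrictions_for(attribute, predicate, all_schemas=True, table=None):
    """Map a single restriction onto the tables it governs."""
    tables = attribute_to_tables[attribute] if all_schemas else [table]
    return {t: predicate for t in tables}

print(restrictions_for("Region", "REGION NOT IN ('Europe')"))
# {'SALES': "REGION NOT IN ('Europe')", 'SALARY': "REGION NOT IN ('Europe')"}
```

With `all_schemas=False` and an explicit `table`, the same restriction applies to a single fact table only, matching the single-schema case in step 3.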

CHAPTER 4

Managing Time-Based Schedules

This chapter includes the following topics:
♦ Overview, 21
♦ Creating a Time-Based Schedule, 22
♦ Managing Time-Based Schedules, 23
♦ Managing Reports in a Time-Based Schedule, 25
♦ Using the Calendar, 28
♦ Defining a Business Day, 29
♦ Defining a Holiday, 29
♦ Monitoring a Schedule, 29

Overview

A time-based schedule updates reports based on a configured schedule. When Data Analyzer runs a time-based schedule, it runs each report attached to the schedule. You can attach any cached report to a time-based schedule.

To use a time-based schedule, complete the following steps:
1. Create a time-based schedule. Configure the start time, date, and repeating option of the schedule when you create or edit a time-based schedule.
2. Attach reports to the time-based schedule as tasks. Attach a report to the time-based schedule when you create or edit the report. Attach imported cached reports to tasks from the time-based schedule.

You can configure the following types of time-based schedules:
♦ Single-event schedule. Updates report data only on the configured date. Single-event schedules run tasks once. Create a single-event schedule for a one-time update of the report data. For example, if you know that the database administrator will update the data warehouse on December 1, but do not know when other updates occur, create a single-event schedule for December 2.
♦ Recurring schedule. Updates report data on a regular cycle, such as once a week or on the first Monday of each month. Recurring schedules can repeat every minute, or hourly, daily, weekly, monthly, or quarterly. Create a recurring schedule to update report data regularly. You might use a recurring schedule to run reports after a regularly scheduled update of the data warehouse. For example, if you know that the data warehouse is updated the first Friday of every month, create a time-based schedule to update reports on the second Monday of every month.

You can set up business days and holidays for the Data Analyzer Calendar. If a scheduled run falls on a non-business day, a weekend or configured holiday, Data Analyzer waits until the next scheduled run to run attached reports.

After you attach reports to a time-based schedule, you can create indicators and alerts for the reports. Monitor existing schedules with the Calendar or the Schedule Monitor. The Calendar provides daily, weekly, or monthly views of all the time-based schedules in the repository. The Schedule Monitor provides a list of the schedules currently running reports.

If you want to update reports when a PowerCenter session or batch completes, you can create an event-based schedule.

Creating a Time-Based Schedule

You can create single-event or recurring schedules to run reports as tasks.

To create a time-based schedule:
1. Click Administration > Scheduling > Time-Based Schedules > Add.
   The Properties page appears.
2. Enter the following information:
   ♦ Name. Name of the time-based schedule. The name can include any character except a space, tab, newline character, and the following special characters: \/:*?"<>|'&[]
   ♦ Description. Description of the time-based schedule.
   ♦ Business Day Only. When selected, the schedule runs reports on business days only.
   ♦ Start Date. Date the schedule initiates. Default is the current date on Data Analyzer.
   ♦ Start Time. Time the schedule initiates. Default is 12:00 p.m. (noon). Select minutes in increments of five.
3. Select a repeat option. For a single-event schedule, select Do Not Repeat. For a repeating schedule, select one of the following repeat options:
   ♦ Repeat Every (Number) (Minute/Hour/Day/Week/Month/Year). Repeats every specified number of units of time. You can select Minute, Hour, Day, Week, Month, or Year as a unit of time. Use this setting to schedule recurring updates of report data.
   ♦ Repeat Every (Monday/Tuesday/Wednesday/Thursday/Friday/Saturday/Sunday). Repeats each week on the specified day(s). Use this setting to schedule weekly updates of report data.
   ♦ Repeat the (First/Second/Third/Fourth) (Monday/Tuesday/Wednesday/Thursday/Friday/Saturday/Sunday) of every (Month/Year). Repeats on the specified day of the week of every month or year. Use this setting to schedule monthly or yearly updates of report data.
   ♦ Repeat on (Number) of days from the (Beginning of/End) of the (First/Second/Third Month) of each Quarter. Repeats every specified number of days from the beginning or end of the specified month. Use this setting to schedule quarterly updates of report data.
4. Select the repeat condition:
   ♦ Always. Schedule repeats until disabled or deleted from the repository. Default is Always.
   ♦ Until (Month) (Day) (Year). Schedule repeats until the date you specify. Default is the current date on Data Analyzer.
5. Click OK.

Managing Time-Based Schedules

After you create a time-based schedule, you can attach reports to the schedule. You can attach any cached report to a time-based schedule. When Data Analyzer runs a time-based schedule, it runs each attached report.

You can complete the following tasks for a time-based schedule:
♦ Edit a schedule.
♦ Edit schedule access permissions.
♦ View or clear the schedule history.
♦ Start the schedule immediately.
♦ Stop the schedule immediately.
♦ Disable the schedule.

Editing a Time-Based Schedule

After you create a time-based schedule, you can edit schedule properties. When you update the schedule of a time-based schedule, the change impacts all attached reports and alerts.

To edit a time-based schedule:
1. Click Administration > Scheduling > Time-Based Schedules.
   The Time-Based Schedules page appears.
2. Click the name of the schedule you want to edit.
   The Properties page appears.
3. Edit schedule properties if necessary, and then click OK.
   You can also remove reports or change the order in which they run. Click Tasks to remove reports from the schedule.
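A repeat option such as "Repeat the second Monday of every month" can be sketched with the Python standard library. This only illustrates the documented rule; the function name is invented and this is not Data Analyzer's actual scheduler.

```python
# Sketch of the "Repeat the (n-th) (weekday) of every month" repeat option,
# using only the standard library. Illustrative, not Data Analyzer code.
import calendar
import datetime

def nth_weekday(year, month, weekday, n):
    """Return the date of the n-th given weekday (0=Monday) in a month."""
    weeks = calendar.monthcalendar(year, month)
    days = [week[weekday] for week in weeks if week[weekday] != 0]
    return datetime.date(year, month, days[n - 1])

# Second Monday of June 2008, the month this guide was published:
print(nth_weekday(2008, 6, calendar.MONDAY, 2))  # 2008-06-09
```

This matches the overview example: with the data warehouse updated the first Friday of each month (June 6, 2008), a schedule on the second Monday (June 9, 2008) always runs after the update.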

Editing Access Permissions for a Time-Based Schedule

Access permissions determine which users and groups can attach reports to the schedule, modify the schedule, or change access permissions to the schedule. By default, every user with the appropriate privileges can edit a schedule. You can change the access permissions for a schedule to protect the security of the schedule. To set access permissions, click the Permissions button on the Properties page of the schedule.

Viewing or Clearing a Time-Based Schedule History

You can view the history of a time-based schedule. Each time-based schedule has a history containing the following information:
♦ Start time. The date and time Data Analyzer started running the schedule.
♦ End time. The date and time Data Analyzer stops running the schedule.
♦ Status. Lists whether the schedule or task completed successfully or the number of errors that occurred.

When you view schedule histories, you can determine how long all tasks attached to the schedule take to update, the number of successfully completed schedule runs, or the number of recurring errors during the run. You can also clear the history of a schedule. You might clear a schedule history at the end of a quarter or to save space in the repository.

To view or clear the history of a time-based schedule:
1. Click Administration > Scheduling > Time-Based Schedules.
   The Time-Based Schedules page appears.
2. Select the schedule you want to view.
   The Properties page appears.
3. Click History.
   The Schedule History page appears. The schedule name appears in parentheses.
4. To clear the history of the schedule, click Clear.
5. Click OK.

Starting a Time-Based Schedule Immediately

You can start a time-based schedule immediately instead of waiting for its next scheduled run. You might start a time-based schedule immediately to test attached reports. You might also start the schedule if errors occurred during the previously scheduled run.

To start a time-based schedule immediately:
1. Click Administration > Scheduling > Time-Based Schedules.
2. For the time-based schedule you want to start, click Run Now.
   Data Analyzer starts the schedule and runs the attached reports.

Stopping a Time-Based Schedule Immediately

You can stop a time-based schedule immediately, aborting all attached reports. You can stop a schedule immediately when you need to restart the server. For more information, see "Stopping a Schedule" on page 30.

Disabling a Time-Based Schedule

You can disable a time-based schedule when you do not want it to run. You might disable a schedule when it has no attached reports or when the update of source data is temporarily interrupted. When you want the schedule to resume, you can enable the schedule.

To disable a time-based schedule:
1. Click Administration > Scheduling > Time-Based Schedules.
2. Click the Enabled button for the schedule you want to disable.
   The status of the schedule changes to Disabled. When you want to enable the schedule again, click the Disabled button.

Removing a Time-Based Schedule

You can remove time-based schedules from the repository. Before you remove any schedule from the repository, reassign all tasks attached to the schedule.

To remove a time-based schedule:
1. Click Administration > Scheduling > Time-Based Schedules.
2. Click the Remove button for the schedule you want to delete.
3. Click OK.

Managing Reports in a Time-Based Schedule

After you create a time-based schedule, you can attach reports to the schedule. You can attach any cached report to a time-based schedule. When Data Analyzer runs a time-based schedule, it runs each attached report.

You can complete the following schedule-related tasks for a report:
♦ Attach a report to a time-based schedule.
♦ View a list of attached reports.
♦ View task properties.
♦ View or clear a task history.
♦ Remove a report from a time-based schedule.

Attaching Reports to a Time-Based Schedule

You can attach a report to a time-based schedule using one of the following methods:
♦ Save a new report as cached. Select the schedule option when you save a new report to the repository.
♦ Save an existing report as a cached report. Select Save As on an existing report, and change the scheduling options.
♦ Add an imported report to a schedule. You must attach any cached reports that you import to a schedule.

You can attach multiple reports to a single schedule. If you attach multiple reports to a schedule, Data Analyzer runs the reports concurrently. To make troubleshooting easier, attach a small number of reports to a schedule. Set up multiple schedules to run a large number of reports.

You can attach reports that have alerts on a predefined schedule to a time-based schedule, but not to an event-based schedule. If you attach a report that has alerts on a predefined schedule to a time-based schedule, the report schedule must update more often than the alert schedule updates. When a user selects broadcast or alert rules for a time-based schedule, Data Analyzer attaches the rules to the schedule but does not display the rules on the list of tasks for the schedule. Although the rules do not display on the Tasks page for the schedule, Data Analyzer applies the rules when it runs the report on the time-based schedule.

Attaching Imported Cached Reports to a Time-Based Schedule

When you import cached reports to the repository, the following message appears:

Some of the imported reports must be put on schedules. Please assign the reports to schedules immediately.

You can attach imported cached reports to time-based or event-based schedules. You can attach each imported report individually or attach multiple imported reports from a list to a single schedule. Select a schedule and use the add task option to attach multiple imported cached reports to an existing schedule.

To attach multiple reports from the list, you must attach the reports during the same Data Analyzer session. If the session expires or you log out before attaching multiple reports from the import list, you cannot attach multiple reports. You must attach the imported reports individually.

To attach an imported cached report to a time-based schedule:
1. Click Administration > Scheduling > Time-Based Schedules.
2. Click the time-based schedule that you want to use.
3. Click Tasks.
   The list of the tasks attached to the schedule appears.
4. Click Add.
   The Imported Scheduled Reports window appears. The Add button appears only when you have unscheduled imported reports in the repository.
5. Select the imported reports that you want to add to the schedule. If you want to add all available imported reports as tasks for the schedule, select the All check box next to Select Reports.
6. Click Apply.
   The report appears as an item on the task list.

Viewing Attached Reports

All reports that are attached to a time-based schedule display as a list of tasks for the schedule. You can view these tasks on the Tasks page for the schedule.

To view a report attached to a time-based schedule:
1. Click Administration > Scheduling > Time-Based Schedules.
   The Time-Based Schedules page appears.
2. Click the schedule you want to view.
   The Properties page appears.
3. Click Tasks.
   All attached reports display.

Viewing Task Properties

You can view the task properties for any report attached to a time-based schedule. You cannot modify the task properties.

To view task properties:
1. Click Administration > Scheduling > Time-Based Schedules.
2. Click the name of the schedule that runs the report.
3. Click Tasks.
4. Click the name of the report.
   The Task Properties page appears.
5. Click OK to close the Task Properties page.

Viewing or Clearing a Task History

You can view a task history for reports attached to time-based schedules. You can view a task history to compare the number of successful runs on different schedules. View report histories to determine how long the report takes to update, the number of successfully completed runs, or recurring errors when running the report. You can also clear the history of a report. You can clear a task history at the end of a quarter or to save space in the repository.

To view or clear a task history:
1. Click Administration > Scheduling > Time-Based Schedules.
2. Click the name of the schedule that runs the report.
3. Click Tasks.
4. Click the name of the report.
   The Task Properties page appears.
5. Click History.
6. To clear the task history, click Clear.
   To return to Task Properties, click OK.
7. Click OK to close the Task Properties page.

Removing a Report from a Time-Based Schedule

You can remove a report from a time-based schedule. Remove a report when you plan to disable the schedule or when the report requires a new update strategy. When you remove a task, you must attach it to another schedule to ensure it updates in a timely manner.

To remove a report from a time-based schedule:
1. Click Administration > Scheduling > Time-Based Schedules. The Time-Based Schedules page appears.
2. Click the name of the schedule you want to edit. The Properties page appears.
3. Click Tasks.
4. Select the check box for the report you want to remove. If you want to remove all attached reports, select the check box in the title bar next to Name.
5. Click Remove, and then click OK.

Using the Calendar
Use the Calendar in the Scheduling section to view all enabled time-based schedules in the repository. The Calendar lists schedules by day, week, or month. The default Calendar display is a view of the current day. The Calendar recognizes leap years.

To view the Calendar:
1. Click Administration > Scheduling > Calendar. The Calendar appears.
2. Click Weekly or Monthly to change the view of the Calendar.

Navigating the Calendar
The Calendar provides daily, weekly, and monthly views. You can navigate from one view to another.

Navigating the Daily View
The Calendar opens to the Daily view by default. The Daily view displays the current day and organizes the time-based schedules for the current day by hour. Use the left and right arrows to navigate to the previous and next day, respectively. To view a different date, select a different date or month in the calendar.

Navigating the Weekly View
The Weekly view opens to the current week by default. The Weekly view displays all time-based schedules for the week. Use the left and right arrows to navigate to the previous and following weeks, respectively. To access a Daily view, click a date.

Navigating the Monthly View
The Monthly view opens to the current month by default. The Monthly view displays all time-based schedules for the month. Use the left and right arrows to navigate to the previous and following months, respectively. To access a Daily view, click the specific date. To access a Weekly view, click a week.

Defining a Business Day
You can define business days for the Data Analyzer Calendar. Business days are the days Data Analyzer treats as regular working days. The default business days are Monday through Friday. You can change these business days to fit your work schedule. After you define business days, you can create time-based schedules that run only on those days. The business day setting overrides all other recurring schedule settings you create. If the schedule falls on a nonbusiness day, like a weekend or holiday, Data Analyzer waits until the next scheduled day to run the schedule.

For example, you create a schedule to run reports on the first of the month and configure the schedule to run only on business days. If March 1 falls on a Sunday, Data Analyzer runs the reports on the next scheduled day, April 1.

To define business days:
1. Click Administration > Scheduling > Business Days. The Business Days page appears. By default, the configured business days are Monday through Friday.
2. Select the days you want to define as business days. Clear the days you do not want defined as business days.
3. Click Apply.

Defining a Holiday
You can define holidays for the Data Analyzer Calendar. By default, there are no configured holidays. Data Analyzer treats holidays as non-business days. Time-based schedules configured to run reports only on business days do not run on holidays. When a schedule falls on a holiday, Data Analyzer postpones the schedule to run attached reports on the next scheduled day. Time-based schedules that are not configured to run only on business days still run on configured holidays. View all configured holidays on the Holidays page.

To define a holiday:
1. Click Administration > Scheduling > Holidays. The Holidays page appears.
2. Click Add. The Holiday Properties page appears.
3. Enter the name, date, and a brief description of the holiday.
4. Click OK.

Monitoring a Schedule
The Schedule Monitor provides a list of all schedules that are currently running in the repository. You might check the Schedule Monitor before you restart Data Analyzer to make sure no schedules are running. You might also use the Schedule Monitor to verify whether Data Analyzer runs reports at the scheduled time.

To monitor a schedule, click Administration > Scheduling > Schedule Monitoring. Data Analyzer displays schedules that are currently running.

Stopping a Schedule
You can stop a running schedule and all attached reports through the Schedule Monitor. You might stop a schedule when you need to restart the server or when a problem arises with source data.

To stop a running schedule:
1. Click Administration > Scheduling > Schedule Monitoring. The Schedule Monitor lists all currently running schedules.
2. Click Remove to stop a running schedule.
3. Click OK.

CHAPTER 5

Managing Event-Based Schedules

This chapter includes the following topics:
♦ Overview, 31
♦ Updating Reports When a PowerCenter Session Completes, 31
♦ Managing Event-Based Schedules, 33
♦ Managing Reports in an Event-Based Schedule, 35

Overview
PowerCenter Data Analyzer provides event-based schedules and the PowerCenter Integration utility so you can update reports in Data Analyzer based on the completion of PowerCenter sessions. When you create a Reporting Service in the PowerCenter Administration Console, PowerCenter installs the PowerCenter Integration utility. If the PowerCenter Integration utility is set up correctly, Data Analyzer runs each report attached to the event-based schedule when a PowerCenter session completes. You cannot use the PowerCenter Integration utility with a time-based schedule.
You can create indicators and alerts for the reports in an event-based schedule. You can monitor event-based schedules with the Schedule Monitor. The Schedule Monitor provides a list of the schedules currently running reports.

Updating Reports When a PowerCenter Session Completes
To update reports in Data Analyzer when a session completes in PowerCenter, complete the following steps:
1. Create an event-based schedule and attach cached reports to the schedule. For more information, see “Step 1. Create an Event-Based Schedule” on page 32.
2. Configure a PowerCenter session to call the PowerCenter Integration utility as a post-session command and pass the event-based schedule name as a parameter. For more information, see “Step 2. Use the PowerCenter Integration Utility in PowerCenter” on page 33.

PowerCenter installs a separate PowerCenter Integration utility for every Reporting Service that you create. You can find the PowerCenter Integration utility in the following folder:
<PCInstallationfolder>\server\tomcat\jboss\notifyias-<Reporting Service Name>
PowerCenter suffixes the Reporting Service name to the notifyias folder. For example, if you create a Reporting Service and call it DA_Test, the notifyias folder would be notifyias-DA_Test.

The notifyias.properties file contains information about the Reporting Service URL and the schedule queue name. When you create a Reporting Service, PowerCenter sets the properties in the notifyias.properties file to point to the correct instance of the Reporting Service. The PowerCenter Integration utility uses the settings in the notifyias.properties file to update reports in Data Analyzer.

Run the PowerCenter Integration utility to update reports in Data Analyzer when a session completes in PowerCenter. Before you run the PowerCenter Integration utility, complete the following steps:
1. Set the JAVA_HOME environment variable to the location of the JVM.
2. Open the notifyias file in a text editor:
UNIX: notifyias.sh
Windows: notifyias.bat
Back up the notifyias file before you modify it.
3. Open the notifyias.properties file in the notifyias-<Reporting Service Name> folder and set the logfile.location property to the location and the name of the PowerCenter Integration utility log file. The PowerCenter Integration utility creates a log file when it runs after the PowerCenter session completes. The logfile.location property determines the location and the name of the log file.

Step 1. Create an Event-Based Schedule
To run reports in Data Analyzer after a session completes in PowerCenter, create an event-based schedule in Data Analyzer and attach the reports that you want to run after the PowerCenter session completes.

Creating an Event-Based Schedule
When you create an event-based schedule, you need to provide a name and description of the schedule. You do not need to provide information about the PowerCenter session you want to use.

To create an event-based schedule:
1. Click Administration > Scheduling > Event-Based Schedules. The Event-Based Schedules page appears.
2. Click Add. The Add an Event-Based Schedule page appears.
3. Enter a name and description for the schedule.
4. Click OK.

Attaching Reports to an Event-Based Schedule
After you create the event-based schedule, you can attach it to a cached report when you save the report. You can attach a report to an event-based schedule with one of the following methods:
♦ Save a new report as a cached report. Select the cached report option and a specific schedule when you save a new report to the repository.
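As an illustration, a logfile.location entry in notifyias.properties might look like the following sketch. The path, install location, and Reporting Service name (DA_Test) are hypothetical; logfile.location is the only property documented above, and the remaining properties (Reporting Service URL, schedule queue name) are set by PowerCenter and should be left as installed:

```properties
# Hypothetical log file location for the PowerCenter Integration utility.
# Backslashes in Windows paths must be escaped in a .properties file.
logfile.location=C:\\Informatica\\PowerCenter\\server\\tomcat\\jboss\\notifyias-DA_Test\\notifyias.log
```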

♦ Save an existing report as a cached report. Select Save As on a report, then change the scheduling options.
You can attach multiple reports to a single schedule. If you attach multiple reports to a schedule, Data Analyzer runs the reports concurrently. To make troubleshooting easier, attach a small number of reports to a schedule. Set up multiple schedules to run a large number of reports.

Step 2. Use the PowerCenter Integration Utility in PowerCenter
Before you can use the PowerCenter Integration utility in a PowerCenter post-session command, create an event-based schedule as outlined in the previous step. In the PowerCenter Workflow Manager, you must configure the PowerCenter session to call the PowerCenter Integration utility as a post-session command, and specify the name of the event-based schedule that you want to associate with the PowerCenter session.
You can set up the post-session command to send Data Analyzer notification when the session completes successfully. Data Analyzer then connects to the PowerCenter data warehouse to retrieve new data to update reports.

Use the following post-session command syntax for PowerCenter installed on Windows:
notifyias.bat Event-BasedScheduleName
Use the following shell command syntax for PowerCenter installed on UNIX:
notifyias.sh Event-BasedScheduleName
Event-BasedScheduleName is the name of the Data Analyzer event-based schedule that contains the tasks you want to run when the PowerCenter session completes.

When you use the PowerCenter Integration utility in the post-session command, you need to navigate to the correct notifyias-<Reporting Service name> folder. If the system path does not include the path of the PowerCenter Integration utility, you need to prefix the utility file name with the file path.
You can also run the PowerCenter Integration utility as a command task in a PowerCenter workflow. If you want to run the PowerCenter Integration utility after all other tasks in a workflow complete, you can run it as the last task in the workflow. For more information about configuring post-session commands, see the PowerCenter Workflow Administration Guide.

Managing Event-Based Schedules
You can perform the following tasks to manage an event-based schedule:
♦ Edit a schedule.
♦ Edit schedule access permissions.
♦ View or clear the schedule history.
♦ Start a schedule immediately.
♦ Stop a schedule immediately.
♦ Disable a schedule.
♦ Remove a schedule.

Editing an Event-Based Schedule
After you create an event-based schedule, you can edit its name and description.
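For example, when the utility folder is not on the system path, the post-session command must carry the full path to the script. The following is a minimal sketch of composing that command on UNIX; the install location, the Reporting Service name (DA_Test), and the schedule name (SalesLoadComplete) are hypothetical and must be replaced with your own values:

```shell
# Hypothetical install location and Reporting Service name -- adjust to yours.
PC_HOME="/opt/Informatica/PowerCenter"
NOTIFY_DIR="$PC_HOME/server/tomcat/jboss/notifyias-DA_Test"

# The event-based schedule name is the only argument the utility takes.
POST_SESSION_CMD="$NOTIFY_DIR/notifyias.sh SalesLoadComplete"
echo "$POST_SESSION_CMD"
```

The resulting string is what you would paste as the post-session success command for the session in the Workflow Manager.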

To edit an event-based schedule:
1. Click Administration > Scheduling > Event-Based Schedules. The Event-Based Schedules page appears.
2. Click the name of the schedule you want to edit. The Edit an Event-Based Schedule page appears.
3. Edit the name or description of the event-based schedule. If you want to view the reports assigned as tasks to the schedule, click Tasks. If you want to view the history of the schedule, click History.
4. Click OK.

Editing Access Permissions for an Event-Based Schedule
Access permissions determine which users and groups can attach reports to the schedule, modify the schedule, or change access permission for the schedule. By default, the system administrator and users with the Set Up Schedules and Tasks privilege and Write permission on the schedule can edit an event-based schedule. To secure a schedule, you can change the access permissions for the schedule. To edit access permissions, click the Permissions button.

Viewing or Clearing an Event-Based Schedule History
You can view the history of an event-based schedule to see the following information:
♦ Start time. The date and time Data Analyzer started the schedule.
♦ End time. The date and time the schedule completes.
♦ Status. Lists the successful completion of the schedule or the number of errors that have occurred.
View schedule histories to determine how long attached reports take to complete, the number of successfully completed runs of the schedule, or the number of recurring errors. You can also clear the history of an event-based schedule. You might clear a schedule history at the end of a quarter or to save space in the repository.

To view an event-based schedule history:
1. Click Administration > Scheduling > Event-Based Schedules. The Event-Based Schedules page appears.
2. Click the schedule you want to view.
3. Click History. The Schedule History page appears with the schedule name in parentheses.
4. To clear the schedule history, click Clear.
5. Click OK.

Starting an Event-Based Schedule Immediately
You can start an event-based schedule immediately instead of waiting for the related PowerCenter session to complete. You might start the schedule if errors occurred during the last run of the schedule. You might also start an event-based schedule immediately to test attached reports and report alerts.

To start an event-based schedule immediately:
1. Click Administration > Scheduling > Event-Based Schedules. The Event-Based Schedules page appears.
2. For the event-based schedule you want to start, click Run Now. Data Analyzer starts the schedule and runs the attached reports.

Stopping an Event-Based Schedule Immediately
You can stop an event-based schedule immediately, which stops all attached reports. You can stop a schedule immediately when you need to restart the server. For more information, see “Stopping a Schedule” on page 30.

Disabling an Event-Based Schedule
You can disable an event-based schedule when you do not want it to run. You might disable a schedule when it has no attached reports or when the update of source data has been interrupted. When you want the schedule to resume, you can enable the schedule.

To disable an event-based schedule:
1. Click Administration > Scheduling > Event-Based Schedules. The Event-Based Schedules page appears.
2. Click the Enabled button for the schedule you want to disable. The Status of the schedule changes to Disabled. To enable the schedule again, click Disabled.

Removing an Event-Based Schedule
You can remove event-based schedules from the repository. You might want to remove an event-based schedule when the PowerCenter session is no longer valid. Before removing a schedule from the repository, reassign all attached reports to another schedule.

To remove an event-based schedule:
1. Click Administration > Scheduling > Event-Based Schedules. The Event-Based Schedules page appears.
2. Click the Remove button for the schedule you want to delete.
3. Click OK.

Managing Reports in an Event-Based Schedule
After you create an event-based schedule, you can attach any cached reports to the schedule. When Data Analyzer runs an event-based schedule, it runs each attached report.
You can perform the following tasks to manage reports in an event-based schedule:
♦ View a list of attached reports.
♦ View task properties.
♦ View or clear a report history.
♦ Remove a report from an event-based schedule.
♦ Attach imported cached reports to a schedule.

Viewing Attached Reports
You can view all reports attached to an event-based schedule.

To view tasks attached to an event-based schedule:
1. Click Administration > Scheduling > Event-Based Schedules. The Event-Based Schedules page appears.
2. Click the name of the schedule you want to edit. The schedule properties display.
3. Click Tasks. Data Analyzer displays all attached reports.

Viewing Task Properties
You can view the properties of any report attached to an event-based schedule.

To view task properties:
1. Click Administration > Scheduling > Event-Based Schedules.
2. Click the name of the schedule that runs the report.
3. Click Tasks.
4. Click the name of the report. The Task Properties page appears.
5. Click OK.

Viewing or Clearing a Report History
You can view a report history for the reports attached to an event-based schedule. You might want to view a report history to compare the number of successful runs on different schedules, the number of successfully completed runs, or recurring errors when running the report. View report histories to determine how long a report takes to update. You can also clear the report history. You might clear a history at the end of a quarter or to make space in the repository.

To view or clear a report history:
1. Click Administration > Scheduling > Event-Based Schedules.
2. Click the name of the schedule that runs the report.
3. Click Tasks.
4. Click the name of the report. The Task Properties page appears.
5. Click History. Data Analyzer displays the report history.
6. To clear the history, click Clear.
7. To return to Task Properties, click OK.

Removing a Report from an Event-Based Schedule
You can remove a report from an event-based schedule. You might want to remove a report when you plan to disable the schedule or when the report requires a new update strategy. When you remove a cached report, attach it to another schedule to ensure it updates in a timely manner.

To remove a report from an event-based schedule:
1. Click Administration > Scheduling > Event-Based Schedules.
2. Click the name of the schedule you want to edit and then click Tasks.
3. Select the check box for the report you want to remove. If you want to remove all attached reports, select the check box in the title bar next to Name.
4. Click Remove, and then click OK.

Attaching Imported Cached Reports to an Event-Based Schedule
When you import cached reports to the repository, Data Analyzer displays the following message:
Some of the imported reports must be put on schedules. Please assign the reports to schedules immediately.
You must attach each imported cached report to a schedule. You can attach imported cached reports to time-based or event-based schedules. You can attach imported reports individually or attach multiple imported reports from a list to a single schedule. To attach multiple reports from the list, you must attach them during the same Data Analyzer session. If the session expires or you log out before attaching the reports from the import list, you cannot attach multiple reports. You must attach the imported reports individually.

To attach an imported cached report to an event-based schedule:
1. Click Administration > Scheduling > Event-Based Schedules.
2. Click the event-based schedule that you want to use, and then click Tasks. The list of the tasks assigned to the schedule appears.
3. Click Add. The Imported Scheduled Reports window appears. The Add button appears only when you have unscheduled imported reports in the repository; click it to add the reports to existing schedules.

4. Select the reports that you want to add to the schedule. If you want to add all available imported reports to the schedule, click the All check box.
5. Click Apply. The report appears as an item on the task list.

CHAPTER 6

Exporting Objects from the Repository

This chapter includes the following topics:
♦ Overview, 39
♦ Exporting a Schema, 40
♦ Exporting a Time Dimension, 42
♦ Exporting a Report, 42
♦ Exporting a Global Variable, 44
♦ Exporting a Dashboard, 44
♦ Exporting a Security Profile, 45
♦ Exporting a Schedule, 46
♦ Troubleshooting, 47

Overview
You can export repository objects to XML files and import repository objects from XML files. You might want to export objects to archive the repository. You might also want to export and import objects to move Data Analyzer objects from development to production.
You can export the following repository objects:
♦ Schemas
♦ Time dimensions
♦ Reports
♦ Global variables
♦ Dashboards
♦ Security profiles
♦ Schedules
♦ Users
♦ Groups
♦ Roles

When you export the repository objects, Data Analyzer creates an XML file that contains information about the exported objects. Use this file to import the repository objects into a Data Analyzer repository. You can also export repository objects using the ImportExport command line utility. For more information, see “Using the Import Export Utility” on page 65.
You can view the XML files with any text editor. However, do not modify the XML file created when you export objects. Any change might invalidate the XML file and prevent you from using it to import objects into a Data Analyzer repository.
Exporting and importing repository objects uses considerable system resources. If you perform these tasks while users are logged in to Data Analyzer, users might experience slow response or timeout errors. Schedule exporting and importing tasks so that you do not disrupt Data Analyzer users.
When you save the XML file on a Windows machine, verify that you have enough space available in the Windows temp directory, usually in the C: drive, for the temporary space typically required when a file is saved.

Exporting a Schema
You can export analytic and operational schemas. When you export a schema from the Data Analyzer repository, you can select individual metrics within a schema to export or you can select a folder that contains metrics. You can also choose whether to export only metric definitions or to export all metrics, tables, and other schema objects associated with the metric.
You can export the following metrics and schemas:
♦ Operational schemas or metrics in operational schemas
♦ Analytic schemas or metrics in analytic schemas
♦ Hierarchical schemas or metrics in hierarchical schemas
♦ Calculated metrics

Exporting Metric Definitions Only
When you export only metric definitions, Data Analyzer exports the metrics you select. It does not export the definition of the table or schema that contains the metrics or any other schema object associated with the metric or its table or schema.

Exporting Metrics and Associated Schema Objects
When Data Analyzer exports a metric or schema and the associated objects, it exports different objects based on the type of schema you select.

Exporting Operational Schemas
When Data Analyzer exports a metric from an operational schema, it also exports all metrics, attributes, and tables in the operational schema and the join expressions for the operational schema tables.

Exporting Analytic Schemas
When exporting a metric from an analytic schema, Data Analyzer exports the definitions of the following schema objects associated with the metric:
♦ Fact tables associated with the exported metric. Data Analyzer also exports all fact tables associated with any of the exported metrics.
− When exporting a calculated metric, Data Analyzer exports the fact table associated with each metric.
− When exporting a fact table associated with a time dimension, Data Analyzer does not export the time dimension. You can export the time dimensions separately.
♦ Dimension keys in the exported fact table.
♦ Dimension tables associated with the exported fact tables.
♦ Attributes in the exported dimension tables.
♦ Drill paths associated with any of the attributes in the dimension tables.
♦ Aggregate fact tables associated with the exported fact tables. Data Analyzer exports all schema objects associated with the metrics in these fact tables.
♦ Aggregate, template, and snowflake dimension tables associated with the dimension tables.
If you export a template dimension table associated with the exported metric, Data Analyzer exports only one definition of the template dimension. You can also export template dimensions separately. If you export only a template dimension, Data Analyzer exports only the template dimension and its attributes.

Exporting Hierarchical Schemas
When Data Analyzer exports a metric from a hierarchical schema, it also exports all metrics and attributes in the hierarchical schema.

Exporting Calculated Metrics
Calculated metrics are derived from two or more base metrics from analytic, operational, or hierarchical schemas. If you export a calculated metric, Data Analyzer also exports all associated metrics that are used to calculate the calculated metric. For example, you have the following metrics:
♦ Base metric 1 (BaseMetric1) and base metric 2 (BaseMetric2) are metrics from fact tables in an analytic schema.
♦ Base metric 3 (BaseMetric3) is a metric from an operational schema (OpSch1).
♦ Base metric 4 (BaseMetric4) is a metric from a different operational schema (OpSch2).
If you export a calculated metric, which is calculated from BaseMetric1 and BaseMetric2, Data Analyzer exports the metrics, including the calculated metric and those used to calculate it, and the fact table associated with each metric.
If you export a calculated metric, which is calculated from BaseMetric1 and BaseMetric3, Data Analyzer exports BaseMetric1, its associated fact table, and the schema objects associated with the metric in that fact table. In addition, Data Analyzer exports BaseMetric3 and its entire associated operational schema.
If you export a calculated metric, which is calculated from BaseMetric3 and BaseMetric4, Data Analyzer exports BaseMetric3 and its entire operational schema. In addition, Data Analyzer exports BaseMetric4 and its entire operational schema.

To export schema objects:
1. Click Administration > XML Export/Import > Export Schemas. The Export Schemas page displays all the folders and metrics in the Metrics folder of the Schema Directory.
If you define a new object in the repository or if you create a new folder or move objects in the Schema Directory, the changes may not immediately display in the Schema Directory export list. Click Refresh Schema to display the latest list of folders and metrics in the Schema Directory.
2. Select the folders, metrics, or template dimensions that you want to export. At the top of the Metrics section, you can select Metrics to select all folders and metrics in the list. You can select Template Dimensions to select all template dimensions in the list or select a metrics folder to export all metrics within the folder. You can also select individual metrics in different folders.
3. Select the type of information you want to export. To export only metric definitions, select Export Metric Definitions Only. To export the metric definitions and associated tables and attributes, select Export the Metrics with the Associated Schema Tables and Attributes.

4. Click Export as XML. The File Download window appears.
5. Click Save. The Save As window appears.
6. Navigate to the directory where you want to save the file.
7. Enter a name for the XML file and click Save. Data Analyzer exports the schema to an XML file. If an XML file with the same name already exists in the directory, Data Analyzer prompts you to overwrite the file or rename the new file.

Exporting a Time Dimension
You can export time dimension tables to an XML file. Time dimension tables contain date- and time-related attributes that describe the occurrence of a metric.

To export a time dimension table:
1. Click Administration > XML Export/Import > Export Time Dimensions. The Export Time Dimensions page displays the time dimension tables in the repository.
2. Select the time dimension you want to export.
3. Click Export as XML. The File Download window appears.
4. Click Save. The Save As window appears.
5. Navigate to the directory where you want to save the file.
6. Enter a name for the XML file and click Save. Data Analyzer exports the time dimension table to an XML file.

Exporting a Report
You can export reports from public and personal folders. You can export cached and on-demand reports. When exporting cached reports, Data Analyzer exports the report data and the schedule for cached reports. You can export multiple reports at once. When you export a folder, Data Analyzer exports all reports in the folder and its subfolders.
When you export a report, Data Analyzer always exports the following report components:
♦ Report table
♦ Report charts
♦ Filters
♦ Calculations
♦ Custom attributes
♦ All reports in an analytic workflow
♦ All subreports in a composite report
To export an analytic workflow, you need to export only the originating report. When you export the originating report of an analytic workflow, Data Analyzer exports all the workflow reports.
When you export a report that uses global variables, Data Analyzer lists the global variables used in the report. Although the global variables are not exported with the report, you can export them separately.
Data Analyzer also exports the following components associated with reports. You can choose not to export any of these components:
♦ Indicators
♦ Alerts
♦ Highlighting
♦ Permissions
♦ Schedules
♦ Filtersets
By default, Data Analyzer exports all current data for each component, with the following exceptions:
♦ Gauge indicators. Exported public gauge indicators keep their original owner. Exported personal gauge indicators do not keep their original owner. The user who imports the report becomes the owner of the gauge indicator and the gauge indicator becomes personal to that user.
♦ Alerts. Exported personal and public alerts use the state set for all report subscribers as the default alert state.
♦ Highlighting. Exported public highlighting uses the state set for all users as the default highlighting state. Data Analyzer does not export any personal highlighting.

To export a report:
1. Click Administration > XML Export/Import > Export Reports. The Export Report page displays all public and personal folders in the repository that you have permission to access.
If you create, modify, or delete a folder or report, the changes may not immediately display in the report export list. Click Refresh Reports to display the latest list of reports from Public Folders and Personal Folder.
2. Select the folders or reports that you want to export. Select a folder to export all subfolders and reports in the folder.
3. To modify the report components to export, click Export Options. From the list of Export Options, clear each component that you do not want to export to the XML file.
4. Click Export as XML. The File Download window appears.
5. Click Save. The Save As window appears.
6. Navigate to the directory where you want to save the file.
7. Enter a name for the XML file, and then click Save. Data Analyzer exports the definitions of all selected reports.

Exporting a Global Variable
You can export any global variables defined in the repository. When you export multiple global variables, Data Analyzer creates one XML file for the global variables and their default values.

To export a global variable:
1. Click Administration > XML Export/Import > Export Global Variables. The Export Global Variables page appears, listing all the global variables in the repository.
2. Select the global variables that you want to export. Optionally, select Name at the top of the list to select all the global variables in the list.
3. Click Export as XML. The File Download window appears.
4. Click Save. The Save As window appears.
5. Navigate to the directory where you want to save the file.
6. Enter a name for the XML file and click Save. Data Analyzer exports the definitions of all selected global variables.

Exporting a Dashboard
You can export any of the public dashboards defined in the repository. You can export more than one dashboard at a time. When you export a dashboard, Data Analyzer exports the following objects associated with the dashboard:
♦ Reports
♦ Indicators
♦ Shared documents
♦ Dashboard filters
♦ Discussion comments
♦ Feedback
Data Analyzer does not export the following objects associated with the dashboard:
♦ Access permissions
♦ Attributes and metrics in the report
♦ Real-time objects
When you export a dashboard, you cannot select specific components to export. Therefore, the Export Options button is unavailable.

To export a dashboard:
1. Click Administration > XML Export/Import > Export Dashboards. The Export Dashboards page appears, listing all the dashboards in the repository that you can export.
2. Select the dashboards that you want to export. Optionally, select Name at the top of the list to select all the dashboards in the list.

Exporting a Security Profile

Data Analyzer keeps a security profile for each user or group in the repository. A security profile consists of the access permissions and data restrictions that the system administrator sets for a user or group.

When Data Analyzer exports a security profile, it exports access permissions for objects under the Schema Directory, which include folders, metrics, and attributes. Data Analyzer does not export access permissions for filtersets, reports, or shared documents. If a user or group security profile you export does not have access permissions or data restrictions, Data Analyzer does not export any object definitions and displays the following message:

There is no content to be exported.

Data Analyzer allows you to export one security profile at a time.

Exporting a User Security Profile

You can export a security profile for one user at a time.

To export a user security profile:
1. Click Administration > XML Export/Import > Export Security Profile.
   The Export Security Profile page displays a list of all the users in the repository. If there are a large number of users in the repository, Data Analyzer lists one page of users and displays the page numbers at the top. To view a list of users on other pages, click the page number.
2. Select a user whose security profile you want to export.
3. Click Export from Users.
   Data Analyzer exports the security profile definition of the selected user.
4. Click Export as XML.
   The File Download window appears.
5. Click Save.
   The Save As window appears.
6. Navigate to the directory where you want to save the file.
7. Enter a name for the XML file and click Save.

Exporting a Group Security Profile

You can export a security profile for only one group at a time.

To export a group security profile:
1. Click Administration > XML Export/Import > Export Security Profile.
   The Export Security Profile page displays a list of all the groups in the repository. If there are a large number of groups in the repository, Data Analyzer lists one page of groups and displays the page numbers at the top. To view groups on other pages, click the page number.
2. Select the group whose security profile you want to export.
3. Click Export from Groups.
   Data Analyzer exports the security profile definition for the selected group.
4. Click Export as XML.
   The File Download window appears.
5. Click Save.
   The Save As window appears.
6. Navigate to the directory where you want to save the file.
7. Enter a name for the XML file and click Save.

Exporting a Schedule

You can export a time-based or event-based schedule to an XML file. Data Analyzer runs a report with a time-based schedule on a configured schedule. Data Analyzer runs a report with an event-based schedule when a PowerCenter session completes. When you export a schedule, Data Analyzer does not export the history of the schedule.

To export a schedule:
1. Click Administration > XML Export/Import > Export Schedules.
   The Export Schedules page displays a list of the schedules in the repository.
2. Select the schedule you want to export.
   You can click Names at the top of the list to select all schedules in the list.
3. Click Export as XML.
   The File Download window appears.
4. Click Save.
   The Save As window appears.
5. Navigate to the directory where you want to save the file.
6. Enter a name for the XML file and click Save.
   Data Analyzer exports the definitions of all selected schedules.

Troubleshooting

After I export an object, I double-click the XML file and receive the following error: The system cannot locate the resource specified. Error processing resource 'Principal<DTDVersion>.dtd'.

If you double-click the XML file, the operating system tries to open the file with a web browser. The web browser cannot locate the DTD file Data Analyzer uses for exported objects. Use a text editor to open the XML file instead. However, do not edit the file. Changes might invalidate the file.
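You can also inspect an exported file programmatically instead of opening it in a browser. The following sketch uses Python's standard `xml.etree.ElementTree`, which reads the DOCTYPE declaration but does not try to fetch the external DTD it references, so the missing `.dtd` file does not stop the parse. The element names in the sample are illustrative only; the actual tags in a Data Analyzer export may differ.

```python
import xml.etree.ElementTree as ET

def summarize_export(xml_text):
    """List the root tag and top-level object definitions in an export.

    ElementTree's expat parser accepts the DOCTYPE declaration without
    fetching the external DTD, unlike a web browser.
    """
    root = ET.fromstring(xml_text)
    return root.tag, [child.tag for child in root]

# Hypothetical exported content; the DTD reference points to a file
# that does not exist locally, as in the troubleshooting scenario.
sample = (
    '<?xml version="1.0"?>'
    '<!DOCTYPE Principal SYSTEM "Principal5.0.dtd">'
    '<Principal><User/><Group/></Principal>'
)
print(summarize_export(sample))  # ('Principal', ['User', 'Group'])
```

Because the file is never modified, this read-only inspection does not risk invalidating it for later import.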


CHAPTER 7

Importing Objects to the Repository

This chapter includes the following topics:
♦ Overview, 49
♦ Importing a Schema, 50
♦ Importing a Time Dimension, 53
♦ Importing a Report, 54
♦ Importing a Global Variable, 56
♦ Importing a Dashboard, 57
♦ Importing a Security Profile, 59
♦ Importing a Schedule, 61
♦ Troubleshooting, 62

Overview

You can import objects into the Data Analyzer repository from a valid XML file of exported repository objects. You can import the following repository objects from XML files:
♦ Schemas
♦ Time dimensions
♦ Reports
♦ Global variables
♦ Dashboards
♦ Security profiles
♦ Schedules
♦ Users
♦ Groups
♦ Roles

You can import objects into the same repository or a different repository. When you import a repository object that was exported from a different repository, both repositories must have the same language type and locale settings, or the destination repository must be a superset of the source repository. For more information, see "Localization" on page 7.

Data Analyzer imports objects based on the following constraints:
♦ You can import objects from Data Analyzer 5.0 repositories or later. For more information, see "Importing Objects from a Previous Version" on page 50.
♦ Except for global variables, if you import objects that already exist in the repository, you can choose to overwrite the existing objects.
♦ You cannot overwrite global variables that already exist in the repository.

You can also import repository objects using the ImportExport command line utility. For more information, see the PowerCenter Administrator Guide.

Exporting and importing repository objects use considerable system resources. If you perform these tasks while users are logged in to Data Analyzer, users might experience slow response or timeout errors. Make sure that you schedule exporting and importing tasks so that you do not disrupt Data Analyzer users.

You might want to back up the target repository before you import repository objects into it. You can back up a Data Analyzer repository in the PowerCenter Administration Console. For more information, see the PowerCenter Configuration Guide.

Importing Objects from a Previous Version

You can import objects from Data Analyzer 5.0 or later. When you import objects from a previous version, Data Analyzer upgrades the objects to the current version. For example, when you import a Data Analyzer 5.0 report that uses a custom attribute with groups, Data Analyzer 8.x upgrades the attribute to one with an advanced expression.

XML Validation

When you import objects, you can validate the XML file against the DTD provided by Data Analyzer. Ordinarily, you do not need to validate an XML file that you create by exporting from Data Analyzer. However, if you are not sure of the validity of an XML file, you can validate it against the Data Analyzer DTD file when you start the import process.

Make sure that you do not modify an XML file of exported objects. If you modify the XML file, you might not be able to use it to import objects into a Data Analyzer repository. If you try to import an invalid XML file, Data Analyzer stops the import process and displays the following message:

Error occurred when trying to parse the XML file.

Object Permissions

When you import a repository object, Data Analyzer grants you the same permissions to the object as the owner of the object. Data Analyzer system administrators can access all imported repository objects. If you publish an imported report to everyone, all users in Data Analyzer have read and write access to the report. You can limit access to the report for users who are not system administrators by clearing the Publish to Everyone option. You can then change the access permissions to the report to restrict specific users or groups from accessing it.

Importing a Schema

You can import schemas from an XML file. A valid XML file can contain definitions of the following schema objects:
♦ Tables. The schema tables associated with the exported metrics in the XML file. The file might include the following tables:
   − Fact table associated with the metric
   − Dimension tables associated with the fact table
   − Aggregate tables associated with the dimension and fact tables

   − Snowflake dimensions associated with the dimension tables
   − Template dimensions associated with the dimension tables or exported separately
♦ Schema joins. The relationships between tables associated with the exported metrics in the XML file. The file can include the following relationships:
   − Fact table joined to a dimension table
   − Dimension table joined to a snowflake dimension
♦ Metrics. All metrics exported to the XML file. The file can include calculated metrics and base metrics. If you export the metric definition only, the XML file contains only a list of metric definitions. You can import a metric only if its associated fact table exists in the target repository or the definition of its associated fact table is also in the XML file. If the XML file contains only the metric definition, you must make sure that the fact table for the metric exists in the target repository.
♦ Attributes. The attributes in the fact and dimension tables associated with the exported metrics in the XML file.
♦ Time keys. The time keys associated with exported tables. If you import a schema that contains time keys, you must import or create a time dimension. For more information, see "Importing a Time Dimension" on page 53.
♦ Drill paths. The drill paths associated with exported attributes.
♦ Operational schemas. When you import an operational schema, Data Analyzer imports the following objects:
   − Tables in the operational schema
   − Metrics and attributes for the operational schema tables
   − Schema joins
♦ Hierarchical schemas. When you import a hierarchical schema, Data Analyzer imports the metrics and attributes in the hierarchical schema.

When you export metrics with the associated schema tables and attributes, the XML file contains different types of schema objects. When you import a schema, Data Analyzer displays a list of all the definitions contained in the XML file. The lists of schema tables, metrics, attributes, schema joins, time keys, drill paths, and operational schemas display in separate sections. Data Analyzer then displays a list of all the object definitions in the XML file that already exist in the repository. You can choose to overwrite objects in the repository.

To import a schema:
1. Click Administration > XML Export/Import > Import Schemas.
   The Import Schemas page appears.
2. Click Browse to select an XML file from which to import schemas.
3. Click Open.
   The name and location of the XML file display on the Import Schemas page.
4. To validate the XML file against the DTD, select Validate XML against DTD.
5. Click Import XML.
   Data Analyzer displays a list of all the definitions contained in the XML file.

Table 7-1 shows the information that Data Analyzer displays for schema tables:

Table 7-1. Imported Schema Table Description
Property           Description
Name               Name of the fact or dimension tables associated with the metric to be imported.
Last Modified Date Date when the table was last modified.
Last Modified By   User name of the Data Analyzer user who last modified the table.
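The metric import rule described above, that a metric can be imported only if its fact table already exists in the target repository or its definition is included in the same XML file, can be sketched as a simple check. The data structures below are illustrative, not Data Analyzer's internal representation:

```python
def importable_metrics(metrics, repo_fact_tables, file_fact_tables):
    """Return the metrics that satisfy the import constraint.

    metrics: list of (metric_name, fact_table_name) pairs found in the
    XML file. A metric is importable when its fact table is defined in
    the target repository or its definition is also in the XML file.
    """
    available = set(repo_fact_tables) | set(file_fact_tables)
    return [name for name, fact in metrics if fact in available]

metrics = [("Revenue", "SALES_FACT"), ("Headcount", "HR_FACT")]
print(importable_metrics(metrics, repo_fact_tables={"SALES_FACT"},
                         file_fact_tables=set()))  # ['Revenue']
```

In this example, Revenue can be imported because SALES_FACT exists in the target repository; Headcount cannot, because HR_FACT is neither in the repository nor defined in the file.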

Table 7-2 shows the information that Data Analyzer displays for the schema joins:

Table 7-2. Imported Schema Join Expression
Property        Description
Table1 Name     Name of the fact table that contains foreign keys joined to the primary keys in the dimension tables. Can also be the name of a dimension table that joins to a snowflake dimension.
Table2 Name     Name of the dimension table that contains the primary key joined to the foreign keys in the fact table. Can also be the name of a snowflake dimension table associated with a dimension table.
Join Expression Foreign key and primary key columns that join a fact and dimension table or a dimension table and a snowflake dimension in the following format: Table.ForeignKey = Table.PrimaryKey

Table 7-3 shows the information that Data Analyzer displays for the metrics:

Table 7-3. Imported Metrics Information
Property                 Description
Name                     Name of the metric to be imported.
Last Modified Date       Date when the metric was last modified.
Last Modified By         User name of the person who last modified the metric.
Analyzer Table Locations Fact table that contains the metric. If the metric is a calculated metric, square brackets ([]) display in place of a fact table.

Table 7-4 shows the information that Data Analyzer displays for the attributes:

Table 7-4. Imported Attributes Information
Property                 Description
Name                     Name of the attributes found in the fact or dimension tables associated with the metric to be imported.
Last Modified Date       Date when the attribute was last modified.
Last Modified By         User name of the person who last modified the attribute.
Analyzer Table Locations Fact or dimension table that contains the attribute.

Table 7-5 shows the information that Data Analyzer displays for the drill paths:

Table 7-5. Imported Drill Paths Information
Property           Description
Name               Name of the drill path that includes attributes in the fact or dimension tables associated with the metric to be imported.
Last Modified Date Date when the drill path was last modified.
Last Modified By   User name of the person who last modified the drill path.
Paths              List of attributes in the drill path that are found in the fact or dimension tables associated with the metric to be imported.

Table 7-6 shows the information that Data Analyzer displays for the time keys:

Table 7-6. Imported Time Keys Information
Property Description
Name     Name of the time key associated with the fact table.

Table 7-7 shows the information that Data Analyzer displays for the operational schemas:

Table 7-7. Imported Operational Schemas Information
Property           Description
Name               Name of the operational schema to be imported.
Last Modified Date Date when the operational schema was last modified.
Last Modified By   User name of the person who last modified the operational schema.

Table 7-8 shows the information that Data Analyzer displays for the hierarchical schemas:

Table 7-8. Imported Hierarchical Schema Information
Property           Description
Name               Name of the hierarchical schema to be imported.
Last Modified Date Date when the hierarchical schema was last modified.
Last Modified By   User name of the person who last modified the hierarchical schema.

6. Click Continue.
   If objects in the XML file are already defined in the repository, a list of the duplicate objects appears. If you select to overwrite schema objects, confirm that you want to overwrite the objects:
   ♦ To overwrite all the schema objects, select Overwrite All.
   ♦ To overwrite the schema objects of a certain type, select Overwrite at the top of each section.
   ♦ To overwrite only specific schema objects, select the object.
7. Click Apply.
   Data Analyzer imports the definitions of all selected schema objects.

Importing a Time Dimension

Time dimension tables contain date- and time-related attributes that describe the occurrence of metrics and establish the time granularity of the data in the fact table. You can import a time dimension table from an XML file. When you import a time dimension table, Data Analyzer imports the primary attribute, secondary attribute, and calendar attribute of the time dimension table.

To import a time dimension table:
1. Click Administration > XML Export/Import > Import Time Dimensions.
   The Import Time Dimensions page appears.
2. Click Browse to select an XML file from which to import time dimensions.
3. Click Open.
   The name and location of the XML file display on the Import Time Dimensions page.
4. To validate the XML file against the DTD, select Validate XML against DTD.

5. Click Import XML.
   Data Analyzer displays the time dimensions found in the XML file.

Table 7-9 shows the information that Data Analyzer displays for the time dimensions:

Table 7-9. Imported Time Dimension Information
Property           Description
Name               Name of the time dimension table.
Last Modified Date Date when the time dimension table was last modified.
Last Modified By   User name of the Data Analyzer user who last modified the time dimension table.

6. Click Continue.
   If objects in the XML file are already defined in the repository, a list of the duplicate objects appears.
7. Select the objects you want to overwrite, and click Continue.
8. Click Continue.
   Data Analyzer imports the definitions of all selected time dimensions. If you successfully import the time dimensions, Data Analyzer displays a message that you have successfully imported the time dimensions.

Importing a Report

You can import reports from an XML file. Depending on the reports included in the file and the options selected when exporting the reports, the XML file might not contain all supported metadata. When available, Data Analyzer imports the following components of a report:
♦ Report table
♦ Report chart
♦ Indicators
♦ Alerts
♦ Filters
♦ Filtersets
♦ Highlighting
♦ Calculations
♦ Custom attributes
♦ All reports in an analytic workflow
♦ Permissions
♦ Report links
♦ Schedules

Data Analyzer imports all data for each component, with the following exceptions:
♦ Gauge indicators. Imported gauge indicators do not keep their original owner. The user who imports the report becomes the owner of the gauge indicator. If the gauge indicator is personal, it becomes personal to the user who imports the report.
♦ Alerts. Imported personal and public alerts use the state set for all report subscribers as the default alert state.
♦ Highlighting. Imported public highlighting uses the state set for all users as the default highlighting state. Data Analyzer does not export any personal highlighting.

When you import a report, make sure all the metrics, attributes, and global variables used in the report are defined in the target repository. If you import a report that uses objects not defined in the target repository, you might not be able to run it successfully. You must import or recreate the objects before you run the report.

You can import cached and on-demand reports. Data Analyzer does not import report data for cached reports. To view the data for the report, you first must run the report. If you try to view an imported cached report immediately after you import it, the following error appears:

Result set is null.

You can run imported cached reports in the background immediately after you import them. Running reports in the background can be a long process, and the data may not be available immediately. You can also edit the report and save it before you view it to make sure that Data Analyzer runs the report before displaying the results.

If during the export process you chose to export schedules associated with a report, then Data Analyzer also imports the schedule stored in the cached report.

If you import a report and its corresponding analytic workflow, the XML file contains all workflow reports. If you choose to overwrite the report, Data Analyzer also overwrites the workflow reports. Data Analyzer does not import analytic workflows containing the same workflow report names. When importing multiple workflows, ensure that all imported analytic workflows have unique report names prior to export.

If you import a composite report, the XML file contains all the subreports. You can choose to overwrite the subreports or composite report if they are already in the repository.

Importing Reports from Public or Personal Folders

You can import reports exported from any folder in the repository. Data Analyzer imports reports to the same folder in the target repository. For example, it imports reports from the public folder to the public folder. When Data Analyzer imports a report to a repository that does not have the same folder as the originating repository, Data Analyzer creates a new folder of that name for the report. If a report of the same name already exists in the same folder, you can overwrite the existing report.

To ensure security for the reports from the personal folders, Data Analyzer does not import reports from a personal folder into another personal folder. When you import a report exported from a personal folder, Data Analyzer creates a public folder called Personal Reports with the import date, such as Personal Reports (Imported 8/10/04), and creates a subfolder named for the owner of the personal folder. For example, if you import a report exported from a personal folder called Mozart, Data Analyzer creates a new folder within the public folder called Personal Reports with the date of import, and copies the imported report into a subfolder called Mozart. You are the owner of the new public folder.

Steps for Importing a Report

To import a report:
1. Click Administration > XML Export/Import > Import Reports.
   The Import Reports page appears.
2. Click Browse to select an XML file from which to import reports.
3. Click Open.
   The name and location of the XML file display on the Import Reports page.
4. To validate the XML file against the DTD, select Validate XML against DTD.
5. Click Import XML.
   Data Analyzer displays the reports found in the XML file.
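The folder convention for reports imported from personal folders can be modeled as a small path-building function. This is an illustrative sketch; the exact path separator and date formatting Data Analyzer uses internally are assumptions based on the example above:

```python
from datetime import date

def imported_personal_folder(owner, import_date):
    """Build the destination path for a report exported from a
    personal folder: a public 'Personal Reports (Imported M/D/YY)'
    folder containing a subfolder named for the personal folder's
    owner, following the convention described in this section.
    """
    stamp = f"{import_date.month}/{import_date.day}/{import_date:%y}"
    return f"Public Folders/Personal Reports (Imported {stamp})/{owner}"

print(imported_personal_folder("Mozart", date(2004, 8, 10)))
# Public Folders/Personal Reports (Imported 8/10/04)/Mozart
```

This matches the Mozart example: the report lands in a dated public folder with a subfolder named after the original personal folder's owner.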

Table 7-10 shows the properties that Data Analyzer displays for the reports:

Table 7-10. Imported Report Properties
Property           Description
Name               Name of the reports found in the XML file.
Last Modified Date Date when the report was last modified.
Last Modified By   User name of the Data Analyzer user who last modified the report.
Path               Location of the report in the Public Folders or Personal Folder.

6. Choose the import options you want to use:
   ♦ If reports in the XML file are already defined in the repository, a list of the duplicate reports appears. To overwrite any of the reports, select Overwrite next to the report name. To overwrite all reports, select Overwrite at the top of the list.
   ♦ To allow all users to have access to the reports, select Publish to Everyone.
   ♦ To immediately update the data for all the cached reports in the list, select Run Cached Reports after Import. Data Analyzer runs the cached reports in the background.
7. Click Continue.
   If the XML file contains global variables already in the repository, Data Analyzer displays the global variables found in the XML file. If you continue the import process, Data Analyzer imports only the global variables that are not in the target repository.
   If attributes or metrics associated with the report are not defined in the repository, Data Analyzer displays a list of the undefined objects. To cancel the import process, click Cancel, then create the required objects in the target repository before attempting to import the report again. If you continue the import process and import the report, you might not be able to run it successfully.
8. Click Continue.
   Data Analyzer imports the definitions of all selected reports. When necessary, Data Analyzer lists any folders created for the reports. If you import cached reports, Data Analyzer displays a message that you need to assign the cached reports to a schedule in the target repository. For more information about attaching the imported cached reports to a schedule immediately, see "Attaching Imported Cached Reports to a Time-Based Schedule" on page 26 and "Attaching Imported Cached Reports to an Event-Based Schedule" on page 37.
   If you successfully import the reports, Data Analyzer displays a message that you have successfully imported them.

Importing a Global Variable

You can import global variables that are not defined in the target repository.

To import a global variable:
1. Click Administration > XML Export/Import > Import Global Variables.
   The Import Global Variables page appears.
2. Click Browse to select an XML file from which to import global variables.
3. Click Open.
   The name and location of the XML file display on the Import Global Variables page.
4. To validate the XML file against the DTD, select Validate XML against DTD.
5. Click Import XML.
   Data Analyzer displays the global variables found in the XML file.
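The skip-existing rule for global variables, where variables whose names already exist in the repository are never overwritten, even when the values differ, can be sketched as a dictionary merge. The data structures are illustrative only:

```python
def merge_global_variables(repository, imported):
    """Merge imported global variables into the repository set,
    following the rule described in this section: variables whose
    names already exist keep their repository values; only new names
    are imported. Returns the merged set and the skipped names.
    """
    merged = dict(repository)
    skipped = []
    for name, value in imported.items():
        if name in merged:
            skipped.append(name)  # existing value wins, even if different
        else:
            merged[name] = value
    return merged, skipped

repo = {"REGION": "Europe"}
incoming = {"REGION": "United States", "YEAR": "2008"}
print(merge_global_variables(repo, incoming))
# ({'REGION': 'Europe', 'YEAR': '2008'}, ['REGION'])
```

Here YEAR is imported because it is new, while REGION is skipped and keeps its existing repository value.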

Table 7-11 shows the information that Data Analyzer displays for the global variables:

Table 7-11. Imported Global Variable Description
Property Description
Name     Name of the global variable found in the XML file.
Value    Value of the global variable.

6. Click Continue.
   If the XML file includes global variables already in the repository, Data Analyzer displays a warning. Data Analyzer does not import global variables whose names exist in the repository, even if the values are different. To continue the import process, click Continue. If you continue the import process, Data Analyzer imports only the variables that are not in the repository.

Importing a Dashboard

Dashboards display links to reports, shared documents, and indicators. When you import a dashboard from an XML file, Data Analyzer imports the following objects associated with the dashboard:
♦ Reports
♦ Indicators
♦ Shared documents
♦ Dashboard filters
♦ Discussion comments
♦ Feedback

Data Analyzer does not import the following objects associated with the dashboard:
♦ Access permissions
♦ Attributes and metrics in the report
♦ Real-time objects

If an object associated with the dashboard exists in the repository, Data Analyzer provides an option to overwrite the object.

When you import a dashboard, Data Analyzer imports all indicators for the originating report and workflow reports in a workflow. However, indicators for workflow reports do not display on the dashboard after you import it. You must add those indicators to the dashboard manually.

Dashboards are associated with the folder hierarchy. When you import a dashboard, Data Analyzer stores the imported dashboard in the following manner:
♦ Dashboards exported from a public folder. Data Analyzer imports the dashboards to the corresponding public folder in the target repository. When Data Analyzer imports a dashboard to a repository that does not have the same folder as the originating repository, Data Analyzer creates a new folder of that name for the dashboard.
♦ Dashboards exported from a personal folder. Data Analyzer imports the dashboards to a new Public Folders > Personal Dashboards (Imported MMDDYY) > Owner folder.
♦ Personal dashboard. Data Analyzer imports a personal dashboard to the Public Folders folder.
♦ Dashboards exported from an earlier version of Data Analyzer. Data Analyzer imports the dashboards to the Public Folders > Dashboards folder. If the Dashboards folder already exists at the time of import, then Data Analyzer creates a new Public Folders > Dashboards_n folder to store the dashboards (for example, Dashboards_1 or Dashboards_2).
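The Dashboards_n naming rule described above can be sketched as a function that picks the first free folder name. The assumption that Data Analyzer always uses the lowest unused suffix is an inference from the examples (Dashboards_1, Dashboards_2), not documented behavior:

```python
def next_dashboards_folder(existing):
    """Choose the folder for dashboards imported from an earlier
    version: 'Dashboards' when it does not exist yet, otherwise the
    first unused 'Dashboards_n' name (Dashboards_1, Dashboards_2, ...).
    """
    if "Dashboards" not in existing:
        return "Dashboards"
    n = 1
    while f"Dashboards_{n}" in existing:
        n += 1
    return f"Dashboards_{n}"

print(next_dashboards_folder({"Dashboards"}))                   # Dashboards_1
print(next_dashboards_folder({"Dashboards", "Dashboards_1"}))   # Dashboards_2
```

Each subsequent import into a repository that already has the earlier folders gets the next suffix, so previously imported dashboards are never overwritten by the folder creation itself.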

Before you import a dashboard, make sure all the metrics and attributes used in reports associated with the dashboard are defined in the target repository. If the attributes or metrics in a report associated with the dashboard do not exist, the report does not display on the imported dashboard.

Data Analyzer does not automatically display imported dashboards in your subscription list on the View tab. You must manually subscribe to imported dashboards to display them in the Subscription menu.

To import a dashboard:
1. Click Administration > XML Export/Import > Import Dashboards.
   The Import Dashboards page appears.
2. Click Browse to select an XML file from which to import dashboards.
3. Click Open.
   The name and location of the XML file display on the Import Dashboards page.
4. To validate the XML file against the DTD, select Validate XML against DTD.
5. Click Import XML.
   Data Analyzer displays the list of dashboards found in the XML file.

Table 7-12 shows the information that Data Analyzer displays for the dashboards:

Table 7-12. Imported Dashboard Information
Property           Description
Name               Name of the dashboard found in the XML file.
Last Modified Date Date when the dashboard was last modified.
Last Modified By   User name of the Data Analyzer user who last modified the dashboard.

6. Click Apply.
   If the XML file contains dashboards, reports, or shared documents already defined in the repository, Data Analyzer displays a list of the dashboards, reports, and shared documents. To overwrite a dashboard, report, or shared document, select Overwrite next to the item name. To overwrite all dashboards, reports, or shared documents, select Overwrite at the top of the list.
7. Click Apply.
   If the attributes or metrics in a report associated with the dashboard are not in the repository, Data Analyzer displays a list of the metrics and attributes in the reports associated with the dashboard that are not in the repository. Data Analyzer does not import the attributes and metrics in the reports associated with the dashboard.
8. To continue the import process, click Continue. To cancel the import process, click Cancel.
   If you continue, Data Analyzer imports the definitions of all selected dashboards and the objects associated with the dashboard.

Importing a Security Profile

Data Analyzer keeps a security profile for each user or group in the repository. A security profile consists of data restrictions and access permissions for objects in the Schema Directory, including folders, attributes, and metrics.

When you import a security profile from an XML file, you must first select the user or group to which you want to assign the security profile. You can assign the same security profile to more than one user or group. When you import a security profile and associate it with a user or group, you can either overwrite the current security profile or add to it:
♦ Overwrite. When you overwrite a security profile, Data Analyzer removes the old restrictions associated with the user or group and assigns the user or group only the data restrictions and access permissions found in the new security profile.
♦ Append. When you append a security profile, Data Analyzer appends new data restrictions to the old restrictions but overwrites old access permissions with the new access permissions. When a user or group has a data restriction and the imported security profile has a data restriction for the same fact table or schema and associated attribute, Data Analyzer joins the restrictions using the OR operator.

For example, you import a security profile with the following data restriction for the Sales fact table: Region Name show only 'United States'. The Sales group has an existing Sales fact table data restriction: Region Name show only 'Europe'. If you append the profile, the Sales group data restriction changes to the following restriction: Region Name show only 'United States' OR Region Name show only 'Europe'. Or, if you overwrite existing security profiles, the Sales group restriction changes to show only data related to the United States.

Importing a User Security Profile

You can import a user security profile and associate it with one or more users.

To import a user security profile:
1. Click Administration > XML Export/Import > Import Security Profiles.
   The Import Security Profiles page appears.
2. Click Browse to select an XML file from which to import a security profile.
3. Click Open.
   The name and location of the XML file display on the Import Security Profiles page.
4. To validate the XML file against the DTD, select Validate XML against DTD.
5. Click Import XML.
   The Import Security Profiles window displays the access permissions and data restrictions for the security profile.
6. Click Import to Users.
   The Import Security Profile page displays all users in the repository.
7. Select the users you want to associate with the security profile.
   To associate the security profile with all displayed users, select the check box under Users at the top of the list. To associate the security profile with all users in the repository, select Import To All.
8. Click Overwrite to replace existing security profiles with the imported security profile, or click Append to add the imported security profile to existing security profiles.
9. Click Continue.
   Data Analyzer imports the security profile and associates it with all selected users.

Importing a Group Security Profile

You can import a group security profile and associate it with one or more groups.

To import a group security profile:
1. Click Administration > XML Export/Import > Import Security Profile. The Import Security Profile page appears.
2. Click Browse to select an XML file from which to import a security profile.
3. Click Open. The name and location of the XML file display on the Import Security Profile page.
4. To validate the XML file against the DTD, select Validate XML against DTD.
5. Click Import XML. The Import Security Profile page displays all groups in the repository.
6. Select the groups you want to associate with the security profile. To associate the security profiles with all displayed groups, select the check box under Groups at the top of the list. To associate the security profile with all groups in the repository, select Import To All.
7. Click Continue. The list of access permissions and data restrictions that make up the security profile appears.
8. Click Overwrite to replace existing security profiles with the imported security profile. Click Append to add the imported security profile to existing security profiles.
9. Click Continue. Data Analyzer displays a list of the objects in the security profile that are not in the repository. It imports access permissions and data restrictions only for objects defined in the repository.
10. To continue the import process, click Continue. To cancel the import process, click Cancel.
11. Click Import to Groups. Data Analyzer imports the security profile and associates it with all selected groups.

Table 7-13 shows the information that Data Analyzer displays for the restricted objects:

Table 7-13. Imported Security Profile: Restricted Objects
♦ Object Name. Indicates the Schema Directory path of the restricted schema object if the restricted object is a folder. Indicates the fact or dimension table and attribute name if the object is an attribute. Indicates the fact table and metric name if the object is a metric.
♦ Type. Indicates whether the schema object is a folder, attribute, or metric.

Table 7-14 shows the information that Data Analyzer displays for the data restrictions:

Table 7-14. Imported Security Profile: Data Restrictions
♦ Schema Table Name. Name of the restricted table found in the security profile.
♦ Security Condition. Description of the data access restrictions for the table.

Importing a Schedule

You can import a time-based or event-based schedule from an XML file. When you import a schedule, you do not import the task history or schedule history. When you import a schedule from an XML file, Data Analyzer does not attach the schedule to any reports. You can then attach reports to the imported schedule.

To import a schedule:
1. Click Administration > XML Export/Import > Import Schedules. The Import Schedules page appears.
2. Click Browse to select an XML file from which to import a schedule.
3. Click Open. The name and location of the XML file display on the Import Schedules page.
4. To validate the XML file against the DTD, select Validate XML against DTD.
5. Click Import XML. The list of objects found in the XML file appears.

Table 7-15 shows the information that Data Analyzer displays for the schedules found in the XML file:

Table 7-15. Imported Schedule Information
♦ Name. Name of the schedule found in the XML file.
♦ Last Modified Date. Date when the schedule was last modified.
♦ Last Modified By. User name of the person who last modified the schedule.

6. Click Continue. If the schedules in the XML file are already defined in the repository, a list of the duplicate schedules appears.
7. To overwrite a schedule, click the Overwrite check box next to the schedule. To overwrite all schedules, click the Overwrite check box at the top of the list.
8. Click Continue. Data Analyzer imports the schedules.

Troubleshooting

When I import my schemas into Data Analyzer, I run out of time. Is there a way to raise the transaction time out period?

The default transaction time out for Data Analyzer is 3600 seconds (1 hour). If you are importing large amounts of data from XML and the transaction time is not enough, you can change the default transaction time out value. To change the default transaction time out for Data Analyzer, edit the value of the import.transaction.timeout.seconds property in the DataAnalyzer.properties file. After you change this value, you must restart the application server. You can now run large import processes without timing out. For more information about editing the DataAnalyzer.properties file, see “Configuration Files” on page 129.

How can I import large XML files?

The Data Analyzer installer installs a JDBC driver for IBM DB2 8.x. If you use this driver to connect to a DB2 8.x repository database, Data Analyzer might display error messages when you import large XML files. Depending on the error that Data Analyzer generates, you might want to modify the following parameters:
♦ DynamicSections value of the JDBC driver
♦ Page size of the temporary table space
♦ Heap size for the application
You can modify the settings of the application server, the database, or the JDBC driver to solve the problem. You might need to contact your database system administrator to change some of these settings.

Increasing the DynamicSections Value

Data Analyzer might display the following message when you import large XML files:
javax.ejb.EJBException: nested exception is: Exception: SQL Exception: [informatica][DB2 JDBC Driver]No more available statements. Please recreate your package with a larger dynamicSections value.

The error occurs when the default value of the DynamicSections property of the JDBC driver is too small to handle large XML imports. The default value of the DynamicSections connection property is 200. You must increase the value of the DynamicSections connection property to at least 500. Use the DataDirect Connect for JDBC utility to increase the default value of the DynamicSections connection property and recreate the JDBC driver package.

Download the utility from the Product Downloads page of the DataDirect Technologies web site: http://www.datadirect.com/download/index.ssp
On the Product Downloads page, click the DataDirect Connect for JDBC Any Java Platform link and complete the registration information to download the file. The name of the download file is connectjdbc.jar.

To increase the value of the DynamicSections property:
1. Extract the contents of the connectjdbc.jar file in a temporary directory and install the DataDirect Connect for JDBC utility. Follow the instructions in the DataDirect Connect for JDBC Installation Guide.
2. On the command line, run the following file extracted from the connectjdbc.jar file:
   Windows: Installer.bat
   UNIX: Installer.sh
3. Enter the following license key and click Add: eval
4. Click Next twice and then click Install.
5. Click Finish to complete the installation. The installation program for the DataDirect Connect for JDBC utility creates the testforjdbc folder in the directory where you extracted the connectjdbc.jar file.
6. Before you run the Test for JDBC Tool, log out of Data Analyzer and stop the application server.
7. In the testforjdbc folder, run the Test for JDBC Tool:
   Windows: testforjdbc.bat
   UNIX: testforjdbc.sh
8. On the Test for JDBC Tool window, click Press Here to Continue.
9. Click Connection > Connect to DB.
10. In the Database field, enter the following:
    jdbc:datadirect:db2://<ServerName>:<PortNumber>;databaseName=<DatabaseName>;CreateDefaultPackage=TRUE;ReplacePackage=TRUE;DynamicSections=500
    ServerName is the name of the machine hosting the repository database. PortNumber is the port number of the database. DatabaseName is the name of the repository database.
11. In the User Name and Password fields, enter the user name and password you use to connect to the repository database from Data Analyzer.
12. Click Connect, and then close the window.
13. Restart the application server.

If you continue getting the same error message when you import large XML files, you can run the Test for JDBC Tool again and increase the value of DynamicSections to 750 or 1000.

Modifying the Page Size of the Temporary Table Space

Data Analyzer might display the following message when you import large XML files:
SQL1585N A temporary table space with sufficient page size does not exist

This problem occurs when the row length or number of columns of the system temporary table exceeds the limit of the largest temporary table space in the database. To resolve the problem, create a new system temporary table space with the page size of 32KB on the repository database, and then restart the application server. For more information, see the IBM DB2 documentation.

Increasing Heap Size for the Application

Data Analyzer might display the following message when you import large XML files:
[informatica][DB2 JDBC Driver][DB2]Virtual storage or database resource is not available ErrorCode=-954 SQLState=57011

This problem occurs when there is not enough storage available in the database application heap to process the import request. To resolve the error, increase the value of the application heap size configuration parameter (APPLHEAPSZ) to 512, and then restart the application server. For more information, see the IBM DB2 documentation.
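The two DB2 adjustments above can be applied from the DB2 command line. The following is a hypothetical sketch, not taken from this guide; the database name (IASREPO), buffer pool name, and sizes are placeholders your database administrator should adjust:

```shell
# Connect to the repository database as an administrative user.
db2 CONNECT TO IASREPO

# A system temporary table space must use a buffer pool with the same page
# size, so create a 32KB buffer pool first, then the table space.
db2 "CREATE BUFFERPOOL BP32K SIZE 1000 PAGESIZE 32K"
db2 "CREATE SYSTEM TEMPORARY TABLESPACE TEMP32K PAGESIZE 32K BUFFERPOOL BP32K"

# Raise the application heap size parameter described above.
db2 "UPDATE DATABASE CONFIGURATION FOR IASREPO USING APPLHEAPSZ 512"

db2 TERMINATE
```

After making either change, restart the application server as described above.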


CHAPTER 8

Using the Import Export Utility

This chapter includes the following topics:
♦ Overview, 65
♦ Running the Import Export Utility, 66
♦ Error Messages, 69
♦ Troubleshooting, 70

Overview

The Import Export utility lets you import and export Data Analyzer repository objects from the command line. Use the Import Export utility to migrate repository objects from one repository to another. For example, you can use the utility to quickly migrate Data Analyzer repository objects from a development repository into a production repository. You can also use the utility to archive your repository without using a browser. You can use the Import Export utility to import objects from Data Analyzer 5.0 repositories or later.

When you run the Import Export utility, Data Analyzer imports or exports all objects of a specified type. You must run the utility multiple times to import or export different types of objects. For example, you can run the utility to import all reports from an XML file or export all dashboards to an XML file. You cannot use the utility to export a specific user or report to an XML file. To import or export individual objects, use the Data Analyzer Administration tab. The exception is security profiles: use the utility to import or export the security profile of an individual user or group. You cannot use the utility to import or export other individual objects.

When you use the Import Export utility, the same rules apply as when you import or export from the Data Analyzer Administration tab. For example, with the Import Export utility or the Data Analyzer Administration tab, you can import only those global variables that do not already exist in the repository. You can also use the Data Analyzer Administration tab to import or export all objects of a specified type.

If Data Analyzer is installed with the LDAP authentication method, you cannot use the Import Export utility to import users, groups, or roles. With the LDAP authentication method, Data Analyzer does not store user passwords in the Data Analyzer repository. Data Analyzer authenticates the passwords directly in the LDAP directory.

Running the Import Export Utility
Before you run the Import Export utility to import or export repository objects, you must meet the following requirements:
♦ To run the utility, you must have the System Administrator role or the Export/Import XML Files privilege. To import or export users, groups, or roles, you must also have the Manage User Access privilege.
♦ Data Analyzer must be running.

You can import Data Analyzer objects from XML files that were created when you exported repository objects from Data Analyzer. You can use files exported from Data Analyzer 5.0 or later.

The default transaction time out for Data Analyzer is 3,600 seconds (1 hour). If you are importing large amounts of data from XML files and the transaction time is not enough, you can change the default transaction time out value. To change the default transaction time out for Data Analyzer, edit the value of the import.transaction.timeout.seconds property in DataAnalyzer.properties. After you change this value, you must restart the application server.

When you run the Import Export utility, you specify options and arguments to import or export different types of objects. Specify an option by entering a hyphen (-) followed by a letter. The first word after the option letter is the argument. To specify the options and arguments, use the following rules:
♦ Specify the options in any order.
♦ Utility name, options, and argument names are case sensitive.
♦ If the option requires an argument, the argument must follow the option letter.
♦ If any argument contains more than one word, enclose the argument in double quotes.
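The import.transaction.timeout.seconds property mentioned above lives in DataAnalyzer.properties. As a sketch, the value shown here (two hours) is an example, not a recommended setting; 3,600 seconds is the default:

```properties
# DataAnalyzer.properties
# Raise the import transaction time out from the default 3600 seconds
# (1 hour) to 2 hours. Restart the application server afterward.
import.transaction.timeout.seconds=7200
```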

To run the utility on Windows, open a command line window. On UNIX, run the utility as a shell command.
Note: Back up the target repository before you import repository objects into it. You can back up a Data Analyzer repository with the Repository Backup utility.
To run the Import Export utility:
1. Go to the Data Analyzer utilities directory. The default directory is <PCAEInstallationDirectory>/DataAnalyzer/import-exportutil/.
2. Run the utility with the following format:
   Windows:
   ImportExport [-option_1] argument_1 [-option_2] argument_2 ...
   UNIX:
   ImportExport.sh [-option_1] argument_1 [-option_2] argument_2 ...

Table 8-1 lists the options and arguments you can specify:

Table 8-1. Options and Arguments for the Import Export Utility

♦ -i <repository object type>. Import a repository object type. For more information about repository object types, see Table 8-2 on page 68. Use the -i or -e option, but not both.
♦ -e <repository object type>. Export a repository object type. For more information about repository object types, see Table 8-2 on page 68. Use the -i or -e option, but not both.
♦ -w (no argument). Import only. Instructs the Import Export utility to overwrite existing repository objects of the same name. If you do not specify this option and a repository object with the same name already exists, the utility exits without completing the operation. If you do not specify this option when importing a security profile, the security profile being imported is appended to the existing security profile of the user or group. If you use this option when exporting repository objects, the utility displays an error message.
♦ -f <XML file name>. Name of the XML file to import from or export to. The XML file must follow the naming conventions for the operating system where you run the utility. You can specify a path for the XML file.
  If you specify a path for the XML file:
  - When you import a repository object type, the Import Export utility looks for the XML file in the path you specify.
  - When you export an object type, the utility saves the XML file in the path you specify. For example, to have the utility save the file in the c:/PA directory, enter the following command:
    ImportExport -e user -f c:/PA/Users.xml -u admin -p admin -l http://my.server.com:7001/ias
  If you do not specify a path for the XML file:
  - When you import a repository object type, the Import Export utility looks for the XML file in the directory where you run the utility.
  - When you export an object type, the utility saves the XML file in the directory where you run the utility. For example, when you enter the following command, the utility places Users.xml in the directory where you run the utility:
    ImportExport -e user -f Users.xml -u admin -p admin -l http://my.server.com:7001/ias
♦ -u <user name>. Data Analyzer user name.
♦ -p <password>. Password for the Data Analyzer user name.
♦ -l <url>. URL for accessing Data Analyzer. Contact the system administrator for the URL. The Data Analyzer URL has the following format:
  http://host_name:port_number/ReportingServiceName
  ReportingServiceName is the name of the Reporting Service that runs the Data Analyzer instance. For example, PowerCenter runs on a machine with hostname fish.ocean.com and has a Reporting Service named IASReports with port number 18080. Use the following URL for Data Analyzer:
  http://fish.ocean.com:18080/IASReports
♦ -h (no argument). Displays a list of all options and their descriptions, and a list of valid repository objects.
♦ -n <user name or group name>. Use to import or export the security profile of a user or group. For more information, see Table 8-2 on page 68.
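Putting these options together, a development-to-production migration might look like the following sketch. This example is not from the guide: the host names, Reporting Service names, paths, and credentials are placeholders, and -w tells the import to overwrite objects of the same name:

```shell
# Export all dashboards from a development repository...
ImportExport.sh -e dashboard -f /tmp/Dashboards.xml -u admin -p admin \
    -l http://dev.example.com:18080/DevReports

# ...then import them into the production repository, overwriting duplicates.
ImportExport.sh -i dashboard -w -f /tmp/Dashboards.xml -u admin -p admin \
    -l http://prod.example.com:18080/ProdReports
```

Back up the production repository before running the import, as noted above.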

Table 8-2 lists the repository object types you can import or export using the Import Export utility and an example for each. Enter the repository object type as listed below:

Table 8-2. Valid Repository Object Types

♦ schema. Schemas. To import schemas from the PASchemas.xml file into the repository, use the following command:
  ImportExport -i schema -f c:\PASchemas.xml -u jdoe -p doe -l http://localhost:7001/ias
♦ timedim. Time dimension tables. To import time dimension tables from the TD.xml file into the repository, use the following command:
  ImportExport -i timedim -f TD.xml -u jdoe -p doe -l http://localhost:7001/ias
♦ report. Reports. To import reports from the Reports.xml file into the repository, use the following command:
  ImportExport -i report -f c:\Reports.xml -u jdoe -p doe -l http://localhost:7001/ias
♦ variable. Global variables. You can import global variables that do not already exist in the repository. To export global variables to the GV.xml file, use the following command:
  ImportExport -e variable -f c:\xml\GV.xml -u jdoe -p doe -l http://server:7001/ias
♦ dashboard. Dashboards. To export dashboards to the Dash.xml file, use the following command:
  ImportExport -e dashboard -f c:\Dash.xml -u jdoe -p doe -l http://localhost:7001/ias
♦ usersecurity. Security profile of a user. You must specify the following security profile option: -n <user name>. To export the security profile of user jdoe to the JDsecurity.xml file, use the following command:
  ImportExport -e usersecurity -n jdoe -f JDsecurity.xml -u admin -p admin -l http://localhost:7001/ias
♦ groupsecurity. Security profile of a group. You must specify the following security profile option: -n <group name>. To export the security profile of group Managers to the Profiles.xml file, use the following command:
  ImportExport -e groupsecurity -n Managers -f Profiles.xml -u admin -p admin -l http://localhost:7001/ias
♦ schedule. Schedules. To export all schedules to the Schedules.xml file, use the following command:
  ImportExport -e schedule -f c:\Schedules.xml -u jdoe -p doe -l http://localhost:7001/ias
♦ user. Users. To export all users to the Users.xml file, use the following command:
  ImportExport -e user -f c:\Users.xml -u jdoe -p doe -l http://localhost:7001/ias
♦ group. Groups. To import groups from the Groups.xml file into the repository, use the following command:
  ImportExport -i group -f c:\Groups.xml -u jdoe -p doe -l http://localhost:7001/ias
♦ role. Roles. To import roles from the Roles.xml file into the repository, use the following command:
  ImportExport -i role -f c:\Roles.xml -u jdoe -p doe -l http://localhost:7001/ias
The Import Export utility runs according to the specified options. If the utility successfully completes the requested operation, a message indicates that the process is successful. If the utility fails to complete the requested operation, an error message displays.
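Because each run handles a single object type, a scripted archive loops over the types in Table 8-2. The following sketch is illustrative, not from the guide; the URL and credentials are placeholders, and the leading "echo" prints each command instead of running it (remove it to execute for real):

```shell
# Hypothetical backup sketch: emit one export command per supported object
# type, writing each type to its own date-stamped XML file.
STAMP=$(date +%Y%m%d)
for TYPE in schema timedim report variable dashboard schedule user group role; do
    echo ./ImportExport.sh -e "$TYPE" -f "backup/${TYPE}-${STAMP}.xml" \
        -u admin -p admin -l http://localhost:18080/ias
done
```

A wrapper like this pairs naturally with the Repository Backup utility mentioned earlier, giving both a binary backup and per-type XML archives.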


Error Messages

If the Import Export utility fails to complete the requested operation, it displays an error message. The error message indicates why the requested operation failed. If the requested operation fails because a required option or argument is missing or not specified correctly, the Import Export utility also displays a list of all options and their descriptions, and a list of valid repository objects.

The Import Export utility can display the following error messages:

Unknown error.
Cause: The utility failed to run for unknown reasons.
Action: Contact the system administrator or Informatica Global Customer Support.

Unknown option.
Cause: You entered an incorrect option letter. For example, you entered -x or -E to export a file.
Action: Check the validity and case sensitivity of the option letters.

Illegal option value.
Cause: You entered an incorrect argument for an option letter.
Action: Check the spelling of the option values you entered.

Incorrect number of command-line options.
Cause: You omitted an option or included more options than needed.
Action: Check the syntax and spelling.

Invalid username or password.
Cause: The user does not exist in Data Analyzer or the password is incorrect.
Action: Check that the user exists in Data Analyzer and that the password is correct.

The user does not have privileges to import/export.
Cause: The user does not have the Export/Import XML Files privilege or the Manage User Access privilege to import or export users, groups, or roles.
Action: Assign the appropriate privileges to the user.

The import file does not exist or cannot be read.
Cause: The XML file to be imported does not exist, does not contain valid XML data, or the utility cannot access the file.
Action: Check the XML file name. Check that a valid XML file with the specified name exists in the specified directory.

The import file contains a different repository object type than the repository object type given for the option -i.
Cause: The XML file specified for the import (-i) option does not contain the correct object type.
Action: Use the correct object type or a different XML file.

The export file cannot be written.
Cause: The directory where you want to place the XML file is read only or has run out of hard disk space.
Action: Assign write permission to the user for the directory where you want to place the XML file. Or, make sure there is enough hard disk space.

An export file with the provided filename already exists.
Cause: An XML file of the same name already exists in the specified path.
Action: Delete the XML file before you enter the command.

The user or group does not exist.
Cause: The user name or group name that you typed for importing or exporting a security profile does not exist.
Action: Check the spelling of the user name or group name.

A communication error has occurred with Data Analyzer.
Action: Check that Data Analyzer is running, check that the URL is correct, and try to run the utility again.

The root cause is: <error message>.
Cause: See the root cause message.
Action: The action depends on the root cause. If the error still occurs, contact Informatica Global Customer Support.

Import file is empty.
Cause: There is no data in the XML file.
Action: Use a valid XML file.

Global variables cannot be overwritten.
Cause: You cannot import global variables if they already exist in the repository. If the XML file includes global variables already in the repository, the Import Export utility displays this error message.
Action: If you want to import global variables already in the repository, first delete them from Data Analyzer, and then run the utility.

The configured security realm does not support the import of users, groups and roles.
Cause: Data Analyzer is installed with the LDAP authentication method. You cannot use the Import Export utility to import users, groups, or roles.
Action: Contact the Data Analyzer system administrator.

The Data Analyzer session is invalid.
Cause: The Data Analyzer session has timed out.
Action: Run the utility again.

Troubleshooting

Importing a Large Number of Reports

If you use the Import Export utility to import a large number of reports (import file size of 16MB or more), the Java process for the Import Export utility might run out of memory and the utility might display an exception message. If the Java process for the Import Export utility runs out of memory, increase the memory allocation for the process. To increase the memory allocation for the Java process, increase the value for the -mx option in the script file that starts the utility.

Note: Back up the script file before you modify it.

To increase the memory allocation:
1. Locate the Import Export utility script file in the Data Analyzer utilities directory: <PCAEInstallationDirectory>/DataAnalyzer/import-exportutil
2. Open the script file with a text editor:
   Windows: ImportExport.bat
   UNIX: ImportExport.sh
3. Locate the -mx option in the Java command:
   java -ms128m -mx256m -jar repositoryImportExport.jar $*
4. Increase the value for the -mx option from 256 to a higher number, depending on the size of the import file. Tip: Increase the value to 512. If the utility still displays an exception, increase the value to 1024.
5. Save and close the Import Export utility script file.

Using SSL with the Import Export Utility

To use SSL, Data Analyzer needs a certificate that must be signed by a trusted certificate authority (CA), such as Verisign. When you run the Import Export utility, make sure that the URL you provide with the -l option starts with https:// and uses the correct port for the SSL connection.

By default, the trusted CAs are defined in the cacerts keystore file in the JAVA_HOME/jre/lib/security/ directory. If Data Analyzer uses a certificate signed by a CA defined in the default cacerts file, you do not need to specify the location of the trusted CA keystore when you run the Import Export utility. If Data Analyzer uses a certificate signed by a CA not defined in the default cacerts file, or if you have created your own trusted CA keystore, you must provide the location of the trusted keystore when you run the Import Export utility. To specify the location of the trusted CAs, add the following parameter to the Import Export utility script:
-Djavax.net.ssl.trustStore=<TrustedCAKeystore>
TrustedCAKeystore is the keystore for the trusted CAs.

Note: Back up the Import Export script file before you modify it.

To specify the location of the trusted CAs:
1. Locate the Import Export utility script in the Data Analyzer utilities directory. The default directory is <PCAEInstallationDirectory>/DataAnalyzer/import-exportutil/.
2. Open the script file with a text editor:
   Windows: ImportExport.bat
   UNIX: ImportExport.sh
3. Add the trusted CA parameter to the Java command that starts the ImportExport utility:
   java -ms128m -mx256m -Djavax.net.ssl.trustStore=<TrustedCAKeystore> -jar repositoryImportExport.jar
4. Save and close the Import Export utility file.
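For instance, if your CA is not in the default cacerts file, you might build a custom trusted-CA keystore with the JDK keytool and reference it when running the utility. This is a hypothetical sketch; the alias, file names, host, and port are placeholders:

```shell
# Import the CA certificate into a custom trusted-CA keystore
# (keytool prompts for a keystore password and for confirmation).
keytool -import -trustcacerts -alias company-ca \
    -file company-ca.cer -keystore trustedca.keystore

# The Java command in ImportExport.sh after adding the trustStore
# parameter; note the https:// URL and SSL port passed with -l.
java -ms128m -mx256m -Djavax.net.ssl.trustStore=trustedca.keystore \
    -jar repositoryImportExport.jar \
    -e report -f Reports.xml -u admin -p admin \
    -l https://secure.example.com:8443/ias
```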


View Data Analyzer log files for information on user and system activity. Provide the name. 74 Managing Logs. email address. 90 Configuring Display Settings for Groups and Users. 88 Configuring Departments and Categories. 81 Managing Delivery Settings. Register LDAP servers to enable users to access LDAP directory lists from Data Analyzer. View the configuration information of the machine hosting Data Analyzer. Register an outbound mail server to allow users to email reports and shared documents. Determine whether scroll bars appear in report tables. ♦ ♦ ♦ 73 . Query governing. Log files. report processing time. Contact information. Define upper limits on query time. and phone number of the Data Analyzer system administrator. 84 Setting Rules for Queries. Report settings. Delivery settings. 88 Configuring Report Headers and Footers. 73 Managing Color Schemes and Logos. Modify the color schemes. You can also configure alert delivery devices. and logos of Data Analyzer to match those of your organization. 91 Overview You can configure the following administrative settings: ♦ ♦ ♦ ♦ ♦ Color schemes. 83 Specifying Contact Information. 84 Viewing System Information. and number of table rows displayed.CHAPTER 9 Managing System Settings This chapter includes the following topics: ♦ ♦ ♦ ♦ ♦ ♦ ♦ ♦ ♦ ♦ ♦ ♦ Overview. images. LDAP settings. and receive alerts. System information. and logos. Users might find the administrator contact information useful in the event of a system problem. images. 85 Configuring Report Table Scroll Bars. 78 Managing LDAP Settings.

Display Settings. Edit the predefined color scheme and change the file name of the Logo Image URL field or the Login Page Image URL to the name of your image file. Copy the logo or login image file to the predefined images folder. Use any HTML hexadecimal color code to define colors. use green for the Images Directory field. ♦ Betton Books color scheme. 74 Chapter 9: Managing System Settings . For the Informatica color scheme. Logo Image URL. The EAR directory containing images for this color scheme is in the following location: /custom/images/standard This is the default image directory for Data Analyzer. Enter the name of the logo image file you want to use. You can set a default color scheme for all users and groups. Login Page Image URL. complete the following steps: 1. 2. You can also assign users and groups to specific color schemes. Enter the name of the login page image file that you want to use. Control display settings for users and groups. you can search for these objects by department or category on the Find tab. The color schemes and image files used in Data Analyzer are stored in the EAR directory. By default. Metadata configuration. Enter the following information in the predefined color scheme settings: ♦ Images Directory. Using a Predefined Color Scheme Data Analyzer provides the following predefined color schemes that you can use or modify: ♦ Informatica color scheme. Predefined color scheme folder name. Alternative predefined color scheme. Create the headers and footers printed in Data Analyzer reports. You can edit existing color schemes or create new color schemes. For the Betton Books color scheme. You can associate repository objects with a department or category to help you organize the objects. leave the Images Directory field blank. Data Analyzer references the image and logo files in the Data Analyzer images directory on the web server associated with the application server. 
Create department and category names for your organization.♦ ♦ Report header and footer. When you associate repository objects with a department or category. ♦ ♦ All file names are case sensitive. The EAR directory containing images for the Betton Books color scheme is in the following location: /custom/images/standard/color/green Adding a Logo to a Predefined Color Scheme To use a predefined color scheme with your own logo or login page image. using your own images and colors. This is the default Data Analyzer color scheme. You can modify or add color schemes and images in the EAR directory to customize the Data Analyzer color schemes and images for the organization. ♦ Managing Color Schemes and Logos A color scheme defines the look and feel of Data Analyzer. 3. the Informatica color scheme is the default color scheme for all users and groups in Data Analyzer.

You can also enter a URL for the logo and login image files. The URL can point to a logo file on the Data Analyzer machine or on another web server. If you specify a URL, use the forward slash (/) as a separator. For example, if the host name of the web server where you have the logo file is http://monet.PaintersInc.com, port 7001, enter the following URL in the Logo Image URL field:
http://monet.PaintersInc.com:7001/CompanyLogo.gif
Data Analyzer uses all the colors and images of the selected predefined color scheme with your logo or login page image.
To display the login page properly, the height of your login page image must be approximately 240 pixels. The width of your login page image must be approximately 1600 pixels, or the width of your monitor setting.

Editing a Predefined Color Scheme

You can edit the colors and image directories for predefined color schemes and preview the changes. If you modify a predefined color scheme, you might lose your changes when you upgrade to future versions of Data Analyzer.
To edit a predefined color scheme:
1. Click Administration > System Management > Color Schemes and Logos.
   The Color Schemes and Logos page displays the list of available color schemes.
2. To edit the settings of a color scheme, click the name of the color scheme.
   The Color Scheme page displays the settings of the color scheme. It also displays the directory for the images and the URLs for the background, login, and logo image files.
3. Optionally, enter file and directory information for color scheme images:
♦ Images Directory. Name of the color scheme directory where you plan to store the color and image files. If blank, Data Analyzer looks for the images in the default image directory.
♦ Background Image URL. Name of a background image file in the color scheme directory or the URL to a background image on a web server. If you specify a URL, use the forward slash (/) as a separator.
♦ Logo Image URL. Name of a logo image file in the color scheme directory or the URL to a logo image on a web server. If you specify a URL, use the forward slash (/) as a separator.
♦ Login Page Image URL. Name of the login page image file in the color scheme directory or the URL to a login image on a web server.
All file names are case sensitive.
4. Enter hexadecimal color codes to represent the colors you want to use.
   The color scheme uses the hexadecimal color codes for each display item. For more information about hexadecimal color codes, see "HTML Hexadecimal Color Codes" on page 121.
Table 9-1 shows the display items you can modify in the Color Scheme page:
Table 9-1. Display Items in the Color Scheme Page
Display Item    Description
Background      Background color of Data Analyzer.
Page Header     Page header of Data Analyzer.
Primary         Report heading on the Analyze tab.
Secondary       Report sub-heading on the Analyze tab.
Heading         Section heading, such as the container heading on the View tab.
Sub-Heading     Section sub-heading, such as the container sub-heading on the View tab.

Section         Background color for sections, such as forms on the Administration tab.
Odd Table Row   Odd rows in a list.
Even Table Row  Even rows in a list.
Selected Rows   Rows you select in the report table or on tabs such as the Find tab.
Primary Navigation Tab Colors   Tabs under the Primary Navigation tab, such as the View, Analyze, Create, Find, Administration, Alerts, and Manage Account tabs.
Secondary Navigation Colors     Menu items on the Administration tab, including Schema Design, Real-time Configuration, XML Export/Import, Scheduling, System Management, and Access Management, as well as pop-up windows and tabs with drop-down lists.
Button Colors   Buttons in Data Analyzer.
Tab Colors      Tabs include items such as the Define Report Properties tab in Step 5 of the Create Report wizard and the toolbar on the Analyze tab. Use the same color in Section for the Selected field in Tab Colors so that color flows evenly for each tab under the Primary Navigation tab.

5. To preview the choices, click Preview.
   The Color Scheme Preview window displays an example of the way Data Analyzer will appear with the color scheme.
6. Click Close to close the Color Scheme Preview window.
7. Click OK to save your changes.

Creating a Color Scheme

You can create a Data Analyzer color scheme. When you create a color scheme, you can use your own images and logos. To create a color scheme, complete the following steps:
1. Create a folder for the images and logo, and make sure it contains the new images.
2. Create a new color scheme in Data Analyzer and use the new folder as the Images Directory.

Step 1. Create a New Color Scheme Folder

Create a folder in the color schemes directory and copy the image files you want to use to this folder. Make sure Data Analyzer can access the images to use with the color scheme. Add the directory and files for the new color scheme under the default image directory. The name of the color scheme folder can be up to 10 characters.
To create a new color scheme folder:
1. In the EAR directory, create a folder for the new color scheme:
/custom/images/standard/color/
2. Copy your image files into the new folder.
For example, if you want to create a /CompanyColor directory for your new color scheme, copy your logo and image files into the new directory:
/custom/images/standard/color/CompanyColor
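The two steps above can be sketched from the command line. This is an illustration only: the EAR_DIR path and the CompColor folder name are placeholder assumptions, not values from your installation, and the 10-character folder-name limit is checked before anything is created.

```shell
#!/bin/sh
# Assumption: EAR_DIR points at the exploded Data Analyzer EAR directory.
EAR_DIR="${EAR_DIR:-/tmp/ear_demo}"
SCHEME="CompColor"   # color scheme folder name, 10 characters or fewer

# The color scheme folder name can be up to 10 characters.
if [ "${#SCHEME}" -gt 10 ]; then
    echo "error: color scheme folder name longer than 10 characters" >&2
    exit 1
fi

# Step 1: create the folder for the new color scheme.
mkdir -p "$EAR_DIR/custom/images/standard/color/$SCHEME"

# Step 2: copy your logo and image files into the new folder, e.g.:
# cp /path/to/images/*.gif "$EAR_DIR/custom/images/standard/color/$SCHEME/"
echo "created $EAR_DIR/custom/images/standard/color/$SCHEME"
```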

You must have image files for all buttons and icons that display in Data Analyzer, in GIF or JPG format. Since Data Analyzer references the image files to display them in Data Analyzer, the image files for your color scheme must have the same names and format as the image files for the predefined color schemes. The background and logo image files can have file names that you specify. All file names are case sensitive.

Step 2. Create a New Color Scheme in Data Analyzer

After you set up the folder for the images to use in a new color scheme, you can create the color scheme in Data Analyzer and use the new color scheme directory. On the Color Schemes page, set the colors you want to use for the color scheme and provide the new folder name for the images. The new color scheme folder must exist in the EAR directory for Data Analyzer to access it. If you do not set up new colors for the color scheme, Data Analyzer uses a default set of colors that may not match the colors of your image files.
To create a new color scheme in Data Analyzer:
1. Click Administration > System Management > Color Schemes and Logos.
   The Color Schemes and Logos page appears.
2. Click Add.
   The Color Scheme page appears.
3. Enter the name and description of the new color scheme.
4. In the Images Directory field, enter the name of the color scheme folder you created.
5. In the Background Image URL field, enter the file name of the background image you want to use.
   Make sure the image file is saved in the color scheme folder you created earlier.
6. In the Logo Image URL field, enter the file name of the logo image to use.
7. In the Login Page Image URL field, enter the file name of the login page image to use.
8. Enter the hexadecimal codes for the colors you want to use in the new color scheme.
   For more information about hexadecimal color codes, see "HTML Hexadecimal Color Codes" on page 121. For more information about display items on the Color Scheme page, see Table 9-1 on page 75.
9. Click Preview to preview the new color scheme colors.
10. Click OK to save the new color scheme.

Selecting a Default Color Scheme

You can select a default color scheme for Data Analyzer. Data Analyzer uses the selected color scheme as the default for the repository. If you do not specify a color scheme for a user or group, Data Analyzer uses the default color scheme. By default, Data Analyzer uses the Informatica color scheme.
To select a default color scheme:
1. Click Administration > System Management > Color Schemes and Logos.
   The Color Schemes and Logos page displays the list of available color schemes.
2. Select Default next to the color scheme name.
3. Click Apply.
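The hexadecimal codes entered for a color scheme are six-digit RGB values. As an illustration only — this helper is not part of Data Analyzer — a short script can check that a code is well formed before you type it into the Color Scheme page:

```python
import re

# A valid HTML hexadecimal color code is six hexadecimal digits,
# optionally preceded by "#", e.g. "336699" or "#336699".
HEX_COLOR = re.compile(r"^#?[0-9A-Fa-f]{6}$")

def is_valid_hex_color(code: str) -> bool:
    """Return True if 'code' is a six-digit hexadecimal color code."""
    return bool(HEX_COLOR.match(code))
```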

Assigning a Color Scheme

You can assign color schemes to users and groups. Assign specific color schemes when you want a user or group to use a color scheme other than the default color scheme. You can assign color schemes to users and groups when you edit the color scheme. You can also assign color schemes when you edit the user or group on the Access Management page.
When you assign a user and its group to different color schemes, the user color scheme takes precedence over the group color scheme. When a user belongs to more than one group, the color scheme for the primary group takes precedence over the other group color schemes. If the user does not have a primary group, Data Analyzer uses the default color scheme.
To assign a color scheme:
1. Click Administration > System Management > Color Schemes and Logos.
2. Click the name of the color scheme you want to assign.
3. To assign the color scheme to a user or group, click Edit.
   The Assign Color Scheme window appears.
4. Use the search options to produce a list of users or groups.
5. In the Query Results area, select the users or groups you want to assign to the color scheme, and click Add.
6. Click OK to close the dialog box.
   To assign additional users or groups, repeat steps 3 to 5.
7. Click OK to save the color scheme.

Managing Logs

Data Analyzer provides the following logs to track events and information:
♦ User log. Lists the location and login and logout times for each user.
♦ Activity log. Lists Data Analyzer activity, including the success or failure of the activity, the user requesting the activity, the objects used for the activity, and the duration of the request and activity. You can also configure it to log report queries.
♦ System log. Lists error, warning, informational, and debugging messages.
♦ JDBC log. Lists all repository connection activities.
♦ Global cache log. Lists error, warning, informational, and debugging messages about the size of the Data Analyzer global cache.

Viewing the User Log

With the user log, you can track user activity in Data Analyzer. Data Analyzer stores the user log entries in the repository. You can view, clear, and save the user log.
The user log lists the following information:
♦ Login name. The name of the user accessing Data Analyzer.
♦ Remote host. The host name accessing Data Analyzer, when available.
♦ Remote address. The IP address accessing Data Analyzer, when available.
♦ Login time. The date and time the user logged in, based on the machine running the Data Analyzer server.
♦ Logoff time. The date and time the user logged out, based on the machine running the Data Analyzer server.

♦ Duration. The difference between login and logout times for each user. If the user has not logged out, duration displays the length of time the user has been logged in to Data Analyzer.
♦ User role. The role of the user. To view the role of the user, hold the pointer over the user name.
To view the user log, click Administration > System Management > User Log. By default, Data Analyzer displays up to 1,000 rows in the user log. You can change the number of rows by editing the value of the logging.user.maxRowsToDisplay property in DataAnalyzer.properties. For more information about editing DataAnalyzer.properties, see "Configuration Files" on page 129. If you sort the user log by a column, Data Analyzer sorts on all user log data, not just the currently displayed rows.

Saving and Clearing the User Log

You can save the user log to an XML file. You might save a user log before clearing it to keep a record of user access.
To save a user log:
1. Click Administration > System Management > User Log.
2. Click Save, and then follow the prompts to save the log to disk.
You can also clear the user log. When you clear the user log, Data Analyzer deletes the log entries from the repository, clearing all entries except for users who have logged in during the past 24 hours and have not yet logged off.
To clear the user log:
1. Click Administration > System Management > User Log.
2. Click Clear.

Configuring and Viewing the Activity Log

With the activity log, you can track the activity requests for your Data Analyzer server, such as the number of requests to view or run reports. Data Analyzer stores the activity log entries in the repository.
By default, the activity log tracks the following information:
♦ Activity ID. The identification number of the activity.
♦ User name. The Data Analyzer user requesting the activity.
♦ User role. The role of the user. To view the role of the user, hold the pointer over the user name.
♦ Activity. The requested activity, such as Execute or Update.
♦ Start time. The time the user issued the activity request.
♦ Duration. The overall time, in milliseconds, it takes to perform the request.
♦ Object name. The name of the object requested.
♦ Object type. The type of object requested, such as report.
♦ Status. The status of the activity, such as Success or Failure.
♦ Request ID. The identification number of the request that the activity belongs to.
♦ Source. The source type of the activity request, such as web, API, or scheduler.
♦ DB access. The time, in milliseconds, Data Analyzer takes to send the activity request to the data warehouse. Use this statistic to optimize database performance and schedule reports.
♦ SQL. (XML file only.) The SQL statement used to run a report.
♦ Tables. (XML file only.) The tables used in the SQL statement for a report.
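For example, to display 5,000 user log rows instead of the default 1,000, you would edit the property in DataAnalyzer.properties. A sketch — the value 5000 is only an example:

```properties
# DataAnalyzer.properties
# Maximum number of user log rows Data Analyzer displays (default 1000).
logging.user.maxRowsToDisplay=5000
```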

To view the activity log, click Administration > System Management > Activity Log. By default, Data Analyzer displays up to 1,000 rows in the activity log. You can change the number of rows by editing the value of the logging.activity.maxRowsToDisplay property in the DataAnalyzer.properties file. If you sort the activity log by a column, Data Analyzer sorts on all activity log data, not just the currently displayed rows.
You can configure the activity log to provide the query used to perform the activity and the database tables accessed to complete the activity. This additional information appears in the XML file generated when you save the activity log.
To configure the activity log:
1. Click Administration > System Management > Log Configuration.
2. Click SQL in the Activity Log area to log queries. To log the tables accessed in the query, select both SQL and Tables.
Data Analyzer logs the additional details. To view the information, save the activity log to file.

Saving and Clearing the Activity Log

You can save the activity log to an XML file. You might save the activity log to file before you clear it to keep a record of Data Analyzer activity. You might also save the activity log to view information about the SQL statements and tables used for reports.
To save an activity log:
1. Click Administration > System Management > Activity Log.
2. Click Save, and then follow the prompts to save the log to disk.
You can clear the activity log of all entries to free space and optimize repository performance. Clear the activity log on a regular basis. When you clear the activity log, Data Analyzer clears all entries from the log.
To clear the activity log:
1. Click Administration > System Management > Activity Log.
2. Click Clear.

Configuring the System Log

Data Analyzer generates a system log file named ias.log, which logs messages produced by Data Analyzer. You can view the system log file with any text editor. You can locate the system log file in the following directory:
<PowerCenter_install folder>/server/tomcat/jboss/server/informatica/log/<Reporting Service Name>
By default, the system log displays error and warning messages. You can choose to display the following messages in the system log:
♦ Errors
♦ Warnings
♦ Information
♦ Debug
To specify the messages displayed in the system log file:
1. Click Administration > System Management > Log Configuration.
2. In the System Log area, select the messages you want to display.
You can also change the name of the log file and the directory where it is saved by editing the log4j.xml file.

To configure the name and location of the system log file:
1. Locate the log4j.xml file in the following directory:
<PowerCenter_install folder>/server/tomcat/jboss/server/informatica/ias/<Reporting Service Name>/META-INF
This folder is available after you enable the Reporting Service and the Data Analyzer instance is started.
2. Open the file with a text editor and locate the following lines:
<appender name="IAS_LOG" class="org.jboss.logging.appender.DailyRollingFileAppender">
<param name="File" value="${jboss.server.home.dir}/log/<Reporting Service Name>/ias.log"/>
3. Modify the value of the File parameter to specify the name and location for the log file.
For example, if you want to save the Data Analyzer system logs to a file named mysystem.log in a folder called Log_Files in the D: drive, modify the File parameter to include the path and file name:
<param name="File" value="d:/Log_Files/mysystem.log"/>
If you specify a path, use the forward slash (/) or two backslashes (\\) as the file separator. Data Analyzer does not support a single backslash as a file separator.
4. Save the file.
Your changes take effect in Data Analyzer within several minutes.

Configuring the JDBC Log

Data Analyzer generates a JDBC log file that lists all repository connection activities. You can view the log file with any text editor. If you installed JBoss Application Server using the PowerCenter installer, locate the JDBC log file in the following directory:
<PowerCenter_install folder>/server/tomcat/jboss/bin/
You can change the name of the file and the directory where it is saved by editing the jdbc.log.file property in the DataAnalyzer.properties file. You can also determine whether Data Analyzer appends data to the file or overwrites the existing JDBC log file by editing the jdbc.log.append property in DataAnalyzer.properties.

Managing LDAP Settings

Lightweight Directory Access Protocol (LDAP) is a set of protocols for accessing information directories. You can use LDAP in the following ways:
♦ Authentication. Use PowerCenter LDAP authentication to authenticate the Data Analyzer users and groups. For more information about LDAP authentication, see the PowerCenter Administrator Guide.
♦ Access LDAP directory contacts. Use the LDAP settings in Data Analyzer to access contacts within the LDAP directory service when you send email from Data Analyzer. After you set up the connection to the LDAP directory service, users can email reports and shared documents to LDAP directory contacts.
To access contacts in the LDAP directory service, add the LDAP server on the LDAP Settings page. When you add an LDAP server, you must provide a value for the BaseDN property. In the BaseDN property, enter the Base distinguished name entries for your LDAP directory. The Base distinguished name entries define the type of information that is stored in the LDAP directory. If you do not know the value for BaseDN, contact your LDAP system administrator.

If you use Microsoft Active Directory as the LDAP directory, you must choose System authentication as the type of authentication on the LDAP Settings page. You must enter a valid system name and system password for the LDAP server. Contact your LDAP system administrator for the system name and system password.
To add an LDAP server:
1. Click Administration > System Management > LDAP Settings.
   The LDAP Settings page appears.
2. Click Add.
3. Enter the following information.
   Table 9-2 lists the LDAP server settings you can enter:
Table 9-2. LDAP Server Settings
Setting          Description
Name             Name of the LDAP server you want to configure.
URL              URL for the server. Use the following format: ldap://machine.domain.com
BaseDN           Base distinguished name entry that identifies the type of information stored in the LDAP directory. If you do not know the BaseDN, contact your LDAP system administrator.
Authentication   Authentication method your LDAP server uses. Select Anonymous if the LDAP server allows anonymous authentication. Select System if your LDAP server requires system authentication or if you use Microsoft Active Directory as an LDAP directory.
System Name      System name of the LDAP server. Required when using System authentication.
System Password  System password for the LDAP server. Required when using System authentication.
4. Click OK to save the changes.
The following example lists the values you need to enter on the LDAP Settings page for an LDAP server running Microsoft Active Directory:
Name: Test
URL: ldap://machine.company.com
BaseDN: dc=company_name,dc=com
Authentication: System
System Name: cn=Admin,cn=users,dc=company_name,dc=com
System Password: password
The following example lists the values you need to enter on the LDAP Settings page for an LDAP server running a directory service other than Microsoft Active Directory:
Name: Test
URL: ldap://machine.company.com
BaseDN: dc=company_name,dc=com
Authentication: Anonymous
To modify the settings of an LDAP server, click the name of the LDAP server on the LDAP Settings page.
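A distinguished name such as the System Name above is a comma-separated list of attribute=value pairs, read from most to least specific. The following helper is an illustration only (it is not part of Data Analyzer, and it deliberately ignores escaped commas inside values); it can be handy for sanity-checking a DN before you enter it:

```python
def parse_dn(dn: str) -> list[tuple[str, str]]:
    """Split an LDAP distinguished name into (attribute, value) pairs.

    Simplified parser for illustration: it does not handle
    escaped commas (\\,) inside attribute values.
    """
    pairs = []
    for part in dn.split(","):
        attr, _, value = part.strip().partition("=")
        pairs.append((attr, value))
    return pairs
```

For example, parse_dn("cn=Admin,cn=users,dc=company_name,dc=com") yields the four components of the Active Directory system name shown above.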

Managing Delivery Settings

You can determine how users access Data Analyzer and which functions they can access with delivery settings. You can configure the following delivery settings:
♦ Mail server. Allows Data Analyzer users to email reports and shared documents, and receive email alerts.
♦ External URL. Allows users to connect to Data Analyzer from the internet.
♦ SMS/Text messaging and mobile carriers. Allows users to register an SMS/Text pager or phone as an alert delivery device.

Configuring the Mail Server

The mail server provides outbound email access for Data Analyzer and users. With the outbound mail server configured, users can email reports and shared documents. You can configure one outbound mail server at a time. The mail server you configure must support Simple Mail Transfer Protocol (SMTP). Depending on the mail server, you might need to create a mail server connector before configuring the mail server.
To configure the mail server:
1. Click Administration > System Management > Delivery Settings.
   The Delivery Settings page appears.
2. In the Mail Server field, enter the URL to the outbound mail server.
3. Click Apply.

Configuring the External URL

The external URL links Data Analyzer with your proxy server. Configure an external URL so that users can access Data Analyzer from the internet. Enter the URL for the proxy server you configured during installation.
To configure the external URL:
1. Click Administration > System Management > Delivery Settings.
   The Delivery Settings page appears.
2. In the External URL field, enter the URL for the proxy server.
   The URL must begin with http:// or https://.
3. Click Apply.

Configuring SMS/Text Messaging and Mobile Carriers

To allow users to receive one-way SMS/Text message alerts on a phone or pager, you must configure SMS/Text messaging. To receive SMS/Text message alerts, the users also need to select a mobile carrier. By default, Data Analyzer configures the following mobile carriers:
♦ ATT
♦ Cingular
♦ Nextel
♦ Sprint
♦ Verizon
You can configure additional mobile carriers by entering connection information for the carriers. For more information about using an SMS/Text pager or phone as an alert device, see the Data Analyzer User Guide.
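SMS/Text alert addresses combine a device identifier with the carrier's domain and extension, as in myusername@mobile.att.net for ATT. The composition below is a sketch for illustration only — the joining rule is an assumption inferred from that example, and the helper is not part of Data Analyzer:

```python
def sms_address(device_id: str, carrier_domain: str) -> str:
    """Compose an SMS/Text delivery email address from a device
    identifier and a carrier's domain and extension (assumed rule)."""
    return f"{device_id}@{carrier_domain}"
```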

To configure SMS/Text messaging and mobile carriers:
1. Click Administration > System Management > Delivery Settings.
   The Delivery Settings page displays.
2. In the Delivery Settings area, select SMS/Text Messaging.
3. To add a mobile carrier, in the Mobile Carriers task area, enter the name and address for the mobile carrier.
   In the address field, enter the domain and extension of the email address associated with your device. For example, if the wireless email address for ATT is myusername@mobile.att.net, you enter mobile.att.net. If you do not know the domain and extension, see your wireless carrier documentation.
4. Click Add.
   Data Analyzer adds the mobile carrier to the list of mobile carriers.
5. Click Apply.

Specifying Contact Information

When a system problem occurs, users may need to contact the system administrator. You can specify contact information for the system administrator in the System Management area.
To specify contact information:
1. Click Administration > System Management > Contact Information.
2. Enter the name, phone number, and email address of the system administrator.
3. Click Apply.

Viewing System Information

On the System Information page, you can view information about Data Analyzer and the machine that hosts it.
To view system information:
Click Administration > System Management > System Information.
The System Information page contains the following sections:
♦ System Information. Lists the Data Analyzer version and build, repository version, JDBC connection string, user name, database server type, database version, driver name, and driver version.
♦ Operating System. Displays the operating system, version, and architecture of the machine hosting Data Analyzer.
♦ Java. Displays the following information about the Java environment on the machine hosting Data Analyzer:
− Java Version. The version of the Java Virtual Machine (JVM).
− Vendor. The Java vendor.
− Vendor URL. The Java vendor web site.
− Home. The home directory of the JVM.
− Classpath. A list of the paths and files contained in the Java classpath system variable.
− Application Server. The version of the application server that runs Data Analyzer.
− Servlet API. The version of the Java Servlet API.

Setting Rules for Queries

You can configure the time limit on each SQL query for a report, the time limit on processing a report, and the maximum number of rows that each query returns. You can set up these rules for querying at the following levels:
♦ System
♦ Group
♦ User
♦ Report
When you change the system query governing setting or the query governing setting for a group or user, you must log out of Data Analyzer and log in again for the new query governing settings to take effect.

Setting Query Rules at the System Level

You can specify the query governing settings for all reports in the repository. These settings apply to all reports, unless you override them at the group, user, or report level.
To set up system query governing rules:
1. Click Administration > System Management > Query Governing.
   The Query Governing page appears.
2. Enter the query governing rules.
   Table 9-3 describes the system query governing rules you can enter:
Table 9-3. System Query Governing Settings
Setting                        Description
Query Time Limit               Maximum amount of time for each SQL query. Default is 240 seconds.
Report Processing Time Limit   Maximum amount of time allowed for the application server to run the report. Report processing time includes the time to run all queries for the report. You may have more than one SQL query for the report. Default is 600 seconds.
Row Limit                      Maximum number of rows SQL returns for each query. If a query returns more rows than the row limit, Data Analyzer displays a warning message and drops the excess rows. Default is 20,000 rows.
3. Click Apply.

Setting Query Rules at the Group Level

You can specify query governing settings for all reports belonging to a specific group. Query governing settings for the group override system query governing settings. If a user belongs to one or more groups in the same level in the group hierarchy, Data Analyzer uses the largest query governing setting from each group.
To set up group query governing rules:
1. Click Administration > Access Management > Groups.
2. Click Edit next to the group whose properties you want to modify.
3. In the Query Governing section, clear the Use Default Settings option.
   When you clear this option, Data Analyzer uses the query governing settings entered on this page. When this option is selected, Data Analyzer uses the system query governing settings.
4. Enter the query governing settings you want to use.

   For more information about each setting, see Table 9-3 on page 85.
5. Click OK.
   Data Analyzer saves the group query governing settings.

Setting Query Rules at the User Level

You can specify query governing settings for all reports belonging to a specific user. Query governing settings for the user override group and system query governing settings.
To set up user query governing rules:
1. Click Administration > Access Management > Users.
2. Click the user whose properties you want to modify.
3. In the Query Governing section, clear the Use Default Settings option.
   When you clear this option, Data Analyzer uses the query governing settings entered on this page. When this option is selected, Data Analyzer uses the query governing settings for the group assigned to the user.
4. Enter the query governing settings you want to use.
   For more information about each setting, see Table 9-3 on page 85.
5. Click OK.
   Data Analyzer saves the user query governing settings.

Query Governing Rules for Users in Multiple Groups
If you specify query governing settings for a user, Data Analyzer uses the query governing setting when it runs reports for the user. If you do not specify query governing settings for a user, Data Analyzer uses the query governing settings for the group that the user belongs to. If a user belongs to multiple groups, Data Analyzer assigns the user the least restrictive query governing settings available. Data Analyzer ignores groups with the system default query governing settings. For example, you have not specifically configured query governing settings for a user. The user belongs to three groups with the following query governing settings:
Group     Row Limit                          Query Time Limit
Group 1   25 rows                            30 seconds
Group 2   Default query governing settings   Default query governing settings
Group 3   18 rows                            120 seconds

Data Analyzer does not consider Group 2 in determining the group query governing settings to use for the user reports. For the row limit, Data Analyzer uses the setting for Group 1 since it is the least restrictive setting. For query time limit, Data Analyzer uses the setting for Group 3 since it is the least restrictive setting.
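The selection rule in this example can be sketched as follows. This is an illustration of the documented behavior, not Data Analyzer's actual code: groups still using the system default settings are ignored, and each limit is taken as the largest — least restrictive — value across the remaining groups.

```python
def resolve_query_governing(groups):
    """Pick the least restrictive query governing settings for a user.

    'groups' maps group name -> (row_limit, query_time_limit_seconds),
    or None for a group that uses the system default settings.
    Returns None if every group uses the defaults.
    """
    explicit = [s for s in groups.values() if s is not None]
    if not explicit:
        return None  # fall back to the system query governing settings
    row_limit = max(s[0] for s in explicit)    # most rows allowed
    time_limit = max(s[1] for s in explicit)   # most time allowed
    return (row_limit, time_limit)

# The example above: Group 2 uses the defaults and is ignored,
# so the user gets the 25-row limit and the 120-second time limit.
settings = resolve_query_governing({
    "Group 1": (25, 30),
    "Group 2": None,
    "Group 3": (18, 120),
})
```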

Setting Query Rules at the Report Level
You can specify query governing settings for a specific report. Query governing settings for a specific report override group, user, and system query governing settings.
To set up report query governing rules:
1. Click the Find tab.
2. Click the report whose properties you want to modify.
3. Click Edit.
4. Click Publish.
5. On the Report Properties tab, click More Options.
6. In the Query Governing section, clear the Use Default Settings option.
   When you clear this option, Data Analyzer uses the query governing settings entered on this page. When this option is selected, Data Analyzer uses the query governing settings for the user.
7. Enter the query governing settings you want to use.
   For more information about each setting, see Table 9-3 on page 85.
8. Click Save.

Configuring Report Table Scroll Bars
You can configure report tables to appear with a scroll bar. When you enable the Show Scroll Bar on Report Table option, Data Analyzer displays a scroll bar when data in a report table extends beyond the size of the browser window. When the option is disabled, you use the browser scroll bar to navigate large report tables. By default, Data Analyzer displays scroll bars in report tables.
To change report table scroll bar display:
1. Click Administration > System Management > Report Settings.
   The Report Settings page appears.
2. To allow scroll bars, select Show Scroll Bar on Report Table. To disable scroll bars, clear the option.
3. Click Apply.

Configuring Report Headers and Footers
In the Header and Footer page, you can configure headers and footers for reports. You can configure Data Analyzer to display text, images, or report information such as report name. Headers and footers display on the report when you complete the following report tasks:
♦ Print. Headers and footers display in the printed version of the report.
♦ Export. Headers and footers display when you export to an HTML or PDF file.
♦ Broadcast. Headers and footers display when you broadcast a report as an HTML, PDF, or Excel file.
♦ Archive. Headers and footers display when you archive a report as an HTML, PDF, or Excel file.
♦ Email. Headers and footers display when you email a report as an HTML or PDF file.

You can display text or images in the header and footer of a report. When you select the headers and footers to display, preview the report to verify that the headers and footers display properly with enough space between text or images. Table 9-4 lists the options you can select to display in the report headers and footers:
Table 9-4. Display Options for Report Headers and Footers
Header/Footer    Display Options
Left Header      Text or image file.
Center Header    Text.
Right Header     Text.


Table 9-4. Display Options for Report Headers and Footers (continued)
Header/Footer    Display Options
Left Footer      One or more of the following report properties. Select report properties to display.
                 - User Name. Name of the user. Users can specify their names on the Manage Account tab. If a user specifies a first name, middle name, or last name, Data Analyzer displays the specified name in the footer.
                 - Name. Name of the report.
                 - Printed On. Date and time when you print, export, broadcast, archive, or email the report.
                 - Last Update. Date when the report was last updated.
Center Footer    Text and Page Number. Select an option and enter text to display. Or select to display both.
Right Footer     Text or image file. Select to display text or image.

The image files you display in the left header or the right footer of a report can be any image type supported by your browser. By default, Data Analyzer looks for the header and footer image files in the image file directory for the current Data Analyzer color scheme. The report header and footer image files are stored with the color scheme files in the EAR directory. If you want to modify or use a new image for the left header or right footer, you must update the images in the EAR directory. If you want to use an image file in a different location, enter the complete URL for the image when you configure the header or footer. For example, if the host name of the web server where you saved the Header_Logo.gif image file is monet.PaintersInc.com, port 7001, enter the following URL:
http://monet.PaintersInc.com:7001/Header_Logo.gif
If Data Analyzer cannot find the header or footer image in the color scheme directory or the URL, Data Analyzer does not display any image for the report header or footer.
When you enter a large amount of text in a header or footer, Data Analyzer shrinks the font to fit the text in the allotted space by default. You can use the PDF.HeaderFooter.ShrinktoWidth property in the DataAnalyzer.properties file to determine how Data Analyzer handles long headers and footers. You can also configure Data Analyzer to keep header and footer text the configured font size, allowing Data Analyzer to display only the text that fits in the header or footer.

To configure report headers and footers:
1. Click Administration > System Management > Header and Footer.
   The Report Header and Footer page appears.

2. To configure report headers, select the headers you want to display and enter the header text.
   To use text for left headers, select the top field and enter the text to display. To use an image for the left header, select the lower field and enter the name of an image file in the Data Analyzer EAR file or specify a URL for the image. If the image is not in the default image directory, specify the complete URL.
3. To configure report footers, select the footer you want to display. For more information about the header and footer display options, see Table 9-4 on page 88.
   For left footers, you can choose properties specific to the report. To use text for the right footer, select the top field and enter the text to use. To use an image for the right footer, select the lower field and enter the name of the file to use. If the image is not in the default image directory, specify the complete URL. Data Analyzer looks for the header and footer images in the image directory for the color scheme.
4. Click Preview to see how the report will look with the headers and footers you selected.
   Adobe Acrobat launches in a new browser window to display a preview of the report.
   Note: If you make more changes in the report header and footer configuration, close the preview window and click Preview again to see the new report header and footer.
5. Close the preview window.
6. On the Report Header and Footer page, click Apply to set the report header and footer. Or click Cancel to discard the changes to the headers and footers.

Configuring Departments and Categories
You can associate repository objects with a department or category to organize repository objects. You might use department names to organize repository objects according to the departments in your organization, such as Human Resource and Development. You might use category names to organize repository objects according to object characteristics, such as Quarterly or Monthly. Associating repository objects with a department or category can also help you search for these objects on the Find tab.
To configure department and category:
1. Click Administration > System Management > Metadata Configuration.
   The Categories and Departments page appears.
2. In the Departments area, enter the name of the department.
3. Click Add.
   The department name appears in the list in the Departments area.
4. In the Categories area, enter the name of the category.
5. Click Add.
   The category name appears in the list in the Categories area.
6. Click OK.
   Data Analyzer saves the department or category names you added. You can associate the category or department you created with repository objects.

Configuring Display Settings for Groups and Users
By default, if you have more than 100 groups or users, Data Analyzer displays a Search box so you can find the group or user you want to edit. If Data Analyzer returns more than 1,000 groups or users in the search results, refine the search criteria. You can customize the way Data Analyzer displays users or groups.
Data Analyzer provides the following properties in a file named web.xml so you can configure the user or group display according to your requirements:
♦ showSearchThreshold. Determines the number of groups or users Data Analyzer displays before displaying the Search box. Default is 100.
♦ searchLimit. Determines the maximum number of groups or users in the search results before you must refine the search criteria. Default is 1,000.
Note: The web.xml file is stored in the EAR directory. Back up the web.xml file before you modify it.
To change group or user display options in web.xml:
1. Open the /custom/properties/web.xml file with a text editor and locate the line containing the following property: showSearchThreshold
   The value of the showSearchThreshold property is the number of groups or users Data Analyzer displays without providing the Search box.
   <init-param>
   <param-name>InfUserAdminUIConfigurationStartup.com.informatica.ias.useradmin.showSearchThreshold</param-name>
   <param-value>100</param-value>
   </init-param>
2. Change the value of the showSearchThreshold property according to your requirements.
3. Locate the line containing the following property: searchLimit
   The value of the searchLimit property is the maximum number of groups or users in the search result before you must refine the search criteria.
   <init-param>
   <param-name>InfUserAdminUIConfigurationStartup.com.informatica.ias.useradmin.searchLimit</param-name>
   <param-value>1000</param-value>
   </init-param>
4. Change the value of the searchLimit property according to your requirements.
5. Save and close web.xml.
6. Restart Data Analyzer.
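The manual web.xml edit described in this section can also be scripted. The helper below is a hypothetical sketch using only the Python standard library; the sample document is a minimal stand-in for the real web.xml (which contains many more elements), and the property name is the one given in the text:

```python
# Hypothetical helper that changes an <init-param> value in a web.xml-style
# document, mirroring the manual edit described in the procedure above.
import xml.etree.ElementTree as ET

def set_init_param(xml_text, name, value):
    root = ET.fromstring(xml_text)
    for param in root.iter("init-param"):
        if param.findtext("param-name", "").strip() == name:
            param.find("param-value").text = str(value)
    return ET.tostring(root, encoding="unicode")

# Minimal stand-in for the real web.xml, not the actual file contents.
sample = (
    "<web-app><init-param>"
    "<param-name>InfUserAdminUIConfigurationStartup.com.informatica.ias."
    "useradmin.showSearchThreshold</param-name>"
    "<param-value>100</param-value>"
    "</init-param></web-app>"
)

updated = set_init_param(
    sample,
    "InfUserAdminUIConfigurationStartup.com.informatica.ias.useradmin.showSearchThreshold",
    200,
)
print("<param-value>200</param-value>" in updated)  # -> True
```

Remember to back up web.xml before running any such script against it, and restart Data Analyzer afterward.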


CHAPTER 10

Working with Data Analyzer Administrative Reports

This chapter includes the following topics:
♦ Overview, 93
♦ Setting Up the Data Analyzer Administrative Reports, 94
♦ Using the Data Analyzer Administrative Reports, 97

Overview
Data Analyzer provides a set of administrative reports that enable system administrators to track user activities and monitor processes. The reports provide a view into the information stored in the Data Analyzer repository. They include details on Data Analyzer usage and report schedules and errors.
The Data Analyzer administrative reports use an operational schema based on tables in the Data Analyzer repository. They require a data source that points to the Data Analyzer repository. They also require a data connector that includes the Data Analyzer administrative reports data source and operational schema.
After you set up the Data Analyzer administrative reports, you can view and use the reports just like any other set of reports in Data Analyzer. If you need additional information in a report, you can modify it to add metrics or attributes. You can add charts or indicators, or change the format of any report. You can enhance the reports to suit your needs and help you manage the users and processes in Data Analyzer more efficiently.
You can view the administrative reports in two areas:
♦ Administrator's Dashboard. On the Administrator's Dashboard, you can quickly see how well Data Analyzer is working and how often users log in.
♦ Data Analyzer Administrative Reports folder. You can access all administrative reports in the Data Analyzer Administrative Reports public folder under the Find tab.

Administrator's Dashboard
The Administrator's Dashboard displays the indicators associated with the administrative reports. The Administrator's Dashboard has the following containers:
♦ Today's Usage. Provides information on the number of users who logged in for the day, the number of reports accessed in each hour for the day, and any errors encountered when Data Analyzer runs cached reports.

♦ Historical Usage. Displays the users who logged in the most number of times during the month. Also provides reports on the most and least accessed reports for the year, the longest running on-demand reports, and the longest running cached reports for the current month.
♦ Future Usage. Lists the cached reports in Data Analyzer and when they are scheduled to run next.
♦ Admin Reports. Provides a report on the Data Analyzer users who have never logged in.

Data Analyzer Administrative Reports Folder
The Data Analyzer Administrative Reports folder stores all the administrative reports. You can view, open, and run reports from this folder.

Setting Up the Data Analyzer Administrative Reports
Informatica ships a set of prepackaged administrative reports for Data Analyzer. You must enable the Reporting Service and access the Data Analyzer URL to set up the administrative reports. After you create a Reporting Service in the PowerCenter Administration Console and the corresponding Data Analyzer instance is running properly, you can set up the administrative reports on Data Analyzer.
To set up the administrative reports, complete the following steps:
1. Create a data source for the Data Analyzer repository. For more information, see "Step 1. Set Up a Data Source for the Data Analyzer Repository" on page 94.
2. Import the administrative reports to the Data Analyzer repository. For more information, see "Step 2. Import the Data Analyzer Administrative Reports" on page 95.
3. Add the repository data source to a data connector. For more information, see "Step 3. Add the Data Source to a Data Connector" on page 95.
4. Add the administrative reports to a schedule. For more information, see "Step 4. Add the Administrative Reports to Schedules" on page 96.

Step 1. Set Up a Data Source for the Data Analyzer Repository
The administrative reports provide information on the Data Analyzer processes and usage. The information comes from the Data Analyzer repository. The administrative reports display information from the Data Analyzer repository. To run the administrative reports, you need a data connector that contains the data source to the repository. You must create a data source that points to the Data Analyzer repository, and then add the data source to a data connector.
Note: If you have a data source that points to the Data Analyzer repository, you can skip this step and use the existing data source for the administrative reports.
To create the repository data source:
1. Click Administration > Schema Design > Data Sources.
2. On the Data Sources page, click Add.
   The Data Source page appears.
3. Select JDBC Data Source.
4. Enter a name and description for the data source.
5. Select the server type of your Data Analyzer repository.

The server type list includes the following databases:
♦ Oracle. Select to connect to an Oracle repository.
♦ SQL Server. Select to connect to a Microsoft SQL Server repository.
♦ Sybase ASE. Select to connect to a Sybase repository.
♦ DB2. Select to connect to an IBM DB2 repository.
♦ DB2 (OS/390). Data Analyzer does not support a DB2 repository on OS/390.
♦ DB2 (AS/400). Data Analyzer does not support a DB2 repository on AS/400.
♦ Teradata. Data Analyzer does not support a Teradata repository.
♦ Other. Select if you want to use a different driver or you have a repository that requires a different driver than those provided by Data Analyzer.
Data Analyzer provides JDBC drivers to connect to the Data Analyzer repository and data warehouse. When you select the server type, Data Analyzer supplies the driver name and connection string format for the JDBC drivers that Data Analyzer provides. When you select Other, you must provide the driver name and connection string.
6. Customize the JDBC connection string with the information for your Data Analyzer repository database.
7. Enter the user name and password to connect to the repository database.
8. Test the connection.
   If the connection fails, verify that the repository database information is correct. Consult your database administrator if necessary.
9. Click OK.

Step 2. Import the Data Analyzer Administrative Reports
Before you import the Data Analyzer administrative reports, ensure that the Reporting Service is enabled and the Data Analyzer instance is running properly.
Import the XML files in the <PowerCenter_install folder>/DA-tools/AdministrativeReports folder to the Data Analyzer repository. The XML files contain the schemas, schedules, dashboards, and database-specific global variables that you need to run the administrative reports. For more information about importing XML files, see "Importing Objects to the Repository" on page 49.

Step 3. Add the Data Source to a Data Connector
Data Analyzer uses a data connector to connect to a data source and read the data for a report. Typically, Data Analyzer uses the system data connector to connect to all the data sources required for Data Analyzer reports. If Data Analyzer does not have a data connector, you must create one before running the Data Analyzer administrative reports. For more information about data connectors, see the Data Analyzer Schema Designer Guide.
To enable Data Analyzer to run the administrative reports, add the administrative reports data source to the system data connector. If you have several data connectors and you want to use a specific data connector for the administrative reports, add the administrative reports data source to the specific data connector.
To add the administrative reports data source to the system data connector:
1. Click Administration > Schema Design > Data Connectors.
   The Data Connectors page appears.
2. Click the name of the system data connector.
   Data Analyzer displays the properties of the system data connector.
3. In the Additional Schema Mappings section, click Add.

   Data Analyzer displays the additional schema mapping for the system data connector. Data Analyzer expands the section and displays the available schemas in the repository.
4. In the Data Source list, select the administrative reports data source you created earlier.
5. In the Available Schemas section, select PA_Reposit and click Add >>.
   The PA_Reposit operational schema is one of the schemas installed by the PowerCenter Reports installer.
6. Click OK.
   You can now run the administrative reports using the system data connector.

Step 4. Add the Administrative Reports to Schedules
Data Analyzer provides a set of schedules that you can use to run the administrative reports on a regular basis. To have the reports and indicators regularly updated, you can run the administrative reports on specific schedules. After you import all the necessary objects for the administrative reports, verify that the cached reports are assigned to the appropriate schedules.
To add the administrative reports to schedules:
1. Click the Find tab.
2. In the folders section of the Find tab, click Public Folders.
3. Locate and click the folder named Data Analyzer Administrative Reports.
   The public folder named Data Analyzer Administrative Reports contains the administrative reports.
4. Select a report to add to a schedule.
5. Click Edit.
   The report appears in the Create Report wizard.
6. Click Publish.
7. On the Properties tab, select Cached, and then select Hourly Refresh from the list of schedules.
   The Hourly Refresh schedule is one of the schedules installed by the PowerCenter Reports installer. The Midnight Daily schedule is one of the schedules created when you install Data Analyzer.
8. Save the report.
9. Repeat steps 1 to 8 to verify that the following administrative reports are assigned to the appropriate schedules:

Report                                                     Schedule
Todays Logins                                              Hourly Refresh
Todays Report Usage by Hour                                Hourly Refresh
Top 5 Logins (Month To Date)                               Midnight Daily
Top 5 Longest Running On-Demand Reports (Month To Date)    Midnight Daily
Top 5 Longest Running Scheduled Reports (Month To Date)    Midnight Daily
Total Schedule Errors for Today                            Hourly Refresh

After you complete the steps to add the reports to the schedules, you might want to review the list of reports in the Data Analyzer Administrative Reports folder to make sure that the cached reports have been added to the correct schedule.
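When you customize the JDBC connection string in Step 1, the format depends on the driver. The templates below are common generic JDBC URL formats, shown only as an illustration; use the exact driver name and connection string format that Data Analyzer supplies for your server type:

```python
# Illustrative JDBC connection-string templates for a few common server types.
# These are generic driver formats, not necessarily the exact strings that
# Data Analyzer supplies; treat host, port, and database names as placeholders.
TEMPLATES = {
    "Oracle": "jdbc:oracle:thin:@{host}:{port}:{db}",
    "SQL Server": "jdbc:sqlserver://{host}:{port};databaseName={db}",
    "DB2": "jdbc:db2://{host}:{port}/{db}",
}

def connection_string(server_type, host, port, db):
    """Fill a template with the repository database details."""
    return TEMPLATES[server_type].format(host=host, port=port, db=db)

print(connection_string("Oracle", "dbhost", 1521, "DA_REP"))
# -> jdbc:oracle:thin:@dbhost:1521:DA_REP
```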
96 Chapter 10: Working with Data Analyzer Administrative Reports . select Cached. Click Add. On the Properties tab. In the folders section of the Find tab. select PA_Reposit and click Add >>. 7. Click OK. 2. To add the administrative reports to schedules: 1. In the Available Schemas section. Click Edit. You can now run the administrative reports using the system data connector.

Using the Data Analyzer Administrative Reports
The Data Analyzer administrative reports are located in the Data Analyzer Administrative Reports public folder on the Find tab. You can also access these reports from the Administrator's Dashboard. After you schedule the administrative reports, you can review the schedule for a report in the Data Analyzer Administrative Reports folder: select a report and look at the Report Properties section.
Data Analyzer provides the following administrator reports, listed in alphabetical order:
♦ Activity Log Details. Use this on-demand report to view the activity logs. You can access this report from the Find tab.
♦ Bottom 10 Least Accessed Reports this Year. Use this on-demand report to determine the 10 least used reports in the current calendar year. You can access this report from the Historical Usage container on the Administrator's Dashboard and from the Find tab.
♦ Report Activity Details. Use this report to get information on the reports accessed by users in the current day. When you run the Report Activity Details from the Find tab, it displays access information for all reports in the repository. View this report as part of the analytic workflows for several primary reports or as a standalone report. You can access this report from the Find tab.
♦ Report Activity Details for Current Month. This on-demand report provides information about the reports accessed within the current month. You can access this report from the Find tab.
♦ Report Refresh Schedule. This report provides information about the next scheduled update for cached reports. You can access this report from the Future Usage container on the Administrator's Dashboard and from the Find tab.
♦ Reports Accessed by Users Today. When you run this report from the Find tab, the report provides detailed information about all reports accessed by any user in the current day. It is the primary report for an analytic workflow. Data Analyzer updates this cached report based on the Hourly Refresh schedule. You can access this report from the Today's Usage container on the Administrator's Dashboard and from the Find tab.
♦ Todays Logins. This report provides the login count and average login duration for users who logged in on the current day. It is the primary report for an analytic workflow. Data Analyzer updates this cached report based on the Hourly Refresh schedule. You can access this report from the Today's Usage container on the Administrator's Dashboard and from the Find tab.
♦ Todays Report Usage by Hour. This report provides information about the number of reports accessed for each hour of the current day. Use this report to determine the system usage for the current day. It is the primary report for an analytic workflow. Data Analyzer updates this cached report based on the Hourly Refresh schedule. You can access this report from the Today's Usage container on the Administrator's Dashboard and from the Find tab.
♦ Top 10 Most Accessed Reports this Year. Use this report to determine the reports most accessed by users in the current calendar year. The report shows the list of 10 reports that users find most useful. It is the primary report for an analytic workflow. You can access this report from the Historical Usage container on the Administrator's Dashboard and from the Find tab.
♦ Top 5 Logins (Month To Date). Use this report to determine the users who logged in to Data Analyzer the most number of times in the current month. The report displays the user names and number of times each user logged in. Data Analyzer updates this cached report based on the Midnight Daily schedule. You can access this report from the Historical Usage container on the Administrator's Dashboard and from the Find tab.
♦ Top 5 Longest Running On-Demand Reports (Month To Date). This report displays the average response time for the five longest-running on-demand reports in the current month to date. Use this report to monitor the update time for various reports and to help you tune the database or web server. You can also use it to determine whether an on-demand report needs to run on a schedule. Data Analyzer updates this cached report based on the Midnight Daily schedule. You can access this report from the Historical Usage container on the Administrator's Dashboard and from the Find tab.
♦ Top 5 Longest Running Scheduled Reports (Month To Date). This report displays the time that Data Analyzer takes to display the five longest running cached reports in the current month to date. Use this report for performance tuning and for determining whether a cached report needs to run on demand. Data Analyzer updates this cached report based on the Midnight Daily schedule. You can access this report from the Historical Usage container on the Administrator's Dashboard and from the Find tab.
♦ Total Schedule Errors for Today. This report provides the number of errors Data Analyzer encountered when running cached reports. Use this report to monitor cached reports and modify them if necessary. Data Analyzer updates this cached report based on the Hourly Refresh schedule. You can access this report from the Today's Usage container on the Administrator's Dashboard and from the Find tab.
♦ User Log Details. Use this on-demand report to view the user logs. You can access this report from the Find tab.
♦ User Logins (Month To Date). This report displays the number of times each user logged in during the month. Use this report to determine how often users log in to Data Analyzer. You can access this report from the Historical Usage container on the Administrator's Dashboard and from the Find tab.
♦ Users Who Have Never Logged On. This report provides information about users who have never logged in to Data Analyzer. Use this report to make administrative decisions about disabling accounts. You can access this report from the Admin Reports container on the Administrator's Dashboard and from the Find tab.


CHAPTER 11

Performance Tuning
This chapter includes the following topics:
♦ Overview, 99
♦ Database, 99
♦ Operating System, 101
♦ Application Server, 106
♦ Data Analyzer Processes, 111

Overview
Data Analyzer requires the interaction of several components and services, including those that may already exist in the enterprise infrastructure, such as the enterprise data warehouse and authentication server. Data Analyzer is built on JBoss Application Server and uses related technology and application programming interfaces (APIs) to accomplish its tasks. JBoss Application Server is a Java 2 Enterprise Edition (J2EE)-compliant application server. Data Analyzer uses the application server to handle requests from the web browser. It generates the requested contents and uses the application server to transmit the content back to the web browser. Data Analyzer stores metadata in a repository database to keep track of the processes and objects it needs to handle web browser requests. You can tune the following components to optimize the performance of Data Analyzer:
♦ Database
♦ Operating system
♦ Application server
♦ Data Analyzer

Database
Data Analyzer has the following database components:
♦ Data Analyzer repository
♦ Data warehouse


The repository database contains the metadata that Data Analyzer uses to construct reports. The data warehouse contains the data for the Data Analyzer reports. The data warehouse is where the report SQL queries are executed. Typically, it has a very high volume of data. The execution time of the reports depends on how well tuned the database and the report queries are. Consult the database documentation on how to tune a high volume database for optimal SQL execution.

The Data Analyzer repository database contains a smaller amount of data than the data warehouse. However, since Data Analyzer executes many SQL transactions against the repository, the repository database must also be properly tuned to optimize the database performance. This section provides recommendations for tuning the Data Analyzer repository database for best performance.
Note: Host the Data Analyzer repository and the data warehouse in separate database servers. The following repository database tuning recommendations are valid only for a repository that resides on a database server separate from the data warehouse. If you have the Data Analyzer repository database and the data warehouse in the same database server, you may need to use different values for the parameters than those recommended here.

Oracle
This section provides recommendations for tuning the Oracle database for best performance.

Statistics
To ensure that the repository database tables have up-to-date statistics, periodically run the following command for the repository schema:
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => '<RepositorySchemaName>', cascade => TRUE, estimate_percent => 100);

Shared Pool and Database Cache Size
For optimal performance, set the following parameter values for the Data Analyzer repository database:
shared_pool_size = 100000000 (100 M)
db_cache_size = 100000000 (100 M)

For more information about tuning an Oracle database, see the Oracle documentation.

User Connection
For an Oracle repository database running on HP-UX, you may need to increase the number of user connections allowed for the repository database so that Data Analyzer can maintain continuous connection to the repository. To enable more connections to the Oracle repository, complete the following steps:
1. At the HP-UX operating system level, raise the maximum user process (maxuprc) limit from the default of 75 to at least 300.
   Use the System Administration Manager tool (SAM) to raise the maxuprc limit. Raising the maxuprc limit requires root privileges. You need to restart the machine hosting the Oracle repository for the changes to take effect.
2. In Oracle, raise the values for the following database parameters in the init.ora file:
♦ Raise the value of the processes parameter from 150 to 300.
♦ Raise the value of the pga_aggregate_target parameter from 32 MB to 64 MB (67108864).

Updating the database parameters requires database administrator privileges. You need to restart Oracle for the changes to take effect. If the Data Analyzer instance has a high volume of usage, you may need to set higher limits to ensure that Data Analyzer has enough resources to connect to the repository database and complete all database processes.
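Assuming the standard init.ora syntax, the two parameter changes might look like the following fragment. The values come from the text above; the exact file location depends on your Oracle installation, and this is a sketch rather than a complete parameter file:

```
# init.ora fragment (illustrative): raised limits for the Data Analyzer repository
processes = 300
pga_aggregate_target = 67108864
```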


IBM DB2
To ensure that the repository database tables have up-to-date statistics, periodically run the following command for the repository schema:
REORGCHK UPDATE STATISTICS ON SCHEMA <DBSchemaName>
Analysis of table statistics is important in DB2. If you do not update table statistics periodically, you may encounter transaction deadlocks during times of high concurrency usage.
For optimal performance, set the following parameter values for the Data Analyzer repository database:
LOCKLIST = 600
MAXLOCKS = 40
DBHEAP = 4000
LOGPRIMARY = 100
LOGFILSIZ = 2000
For more information about DB2 performance tuning, refer to the following IBM Redbook:
http://publib-b.boulder.ibm.com/Redbooks.nsf/RedbookAbstracts/sg246432.html?Open

Microsoft SQL Server 2000
To ensure that repository database tables and indexes have up-to-date statistics, periodically run the sp_updatestats stored procedure on the repository schema.

Operating System
For all UNIX operating systems, make sure the file descriptor limit for the shell running the application server process is set to at least 2048. Use the ulimit command to set the file descriptor limit. The following recommendations for tuning the operating system are based on information compiled from various application server vendor web sites.

Linux
To optimize Data Analyzer on Linux, you need to make several changes to your Linux environment. You must modify basic system and kernel settings to allow the Java component better access to the resources of your system:
♦ Enlarge the shared memory and shared memory segments.
♦ Enlarge the maximum open file descriptors.
♦ Enlarge the maximum per-process open file descriptors.

Enlarging Shared Memory and Shared Memory Segments
By default, Linux limits the amount of memory and the number of memory segments that can be shared among applications to a reasonably small value. You need to increase these values because the Java threads need to have access to the same area of shared memory and its resultant segments. To change these parameters, enter the following commands as root on the machine where you install Data Analyzer:
# echo "2147483648" > /proc/sys/kernel/shmmax
# echo "250 32000 100 128" > /proc/sys/kernel/sem
These changes only affect the system as it is running now. Enter the following commands to make them permanent:
# echo '#Tuning kernel parameters' >> /etc/rc.d/rc.local
# echo 'echo "2147483648" > /proc/sys/kernel/shmmax' >> /etc/rc.d/rc.local
# echo 'echo "250 32000 100 128" > /proc/sys/kernel/sem' >> /etc/rc.d/rc.local

Enlarging the Maximum Open File Descriptors
Linux has a programmed limit for the number of files it allows to be open at any one time. By default, this is set to 4096 files. Increasing this limit removes any bottlenecks from all the Java threads requesting files. Enter the following command as root to increase the maximum number of open file descriptors:
# echo "65536" > /proc/sys/fs/file-max
These changes affect the system as it is currently running. Enter the following command to make them permanent:
# echo 'echo "65536" > /proc/sys/fs/file-max' >> /etc/rc.d/rc.local

Enlarging the Maximum Per-Process Open File Descriptors
Increase the maximum number of open files allowed for any given process. Enter the following commands as root to increase the maximum open file descriptors per process:
# echo '# Set soft and hard process file descriptor limits' >> /etc/security/limits.conf
# echo '* soft nofile 4096' >> /etc/security/limits.conf
# echo '* hard nofile 4096' >> /etc/security/limits.conf
# echo 'session required /lib/security/pam_limits.so' >> /etc/pam.d/login

Additional Recommended Settings
Table 11-1 shows additional recommended settings for Linux operating system parameters:
Table 11-1. Recommended Settings for Linux Parameters
Linux Parameters                Suggested Values
/sbin/ifconfig lo mtu           1500
kernel.msgmni                   1024
net.ipv4.tcp_max_syn_backlog    8192

HP-UX
You can tune the following areas in the HP-UX operating system to improve overall Data Analyzer performance:
♦ Kernel
♦ Java Process
♦ Network

Kernel Tuning
HP-UX has a Java-based configuration utility called HPjconfig which shows the basic kernel parameters that need to be tuned and the different patches required for the operating system to function properly. You can download the configuration utility from the following HP web site:
http://h21007.www2.hp.com/dspp/tech/tech_TechDocumentDetailPage_IDX/1,1701,1620,00.html
The HPjconfig recommendations for a Java-based application server running on HP-UX 11 include the following parameter values:
Max_thread_proc = 3000
Maxdsiz = 2063835136
Maxfiles = 2048
Maxfiles_lim = 2048
Maxusers = 512

Ncallout = 6000
Nfile = 30000
Nkthread = 3000
Nproc = 2068

Note: For Java processes to function properly, it is important that the HP-UX operating system is on the proper patch level as recommended by the HPjconfig tool. For more information about kernel parameters affecting Java performance, see the HP documentation.

For more information about tuning the HP-UX kernel, see the document titled “Tunable Kernel Parameters” on the following HP web site:

http://docs.hp.com/hpux/onlinedocs/TKP-90203/TKP-90203.html

Java Process

You can set the JVM virtual page size to improve the performance of a Java process running on an HP-UX machine. The default value for the Java virtual machine instruction and data page sizes is 4 MB. Increase the value to 64 MB to optimize the performance of the application server that Data Analyzer runs on. To set the JVM virtual page size, use the following command:

chatr +pi64M +pd64M <JavaHomeDir>/bin/PA_RISC2.0/native_threads/java

Network Tuning

For network performance tuning, use the ndd command to view and set the network parameters. Table 11-2 provides guidelines for ndd settings:

Table 11-2. Recommended ndd Settings for HP-UX

ndd Setting                Recommended Value
tcp_conn_request_max       16384
tcp_xmit_hiwater_def       1048576
tcp_time_wait_interval     60000
tcp_recv_hiwater_def       1048576
tcp_fin_wait_2_timeout     90000

For example, to set the tcp_conn_request_max parameter, use the following command:

ndd -set /dev/tcp tcp_conn_request_max 1024

After modifying the settings, restart the machine.

Solaris

You can tune the Solaris operating system to optimize network and TCP/IP operations in the following ways:
♦ Use the ndd command.
♦ Set parameters in the /etc/system file.
♦ Change the default file descriptor limits.
♦ Set parameters on the network card.

Setting Parameters Using ndd

Use the ndd command to set the TCP-related parameters, as shown in the following example:

ndd -set /dev/tcp tcp_conn_req_max_q 16384

Tip: Use the netstat -s -P tcp command to view all available TCP-related parameters.
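Applying many ndd settings one at a time is error-prone. One option is to generate the commands from a parameter list so an administrator can review them before running them as root. The following is a minimal sketch, not part of the product; it uses two values shown in this section, and the list can be extended with the entries in Table 11-3:

```shell
# Print one "ndd -set" command per device/parameter/value entry so the
# commands can be reviewed before being executed as root.
emit_ndd() {
    while read -r device param value; do
        echo "ndd -set $device $param $value"
    done <<EOF
/dev/tcp tcp_conn_req_max_q 16384
/dev/tcp tcp_time_wait_interval 60000
EOF
}
emit_ndd
```

Redirecting the output to a file produces a script that can be inspected and then executed with root privileges.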

Table 11-3 lists the TCP-related parameters that you can tune and their recommended values:

Table 11-3. Recommended ndd Settings for Solaris

ndd Setting                                Recommended Value
/dev/tcp tcp_time_wait_interval            60000
/dev/tcp tcp_conn_req_max_q                16384
/dev/tcp tcp_conn_req_max_q0               16384
/dev/tcp tcp_ip_abort_interval             60000
/dev/tcp tcp_keepalive_interval            30000
/dev/tcp tcp_rexmit_interval_initial       4000
/dev/tcp tcp_rexmit_interval_max           10000
/dev/tcp tcp_rexmit_interval_min           3000
/dev/tcp tcp_smallest_anon_port            32768
/dev/tcp tcp_xmit_hiwat                    131072
/dev/tcp tcp_recv_hiwat                    131072
/dev/tcp tcp_naglim_def                    1
/dev/ce instance                           0
/dev/ce rx_intr_time                       32
/dev/tcp tcp_fin_wait_2_flush_interval     67500

Note: Prior to Solaris 2.7, the tcp_time_wait_interval parameter was called tcp_close_wait_interval. This parameter determines the time interval that a TCP socket is kept alive after issuing a close call. The default value of this parameter on Solaris is four minutes. When many clients connect for a short period of time, holding these socket resources can have a significant negative impact on performance. Setting this parameter to a value of 60000 (60 seconds) has shown a significant throughput enhancement when running benchmark JSP tests on Solaris. You might want to decrease this setting if the server is backed up with a queue of half-opened connections.

Setting Parameters in the /etc/system File

Each socket connection to the server consumes a file descriptor. To optimize socket performance, configure your operating system to have the appropriate number of file descriptors, the hash table size, and other tuning parameters in the /etc/system file.

Note: Restart the machine if you modify /etc/system parameters.

Table 11-4 lists the /etc/system parameters that you can tune and the recommended values:

Table 11-4. Recommended /etc/system Settings for Solaris

Parameter                       Recommended Value
rlim_fd_cur                     8192
rlim_fd_max                     8192
tcp:tcp_conn_hash_size          32768
semsys:seminfo_semume           1024
semsys:seminfo_semopm           200
*shmsys:shminfo_shmmax          4294967295
autoup                          900
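The Table 11-4 entries map directly to "set" lines in /etc/system. A small sketch that generates those lines for review before they are appended to /etc/system as root (the values come from the table; the helper function name is invented):

```shell
# Print /etc/system "set" lines for the Table 11-4 values. Review the output,
# append it to /etc/system as root, then restart the machine.
# Note: shmsys:shminfo_shmmax applies only to machines with at least 4 GB of RAM.
emit_etc_system() {
    while read -r param value; do
        echo "set $param=$value"
    done <<EOF
rlim_fd_cur 8192
rlim_fd_max 8192
tcp:tcp_conn_hash_size 32768
semsys:seminfo_semume 1024
semsys:seminfo_semopm 200
shmsys:shminfo_shmmax 4294967295
autoup 900
EOF
}
emit_etc_system
```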

Table 11-4. Recommended /etc/system Settings for Solaris (continued)

Parameter             Recommended Value
tune_t_fsflushr       1

*Note: Set only on machines that have at least 4 GB of RAM.

Setting Parameters on the Network Card

Table 11-5 lists the CE Gigabit card parameters that you can tune and the recommended values:

Table 11-5. Recommended CE Gigabit Card Settings for Solaris

Parameter                 Recommended Value
ce:ce_bcopy_thresh        256
ce:ce_dvma_thresh         256
ce:ce_taskq_disable       1
ce:ce_ring_size           256
ce:ce_comp_ring_size      1024
ce:ce_tx_ring_size        4096

For more information about Solaris tuning options, see the Solaris Tunable Parameters Reference Manual.

AIX

If an application on an AIX machine transfers large amounts of data, you can increase the TCP/IP or UDP buffer sizes. Use the no and nfso commands to set the buffer sizes. For example, to set the tcp_sendspace parameter, use the following command:

/usr/sbin/no -o tcp_sendspace=262144

Table 11-6 lists the no parameters that you can set and their recommended values:

Table 11-6. Recommended Buffer Size Settings for no Command for AIX

Parameter         Recommended Value
tcp_sendspace     262144
tcp_recvspace     262144
rfc1323           1
tcp_keepidle      600

Table 11-7 lists the nfso parameters that you can set and their recommended values:

Table 11-7. Recommended Buffer Size Settings for nfso Command for AIX

Parameter            Recommended Value
nfs_socketsize       200000
nfs_tcp_socketsize   200000

To permanently set the values when the system restarts, add the commands to the /etc/rc.net file. For more information about AIX tuning options, see the Performance Management Guide on the IBM web site:

http://publib16.boulder.ibm.com/pseries/en_US/aixbman/prftungd/prftungd.htm
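The no and nfso values in Tables 11-6 and 11-7 can be collected in one place, reviewed, run as root, and then appended to /etc/rc.net so they survive a restart. A minimal sketch using the command paths documented above; the wrapper function itself is illustrative:

```shell
# Emit the AIX buffer-size commands from Tables 11-6 and 11-7 for review.
# Run the output as root, and append it to /etc/rc.net to make it permanent.
emit_aix_tuning() {
    echo "/usr/sbin/no -o tcp_sendspace=262144"
    echo "/usr/sbin/no -o tcp_recvspace=262144"
    echo "/usr/sbin/no -o rfc1323=1"
    echo "/usr/sbin/no -o tcp_keepidle=600"
    echo "/usr/sbin/nfso -o nfs_socketsize=200000"
    echo "/usr/sbin/nfso -o nfs_tcp_socketsize=200000"
}
emit_aix_tuning
```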

Windows

Usually, the Windows 2000 default settings for the TCP/IP parameters are adequate to ensure optimal network performance. Disable hyper-threading on a four-CPU Windows 2000 machine to provide better throughput for a clustered application server in a high concurrency usage environment.

Application Server

JBoss Application Server consists of several components, each of which has a different set of configuration files and parameters that can be tuned. The following are some of the JBoss Application Server components and recommendations for tuning parameters to improve the performance of Data Analyzer running on JBoss Application Server.

Servlet/JSP Container

JBoss Application Server uses the Apache Tomcat 5.5 Servlet/JSP container. You can tune the Servlet/JSP container to make an optimal number of threads available to accept and process HTTP requests. To tune the Servlet/JSP container, modify the following configuration file:

<JBOSS_HOME>/server/informatica/deploy/jbossweb-tomcat55.sar/server.xml

The following is a typical configuration:

<!-- A HTTP/1.1 Connector on port 8080 -->
<Connector port="8080" address="${jboss.bind.address}"
   maxThreads="250" strategy="ms" maxHttpHeaderSize="8192"
   emptySessionPath="true"
   enableLookups="false" redirectPort="8443" acceptCount="100"
   connectionTimeout="20000" disableUploadTimeout="true"/>

The following parameters may need tuning:

♦ maxThreads. Maximum number of request processing threads that can be created in the pool, which determines the maximum number of simultaneous requests that the Servlet/JSP container can handle. If not specified, the parameter is set to 200. By default, Data Analyzer is configured to have a maximum of 250 request processing threads, which is acceptable for most environments. You may need to modify this value to achieve better performance. Increasing the number of threads means that more users can use Data Analyzer concurrently. However, more concurrent users may cause the application server to sustain a higher processing load, leading to a general slow down of Data Analyzer. Decreasing the number of threads means that fewer users can use Data Analyzer concurrently. Fewer concurrent users may alleviate the load on the application server, leading to faster response times. However, some users may need to wait for their HTTP request to be served. If the number of threads is too low, then the following message may appear in the log files:

ERROR [ThreadPool] All threads are busy, waiting. Please increase maxThreads

♦ minSpareThreads. Number of request processing threads initially created in the pool. If not specified, the parameter is set to 4.

♦ maxSpareThreads. Maximum number of unused request processing threads that can exist before the pool begins stopping the unused threads. If not specified, the parameter is set to 50. Set the attribute to a value smaller than the value set for maxThreads.

Although the Servlet/JSP container configuration file contains additional properties, Data Analyzer may generate unexpected results if you modify properties that are not documented in this section.

For additional information about configuring the Servlet/JSP container, see the Apache Tomcat Configuration Reference on the Apache Tomcat website:

http://tomcat.apache.org/tomcat-5.5-doc/config/index.html

The Servlet/JSP container configuration file does not determine how JBoss Application Server handles threads. You can also define and configure thread handling in the JBoss Application Server configuration files. For more information about configuring thread management on JBoss Application Server, see the JBoss Application Server documentation.
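One practical way to decide whether maxThreads needs raising is to scan the server logs for the ThreadPool error quoted above. The sketch below fabricates a throwaway sample log purely for illustration; in practice you would point the grep at the real JBoss server log:

```shell
# Build a disposable sample log, then scan it the way you would scan a real
# server log for the thread-pool exhaustion error.
log=sample_server.log
printf '%s\n' \
    'INFO  [Server] startup complete' \
    'ERROR [ThreadPool] All threads are busy, waiting. Please increase maxThreads' \
    > "$log"
if grep -q 'All threads are busy' "$log"; then
    echo "Thread pool exhausted at least once; consider raising maxThreads in server.xml"
fi
rm -f "$log"
```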

JSP Optimization

Data Analyzer uses JavaServer Pages (JSP) scripts to generate content for the web pages used in Data Analyzer. Informatica ships Data Analyzer with pre-compiled JSPs. If you find that you need to compile the JSP files, either because of customizations or while patching, you can modify the following configuration file to optimize the JSP compilation:

<JBOSS_HOME>/server/informatica/deploy/jbossweb-tomcat55.sar/conf/web.xml

The following is a typical configuration:

<servlet>
   <servlet-name>jsp</servlet-name>
   <servlet-class>org.apache.jasper.servlet.JspServlet</servlet-class>
   <init-param>
      <param-name>logVerbosityLevel</param-name>
      <param-value>WARNING</param-value>
      <param-name>development</param-name>
      <param-value>false</param-value>
   </init-param>
   <load-on-startup>3</load-on-startup>
</servlet>

The following parameters may need tuning:

♦ development. When set to true, checks for modified JSPs at every access. If you set the development parameter to true, the JSP scripts must be compiled when they are executed for the first time. To avoid having the application server compile JSP scripts when they are executed for the first time, set the development parameter to false in a production installation.

♦ checkInterval. Checks for changes in the JSP files on an interval of n seconds. This works only when the development parameter is set to true. In a production environment, you can set the checkInterval parameter to specify when the JSPs are checked. For example:

<param-name>checkInterval</param-name>
<param-value>99</param-value>

Note: Make sure that the checkInterval is not too low. Typically, set it to 600 seconds.

Repository Database Connection

Data Analyzer accesses the repository database to get metadata information. Data Analyzer keeps a pool of database connections for the repository. To optimize Data Analyzer database connections, you can tune the database connection pools. To tune the repository database connection pool, modify the JBoss configuration file:

<JBOSS_HOME>/server/informatica/deploy/<DB_Type>_ds.xml

The name of the file includes the database type. <DB_Type> can be oracle, db2, or other databases. For example, for an Oracle repository, the configuration file name is oracle_ds.xml.

The following is a typical configuration:

<datasources>
   <local-tx-datasource>
      <jndi-name>jdbc/IASDataSource</jndi-name>
      <connection-url>
         jdbc:informatica:oracle://aries:1521;SID=prfbase8
      </connection-url>
      <driver-class>
         com.informatica.jdbc.oracle.OracleDriver
      </driver-class>
      <user-name>powera</user-name>
      <password>powera</password>
      <exception-sorter-class-name>
         org.jboss.resource.adapter.jdbc.vendor.OracleExceptionSorter
      </exception-sorter-class-name>
      <min-pool-size>5</min-pool-size>
      <max-pool-size>50</max-pool-size>
      <blocking-timeout-millis>5000</blocking-timeout-millis>
      <idle-timeout-minutes>1500</idle-timeout-minutes>
   </local-tx-datasource>
</datasources>

The following parameters may need tuning:

♦ min-pool-size. Minimum number of connections in the pool. The pool is empty until it is first accessed. After you use it, it will contain at least the minimum number of pool-size connections.

♦ max-pool-size. Maximum size of the connection pool. The max-pool-size value needs to be at least five more than the maximum number of concurrent users, because there may be several scheduled reports running in the background. Each report needs a database connection.

♦ blocking-timeout-millis. Maximum time in milliseconds that a caller waits to get a connection when no more free connections are available in the pool. It may block other threads that require new connections.

♦ idle-timeout-minutes. Length of time an idle connection remains in the pool before it is used. Checking for idle connections and cleaning them out consumes resources. Since Data Analyzer accesses the repository very frequently, set a higher value for idle-timeout-minutes.
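The sizing rule for max-pool-size in this section (at least five more than the maximum number of concurrent users) can be scripted as a quick capacity-planning check. Only the plus-five rule comes from this guide; the user count below is a hypothetical planning input:

```shell
# Derive a candidate <max-pool-size> from the expected number of concurrent
# users, plus the five-connection headroom for scheduled reports that run in
# the background.
concurrent_users=45   # hypothetical planning figure
headroom=5            # per the guideline in this section
max_pool_size=$((concurrent_users + headroom))
echo "Set <max-pool-size> to at least $max_pool_size"
```

For 45 concurrent users this yields 50, which matches the value in the typical configuration shown in this section.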

EJB Container

Data Analyzer uses Enterprise Java Beans extensively. It has over 50 stateless session beans (SLSB) and over 60 entity beans (EB). There are also six message-driven beans (MDBs) used for scheduling and real-time processes.

Stateless Session Beans

For SLSBs, the most important tuning parameter is the EJB pool. You can tune the EJB pool parameters in the following file:

<JBOSS_HOME>/server/Informatica/conf/standardjboss.xml

The following is a typical configuration:

<container-configuration>
   <container-name>Standard Stateless SessionBean</container-name>
   <call-logging>false</call-logging>
   <invoker-proxy-binding-name>stateless-rmi-invoker</invoker-proxy-binding-name>
   <container-interceptors>
      <interceptor>org.jboss.ejb.plugins.ProxyFactoryFinderInterceptor</interceptor>
      <interceptor>org.jboss.ejb.plugins.LogInterceptor</interceptor>
      <interceptor>org.jboss.ejb.plugins.SecurityInterceptor</interceptor>
      <!-- CMT -->
      <interceptor transaction="Container">org.jboss.ejb.plugins.TxInterceptorCMT</interceptor>
      <interceptor transaction="Container" metricsEnabled="true">org.jboss.ejb.plugins.MetricsInterceptor</interceptor>
      <interceptor transaction="Container">org.jboss.ejb.plugins.StatelessSessionInstanceInterceptor</interceptor>
      <!-- BMT -->
      <interceptor transaction="Bean">org.jboss.ejb.plugins.StatelessSessionInstanceInterceptor</interceptor>
      <interceptor transaction="Bean">org.jboss.ejb.plugins.TxInterceptorBMT</interceptor>
      <interceptor transaction="Bean" metricsEnabled="true">org.jboss.ejb.plugins.MetricsInterceptor</interceptor>
      <interceptor>org.jboss.resource.connectionmanager.CachedConnectionInterceptor</interceptor>
   </container-interceptors>
   <instance-pool>org.jboss.ejb.plugins.StatelessSessionInstancePool</instance-pool>
   <instance-cache></instance-cache>
   <persistence-manager></persistence-manager>
   <container-pool-conf>
      <MaximumSize>100</MaximumSize>
   </container-pool-conf>
</container-configuration>

The following parameter may need tuning:

♦ MaximumSize. Represents the maximum number of objects in the pool. If <strictMaximumSize> is set to true, then <MaximumSize> is a strict upper limit for the number of objects that will be created. If <strictMaximumSize> is set to false, the number of active objects can exceed the <MaximumSize> if there are requests for more objects. However, only the <MaximumSize> number of objects will be returned to the pool.

You can set two other parameters to fine tune the EJB pool. These parameters are not set by default in Data Analyzer. They may be tuned after you have completed proper iterative testing in Data Analyzer to increase the throughput for high concurrency installations:

♦ strictMaximumSize. When the value is set to true, the <strictMaximumSize> parameter enforces a rule that only <MaximumSize> number of objects will be active. Any subsequent requests will wait for an object to be returned to the pool.

♦ strictTimeout. If you set <strictMaximumSize> to true, then <strictTimeout> is the amount of time that requests will wait for an object to be made available in the pool.

Message-Driven Beans (MDB)

MDB tuning parameters are very similar to stateless bean tuning parameters. The main difference is that MDBs are not invoked by clients. Instead, the messaging system delivers messages to the MDB when they are available. To tune the MDB parameters, modify the following configuration file:

<JBOSS_HOME>/server/informatica/conf/standardjboss.xml

The following is a typical configuration:

<container-configuration>
   <container-name>Standard Message Driven Bean</container-name>
   <call-logging>false</call-logging>
   <invoker-proxy-binding-name>message-driven-bean</invoker-proxy-binding-name>
   <container-interceptors>
      <interceptor>org.jboss.ejb.plugins.ProxyFactoryFinderInterceptor</interceptor>
      <interceptor>org.jboss.ejb.plugins.LogInterceptor</interceptor>
      <interceptor>org.jboss.ejb.plugins.RunAsSecurityInterceptor</interceptor>
      <!-- CMT -->
      <interceptor transaction="Container">org.jboss.ejb.plugins.TxInterceptorCMT</interceptor>
      <interceptor transaction="Container" metricsEnabled="true">org.jboss.ejb.plugins.MetricsInterceptor</interceptor>
      <interceptor transaction="Container">org.jboss.ejb.plugins.MessageDrivenInstanceInterceptor</interceptor>
      <!-- BMT -->
      <interceptor transaction="Bean">org.jboss.ejb.plugins.MessageDrivenInstanceInterceptor</interceptor>
      <interceptor transaction="Bean">org.jboss.ejb.plugins.MessageDrivenTxInterceptorBMT</interceptor>
      <interceptor transaction="Bean" metricsEnabled="true">org.jboss.ejb.plugins.MetricsInterceptor</interceptor>
      <interceptor>org.jboss.resource.connectionmanager.CachedConnectionInterceptor</interceptor>
   </container-interceptors>
   <instance-pool>org.jboss.ejb.plugins.MessageDrivenInstancePool</instance-pool>
   <instance-cache></instance-cache>
   <persistence-manager></persistence-manager>
   <container-pool-conf>
      <MaximumSize>10</MaximumSize>
   </container-pool-conf>
</container-configuration>

The following parameter may need tuning:

♦ MaximumSize. Represents the maximum number of objects in the pool. If <strictMaximumSize> is set to true, then <MaximumSize> is a strict upper limit for the number of objects that will be created. If <strictMaximumSize> is set to false, the number of active objects can exceed the <MaximumSize> if there are requests for more objects. However, only the <MaximumSize> number of objects will be returned to the pool.

Enterprise Java Beans

Data Analyzer EJBs use bean-managed persistence (BMP) as opposed to container-managed persistence (CMP). The EJB tuning parameters are in the following configuration file:

<JBOSS_HOME>/server/informatica/conf/standardjboss.xml

The following is a typical configuration:

<container-configuration>
   <container-name>Standard BMP EntityBean</container-name>
   <call-logging>false</call-logging>
   <invoker-proxy-binding-name>entity-rmi-invoker</invoker-proxy-binding-name>
   <sync-on-commit-only>false</sync-on-commit-only>
   <container-interceptors>
      <interceptor>org.jboss.ejb.plugins.ProxyFactoryFinderInterceptor</interceptor>
      <interceptor>org.jboss.ejb.plugins.LogInterceptor</interceptor>
      <interceptor>org.jboss.ejb.plugins.SecurityInterceptor</interceptor>
      <interceptor>org.jboss.ejb.plugins.TxInterceptorCMT</interceptor>
      <interceptor metricsEnabled="true">org.jboss.ejb.plugins.MetricsInterceptor</interceptor>
      <interceptor>org.jboss.ejb.plugins.EntityCreationInterceptor</interceptor>
      <interceptor>org.jboss.ejb.plugins.EntityLockInterceptor</interceptor>
      <interceptor>org.jboss.ejb.plugins.EntityInstanceInterceptor</interceptor>
      <interceptor>org.jboss.ejb.plugins.EntityReentranceInterceptor</interceptor>
      <interceptor>org.jboss.resource.connectionmanager.CachedConnectionInterceptor</interceptor>
      <interceptor>org.jboss.ejb.plugins.EntitySynchronizationInterceptor</interceptor>
   </container-interceptors>
   <instance-pool>org.jboss.ejb.plugins.EntityInstancePool</instance-pool>
   <instance-cache>org.jboss.ejb.plugins.EntityInstanceCache</instance-cache>
   <persistence-manager>org.jboss.ejb.plugins.BMPPersistenceManager</persistence-manager>
   <locking-policy>org.jboss.ejb.plugins.lock.QueuedPessimisticEJBLock</locking-policy>
   <container-cache-conf>
      <cache-policy>org.jboss.ejb.plugins.LRUEnterpriseContextCachePolicy</cache-policy>
      <cache-policy-conf>
         <min-capacity>50</min-capacity>
         <max-capacity>1000000</max-capacity>
         <overager-period>300</overager-period>
         <max-bean-age>600</max-bean-age>
         <resizer-period>400</resizer-period>
         <max-cache-miss-period>60</max-cache-miss-period>
         <min-cache-miss-period>1</min-cache-miss-period>
         <cache-load-factor>0.75</cache-load-factor>
      </cache-policy-conf>
   </container-cache-conf>
   <container-pool-conf>
      <MaximumSize>100</MaximumSize>
   </container-pool-conf>
   <commit-option>A</commit-option>
</container-configuration>

The following parameter may need tuning:

♦ MaximumSize. Represents the maximum number of objects in the pool. If <strictMaximumSize> is set to true, then <MaximumSize> is a strict upper limit for the number of objects that will be created. If <strictMaximumSize> is set to false, the number of active objects can exceed the <MaximumSize> if there are requests for more objects. However, only the <MaximumSize> number of objects will be returned to the pool.

You can set two other parameters to fine tune the EJB pool. These parameters are not set by default in Data Analyzer. They may be tuned after you have completed proper iterative testing in Data Analyzer to increase the throughput for high concurrency installations:

♦ strictMaximumSize. When the value is set to true, the <strictMaximumSize> parameter enforces a rule that only <MaximumSize> number of objects will be active. Any subsequent requests will wait for an object to be returned to the pool.

♦ strictTimeout. If you set <strictMaximumSize> to true, then <strictTimeout> is the amount of time that requests will wait for an object to be made available in the pool.

Data Analyzer Processes

To design schemas and reports and use Data Analyzer features more effectively, use the following guidelines.

Aggregation

Data Analyzer can run more efficiently if the data warehouse has a good schema design that takes advantage of aggregate tables to optimize query execution. Data Analyzer performance improves if the data warehouse contains good indexes and is properly tuned.

Ranked Reports

Data Analyzer supports two-level ranking. If the report has one level of ranking, Data Analyzer delegates the ranking task to the database by doing a multi-pass query to first get the ranked items and then running the actual query with ranking filters. If the ranking is defined on a calculation that is performed in the middle tier, Data Analyzer has to pull all the data before it evaluates the calculation expression and ranks and filters the data. For optimal performance, create reports with two levels of ranking based on smaller schemas or on schemas that have good aggregate tables and indexes.

A report with second-level ranking, such as the top 10 products and the top five customers for each product, requires a multi-pass SQL query to first get the data to generate the top 10 products and then get the data for each product and corresponding top five customers. If the report is defined to show Total Others at End of Table, Data Analyzer runs another SQL query to get the aggregated values for the rows not shown in the report. Avoid creating reports with ranking defined on custom attributes or custom metrics. These types of reports consume resources and may slow down other Data Analyzer processes. Consider making such a report cached so that it can run in the background.

Date Columns

By default, Data Analyzer performs date manipulation on any column with a datatype of Date. If a column contains date and time information, set the Data Source is Timestamp attribute property so that Data Analyzer includes conversion functions in the SQL query for any report that uses the column. If a column contains date information only, not including time, clear the Data Source is Timestamp attribute property so that Data Analyzer does not include conversion functions in the SQL query for any report that uses the column.

If a report includes a column that contains date and time information but the report requires a daily granularity, Data Analyzer includes conversion functions in the WHERE clause and SELECT clause to get the proper aggregation and filtering by date only. However, conversion functions in a query prevent the use of database indexes and make the SQL query inefficient. Use the Data Source is Timestamp property for an attribute to have Data Analyzer include conversion functions in the SQL query.

Datatype of Table Columns

Data Analyzer uses JDBC drivers to connect to the data warehouse. Based on the column datatype defined in the database, JDBC uses a different data structure when it returns data. If a column has a numeric datatype, JDBC packages the returned data in a BigDecimal format, which has a high degree of precision. If a high degree of precision is not required, then a BigDecimal format for columns in tables with a large volume of data adds unnecessary overhead. Set column datatypes to reflect the actual precision required.

Interactive Charts

An interactive chart uses less application server resources than a regular chart. On the machine hosting the application server, an interactive chart can use up to 25% less CPU resources than a regular chart. On a typical workstation with a CPU speed greater than 2.5 GHz, interactive charts display at about the same speed as regular charts. Use interactive charts whenever possible to improve performance. For more information about editing your general preferences to enable interactive charts, see the Data Analyzer User Guide.

JavaScript on the Analyze Tab

The Analyze tab in Data Analyzer uses JavaScript for user interaction. Each cell in a report on the Analyze tab has embedded JavaScript objects to capture various user interactions. If there are over 5,000 cells in a report, the report may take several minutes to display. On a slow workstation with a CPU speed less than 1 GHz, Data Analyzer may display messages warning that the JavaScripts on the page are running too slow. Make sure that a report displayed in the Analyze tab has a restriction on the number of cells displayed on a page. You can control the number of rows displayed on a page in Layout and Setup, Step 4 of the Create Report wizard. On the Formatting tab, set the number of rows to display per page for a report on the Analyze tab.
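The 5,000-cell guideline above translates directly into a rows-per-page setting once the number of displayed columns is known. A small sketch of the arithmetic; the column count is a made-up example, and the 5,000 budget comes from this section:

```shell
# Cap the rows displayed per page so that rows x columns stays under the
# ~5,000-cell guideline for reports viewed on the Analyze tab.
cell_budget=5000
columns=25                              # hypothetical report width
rows_per_page=$((cell_budget / columns))
echo "With $columns columns, display at most $rows_per_page rows per page"
```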

Number of Charts in a Report

Data Analyzer generates the report charts after it generates the report table. Since a chart may use only a subset of the report columns and rows as a datapoint, Data Analyzer generates a subset of the report dataset for each chart. This means that each chart that Data Analyzer generates for a report has computing overhead associated with it. Report designers who create a large number of charts to cover all possible user requirements can weaken the performance and scalability of Data Analyzer. For optimal performance, consider the overhead cost associated with report charts and create the minimum set of charts required by the end user.

Row Limit for SQL Queries

Data Analyzer fetches all the rows returned by an SQL query into the JVM before it displays them on the report. Although Data Analyzer displays only 20 rows in a page, it may already have fetched hundreds of rows and stored them in the JVM heap. Data Analyzer must pre-fetch all the rows so that the full dataset is available for operations such as ranking or ordering data, performing time comparisons, or formatting reports into sections. If there are a large number of concurrent users on Data Analyzer and each runs multiple reports, the memory requirements can be considerable.

If you have a data warehouse with a large volume of data, it is important to restrict the number of rows returned by the SQL query of a report. You can set parameters in Data Analyzer to restrict the number of rows returned by an SQL query for a report and to manage the amount of memory it uses. To keep Data Analyzer from consuming more resources than necessary, limit the number of returned rows to a small value, such as 1000. You can increase the value for specific reports that require more data.

Query Governing

You can restrict the number of rows returned by an SQL query for a report with the query governing settings in Data Analyzer. You can set this parameter at the system level, user level, and report level. For more information about query governing, see “Setting Rules for Queries” on page 85.

Scheduler and User-Based Security

Data Analyzer supports parallel execution of both time-based and event-based schedulers. Data Analyzer runs only the tasks in an event in parallel mode. Within a task, Data Analyzer runs subtasks sequentially. For example, you have five reports with user-based security and there are 500 security profiles for subscribers to the report. Data Analyzer must execute each of the five reports for each of the 500 security profiles. Since generating a report for each security profile is a subtask for each report, Data Analyzer cannot take advantage of parallel scheduler execution and sequentially generates the report for each security profile. This situation can drastically affect the performance of Data Analyzer. To keep Data Analyzer scalable, minimize the number of security profiles in Data Analyzer.

Frequency of Schedule Runs

Setting the report schedules to run very frequently, such as every five minutes, can create problems. For example, you add ReportA to a schedule that runs every five minutes. If ReportA takes six minutes to run, Data Analyzer starts running ReportA again before the previous run is completed. Do not use the report schedules to frequently update reports to simulate real-time reports. If you require reports to deliver real-time data, use the real-time message stream features available in Data Analyzer.

ProviderContext.maxInMemory

When a user runs a report, Data Analyzer saves the dataset returned by the report query in the user session until the user terminates the session. You can edit the providerContext.maxInMemory property in DataAnalyzer.properties to set the number of reports that Data Analyzer keeps in memory. Data Analyzer keeps two reports in the user session at a time.

You can edit the providerContext.maxInMemory property in DataAnalyzer.properties to set the number of reports that Data Analyzer keeps in memory. The value must be greater than or equal to 2. Set the value as low as possible to conserve memory. Data Analyzer uses a first in, first out (FIFO) algorithm to overwrite reports in memory with more recent reports. Data Analyzer retains report results that are part of a workflow or drill path in memory irrespective of the value set in this property. Because Data Analyzer keeps the datasets for all reports in a workflow in the user session, include only reports that have small datasets in a workflow.

Note: A user must log out of Data Analyzer to release the user session memory. Closing a browser window does not release the memory immediately. When a user closes a browser window without logging out, Data Analyzer releases the memory after the expiration of the session timeout, which, by default, is 30 minutes.

ProviderContext.abortThreshold

When a user runs a report that involves calculation or building large result sets, Data Analyzer might run out of memory, which results in the users getting a blank page. You can edit the providerContext.abortThreshold property in DataAnalyzer.properties to set the maximum percentage of memory that is in use before Data Analyzer stops building report result sets and executing report queries.

Before Data Analyzer starts calculating a report or building the tabular result set, it checks the amount of available memory. To calculate the percentage of memory in use, divide the used memory by the total memory configured for the JVM. For example, if the used memory is 1,000 KB and the total memory configured for the JVM is 2,000 KB, the percentage of memory in use is 50%. If the percentage is below the threshold, Data Analyzer continues with the requested operation. If the percentage is above the threshold, Data Analyzer displays an error and stops processing the report request. Typically, you can set a threshold value between 50% and 99%. The default value is 95%.

Indicators in Dashboard

Data Analyzer uses two parallel threads to load indicators in the dashboards. These parallel threads are default threads spawned by the browser. Typically, the default value of 2 is sufficient. Data Analyzer has been optimized to handle the way multiple indicators are queued up for loading:
♦ In a dashboard with indicators based on cached and on-demand reports, Data Analyzer loads all indicators based on cached reports before it loads indicators based on on-demand reports.
♦ Gauges based on cached reports load the fastest because gauges have only one data value and they are cached in the database along with the report model. Data Analyzer obtains the report model and the datapoint for the gauge at the same time and can immediately create the gauge.
♦ When there are multiple indicators based on a single report, all indicators on a dashboard based on the same report use the same resultset. This holds both for cached and on-demand reports.
♦ The table indicators use plain HTML instead of DHTML, which results in very little overhead for rendering the table indicators on the browser.

Purging of Activity Log

Data Analyzer logs every activity or event that happens in Data Analyzer in the activity log. Similarly, Data Analyzer records every user login in the user log. These logs can grow quickly. To improve Data Analyzer performance, you must clear these two logs frequently. For more information about managing the activity and user logs, see “Managing System Settings” on page 73.

For on-demand reports, Data Analyzer provides an estimate of the length of time a report takes to display. Data Analyzer uses the data in the activity log to calculate the Estimated Time to Run the Report for an on-demand report.
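Taken together, the two ProviderContext settings described above can be sketched as a DataAnalyzer.properties fragment. This is an illustration rather than a complete file; the values shown are the defaults documented in this chapter.

```properties
# Reports kept per user session (FIFO eviction).
# Must be >= 2; keep it as low as possible to conserve JVM heap.
providerContext.maxInMemory=2

# Abort report calculation when the percentage of JVM memory in use
# exceeds this threshold. Typical values range from 50 to 99.
providerContext.abortThreshold=95
```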

You can specify the number of days that Data Analyzer uses for the estimate by editing the queryengine.estimation.window property in DataAnalyzer.properties. For most cases, the default value of 30 days is fine. If the activity log contains a lot of data, the SQL query to calculate the estimated time may take considerable CPU resources, because Data Analyzer calculates the estimated time by averaging all the entries for the specified number of days. For more information about the estimation window property, see “Properties in DataAnalyzer.properties” on page 130.

Recommendations for Dashboard Design

When you design a dashboard, use the following recommendations:
♦ For dashboard indicators, use indicators based on cached reports instead of on-demand reports. Typically, dashboards provide summarized information. Use aggregate tables for indicators based on on-demand reports on the dashboards.
♦ Use position-based indicators instead of value-based indicators for reports with a volume of more than 2,000 rows. Position-based indicators can use indexes in the Java collection for faster access of the data, whereas value-based indicators have to perform a linear scan of the rowset to match up the values. Hence, the scan can get progressively slower for large datasets.
♦ Use interactive charts on the dashboard. Regular charts are rendered at the server side and use the server CPU resources. Interactive charts are rendered on the browser and require much less server resources.

Chart Legends

When Data Analyzer displays charts with legends, the Data Analyzer charting engine must perform many complex calculations to fit the legends in the limited space available on the chart. Depending on the number of legends in a chart, it might take Data Analyzer from 10% to 50% longer to render a chart with legends. If legends are not essential in a chart, consider displaying the chart without legends to improve Data Analyzer performance.

Connection Pool Size for the Data Source

Data Analyzer internally maintains a pool of JDBC connections to the data warehouse. This pool of JDBC connections is different from the pool of connections to the repository defined at the application server level. In a high usage environment, Data Analyzer creates a new connection to the data source to calculate a report if no pooled connection is available. To optimize the database connection pool for a data source, modify the connection pool settings in DataAnalyzer.properties.

The following is a typical configuration:

# Datasource definition
#
dynapool.minCapacity=2
dynapool.maxCapacity=20
dynapool.evictionPeriodMins=5
dynapool.waitForConnectionSeconds=1
dynapool.connectionIdleTimeMins=10
datamart.defaultRowPrefetch=20

The following parameters may need tuning:
♦ dynapool.minCapacity. Minimum number of connections maintained in the data source pool. Set to 0 to ensure that no connections are maintained in the data source pool. Default is 2.
♦ dynapool.maxCapacity. Maximum number of connections that the data source pool can grow to. Set the value to the total number of concurrent users. If you set a value less than the number of concurrent users, Data Analyzer returns an error message to some users.
♦ dynapool.evictionPeriodMins. Number of minutes between eviction runs, or clean-up operations, during which Data Analyzer cleans up failed and idle connections from the connection pool. You can set the value to half of the value set for the parameter dynapool.connectionIdleTimeMins so that Data Analyzer performs the eviction run, frees the connections for report calculations, and does not allow a connection to remain idle for too long. Default is 5 minutes.

♦ dynapool.waitForConnectionSeconds. Number of seconds Data Analyzer waits for a connection from the pool before it aborts the operation. If you set the parameter to 0, Data Analyzer does not wait and aborts the operation. Default is 1.
♦ dynapool.connectionIdleTimeMins. Number of minutes that a connection may remain idle. Enter a positive value for this parameter. If you set the parameter to 0 or a negative value, Data Analyzer sets the parameter to the default value. Data Analyzer ignores this property if the parameter dynapool.evictionPeriodMins is not set. Default is 10.

Server Location

Data Analyzer runs on an application server and reads data from a database server. As with any major software implementation project, carefully perform capacity planning and testing before a Data Analyzer deployment.

Server Location and Network Latency

There are two database components in Data Analyzer: the repository and the data warehouse.

Note: Data Analyzer connects to only one repository database. However, it can connect to more than one data warehouse.

Data Analyzer runs a large number of SQL queries against the repository to get the metadata before running any report. The SQL queries that Data Analyzer runs against the repository are not CPU or IO intensive. However, since Data Analyzer runs a large number of them, network latency between the application server and the repository database must be minimal. For optimal performance, have the repository database as close as possible to the application server Data Analyzer runs on.

Data Analyzer runs only a few SQL queries against the data warehouse. However, the SQL queries that Data Analyzer runs against the data warehouse return many rows and are CPU and IO intensive. Since the queries return many rows, network latency between the application server and the data warehouse must also be minimal.

Server Location and CPU Power and RAM

Typically, the data warehouse requires more CPU power than the repository database. If you locate the application server and database server on a single machine, the machine must have enough CPU power and RAM to handle the demands of each of the server processes. Make sure that all processes have enough resources to function optimally. If the servers run on separate machines, these servers must have enough CPU power and RAM, and there should be minimal network latency between them. You can keep the repository and data warehouse on the same database but in separate schemas as long as the machine has enough CPU and memory resources to handle both the repository SQL queries and the data warehouse SQL queries.

Although a single-machine architecture means that there is no network latency, the requirement for a very powerful machine makes it an expensive solution. It also becomes a single point of failure. An alternative to the single-machine architecture is a distributed system where the servers are located on different machines across a network. This type of distributed architecture can be more economical because it can leverage existing infrastructure. However, network latency is an issue in a distributed architecture. The choice of architecture depends on the requirements of the organization.

CHAPTER 12

Customizing the Data Analyzer Interface

This chapter includes the following topics:
♦ Overview, 117
♦ Using the Data Analyzer URL API, 117
♦ Using the Data Analyzer API Single Sign-On, 118
♦ Setting Up Color Schemes and Logos, 118
♦ Setting the UI Configuration Properties, 118

Overview

You can customize the Data Analyzer user interface so that it meets the requirements for web applications in your organization. Data Analyzer provides several ways for you to modify its look and feel. You can use the following techniques to customize Data Analyzer:
♦ Use the URL API to display Data Analyzer web pages on a portal.
♦ Use the Data Analyzer API single sign-on (SSO) scheme to access Data Analyzer web pages without a user login.
♦ Set up custom color schemes and logos on the Data Analyzer Administration tab.
♦ Set the user interface (UI) configuration properties in the DataAnalyzer.properties file to display or hide the Data Analyzer header or navigation bar.

Using the Data Analyzer URL API

You can use the URL interface provided with the Data Analyzer API to provide links in a web application or portal to specific pages in Data Analyzer, such as dashboard, report, or tab pages. The URL consists of the Data Analyzer location and parameters that determine the content and interface for the Data Analyzer page. For more information about the Data Analyzer URL API, see the Data Analyzer SDK Guide.
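As a concrete illustration, a portal link built with the URL API takes the general shape below. The host, port, and instance name are placeholders; ShowAdministration.jsp is the administration page used in the UI configuration example later in this chapter, and any page-specific parameters are documented in the Data Analyzer SDK Guide rather than here.

```
http://HostName:PortNumber/InstanceName/jsp/api/ShowAdministration.jsp
```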

Using the Data Analyzer API Single Sign-On

When you access Data Analyzer, the login page appears, and you must enter a user name and password. Ordinarily, when a user logs in to Data Analyzer through the Login page, the Data Analyzer login appears even if you have already logged in to the portal where the Data Analyzer pages are displayed. To avoid multiple logins, you can set up an SSO mechanism that allows you to log in once and be authenticated in all subsequent web applications that you access. The Data Analyzer API provides an SSO mechanism that you can use when you display Data Analyzer pages in another web application or portal. You can configure Data Analyzer to accept the portal authentication and bypass the Data Analyzer login page. For more information about the Data Analyzer API SSO, see the Data Analyzer SDK Guide.

Setting Up Color Schemes and Logos

Data Analyzer provides two color schemes for the Data Analyzer interface. You can use the default Informatica color scheme and the sample color scheme named Betton Books as a starting point for a custom color scheme. You can also create color schemes and use custom graphics, buttons, and logos to match the standard color scheme for the web applications in your organization. For more information, see “Managing Color Schemes and Logos” on page 74.

Setting the UI Configuration Properties

In DataAnalyzer.properties, you can define a user interface configuration that determines how Data Analyzer handles specific sections of the user interface. The UI configuration includes the following properties:

uiconfig.<ConfigurationName>.ShowHeader=true
uiconfig.<ConfigurationName>.ShowNav=true

The properties determine what displays in the header section of the Data Analyzer user interface, which includes the logo, the logout and help links, and the navigation bar.

Default UI Configuration

By default, the logo, the logout and help links, and the navigation bar display on all the Data Analyzer pages. To hide the navigation bar or the header section on the Data Analyzer pages, add a UI configuration named default to DataAnalyzer.properties and set the properties to false. For example, to hide the whole header section, add the following property:

uiconfig.default.ShowHeader=false

To hide only the navigation bar, add the following property:

uiconfig.default.ShowNav=false

Tip: DataAnalyzer.properties includes examples of the properties for the default UI configuration. To change the default configuration settings, uncomment the default properties and update their values.

The default settings determine what Data Analyzer displays after the Login page. If you access a Data Analyzer page with a specific configuration through the URL API and the session expires, the Login page appears. After you log in, Data Analyzer displays the Data Analyzer pages based on the default configuration, not the configuration passed through the URL. To avoid this, complete one of the following tasks:
♦ Change the values of the default configuration instead of adding a new configuration.
♦ Set the default configuration to the same values as your customized configuration.
♦ Customize the Data Analyzer login page to use your customized configuration after user login.

UI Configuration Parameter in Data Analyzer URL

If you use the URL API to display Data Analyzer pages on another web application or a portal, you can add a configuration to DataAnalyzer.properties and include the configuration name in the URL. For example, to display the Data Analyzer administration page on a portal without the navigation bar, complete the following steps:
1. Add the following properties to DataAnalyzer.properties, specifying a configuration name:
uiconfig.Fred.ShowHeader=true
uiconfig.Fred.ShowNav=false
2. Include the parameter <UICONFIG> and the configuration name in the URL when you call the Data Analyzer Administration page from the portal:
http://HostName:PortNumber/InstanceName/jsp/api/ShowAdministration.jsp?<UICONFIG>=Fred
The header section of the Data Analyzer page appears on the portal according to the settings in the configuration. For more information about the Data Analyzer URL API, see the Data Analyzer SDK Guide.

Configuration Settings

Use the following guidelines when you set up a configuration in DataAnalyzer.properties:
♦ The default configuration properties are not required in DataAnalyzer.properties. Add them only if you want to modify the default configuration settings or create new UI configurations.
♦ The configuration name can be any length and is case sensitive. It can include only alphanumeric characters; it cannot include special characters.
♦ Setting the ShowHeader property to false implicitly sets the ShowNav property to false.

For more information about modifying the settings in DataAnalyzer.properties, see “Configuration Files” on page 129.

The following examples show what appears in the Data Analyzer header when the UI configuration properties are set to different values:
♦ ShowHeader=true and ShowNav=true (default setting)
♦ ShowHeader=true and ShowNav=false
♦ ShowHeader=false and ShowNav=false

Note: Data Analyzer stores DataAnalyzer.properties in the Data Analyzer EAR file. You must extract DataAnalyzer.properties from the Data Analyzer EAR file before you can modify the UI configuration properties.
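Expressed as DataAnalyzer.properties entries, the three header combinations above correspond to the following sketch. The configuration name default is used for illustration; a named configuration works the same way.

```properties
# Full header and navigation bar (the default behavior)
uiconfig.default.ShowHeader=true
uiconfig.default.ShowNav=true

# Header without the navigation bar
# uiconfig.default.ShowHeader=true
# uiconfig.default.ShowNav=false

# No header at all; ShowHeader=false implicitly hides the
# navigation bar as well
# uiconfig.default.ShowHeader=false
```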

APPENDIX A

Hexadecimal Color Codes

This appendix includes the following topic:
♦ HTML Hexadecimal Color Codes, 121

HTML Hexadecimal Color Codes

You can create new color schemes for Data Analyzer by entering valid HTML hexadecimal color codes into the appropriate fields on the Color Scheme page. For example, you can alter the colors in Data Analyzer to match your corporate color scheme. For more information about creating a color scheme, see “Managing Color Schemes and Logos” on page 74.

Table A-1 lists the colors and hexadecimal color codes you can use when creating color schemes for Data Analyzer:

Table A-1. HTML Color Codes for Color Schemes

Color Name        Color Code    Color Name        Color Code
alice blue        F0F8FF        blue              0000FF
antique white     FAEBD7        blue violet       8A2BE2
antique white1    FFEFDB        blue1             0000FF
antique white2    EEDFCC        blue2             0000EE
antique white3    CDC0B0        blue3             0000CD
antique white4    8B8378        blue4             00008B
aquamarine        7FFFD4        brown             A52A2A
aquamarine1       7FFFD4        brown1            FF4040
aquamarine2       76EEC6        brown2            EE3B3B
aquamarine3       66CDAA        brown3            CD3333
aquamarine4       458B74        brown4            8B2323
azure             F0FFFF        burlywood         DEB887
azure1            F0FFFF        burlywood1        FFD39B
azure2            E0EEEE        burlywood2        EEC591
azure3            C1CDCD        burlywood3        CDAA7D

HTML Color Codes for Color Schemes Color Name azure4 beige bisque bisque1 bisque2 bisque3 bisque4 black blanched almond chartreuse3 chartreuse4 chocolate chocolate1 chocolate2 chocolate3 chocolate4 coral coral1 coral2 coral3 coral4 cornflower blue cornsilk cornsilk1 cornsilk2 cornsilk3 cornsilk4 cyan cyan1 cyan2 cyan3 cyan4 dark blue dark cyan dark goldenrod dark goldenrod1 dark goldenrod2 dark goldenrod4 Color Code 838B8B F5F5DC FFE4C4 FFE4C4 EED5B7 CDB79E 8B7D6B 000000 FFEBCD 66CD00 458B00 D2691E FF7F24 EE7621 CD661D 8B4513 FF7F50 FF7256 EE6A50 CD5B45 8B3E2F 6495ED FFF8DC FFF8DC EEE8CD CDC8B1 8B8878 00FFFF 00FFFF 00EEEE 00CDCD 008B8B 00008B 008B8B B8860B FFB90F EEAD0E 8B6508 Color Name burlywood4 cadet blue cadet blue1 cadet blue2 cadet blue3 cadet blue4 chartreuse chartreuse1 chartreuse2 dark khaki dark magenta dark olive green dark orange dark orange1 dark orange2 dark orange3 dark orange4 dark orchid dark orchid1 dark orchid2 dark orchid3 dark orchid4 dark red dark salmon dark sea green dark slate blue dark slate gray dark turquoise dark violet dark goldenrod3 dark olive green1 dark olive green2 dark olive green3 dark olive green4 dark sea green1 dark sea green2 dark sea green3 dark sea green4 Color Code 8B7355 5F9EA0 98F5FF 8EE5EE 7AC5CD 53868B 7FFF00 7FFF00 76EE00 BDB76B 8B008B 556B2F FF8C00 FF7F00 EE7600 CD6600 8B4500 9932CC BF3EFF B23AEE 9A32CD 68228B 8B0000 E9967A 8FBC8F 483D8B 2F4F4F 00CED1 9400D3 CD950C CAFF70 BCEE68 A2CD5A 6E8B3D C1FFC1 B4EEB4 9BCD9B 698B69 122 Appendix A: Hexadecimal Color Codes .Table A-1.

Color Name dark gray dark green dark slate gray3 dark slate gray4 deep pink deep pink1 deep pink2 deep pink3 deep pink4 dark slate gray3 deep sky blue deep sky blue1 deep sky blue2 deep sky blue3 deep sky blue4 dim gray dodger blue dodger blue1 dodger blue2 dodger blue3 dodger blue4 firebrick firebrick1 firebrick2 firebrick3 firebrick4 floral white forest green gainsboro ghostwhite gold gold1 gold2 gray22 gray23 gray24 gray25 gray26 Color Code A9A9A9 006400 79CDCD 528B8B FF1493 FF1493 EE1289 CD1076 8B0A50 79CDCD 00BFFF 00BFFF 00B2EE 009ACD 00688B 696969 1E90FF 1E90FF 1C86EE 1874CD 104E8B B22222 FF3030 EE2C2C CD2626 8B1A1A FFFAF0 228B22 DCDCDC F8F8FF FFD700 FFD700 EEC900 383838 3B3B3B 3D3D3D 404040 424242 Color Name dark slate gray1 dark slate gray2 gold3 deep sky blue deep sky blue1 deep sky blue2 deep sky blue3 deep sky blue4 dim gray dodger blue gold4 goldenrod goldenrod1 goldenrod2 goldenrod3 goldenrod4 gray gray0 gray1 gray10 gray100 gray11 gray12 gray13 gray14 gray15 gray16 gray17 gray18 gray19 gray2 gray20 gray21 gray50 gray51 gray52 gray53 gray54 Color Code 97FFFF 8DEEEE CDAD00 00BFFF 00BFFF 00B2EE 009ACD 00688B 696969 1E90FF 8B7500 DAA520 FFC125 EEB422 CD9B1D 8B6914 BEBEBE 000000 030303 1A1A1A FFFFFF 1C1C1C 1F1F1F 212121 242424 262626 292929 2B2B2B 2E2E2E 303030 050505 333333 363636 7F7F7F 828282 858585 878787 8A8A8A


Color Name gray27 gray28 gray29 gray3 gray30 gray31 gray32 gray33 gray34 gray35 gray36 gray37 gray38 gray39 gray4 gray40 gray41 gray42 gray43 gray44 gray45 gray46 gray47 gray48 gray49 gray5 gray79 gray8 gray80 gray81 gray82 gray83 gray84 gray85 gray86 gray87 gray88 gray89 Color Code 454545 474747 4A4A4A 080808 4D4D4D 4F4F4F 525252 545454 575757 595959 5C5C5C 5E5E5E 616161 636363 0A0A0A 666666 696969 6B6B6B 6E6E6E 707070 737373 757575 787878 7A7A7A 7D7D7D 0D0D0D C9C9C9 141414 CCCCCC CFCFCF D1D1D1 D4D4D4 D6D6D6 D9D9D9 DBDBDB DEDEDE E0E0E0 E3E3E3 Color Name gray55 gray56 gray57 gray58 gray59 gray6 gray60 gray61 gray62 gray63 gray64 gray65 gray66 gray67 gray68 gray69 gray7 gray70 gray71 gray72 gray73 gray74 gray75 gray76 gray77 gray78 honeydew1 honeydew2 honeydew3 honeydew4 hot pink hot pink3 hot pink4 hot pink1 indian red indian red1 indian red2 indian red3 Color Code 8C8C8C 8F8F8F 919191 949494 969696 0F0F0F 999999 9C9C9C 9E9E9E A1A1A1 A3A3A3 A6A6A6 A8A8A8 ABABAB ADADAD B0B0B0 121212 B3B3B3 B5B5B5 B8B8B8 BABABA BDBDBD BFBFBF C2C2C2 C4C4C4 C7C7C7 F0FFF0 E0EEE0 C1CDC1 838B83 FF69B4 CD6090 8B3A62 FF6EB4 CD5C5C FF6A6A EE6363 CD5555


Color Name gray9 gray90 gray91 gray92 gray93 gray94 gray95 gray96 gray97 gray98 gray99 green green yellow green1 green2 green3 green4 hot pink 2 honeydew lemon chiffon 2 lemon chiffon 3 lemon chiffon1 lemon chiffon4 light blue light blue2 light blue3 light coral light cyan light goldenrod light goldenrod yellow light goldenrod1 light goldenrod2 light goldenrod3 light goldenrod4 light gray light green light pink light salmon Color Code 171717 E5E5E5 E8E8E8 EBEBEB EDEDED F0F0F0 F2F2F2 F5F5F5 F7F7F7 FAFAFA FCFCFC 00FF00 ADFF2F 00FF00 00EE00 00CD00 008B00 EE6AA7 F0FFF0 EEE9BF CDC9A5 FFFACD 8B8970 ADD8E6 B2DFEE 9AC0CD F08080 E0FFFF EEDD82 FAFAD2 FFEC8B EEDC82 CDBE70 8B814C D3D3D3 90EE90 FFB6C1 FFA07A Color Name indian red4 ivory ivory1 ivory2 ivory3 ivory4 khaki khaki1 khaki2 khaki3 khaki4 lavender lavender blush lavender blush1 lavender blush2 lavender blush3 lavender blush4 lawn green lemon chiffon light yellow2 light yellow3 light yellow4 light blue1 light blue4 light cyan1 light cyan2 light cyan3 light cyan4 light pink1 light pink2 light pink3 light pink4 light sky blue1 light sky blue2 light sky blue3 light skyblue4 light steel blue1 light steel blue2 Color Code 8B3A3A FFFFF0 FFFFF0 EEEEE0 CDCDC1 8B8B83 F0E68C FFF68F EEE685 CDC673 8B864E E6E6FA FFF0F5 FFF0F5 EEE0E5 CDC1C5 8B8386 7CFC00 FFFACD EEEED1 CDCDB4 8B8B7A BFEFFF 68838B E0FFFF D1EEEE B4CDCD 7A8B8B FFAEB9 EEA2AD CD8C95 8B5F65 B0E2FF A4D3EE 8DB6CD 607B8B CAE1FF BCD2EE


HTML Color Codes for Color Schemes Color Name light salmon1 light salmon2 light salmon3 light salmon4 light sea green light sky blue light slate blue light slate gray light steel blue light steel blue4 light yellow light yellow1 maroon4 medium slate blue medium aquamarine medium blue medium orchid medium orchid1 medium orchid2 medium orchid3 medium orchid4 medium purple medium purple1 medium purple2 medium purple3 medium purple4 medium sea green medium spring green medium turquoise medium violet red midnight blue mint cream misty rose misty rose1 misty rose2 misty rose3 misty rose4 moccasin Color Code FFA07A EE9572 CD8162 8B5742 20B2AA 87CEFA 8470FF 708090 B0C4DE 6E7B8B FFFFE0 FFFFE0 8B1C62 7B68EE 66CDAA 0000CD BA55D3 E066FF D15FEE B452CD 7A378B 9370DB AB82FF 9F79EE 8968CD 5D478B 3CB371 00FA9A 48D1CC C71585 191970 F5FFFA FFE4E1 FFE4E1 EED5D2 CDB7B5 8B7D7B FFE4B5 Color Name light steel blue3 lime green linen magenta magenta1 magenta2 magenta3 magenta4 maroon maroon1 maroon2 maroon3 navy old lace olive drab olive drab1 olive drab2 olive drab3 olive drab4 orange orange red orange red1 orange red2 orange red3 orange red4 orange1 orange2 orange3 orange4 orchid orchid1 orchid2 orchid3 orchid4 pale goldenrod pale green pale green1 pale green2 Color Code A2B5CD 32CD32 FAF0E6 FF00FF FF00FF EE00EE CD00CD 8B008B B03060 FF34B3 EE30A7 CD2990 000080 FDF5E6 6B8E23 C0FF3E B3EE3A 9ACD32 698B22 FFA500 FF4500 FF4500 EE4000 CD3700 8B2500 FFA500 EE9A00 CD8500 8B5A00 DA70D6 FF83FA EE7AE9 CD69C9 8B4789 EEE8AA 98FB98 9AFF9A 90EE90 126 Appendix A: Hexadecimal Color Codes .Table A-1.

Table A-1. HTML Color Codes for Color Schemes Color Name navajo white navajo white1 navajo white2 navajo white3 navajo white4 pale turquoise3 pale turquoise4 pale violet red pale violet red 2 pale violet red 3 pale violet red1 pale violet red4 papaya whip peach puff peach puff1 peach puff2 peach puff3 peach puff4 peru pink pink1 pink2 pink3 pink4 plum plum1 plum2 plum3 plum4 powder blue purple purple1 purple2 purple3 purple4 red sienna sienna1 Color Code FFDEAD FFDEAD EECFA1 CDB38B 8B795E 96CDCD 668B8B DB7093 EE799F CD6889 FF82AB 8B475D FFEFD5 FFDAB9 FFDAB9 EECBAD CDAF95 8B7765 CD853F FFC0CB FFB5C5 EEA9B8 CD919E 8B636C DDA0DD FFBBFF EEAEEE CD96CD 8B668B B0E0E6 A020F0 9B30FF 912CEE 7D26CD 551A8B FF0000 A0522D FF8247 Color Name pale green3 pale green4 pale turquoise pale turquoise1 pale turquoise2 red1 red2 red3 red4 rosy brown rosybrown1 rosybrown2 rosybrown3 rosybrown4 royal blue royal blue1 royal blue2 royal blue3 royal blue4 saddle brown salmon salmon1 salmon2 salmon3 salmon4 sandy brown sea green seagreen1 seagreen2 seagreen3 seagreen4 seashell seashell1 seashell2 seashell3 seashell4 steel blue2 steel blue3 Color Code 7CCD7C 548B54 AFEEEE BBFFFF AEEEEE FF0000 EE0000 CD0000 8B0000 BC8F8F FFC1C1 EEB4B4 CD9B9B 8B6969 4169E1 4876FF 436EEE 3A5FCD 27408B 8B4513 FA8072 FF8C69 EE8262 CD7054 8B4C39 F4A460 2E8B57 54FF9F 4EEE94 43CD80 2E8B57 FFF5EE FFF5EE EEE5DE CDC5BF 8B8682 5CACEE 4F94CD HTML Hexadecimal Color Codes 127 .

HTML Color Codes for Color Schemes Color Name sienna2 sienna3 sienna4 sky blue sky blue1 sky blue2 sky blue3 sky blue4 slate blue slate blue1 slate blue2 slate blue3 slate blue4 slate gray slate gray1 slate gray2 slate gray3 slategray4 snow1 snow2 snow3 snow4 spring green spring green1 spring green2 spring green3 spring green4 steel blue steel blue1 wheat2 wheat3 wheat4 white white smoke yellow Color Code EE7942 CD6839 B47268 87CEEB 87CEFF 7EC0EE 6CA6CD 4A708B 6A5ACD 836FFF 7A67EE 6959CD 473C8B 778899 C6E2FF B9D3EE 9FB6CD 6C7B8B FFFAFA EEE9E9 CDC9C9 8B8989 00FF7F 00FF7F 00EE76 00CD66 008B45 4682B4 63B8FF EED8AE CDBA96 8B7E66 FFFFFF F5F5F5 FFFF00 Color Name steel blue4 tan tan1 tan2 tan3 tan4 thistle thistle1 thistle2 thistle3 thistle4 tomato tomato1 tomato2 tomato3 tomato4 turquoise turquoise1 turquoise2 turquoise3 turquoise4 violet violet red violet red 1 violet red 2 violet red3 violet red4 wheat wheat1 yellow green yellow1 yellow2 yellow3 yellow4 Color Code 36648B D2B48C FFA54F EE9A49 CD853F 8B5A2B D8BFD8 FFE1FF EED2EE CDB5CD 8B7B8B FF6347 FF6347 EE5C42 CD4F39 8B3626 40E0D0 00F5FF 00E5EE 00C5CD 00868B EE82EE D02090 FF3E96 EE3A8C CD3278 8B2252 F5DEB3 FFE7BA 9ACD32 FFFF00 EEEE00 CDCD00 8B8B00 128 Appendix A: Hexadecimal Color Codes .Table A-1.

APPENDIX B

Configuration Files

This appendix includes the following topics:
♦ Overview, 129
♦ Modifying the Configuration Files, 129
♦ Properties in DataAnalyzer.properties, 130
♦ Properties in infa-cache-service.xml, 137
♦ Properties in web.xml, 141

Overview

To customize Data Analyzer for your organization, you can modify the Data Analyzer configuration files. The configuration files define the appearance and operational parameters of Data Analyzer. The following configuration files contain the settings for an instance of Data Analyzer and are stored in its EAR file, in the Data Analyzer EAR directory:
♦ DataAnalyzer.properties. Contains the configuration settings for an instance of Data Analyzer.
♦ infa-cache-service.xml. Contains the global cache configuration settings for Data Analyzer. Although infa-cache-service.xml contains many settings, you only need to modify specific settings.
♦ web.xml. Contains additional configuration settings for an instance of Data Analyzer. Although web.xml contains many settings, you only need to modify specific settings.

Modifying the Configuration Files

Each instance of Data Analyzer has an associated enterprise archive (EAR) file.

To change the settings in the configuration files stored in the Data Analyzer EAR file, complete the following steps:
1. With a text editor, open the configuration file you want to modify and search for the setting you want to customize.
2. Change the settings and save the configuration file.
3. Restart Data Analyzer.

Properties in DataAnalyzer.properties

The DataAnalyzer.properties file contains the configuration settings for an instance of Data Analyzer. You can modify DataAnalyzer.properties to customize the operation of an instance of Data Analyzer. You must customize some properties in DataAnalyzer.properties together to achieve a specific result. In the following groups of properties, you may need to modify more than one property to effectively customize Data Analyzer operations:
♦ Dynamic Data Source Pool Properties. Data Analyzer internally maintains a pool of JDBC connections to the data source. Several properties in DataAnalyzer.properties control the processes within the connection pool. To optimize the database connection pool for a data source, modify the following properties:
− dynapool.minCapacity
− dynapool.maxCapacity
− dynapool.evictionPeriodMins
− dynapool.waitForConnectionSeconds
− dynapool.connectionIdleTimeMins
− datamart.defaultRowPrefetch
For more information, see “Connection Pool Size for the Data Source” on page 115.
♦ Security Adapter Properties. If you use LDAP authentication, Data Analyzer periodically updates the list of users and groups in the repository with the list of users and groups in the LDAP directory service. Data Analyzer provides a synchronization scheduler that you can customize to set the schedule for these updates based on the requirements of your organization. To customize the synchronization scheduler, modify the following properties:
− securityadapter.frequency
− securityadapter.syncOnSystemStart

♦ UI Configuration Properties. This set of properties determines the look and feel of the Data Analyzer user interface. Together they define a single user interface configuration. To customize the navigation and header display of Data Analyzer, you can modify the following properties:
− uiconfig.ConfigurationName.ShowHeader
− uiconfig.ConfigurationName.ShowNav

Note: Do not modify the properties in the section of DataAnalyzer.properties labeled "For Data Analyzer system use only."

Table B-1 describes the properties in the DataAnalyzer.properties file:

Table B-1. Properties in DataAnalyzer.properties

alert.fromaddress
From address used for alerts sent by Data Analyzer. Default is alert@informatica.com. Leaving the default value does not affect alert functionality. However, you need to enter a valid email address for your organization. If you use an SMTP mail server, you must enter an email address that includes a domain.

api.compatibility.level
Compatibility level of the API. Supported values are 40 or blank. Set it to 40 to force the current API to behave in the same way as the Data Analyzer 4.0 and 4.1 API. Set it to blank to use the current API.

Cache.GlobalCaching
Determines whether global caching is enabled for the repository. When global caching is enabled, Data Analyzer creates a cache in memory for repository objects accessed by Data Analyzer users. When a user accesses an object that exists in the cache, Data Analyzer retrieves it from the cache instead of accessing the repository. Set to true to increase Data Analyzer performance. If set to false, Data Analyzer retrieves objects from the repository each time a user accesses them. Default is true. You might want to disable global caching for the following reasons:
- The machine running Data Analyzer has insufficient memory for the global cache.
- The machine where the repository database resides performs fast enough that enabling global caching does not provide a performance gain.
infa-cache-service.xml determines how the global cache is configured. You can modify several properties in that file to customize how the global cache works. For more information about configuring global caching, see "Properties in infa-cache-service.xml" on page 137.

Cache.Subscription.NoOfDaysToExpire
Number of days before a subscription for cached reports expires. Default is 7.

Chart.Fontname
Font to use in all charts generated by this instance of Data Analyzer. Default is Helvetica. The font must exist on the machine hosting Data Analyzer. If you are using the Internet Explorer browser, have installed Adobe SVG Viewer, and enabled interactive charts, the font must also exist on the workstation that accesses Data Analyzer. If you are using the Mozilla Firefox browser, the font does not have to exist on the workstation. For more information about editing your general preferences to enable interactive charts, see the Data Analyzer User Guide.
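For example, these settings might appear in DataAnalyzer.properties as in the following sketch. The configuration name `intranet` and the email address are hypothetical values used for illustration, not defaults:

```properties
# From address for alert emails. With an SMTP mail server, the
# address must include a domain (hypothetical address shown).
alert.fromaddress=da-alerts@example.com

# UI configuration named "intranet" (hypothetical name): show the
# header but hide the navigation bar. Setting ShowHeader to false
# would implicitly set ShowNav to false as well.
uiconfig.intranet.ShowHeader=true
uiconfig.intranet.ShowNav=false
```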

Chart.Fontsize
Maximum font size to use on the chart axis labels and legend. Data Analyzer determines the actual font size, but will not use a font size larger than the value of this property. Default is 10.

Chart.MaxDataPoints
Maximum number of data points to plot in all charts. If Data Analyzer users select more data points than the value of this property, an error message appears. Default is 1000.

Chart.Minfontsize
Minimum font size to use on the chart axis labels and legend. Data Analyzer determines the actual font size, but will not use a font size smaller than the value of this property. The value must be smaller than the value of Chart.Fontsize. Default is 7.

compressionFilter.compressableMimeTypes
MIME types for dynamic content that Data Analyzer compresses. Enter a comma-separated list of MIME types. If the browser does not support compressed files of a MIME type, Data Analyzer does not compress dynamic content of the unsupported MIME type. By default, Data Analyzer compresses dynamic content of the following MIME types: text/html, text/javascript, application/x-javascript.

compressionFilter.alwaysCompressMimeTypes
MIME types for dynamic content that Data Analyzer always compresses, without verifying that the browser can support compressed files of this MIME type. Some MIME types are handled by plug-ins that decompress natively. These MIME types may work with compression regardless of whether the browser supports compression or whether an intervening proxy would otherwise break compression. Using this property may result in marginally better performance than using compressionFilter.compressableMimeTypes after verifying browser support. However, if Data Analyzer compresses a MIME type not supported by the browser, the browser might display an error. Enter a comma-separated list of MIME types. By default, no MIME types are listed.

compressionFilter.compressThreshold
Minimum size (in bytes) for a response to trigger compression. Data Analyzer compresses responses if the response size is larger than this number and if it has a compressible MIME type. Typically, the default is sufficient for most organizations. Default is 512.

CustomLayout.MaximumNumberofContainers
Maximum number of containers allowed in custom layouts for dashboards. Default is 30.

datamart.defaultRowPrefetch
Maximum number of rows that Data Analyzer fetches in a report query. Default is 20.
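As a sketch, the compression settings above might be tuned as follows (the threshold value is illustrative, not a recommendation):

```properties
# Compress these dynamic MIME types when the browser supports it.
compressionFilter.compressableMimeTypes=text/html,text/javascript,application/x-javascript
# Only compress responses larger than 1 KB.
compressionFilter.compressThreshold=1024
```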

datamart.transactionIsolationLevel.DataSourceName
Transaction isolation level for each data source used in your Data Analyzer instance. If no property is set for a data source, Data Analyzer uses the default transaction level of the database. Supported values are:
- NONE. Transactions are not supported.
- READ_UNCOMMITTED. Dirty reads, non-repeatable reads, and phantom reads can occur.
- READ_COMMITTED. Dirty reads cannot occur. Non-repeatable reads and phantom reads can occur.
- REPEATABLE_READ. Dirty reads and non-repeatable reads cannot occur. Phantom reads can occur.
- SERIALIZABLE. Dirty reads, non-repeatable reads, and phantom reads cannot occur.
Add a property for each data source and then enter the appropriate value for that data source. For example, you have a data source named ias_demo that you want to set to READ_UNCOMMITTED and another data source named ias_test that you want to set to REPEATABLE_READ (assuming that the databases these data sources point to support the respective transaction levels). Add the following entries:
- datamart.transactionIsolationLevel.ias_demo=READ_UNCOMMITTED
- datamart.transactionIsolationLevel.ias_test=REPEATABLE_READ

DataRestriction.OldBehavior
Provided for backward compatibility. If set to true, Data Analyzer uses the data restriction merging behaviors in Data Analyzer 4.x and previous releases and does not support AND/OR conditions in data restriction filters. If set to false, Data Analyzer uses the data restriction merging behavior provided in Data Analyzer 5.0 and supports AND/OR conditions in data restriction filters. Default is false.

datatype.CLOB.datalength
Determines the maximum number of characters in a CLOB attribute that Data Analyzer displays in a report cell. Increasing this setting can slow Data Analyzer performance. Default is 1000. For more information about CLOB support, see the Data Analyzer Schema Designer Guide.

dynapool.allowShrinking
Determines whether the pool can shrink when connections are not in use. Default is true.

dynapool.capacityIncrement
Number of connections that can be added at one time. Default is 2.

dynapool.initialCapacity
Minimum number of initial connections in the data source pool. The value must be greater than zero and cannot exceed dynapool.maxCapacity. Set the value to 25% of the maximum concurrent users. Default is 2.

dynapool.maxCapacity
Maximum number of connections that the data source pool may grow to. Set the value to the total number of concurrent users. Default is 20.

dynapool.poolNamePrefix
String to use as a prefix for the dynamic JDBC pool name. Default is IAS_.
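Written out as DataAnalyzer.properties entries, the ias_demo and ias_test example reads:

```properties
# Per-data-source transaction isolation levels. The databases behind
# these data sources must support the chosen levels.
datamart.transactionIsolationLevel.ias_demo=READ_UNCOMMITTED
datamart.transactionIsolationLevel.ias_test=REPEATABLE_READ
```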

dynapool.refreshTestMinutes
Frequency in minutes at which Data Analyzer performs a health check on the idle connections in the pool. Data Analyzer should not perform the check too frequently because it locks up the connection pool and may prevent other clients from grabbing connections from the pool. Default is 60.

dynapool.shrinkPeriodMins
Number of minutes Data Analyzer allows an idle connection to be in the pool. After this period, the number of connections in the pool reverts to the value of its initialCapacity parameter if the allowShrinking parameter is true. Default is 5.

dynapool.waitForConnection
Determines whether Data Analyzer waits for a database connection if none are available in the connection pool. Default is true.

dynapool.waitSec
Maximum number of seconds a client waits to grab a connection from the pool if none is readily available before giving a timeout error. Default is 1.

GroupBySuppression.GroupOnAttributePair
Determines whether Data Analyzer groups values by row attributes in cross tabular report tables for reports with a suppressed GROUP BY clause when the data source stores a dataset in more than one row in a table. Set to true to group values by the row attributes. Set to false if you do not want the Data Analyzer report to group the data based on the row attributes. If the data source stores a dataset in a single row in a table, the value of this property does not affect how the report displays. Default is true. For more information, see the Data Analyzer User Guide.

help.files.url
URL for the location of Data Analyzer online help files. By default, the installation process installs online help files on the same machine as Data Analyzer and sets the value of this property.

host.url
URL for the Data Analyzer instance. By default, the Data Analyzer installation sets the value of this property in the following format: http://Hostname:PortNumber/InstanceName/

import.transaction.timeout.seconds
Number of seconds after which the import transaction times out. To import a large XML file, you might need to increase this value. Default is 3600 seconds (1 hour).

Indicator.pollingIntervalSeconds
Frequency in seconds that Data Analyzer refreshes indicators with animation. Default is 300 seconds (5 minutes).

jdbc.log.append
Determines whether to append or overwrite new log information to the JDBC log file. Set to true to append new messages. Set to false to overwrite existing information. Default is true.
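For example, a pool tuned to wait briefly for connections and to shrink idle capacity might use entries like the following sketch (the values shown are illustrative, not recommendations):

```properties
# Wait for a free connection, but time out after 2 seconds.
dynapool.waitForConnection=true
dynapool.waitSec=2
# Allow idle connections to sit for 10 minutes before the pool
# shrinks, and health-check idle connections once an hour.
dynapool.allowShrinking=true
dynapool.shrinkPeriodMins=10
dynapool.refreshTestMinutes=60
```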

jdbc.log.file
Name of the JDBC log file. If you do not specify a path, Data Analyzer creates the JDBC log file in the following default directory: <PowerCenter_install folder>/server/tomcat/jboss/bin/. Default is iasJDBC.log. To specify a path, set the value of the property to include the path and file name. For example, to set the log file to myjdbc.log in a directory called Log_Files in the D: drive, set the value of the property to include the path and file name: jdbc.log.file=d:/Log_Files/myjdbc.log. Use the forward slash (/) or two backslashes (\\) as the file separator. Data Analyzer does not support a single backslash as a file separator.

logging.activity.maxRowsToDisplay
Maximum number of rows to display in the activity log. Displaying a number larger than the default value may cause the browser to stop responding. If set to zero, Data Analyzer displays an unlimited number of rows. If not specified, defaults to 1000. Default is 1000.

logging.user.maxRowsToDisplay
Maximum number of rows to display in the user log. Displaying a number larger than the default value may cause the browser to stop responding. If set to zero, Data Analyzer displays an unlimited number of rows. If not specified, defaults to 1000. Default is 1000.

Maps.Directory
Directory where the XML files that represent maps for the Data Analyzer geographic charts are located. The directory must be located on the machine where Data Analyzer is installed. The default location is in the following directory: <PCAEInstallationDirectory>/DataAnalyzer/maps

PDF.HeaderFooter.ShrinkToWidth
Determines how Data Analyzer handles header and footer text in reports saved to PDF. Set to true to allow Data Analyzer to shrink the font size of long headers and footers to fit the configured space. Set to false to use the configured font size and allow Data Analyzer to display only the text that fits in the header or footer. Default is true. For more information, see "Configuring Report Headers and Footers" on page 88.

providerContext.abortThresHold
Defines the maximum percentage of memory that is in use before Data Analyzer stops building report result sets and running report queries. The percentage is calculated by dividing the used memory by the total memory configured for the JVM. If the percentage is above the threshold, Data Analyzer displays an error and notifies the user about the low memory condition. If the percentage is below the threshold, Data Analyzer continues with the requested operation. Default is 95.

providerContext.maxInMemory
Number of reports that Data Analyzer keeps in memory for a user session. The default value is 2. Data Analyzer does not retain report results when you set the property value below 2. Data Analyzer does not consider the value set for this property while retaining results of the reports that are part of a workflow or drill path.

queryengine.estimation.window
Number of days used to estimate the query execution time for a particular report. Data Analyzer estimates the execution time for a report by averaging all execution times for that report during this estimation window. Default is 30.
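As a sketch, a custom JDBC log location and tighter log display limits might be configured as follows (the D:/Log_Files path comes from the example above; the row limits are illustrative):

```properties
# JDBC log in a custom location. Use / or \\ as the file separator;
# a single backslash is not supported.
jdbc.log.file=d:/Log_Files/myjdbc.log
jdbc.log.append=true

# Show at most 500 rows in the activity and user logs.
logging.activity.maxRowsToDisplay=500
logging.user.maxRowsToDisplay=500
```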

ReportingService.batchsize
Number of users that the PowerCenter Service Manager processes in a batch. During synchronization, the Service Manager copies the users from the domain configuration database to the Data Analyzer repository in batches. The Service Manager considers the value set for this property as the batch size to copy the users. You can add this property to DataAnalyzer.properties and set the value of the batch size. Default is 300.

report.maxRowsPerTable
Maximum number of rows to display for each page or section for a report on the Analyze tab. If the value is not an increment of 5, Data Analyzer rounds the value up to the next value divisible by 5. Default is 65.

report.maxSectionSelectorValues
Maximum number of attribute values users can select for a sectional report table. If a report has more sections than the value set for this property, Data Analyzer displays all sections on the Analyze tab. Default is 100.

report.maxSectionsPerPage
Maximum number of sectional tables to display per page on the Analyze tab. If a report contains more sectional tables than this number, Data Analyzer displays the sections on multiple pages. Default is 15.

report.showSummary
Determines whether Data Analyzer displays the Summary section in a sectional report table when you email a report from the Find tab or when you use the Data Analyzer API to generate a report. Set to true to display the Summary section and hide the Grand Totals section on the Analyze tab, in reports emailed from the Find tab, and in reports generated by the Data Analyzer API. Set to false to display both the Summary and Grand Totals sections on the Analyze tab but hide these sections in reports emailed from the Find tab and in reports generated by the Data Analyzer API. Default is true.

report.userReportDisplayMode
Determines the default tab on which Data Analyzer opens a report when users double-click a report on the Find tab. Possible values are view or analyze. Users can change this default report view by editing their Report Preferences on the Manage Account tab. Default is view.

securityadapter.frequency
Determines the number of minutes between synchronization of the Data Analyzer user list. This property specifies the interval between the end of the last synchronization and the start of the next synchronization. If you set the time interval to 0, Data Analyzer disables all user list synchronization, including synchronization at startup. Default is 720 minutes (12 hours).

securityadapter.syncOnSystemStart
Determines whether Data Analyzer synchronizes the user list at startup. If true, Data Analyzer synchronizes the user list when it starts. If the property is not set, or is set to false, Data Analyzer does not synchronize the user list at startup. Default is true.

servlet.compress
Determines whether the servlet compresses files. Set to true to enable servlet compression. Set to false to disable. Set to false only if you see problems with compressed content. Default is true.
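For example, to synchronize the LDAP user list every 6 hours, skip synchronization at startup, and open reports on the Analyze tab by default, the entries might read (values illustrative):

```properties
# Minutes between LDAP user list synchronizations (0 disables all
# synchronization, including synchronization at startup).
securityadapter.frequency=360
securityadapter.syncOnSystemStart=false

# Open reports on the Analyze tab when double-clicked on the Find tab.
report.userReportDisplayMode=analyze
```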

servlet.compress.jscriptContentEncoding
Determines whether the servlet compresses JavaScript loaded by <script> tags through content-encoding for browsers that support this compression. Set to true to enable servlet compression of JavaScript. Set to false to disable. Set to false only if you see problems with compressed JavaScript. Default is true.

servlet.useCompressionThroughProxies
Determines whether the server verifies that the browser contains an Accept-Encoding header and thus supports compression before sending a compressed response. Set to true to allow the server to send compressed files without checking for browser support. Set to true only if all browsers used by Data Analyzer users support compression. Set to false to force the server to check whether the browser can handle compression before sending compressed files; this check is safer but can have an impact on performance. Default is false.

TimeDimension.useDateConversionOnPrimaryDate
Determines whether Data Analyzer converts a primary date column from date and time to date before using the primary date in SQL queries with date field comparisons. Applicable to the following types of time dimension:
- Date only
- Date and time in separate tables
- Date and time as separate attributes in same table
Set this property to false if the primary date is stored in a DATE column and date conversion is not necessary. Data Analyzer uses the primary date in date comparisons without any date conversion. However, if the datatype of the primary date column in the table is TIMESTAMP, a date conversion is necessary to avoid SQL errors. For example, you define a Date Only time dimension, the data source is DB2, and this property is set to the default value of false. DB2 generates an error when Data Analyzer compares the primary date column with another column that has a DATE datatype. In this case, to ensure that Data Analyzer always converts the primary date column to DATE before using it in date comparisons, set this property to true. The date conversion ensures that Data Analyzer accurately compares dates. Default is false.

uiconfig.ConfigurationName.ShowHeader
Determines whether to display the header section for the Data Analyzer pages, including the logo, navigation bar, and logout links, for the given user interface configuration. Set to true to display the header section. Set to false to hide the header section. Setting ShowHeader to false implicitly sets ShowNav to false. Default is true.

uiconfig.ConfigurationName.ShowNav
Determines whether to display the Data Analyzer navigation bar for the given configuration. Set to true to display the navigation bar. Set to false to hide the navigation bar. Default is true.

Properties in infa-cache-service.xml

A cache is a memory area where frequently accessed data can be stored for rapid access. The Cache.GlobalCaching property in DataAnalyzer.properties determines whether global caching is enabled for Data Analyzer. For more information about enabling global caching, see "Properties in DataAnalyzer.properties" on page 130.

When global caching is enabled, Data Analyzer creates a global cache in memory for repository objects accessed by Data Analyzer users. When a user first accesses an object, for example, a report or dashboard, Data Analyzer retrieves the object from the repository and then stores the object in memory. The next time a user accesses the same object, Data Analyzer retrieves the object from the global cache instead of the repository. When a user modifies an object that exists in the global cache, Data Analyzer removes the object from the cache and then saves the updated object to the repository. The next time a user accesses the updated object, Data Analyzer retrieves the object from the repository.

Data Analyzer uses JBoss Cache to maintain the global cache for Data Analyzer. When global caching is enabled, the properties in infa-cache-service.xml determine how the global cache is configured. Data Analyzer stores data in the global cache in a hierarchical tree structure consisting of nodes. A node contains the data for a single cached object.

Use infa-cache-service.xml to configure the following global cache features:
♦ Lock acquisition timeout
♦ Eviction policy

If you disable global caching in the Cache.GlobalCaching property in DataAnalyzer.properties, Data Analyzer ignores the properties in infa-cache-service.xml. Although infa-cache-service.xml contains a number of properties to support the global cache, only the properties documented in this section are supported by Data Analyzer. Changes to the default values of the unsupported properties may generate unexpected results. For more information about JBoss Cache, see the JBoss Cache documentation library: http://labs.jboss.com/portal/jbosscache/docs

Configuring the Lock Acquisition Timeout

The global cache uses an optimistic node locking scheme to prevent Data Analyzer from encountering deadlocks. If a user updates an object that exists in the global cache, Data Analyzer acquires a lock on the object node when it commits the update or delete transaction to the repository. When the transaction completes, Data Analyzer releases the lock on the object node. Data Analyzer may not be able to acquire a lock on an object node in the global cache under the following conditions:
♦ Another user or background process has locked the same object node.
♦ Data Analyzer has lost the connection to the repository.

The LockAcquisitionTimeout attribute in infa-cache-service.xml determines how long Data Analyzer attempts to acquire a lock on an object node. If Data Analyzer cannot acquire a lock during this time period, it rolls back the transaction and displays an appropriate message to the user. By default, the LockAcquisitionTimeout attribute is set to 10,000 milliseconds. If Data Analyzer frequently rolls back transactions due to lock acquisition timeouts, you can increase the value of the LockAcquisitionTimeout attribute.

To configure the lock acquisition timeout:
1. In the directory where you extracted the Data Analyzer EAR file, locate infa-cache-service.xml in the following directory: <PowerCenter_Install folder>/server/tomcat/jboss/server/informatica/ias/<reporting service name>/properties
2. Open the infa-cache-service.xml file with a text editor.
3. Locate the following text: name="LockAcquisitionTimeout"
4. Change the attribute value according to your requirements. <attribute name="LockAcquisitionTimeout">10000</attribute>
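For example, to allow up to 20 seconds for lock acquisition (an illustrative value; the default is 10,000 milliseconds), the attribute would read:

```xml
<!-- Roll back the transaction if a node lock cannot be acquired
     within 20000 milliseconds. -->
<attribute name="LockAcquisitionTimeout">20000</attribute>
```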

5. Save and close infa-cache-service.xml.
6. Add infa-cache-service.xml back to the Data Analyzer EAR file.

Configuring the Eviction Policy

To manage the size of the global cache, Data Analyzer uses an eviction policy to remove the least frequently used objects from the cache when the cache approaches its memory limit. infa-cache-service.xml includes several eviction policy attributes. You can modify these attributes to customize when Data Analyzer removes objects from the global cache.

The eviction policy works on regions of the global cache. Each global cache region contains the cached data for a particular object type. For example, the /Reports region contains all cached reports. You can configure a different eviction policy for each region so that Data Analyzer caches more or fewer objects of a particular type. For example, if a large number of concurrent users frequently access dashboards but not reports, you can increase the maximum number of dashboards and decrease the maximum number of reports that Data Analyzer stores in the global cache.

infa-cache-service.xml defines the following global cache regions:
♦ /Attributes. Attribute definitions.
♦ /Dashboards. Dashboard definitions.
♦ /DataConnectors. Data connector definitions.
♦ /DataSources. Data source definitions.
♦ /Metrics. Metric definitions.
♦ /Reports. Report definitions.
♦ /Reports/User. User specific objects defined for reports. For example, indicators, gauges, and highlighting rules added by each user.
♦ /Reports/Variables. Global variables used in reports.
♦ /Schemas. Operational, hierarchical, and analytic schemas.
♦ /Security. Access permissions on an object and data restrictions defined for users or groups.
♦ /System. Administrative system settings. For example, color schemes, logs, and delivery settings.
♦ /Time. Calendar and time dimension definitions, and current time values for calendar and time dimension definitions.
♦ /Trees. Content folder definitions in the Find tab.
♦ /Users. User profiles, group definitions, role definitions, and contact information.
♦ /_default_. Default region if an object does not belong to any of the other defined regions.

Table B-2 lists the eviction policy attributes you can configure for the global cache:

Table B-2. Eviction Policy Attributes

wakeUpIntervalSeconds
Frequency in seconds that Data Analyzer checks for objects to remove from the global cache. You can decrease this value to have Data Analyzer run the eviction policy more frequently. Default is 60 seconds.

maxNodes
Maximum number of objects stored in the specified region of the global cache. Defined for each region of the global cache. Set the value to 0 to have Data Analyzer cache an infinite number of objects. Data Analyzer writes informational messages to a global cache log file when a region approaches its maxNodes limit. Default varies for each region.

timeToLiveSeconds
Maximum number of seconds an object can remain idle in the global cache. Defined for each region of the global cache. Set the value to 0 to define no time limit. Default varies for each region. By default, infa-cache-service.xml defines an idle time limit only for regions that contain user specific data. For example, the /Users region has a timeToLiveSeconds value of 1,800 seconds (30 minutes). Data Analyzer removes cached user data if it has not been accessed for 30 minutes. If Data Analyzer runs on a machine with limited memory, you can define idle time limits for the other regions so that Data Analyzer removes objects from the cache before the maxNodes limit is reached.

maxAgeSeconds
Maximum number of seconds an object can remain in the global cache. Defined for each region of the global cache. Set the value to 0 to define no time limit. Default varies for each region. By default, infa-cache-service.xml defines a maximum age limit for only the /_default_ region. If Data Analyzer runs on a machine with limited memory, you can define maximum age limits for the other regions so that Data Analyzer removes objects from the cache before the maxNodes limit is reached.

Data Analyzer checks for objects to remove from the global cache at the following times:
♦ The wakeUpIntervalSeconds time period ends. Data Analyzer removes objects that have reached the timeToLiveSeconds or maxAgeSeconds limits.
♦ A global cache region reaches its maxNodes limit. Data Analyzer removes the least recently used object from the region. Data Analyzer also removes objects from any region that have reached the timeToLiveSeconds or maxAgeSeconds limits.

To configure the eviction policy:
1. In the directory where you extracted the Data Analyzer EAR file, locate infa-cache-service.xml in the following directory: <PowerCenter_Install folder>/server/tomcat/jboss/server/informatica/ias/<reporting service name>/properties
2. Open the infa-cache-service.xml file with a text editor.
3. Locate the following text: name="wakeUpIntervalSeconds"
4. Change the value of the wakeUpIntervalSeconds attribute according to your requirements. <attribute name="wakeUpIntervalSeconds">60</attribute>
5. Locate the region whose eviction policy you want to modify. For example, to locate the /Dashboards region, locate the following text: region name="/Dashboards"
6. Change the attribute values for the region according to your requirements. For example, to change the attribute values for the /Dashboards region, modify the following lines:
<region name="/Dashboards">
<attribute name="maxNodes">200</attribute>
<attribute name="timeToLiveSeconds">0</attribute>
<attribute name="maxAgeSeconds">0</attribute>
</region>
7. Repeat steps 5 to 6 for each of the global cache regions whose eviction policy you want to modify.
8. Save and close infa-cache-service.xml.
9. Add the infa-cache-service.xml file back to the Data Analyzer EAR file.
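As a sketch, a deployment where dashboards are accessed far more often than reports might enlarge the /Dashboards region and tighten /Reports (the maxNodes and timeToLiveSeconds values below are illustrative, not defaults):

```xml
<!-- Cache up to 500 dashboards with no idle or age limit. -->
<region name="/Dashboards">
  <attribute name="maxNodes">500</attribute>
  <attribute name="timeToLiveSeconds">0</attribute>
  <attribute name="maxAgeSeconds">0</attribute>
</region>
<!-- Cache at most 100 reports; evict reports idle for 30 minutes. -->
<region name="/Reports">
  <attribute name="maxNodes">100</attribute>
  <attribute name="timeToLiveSeconds">1800</attribute>
  <attribute name="maxAgeSeconds">0</attribute>
</region>
```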

Properties in web.xml

The web.xml file contains configuration settings for Data Analyzer. You can modify this file to customize the operation of an instance of Data Analyzer. Although the web.xml file contains a number of settings, you typically modify only specific settings in the file.

Table B-3 describes the properties in web.xml that you can modify:

Table B-3. Properties in web.xml

enableGroupSynchronization
If you use LDAP authentication, this property determines whether Data Analyzer updates the groups in the repository when it synchronizes the list of users and groups in the repository with the LDAP directory service. By default, during synchronization, Data Analyzer deletes the users and groups in the repository that are not found in the LDAP directory service. If you want to keep user accounts in the LDAP directory service but keep the groups in the Data Analyzer repository, set this property to false so that Data Analyzer does not delete or add groups to the repository during synchronization. When this property is set to false, Data Analyzer synchronizes only user accounts, not groups, during synchronization. You must maintain the group information within Data Analyzer. Default is true.

login-session-timeout
Session timeout, in minutes, for an inactive session on the Login page. If the user does not successfully log in and the session remains inactive for the specified time period, the session expires. After the user successfully logs in, Data Analyzer resets the session timeout to the value of the session-timeout property. Default is 5.

searchLimit
Maximum number of groups or users Data Analyzer displays in the search results before requiring you to refine your search criteria. Default is 1000.

session-timeout
Session timeout, in minutes, for an inactive session. Data Analyzer terminates sessions that are inactive for the specified time period. Default is 30.

showSearchThreshold
Maximum number of groups or users Data Analyzer displays before displaying the Search box so you can find a group or user. Default is 100.

TemporaryDir
Directory where Data Analyzer stores temporary files. Default is tmp_ias_dir. By default, Data Analyzer creates the directory in the following default directory: <PCAEInstallationDirectory>/JBoss403/bin/. You can specify a full directory path such as D:/temp/DA. To specify a path, use the forward slash (/) or two backslashes (\\) as the file separator. Data Analyzer does not support a single backslash as a file separator. If you specify a new directory, the directory must be a shared file system that all servers in the cluster can access.
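web.xml follows the standard servlet deployment descriptor format. As an illustrative sketch only — verify the exact element names against the web.xml shipped with your installation, since Data Analyzer may wrap these settings in its own parameter elements — the temporary directory and session timeout could look like:

```xml
<context-param>
  <!-- Temporary file directory; use / or \\ as the file separator. -->
  <param-name>TemporaryDir</param-name>
  <param-value>D:/temp/DA</param-value>
</context-param>
<session-config>
  <!-- Terminate sessions inactive for 30 minutes. -->
  <session-timeout>30</session-timeout>
</session-config>
```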


INDEX

A
access permissions: change permission 14; creating 14; defined 13; Delete permission 14; exclusive 14; inclusive 14; read permission 13; schedules 24; setting 9, 13; using wildcards 14; write permission 13
activity log: configuring maximum rows 80, 135; saving 79; viewing and clearing 79
administrative reports: adding to schedules 96; Administrator's Dashboard 93; description 93; list and description 97; public folder 94; setting up 94
Administrator's Dashboard: dashboard for administrative reports 93
AIX: performance tuning 105
alert.fromaddress property: configuring 131
alerts: modifying From email address 131
analytic workflows: See also Data Analyzer User Guide; importing reports 54
AND operator: multiple data restrictions 16
api.compatibility.level property: configuring 131
application server: description 2
arguments: Import Export utility 66
attaching: imported reports to event-based schedule 37; reports to event-based schedule 32

B
background image URL: background image location 75
business days: default 29; setting 29

C
cache: See global cache
Cache.GlobalCaching property: configuring 131
Cache.Subscription.NoOfDaysToExpire property: configuring 131
cached reports: adding administrative reports to schedules 96; attaching to schedule after importing 26; importing 55
Calendar: business days 29; daily view 28; holidays 29; leap years 28; monthly view 28; viewing 28; weekly view 28
change permission: See access permissions
Chart.Fontname property: configuring 131
Chart.Fontsize property: configuring 132
Chart.MaxDataPoints property: configuring 132
Chart.Minfontsize property: configuring 132
clearing: activity log 79; event-based schedule histories 34; time-based schedule histories 24; user log 78
color schemes: assigning 78; background image URL 75; creating 76; customizing 74, 118; images directory 75; list of color codes 121; login page image URL 75; logo image URL 75; primary 75; primary navigation 76; secondary 75; secondary navigation 76; selecting 77; using default 74; viewing 77
compressionFilter.alwaysCompressMimeTypes property: configuring 132
compressionFilter.compressableMimeTypes property: configuring 132
compressionFilter.compressThreshold property: configuring 132
configuration files: DataAnalyzer.properties 130; infa-cache-service.xml 137; web.xml 141
contact information: specifying for system administrator 84
creating: event-based schedules 32; holidays 29; time-based schedules 22
CustomLayout.MaximumNumberofColumns property: configuring 132

D
daily view: Calendar 28
dashboards: exporting 44; importing 57
data access: restricting 10, 16
Data Analyzer: performance tuning 111
data lineage: using 5
data restrictions: AND operator 16; by fact table 17; by user or group 19; deleting 19, 20; exporting 45; importing 59; OR operator 16
data sources: creating 94; creating for Metadata Reporter 94; description 3
data warehouses: performance tuning 99
DataAnalyzer.properties: configuring 130
datamart.defaultRowPrefetch property: configuring 132
datamart.transactionIsolationLevel property: configuring 133
DataRestriction.OldBehavior property: configuring 133
datatype.CLOB.datalength property: configuring 133
date/time formats: in localization 8
DB2 database: performance tuning 101
default color scheme: using 74
delete permission: See access permissions
deleting: data restrictions 19, 20; event-based schedule histories 34; event-based schedules 35; scheduled reports 27, 37; time-based schedule histories 24; time-based schedules 25
disabling: event-based schedules 35; time-based schedules 25

E
enableGroupSynchronization property: configuring 141
error messages: Import Export utility 69
event-based schedules: access permissions 24; attaching imported reports 37; attaching reports 32; creating 32; defined 21; disabling 35; enabling 35; histories 34; managing reports 35; removing 35; schedule monitoring 29; starting immediately 34; stopping 35; stopping immediately 30; using PowerCenter Integration utility 33
exclusive permissions: See access permissions
exporting Data Analyzer objects: dashboards 44; data restrictions 45; global variables 44; group security profile 45; metrics 40; overview 39; reports 42; security profile 45; time dimension tables 42; user security profile 45; using Import Export utility 65
external URL: defined 83; registering 83

F
fact tables: restricting data access 17
footers: configuring report footers 88; display options 88

G
global cache: configuring 137; eviction policy 139; lock acquisition timeout 138; sizing 139
global variables: exporting 44; importing 56
GroupBySuppression.GroupOnAttributePair property: configuring 134
groups: displaying 91; removing from the repository 10, 11, 12; restricting data access 19; searchLimit parameter 91, 141; showSearchThreshold parameter 91, 141

H
header section: UI configuration 118
headers: configuring report headers 88; display options 88
heap size: importing large XML files 63
help.url property: configuring 134
histories: clearing 34; clearing schedule 24
holidays: creating 29
host.url property: configuring 134
HP-UX: performance tuning 102

I
images directory: color scheme location 75
Import Export utility: error messages 69; format 66; options and arguments 66; repository objects 68; running 66; using 65
import.transaction.timeout.seconds property: configuring 134
imported reports: attaching to schedules 26
importing: dashboards 57; data in multiple languages 8; data restrictions 59; global variables 56; group security profile 60; large XML files 62; overview 49; reports 54; schema objects 50; security profile 59; transaction timeout 62, 134; user security profile 59; using Import Export utility 65
inclusive permissions: See access permissions
Indicator.pollingIntervalSeconds property: configuring 134
infa-cache-service.xml: configuring 137

J
Java environment: viewing 84
JBoss Application Server: description 2
JDBC: log file 81, 134
jdbc.log.append property: configuring 134
jdbc.log.file property: configuring 135

L
language settings: backing up and restoring Data Analyzer repositories 7; Data Analyzer repository 7; data warehouse 7; import and export repository objects 8; importing table definitions 8
language support: display 7
LDAP authentication: server settings 81; synchronizing user list 136
leap years: Calendar 28
Linux: performance tuning 101
localization: Data Analyzer display language 7; date and number formats 8; displaying reports in Chinese or Japanese when exporting to PDF 8; language settings 7; overview 7; setting metric or attribute default values 8
log files: JDBC 81; managing 78
logging.activity.maxRowsToDisplay property: configuring 80, 135
logging.user.maxRowsToDisplay property: configuring 79, 135
login page image URL: login page image location 75
login-session-timeout property: configuring 141
logo image: customizing 74; logo image location 75

M
mail servers: configuring 83
Maps.Directory property: configuring 135
metrics: exporting 40; importing 50
Microsoft SQL Server 2000: performance tuning 101
monitoring: schedules 29
monthly view: Calendar 28
multiple instances of Data Analyzer: configuration files 129

N
navigation: color schemes 76
navigation bar: UI configuration 118
notifyias: using in PowerCenter post-session command 33

O
operating system: performance tuning 101; viewing 84
operational schemas: setting data restrictions 17
operators: AND 16; OR 16
options: Import Export utility 66
OR operator: multiple data restrictions 16
Oracle: performance tuning 100

P
PDF.HeaderFooter.ShrinkToWidth property: configuring 135; using 89
performance tuning: AIX 105; Data Analyzer processes 111; database 99; DB2 database 101; HP-UX 102; Linux 101; Microsoft SQL Server 2000 101; operating system 101; Oracle database 100; Solaris 103; Windows 106
permissions: See access permissions; setting 9
post-session command: using the PowerCenter Integration utility 33
PowerCenter Integration utility: using in a post-session command 33
PowerCenter Workflow Manager: using the PowerCenter Integration utility 33
predefined color scheme: using 74
previewing: report headers and footers 90
primary display item: color scheme 75
properties: defining in DataAnalyzer.properties 130; defining in infa-cache-service.xml 137; defining in web.xml 141
providerContext.abortThresHold property: configuring 135
providerContext.maxInMemory property: configuring 135

Q
queries: setting rules 85
query governing: query time limit 85; report processing time limit 85; row limit 85; setting rules 85; specifying for users 12
query time limit: defined 85
queryengine.estimation.window property: configuring 135

R
read permissions: See access permissions
recurring schedules: See time-based schedules
removing: See deleting
report processing time limit: defined 85
report.maxRowsPerTable property: configuring 136
report.maxSectionSelectorValues property: configuring 136
report.maxSectionsPerPage property: configuring 136
report.showSummary property: configuring 136
report.userReportDisplayMode property: configuring 136
ReportingService.batchsize: configuring 136
reports: See also Data Analyzer User Guide; adding administrative reports to schedules 96; administrative reports overview 93; attached to time-based schedules 25; attaching imported reports to event-based schedule 37; attaching to event-based schedule 32; attaching to schedule after importing 26; deleting from time-based schedules 27; displaying scroll bars in tables 88; exporting Data Analyzer objects 42; header and footer display options 88; importing 54; in event-based schedules 35; list of administrative reports 97; previewing headers and footers 90; removing from event-based schedules 37; setting headers and footers 88; viewing in event-based schedule 36; viewing properties 27
repository database: performance tuning 99
restore: repository language settings 7
row limit: query governing 85
row-level security: restricting data access 16
running: Import Export utility 66

S
saving: activity log 79; system log 80; user log 78
schedule monitoring: defined 29
scheduled reports: deleting 27; viewing 26, 36
schedules: See also event-based schedules; See also time-based schedules; attaching imported reports to schedules 26; for cached administrative reports 96; stopping 30
scheduling: business days 29; Calendar 28; holidays 29
schemas: restricting data access 17
scroll bars: report table option 88
searchLimit property: configuring 141
secondary display item: color schemes 75
security: access permissions 13
security profiles: exporting 45; exporting user 45; group 45; importing 59; importing group 60; importing user 59
securityadapter.frequency property: configuring 136
securityadapter.syncOnSystemStart property: configuring 136
servlet.compress property: configuring 136
servlet.compress.jscriptContentEncoding property: configuring 137
servlet.useCompressionThroughProxies property: configuring 137
session-timeout property: configuring 141
showSearchThreshold property: configuring 141
single sign-on: See also Data Analyzer SDK Guide; with Data Analyzer API 118
single-event schedules: See time-based schedules
Solaris: performance tuning 103
SQL queries: row limit 85; setting rules 85; time limit 85
starting: event-based schedules 34; time-based schedules 24
stopping: event-based schedules 35; running schedules 30; time-based schedules 25
synchronization scheduler: customizing 136
system administrator: using Import Export utility 65
system information: viewing 84
system log: configuring 80; saving 80; viewing 80

T
tasks: properties 27
temporary table space: importing large XML files 63
TemporaryDir property: configuring 141
time dimension tables: exporting Data Analyzer objects 42
time-based schedules: access permissions 24; creating 22; defined 21; deleting 25; disabling and enabling 25; histories 24; managing reports 25; schedule monitoring 29; starting immediately 24; stopping immediately 25; viewing the Calendar 28
TimeDimension.useDateConversionOnPrimaryDate property: configuring 137
timeout: changing default for transactions 62; configuring for Data Analyzer session 4
transaction timeout: changing the default 62

U
UI configuration: default 118; properties 131; setting up 118, 131; URL parameter 119
UICONFIG: URL parameter 119
URL: background image for color schemes 75; login page image for color schemes 75; logo image for color schemes 75
URL API: See also Data Analyzer SDK Guide; using 117
user log: configuring maximum rows 79, 135; saving 78; viewing and clearing 78
users: displaying 91; restricting data access 19; searchLimit parameter 91, 141; showSearchThreshold parameter 91, 141
UTF-8 character encoding: Data Analyzer support 7

V
viewing: activity log 79; histories for event-based schedules 34; reports attached to event-based schedules 36; reports attached to time-based schedules 26; system information 84; system log 80; time-based schedule histories 24; user log 78

W
web.xml: configuring 141
weekly view: Calendar 28
wildcards: searching user directory 14
Windows: performance tuning 106
work days: scheduling 29
write permissions: See access permissions

X
XML files: heap size for application 63; importing large files 62; temporary table space 63

NOTICES

This Informatica product (the "Software") includes certain drivers (the "DataDirect Drivers") from DataDirect Technologies, an operating company of Progress Software Corporation ("DataDirect"), which are subject to the following terms and conditions:

1. THE DATADIRECT DRIVERS ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT.

2. IN NO EVENT WILL DATADIRECT OR ITS THIRD PARTY SUPPLIERS BE LIABLE TO THE END-USER CUSTOMER FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, CONSEQUENTIAL OR OTHER DAMAGES ARISING OUT OF THE USE OF THE ODBC DRIVERS, WHETHER OR NOT INFORMED OF THE POSSIBILITIES OF DAMAGES IN ADVANCE. THESE LIMITATIONS APPLY TO ALL CAUSES OF ACTION, INCLUDING, WITHOUT LIMITATION, BREACH OF CONTRACT, BREACH OF WARRANTY, NEGLIGENCE, STRICT LIABILITY, MISREPRESENTATION AND OTHER TORTS.
