
Cognos® 8 Planning

CONTRIBUTOR

ADMINISTRATION GUIDE
Product Information
This document applies to Cognos® 8 Planning Version 8.3 and may also apply to subsequent releases. To check for newer versions of this
document, visit the Cognos Global Customer Services Web site (http://support.cognos.com).

Copyright
Copyright © 2007 Cognos Incorporated.
Portions of Cognos® software products are protected by one or more of the following U.S. Patents: 6,609,123 B1; 6,611,838 B1; 6,662,188
B1; 6,728,697 B2; 6,741,982 B2; 6,763,520 B1; 6,768,995 B2; 6,782,378 B2; 6,847,973 B2; 6,907,428 B2; 6,853,375 B2; 6,986,135 B2;
6,995,768 B2; 7,062,479 B2; 7,072,822 B2; 7,111,007 B2; 7,130,822 B1; 7,155,398 B2; 7,171,425 B2; 7,185,016 B1; 7,213,199 B2.
Cognos and the Cognos logo are trademarks of Cognos Incorporated in the United States and/or other countries. All other names are trademarks
or registered trademarks of their respective companies.
While every attempt has been made to ensure that the information in this document is accurate and complete, some typographical errors or
technical inaccuracies may exist. Cognos does not accept responsibility for any kind of loss resulting from the use of information contained
in this document.
This document shows the publication date. The information contained in this document is subject to change without notice. Any improvements
or changes to either the product or the document will be documented in subsequent editions.
U.S. Government Restricted Rights. The software and accompanying materials are provided with Restricted Rights. Use, duplication, or
disclosure by the Government is subject to the restrictions in subparagraph (C)(1)(ii) of the Rights in Technical Data and Computer Software
clause at DFARS 252.227-7013, or subparagraphs (C)(1) and (2) of the Commercial Computer Software - Restricted Rights at 48CFR52.227-19,
as applicable. The Contractor is Cognos Corporation, 15 Wayside Road, Burlington, MA 01803.
This software/documentation contains proprietary information of Cognos Incorporated. All rights are reserved. Reverse engineering of this
software is prohibited. No part of this software/documentation may be copied, photocopied, reproduced, stored in a retrieval system, transmitted
in any form or by any means, or translated into another language without the prior written consent of Cognos Incorporated.
Table of Contents

Introduction 13

What’s New? 17
New Features in Version 8.3 17
Extended Language support 17
Microsoft Vista Compliance 17
Microsoft Excel 2007 17
Select Folders in Cognos Connection 17
Select a Framework Manager Package for an Administration Link 18

Chapter 1: Cognos 8 Planning - Contributor 19


Extending the Functionality of Contributor 19
Using Contributor Applications 19
Cubes 19
Dimensions 19
e.Lists 19
Access Tables and Saved Selections 20
D-Links 20
Managing Contributor Applications 20
Multiple Administrators 20
Moving Data 20
System Links 21
Automating Contributor Tasks Using Macros 21
Publishing Data 21
Creating a Contributor Application 21
Developing the Plan in Analyst 22
Designing the e.List 22
Assigning Rights 22
Creating the Application 22
Creating the Production Application 23
Running Jobs 23
Testing the Web Site 23
The Administrator 24
The Planner 24
The Reviewer 24
The Toolbar 25

Chapter 2: Security 27
Cognos Namespace 27
Authentication Providers 28
Deleting or Restoring Unconfigured Namespaces 29
Users, Groups, and Roles 29
Users 30
Groups and Roles 30


Setting up Security for a Cognos 8 Planning Installation 32


Configure Cognos 8 to Use an Authentication Provider 33
Add or Remove Members From Planning Rights Administrators and Planning Contributor Users Roles 34
Enabling Planning Roles in Cognos 8 35
Restricting Access to the Everyone Group 35
Recommendation - Creating Additional Roles or Groups for Contributor 36
Configuring Access to the Contributor Administration Console 36
Granting Access Rights to Administrators 37
Access Rights for Macros 40
Assign Scheduler Credentials 42

Chapter 3: Configuring the Administration Console 45


Creating Planning Tables 45
Add a Datastore Server 46
Datastore Server Information 47
Jobs 47
Types of Jobs 47
Run Order for Jobs 48
Actions That Cause Jobs to Run 49
Securing Jobs 49
Managing Jobs 50
Reconciliation 52
Deleting Jobs 53
Managing Job Servers 53
Manage a Job Server Cluster 54
Add a Job Server and Change its Content Store 54
Add Applications and Other Objects to a Job Server Cluster 55
Add Objects to a Job Server 56
Remove Job Servers 57
Monitor Application Folders 57
Creating, Adding and Upgrading Applications 58
Remove Datastore Definitions and Contributor Applications 58
Adding an Existing Application to a Datastore Server 58
The Monitoring Console 59
Managing Sessions 59
Sending Email 61

Chapter 4: Creating a Cognos 8 Planning - Contributor Application 63


Creating a Contributor Application 63
Application Folders 66
Model Details 66
Running the Script.sql file (DBA Only) 67
Application Information 68
Configuring the Contributor Application 68
Configure Global Web Client Settings 69
Set the Cube Order for an Application 69
Set the Order of Axes 70
Change Grid Options 70
Change Application Options 72


Create Planner-Only Cubes 75


Creating General Messages and Cube Instructions 76
Maintaining the Contributor Application 76
Save Application XML for Support 77
View Application Details 77
Admin Options 77
Select Dimensions for Publish 79
Set Go to Production Options 79
Datastore Options 80

Chapter 5: The Cognos 8 Planning - Contributor Web Application 83


The Contributor Web Site 83
The Tree 83
The Table 83
Set Web Site Language 84
Contributor for Microsoft Excel 84
Accessing Contributor Applications 85
Configure Contributor Web Client Security Settings 85
How to Link to Earlier Versions of Contributor Applications 86
Independent Web Applications 86
Working Offline 86
Example 87
Steps to Work Offline 87
The Offline Store 88

Chapter 6: Managing User Access to Applications 89


The e.List 89
Multiple Owners of e.List Items 91
Import e.List and Rights 92
Export the e.List and Rights 97
Managing the e.List 98
Rights 103
Actions Allowed for Review e.List Items 104
Actions Allowed for Contribution e.List items 105
Rights File Formats 106
Modify Rights Manually 107
Validating Users, Groups and Roles in the Application Model and Database 109

Chapter 7: Managing User Access to Data 111


Saved Selections 111
Editing Saved Selections 112
Access Tables 114
Access Tables and Cubes 115
Rules for Access Tables 116
Creating Access Tables 119
Large Access Tables 125
Multiple Access Tables 132
Changes to Access Tables That Cause a Reconcile Job to Be Run 133
Access Tables and Import Data 134
Access Levels and Contributor Data Entry 134


Force to Zero 134


Reviewer Access Levels 134
Cut-down Models 134
When Does the Cut-down Models Process Happen? 135
Limitations 135
Cut-down Model Options 135
Cut-down Models and Translation 136
Cut-down Models and Access Tables 136
Restrictions to Cutting Down Dimensions 137
Estimating Model and Data Block Size 138
Cut-down Model Example 138

Chapter 8: Managing Data 141


Understanding Administration, System, and Local Links 142
Using Links to Move Data Between Cubes and Applications 144
Using Links in Model Design 144
Administration Links 145
Create an Administration Link 148
Map Dimensions Manually 152
View Items in a Dimension 153
Remove a Dimension 153
Running Administration Links 153
Exporting and Importing Administration Links 154
Tuning Administration Links 154
System Links 158
Create a System Link 159
Run a System Link 160
Importing Data from Cognos 8 Data Sources 161
Create a Data Source Connection 161
Create a Framework Manager Project and Import Metadata 164
Create and Publish the Cognos Package 165
Working with SAP BW Data 166
Recommendation - Query Items 167
Recommendation - Hierarchy 167
Recommendation - Hiding the Dimension Key Field 168
Working with Packages 168
Troubleshooting Detailed Fact Query Subject Memory Usage 168
Deploying the Planning Environment and Viewing the Status of Deployments 168
Export a Model 168
Import a Model 169
View the Status of Existing Deployments 171
Troubleshooting Out of Memory Exception When Exporting During a Deployment 171
Importing Text Files into Cubes 172
Creating the Source File 172
Select the Cube and Text File to Load into the Cube 173
Load the Data into the Datastore 174
Prepare the Import Data Blocks 174

Chapter 9: Synchronizing an Application 177


Changes that Result in Loss of Data 177


Synchronizing an Application 178


Generate Scripts 178
How to Avoid Loss of Data 178
Example Synchronization 179
Advanced - Model Changes 180

Chapter 10: Translating Applications into Different Languages 183


Assigning a Language Version to a User 183
Translate the Application 184
Translate Strings Using the Administration Console 185
Exporting and Importing Files for Translation 187
Export Files for Translation 188
Import Translated Files 188
Search for Strings in the Content Language or Product Language Tab 189
Translating Help 189
System Locale and Code Pages 190
About Fonts 190

Chapter 11: Automating Tasks Using Macros 191


Common Tasks to Automate 191
Creating a Macro 192
Create a New Macro 192
Create a Macro Step 193
Transferring Macros and Macro Steps 196
Job Servers (Macro Steps) 197
Development (Macro Steps) 200
Production (Macro Steps) 208
Administrator Links (Macro Steps) 216
Macros (Macro Steps) 217
Session (Macro Steps) 219
Running a Macro 220
Run a Macro from Administration Console 221
Run a Macro from Cognos Connection 221
Run a Macro from a Cognos 8 Event 222
Run a Macro using Macro Executor 224
Run a Macro using Command Line 224
Run a Macro using Batch File 224
Troubleshooting Macros 225
Unable to Run Contributor Macros Using a Batch File 225

Chapter 12: Data Validations 227


Setting Up Data Validation 228
The Impact of Aggregation on Validation Rules 229
Define a Validation Rule 233
Define or Edit a Rule Set 235
Edit a Validation Rule 236
Associate Rule Sets to e.List Items 237

Chapter 13: The Go to Production Process 239


Planning Packages 240


Reconciliation 241
The Production Application 241
Model Definition 241
Data Block 241
Production Tasks 242
Cut-down Models and Multiple Languages 242
The Development Application 243
Development Model Definition 243
Import Data Blocks 243
Run Go to Production 244
Go to Production Options Window 244
Show Changes Window 245
Model Changes Window 246
Import Data Details Tab 250
Invalid Owners and Editors Tab 250
e.List Items to be Reconciled Tab 252
Cut-down Models Window 252
Finish Window 252

Chapter 14: Publishing Data 255


The Publish Data Store Container 256
Access Rights Needed for Publishing 256
Publish Scripts 256
Selecting e.List Items to Be Published 257
Reporting Directly From Publish Tables 257
Model Changes that Impact the Publish Tables 258
Data Dimensions for Publish 260
Selecting a Dimension for Publish for Reporting 261
The Table-Only Publish Layout 262
Database Object Names 263
Items Tables for the Table-only Layout 263
Hierarchy Tables for the Table-only Layout 264
Export Tables For the Table-only Layout 266
Annotations Tables for the Table-only Layout 267
Attached Document Tables for the Table-only Layout 269
Metadata Tables 270
Common Tables 272
Job Tables 272
The P_OBJECTLOCK Table 273
Create a Table-only Publish Layout 273
Options for Table-only Publish Layout 274
Create an Incremental Publish 275
The View Publish Layout 276
Database Object Names 277
Items Tables for the View Layout 277
Hierarchy Tables for the View Layout 277
Export Tables for the View Layout 278
Annotation Tables for the View Layout 278
Views 279


Create a View Layout 280


Options for View Layout 281
Create a Custom Publish Container 282
Configure the Datastore Connection 283
Remove Unused Publish Containers 284

Chapter 15: Commentary 287


Annotations 287
User Annotation Behavior 288
Linking to Web Pages, Files, and Email Addresses From Annotations 288
Delete Commentary 289
Attach Documents 290
Configuring the Attached Documents Properties 290
Attaching a Document 291
Viewing and Editing Commentary 291
Publishing Attached Documents 292
Copy Commentary 292

Chapter 16: Previewing the Production Workflow 295


Previewing e.List item Properties 295
Preview Properties - General 295
Preview Properties - Owners 296
Preview Properties - Editors 296
Preview Properties - Reviewers 296
Preview Properties - Rights 296
Workflow State Definition 297
Additional Workflow States 298
Workflow State Explained 298

Chapter 17: Using Cognos 8 Planning - Contributor With Other Cognos Products 301
Client and Admin Extensions 302
Client Extensions 302
Admin Extensions 303
Integrating with Cognos Business Intelligence Products 304
Using Cognos 8 BI with Contributor Unpublished (Real-Time) Data 304
The Generate Framework Manager Model Admin Extension 308
Generate Transformer Model 311
Excel and Contributor 313
Contributor for Excel 313
Print to Excel 315
Export for Excel 315
Financial Planning with Cognos Performance Applications and Cognos 8 Planning 315

Chapter 18: Example of Using Cognos 8 Planning with Other Cognos Products 317
Download and Deploy the Sample 317
Run a Contributor Macro to Import Data 319
Create and Publish a Framework Manager Package 319
Create a Report 321
Create an Event Studio Agent 326


Chapter 19: Upgrading Cognos 8 Planning - Contributor 329


Upgrade the Planning Administration Domain 330
Upgrade Contributor Applications 332
Upgrade Security 335
Accessing Contributor Applications 336

Chapter 20: Cognos 8 Planning - Analyst Model Design Considerations 337


Designing an Analyst Model for Contributor 337
Analyst Library Guidelines 337
D-Cube Restrictions 338
D-Links 339
Dimensions 341
Creating Applications with Very Large Cell Counts 346
Break-Back Differences Between Analyst and Contributor 347
Analyst<>Contributor Links 347
Set Up a Link Between Analyst and Contributor and Between Contributor Applications 348
Analyst>Contributor D-Links 349
Contributor>Analyst Links 349
Contributor>Contributor links 350
Copying Analyst<>Contributor Links 350
Links and Memory Usage 351
Update a Link from a Computer That Cannot Access the Original Datastore 352
Multiple D-Links Using the @DLinkExecuteList Macro 352
Run D-Links While Making Model Changes 352
Effects of Fill and Substitute Mode on Untargeted Cells 353
Effect of Access Tables in Contributor 354

Appendix A: DB2 UDB Supplementary Information 355


The Contributor Datastore 355
Requirements for the DB2 UDB Database Environment 355
Background Information For DB2 UDB DBAs 356
Security and Privileges 356
Naming Conventions 357
Metadata 357
Backup 357
Standards 357
Preventing Lock Escalation 358
Large Object Types 358
Job Architecture 359
Importing and Reporting Data 359
Importing Data: Understanding the Prepare Import Job Process 359
Reporting Data: Understanding the Publish Job Process 360
Data Loading 360
Job Failure 360

Appendix B: Troubleshooting the Generate Framework Manager Model Extension 361


Unable To Connect to the Database While Using Oracle 361
Unable to Create Framework Manager Model 361
Unable to Retrieve Session’s Namespace 362
Unable to Change Model Design Language 362


Appendix C: Limitations and Troubleshooting when Importing Cognos Packages 363


Limitations for Importing Cognos Packages 363
Troubleshooting Modeled Data Import 366
Viewing Generated Files 366
Using Error Messages to Troubleshoot 369
Techniques to Troubleshoot Problems with an Import 372

Appendix D: Customizing Cognos 8 Planning - Contributor Help 375


Creating Cube Help 375
Detailed Cube Help 375
Using HTML Formatting 375
Using Images, Hypertext Links, and E-Mail Links in Contributor Applications 377

Appendix E: Error Handling 379


Error Logs and History Tracking 379
Application XML issues 380
Timeout Errors 380
History Tracking 380
Calculation Engine (JCE) error logs 382
General Error Logging 383
How errors are logged in the Administration Console 383
Using the LogFetcher Utility 385

Appendix F: Illegal Characters 387

Appendix G: Default Options 389


Grid Options 389
Application Options 389
XML location and filename 390
Admin Options 390
Go to Production Options 391
Go to Production Wizard Options 392
Publish Options-View Layout 392
Publish Options-Table Only Layout 392
e.List 393
Rights 393
Access Tables 393
Delete Commentary 394

Appendix H: Data Entry Input Limits 395


Limits For Text Formatted Cells 395
Limits for Numerical Cells 396

Glossary 397

Index 405

Introduction

This document is intended for use with the Cognos 8 Planning - Contributor Administration Console.
This guide describes how to use the Contributor Administration Console to create and manage
Contributor applications.
Cognos 8 Planning provides the ability to plan, budget, and forecast in a collaborative, secure
manner. The major components are Analyst and Contributor.

Cognos 8 Planning - Analyst


Analyst is a flexible tool used by financial specialists to define their business models. These models
include the drivers and content required for planning, budgeting, and forecasting. The models can
then be distributed to managers using the Web-based architecture of Cognos 8 Planning -
Contributor.

Cognos 8 Planning - Contributor


Contributor streamlines data collection and workflow management. It eliminates the problems of
errors, version control, and timeliness that are characteristic of a planning system solely based on
spreadsheets. Users have the option to submit information simultaneously through a simple Web
or Microsoft Excel® interface. Using an intranet or secure Internet connection, users review only
what they need to review and enter data where they are authorized.
For more information about using this product, visit the Cognos Global Customer Services Web
site (http://support.cognos.com).

Best Practices for Cognos 8 Planning


The Cognos Innovation Center™ for Performance Management provides a forum and Performance
Blueprints which you can use to discover new ideas and solutions for finance and performance
management issues. Blueprints are pre-defined data, process, and policy models that incorporate
best practice knowledge from Cognos customers and the Cognos Innovation Center. These Blueprints
are free of charge to existing customers or Platinum and Gold partners. For more information about
the Cognos Innovation Center or the Performance Blueprints, visit http://www.cognos.com/innovationcenter.

Audience
To use this guide, you should have an understanding of Cognos 8 Planning - Analyst. Some
knowledge of security and database systems would also be helpful.

Related Documentation
Our documentation includes user guides, getting started guides, new features guides, readmes, and
other materials to meet the needs of our varied audience. The following documents contain related
information and may be referred to in this document.


Note: For online users of this document, a Web page such as "The page cannot be found" may appear
when clicking individual links in the following table. Documents are made available for your
particular installation and translation configuration. If a link is unavailable, you can access the
document on the Cognos Global Customer Services Web site (http://support.cognos.com). Logon
credentials are available either from your administrator or by request from support.america@cognos.com.

Analyst User Guide: Using Cognos 8 Planning - Analyst

Contributor Browser User Guide: Using the Cognos 8 Planning - Contributor Web client

Contributor for Microsoft Excel® User Guide: Using the Cognos 8 Planning - Contributor for Microsoft Excel®

Cognos Connection User Guide: Using Cognos Connection to publish, find, manage, organize, and view Cognos content, such as scorecards, reports, analyses, and agents

Cognos 8 Administration and Security Guide: Managing servers, security, reports, and portal services; setting up Cognos samples; troubleshooting; and customizing Cognos 8

Framework Manager User Guide: Creating and publishing models using Framework Manager

Guidelines for Modeling Metadata: Recommendations for modeling metadata to use in business reporting and analysis

Event Studio User Guide: Creating and managing agents that monitor data and perform tasks when the data meets predefined thresholds

Finding Information
To find the most current product documentation, including all localized documentation, access the
Cognos Global Customer Services Web site (http://support.cognos.com). Click the Documentation
link to access documentation guides. Click the Knowledge Base link to access all documentation,
technical papers, and multimedia materials.

Product documentation is available in online help from the Help menu or button in Cognos products.
You can also download documentation in PDF format from the Cognos Global Customer Services
Web site.
You can also read PDF versions of the product readme files and installation guides directly from
Cognos product CDs.


Getting Help
For more information about using this product or for technical assistance, visit the Cognos Global
Customer Services Web site (http://support.cognos.com). This site provides product information,
services, user forums, and a knowledge base of documentation and multimedia materials. To create
a case, contact a support person, or to provide feedback, click the Contact Us link. For information
about education and training, click the Training link.

Printing Copyright Material


You can print selected pages, a section, or the whole book. Cognos grants you a non-exclusive,
non-transferable license to use, copy, and reproduce the copyright materials, in printed or electronic
format, solely for the purpose of operating, maintaining, and providing internal training on Cognos
software.

What’s New?

This section contains a list of new features for this release. It will help you plan your upgrade and
application deployment strategies and the training requirements for your users.
For information about upgrading, see the Cognos 8 Planning Installation and Configuration Guide.
To review an up-to-date list of environments supported by Cognos products, such as operating
systems, patches, browsers, Web servers, directory servers, database servers, and application servers,
visit the Cognos Global Customer Services Web site (http://support.cognos.com).

New Features in Version 8.3


Listed below are new features since the last release. Links to directly-related topics are included.

Extended Language support


This release provides support for Japanese and Swedish product strings. Contributor Web Client
strings can be translated into these languages without the need to enter translation strings. Japanese
and Swedish language content strings are also supported. For more information, see the following
sections.
● "Translating Applications into Different Languages" (p. 183)

● "Set Web Site Language" (p. 84)

Microsoft Vista Compliance


The Contributor Web application and Contributor for Excel can be used with Microsoft Vista. For
more information about installing and using Vista, see the Cognos 8 Planning - Contributor for
Microsoft Excel® Installation Guide. For more information about using Contributor with Excel
or the Contributor Web application, see the Contributor for Microsoft Excel® User Guide and
Contributor Browser User Guide.

Microsoft Excel 2007


This release supports Contributor and Analyst for Excel using Microsoft Excel 2007. For more
information about using Analyst or Contributor with Excel, see the Analyst for Microsoft Excel®
User Guide and Contributor for Microsoft Excel® User Guide.

Select Folders in Cognos Connection


This release supports folder selection in Cognos Connection when generating Framework Manager
and Transformer models. For more information, see the following sections.
● "Run the Generate Framework Manager Model Admin Extension" (p. 310)


● "Generate Transformer Model" (p. 311)

Select a Framework Manager Package for an Administration Link


This release supports browsing in Cognos Connection folders to select a Framework Manager
package for an Administration Link. For more information, see "Steps to Create Links with Cognos
Package as the Source" (p. 150).

Chapter 1: Cognos 8 Planning - Contributor

Cognos 8 Planning - Contributor is a Web-based planning platform that can involve thousands of people in the planning process, collecting data from managers and others in multiple locations. Complex calculations are performed on the Web client, showing totals as soon as data is entered and preventing unnecessary traffic on the server during busy times. Information is then stored in a data repository, providing a single, accurate pool of planning data.
In addition, users can use Contributor for Excel to view and edit Contributor data using Excel.
Administrators use the Contributor Administration Console to create and configure Contributor
applications, manage access settings, distribute Cognos 8 Planning - Analyst business plans, and
configure the user's view of the business plan.

Extending the Functionality of Contributor


Extensions are provided that extend the functionality of the Contributor Administration Console
and Web Client. There are two types of extensions: Admin Extensions and Client Extensions. Admin
Extensions run in the Administration Console. Client Extensions are activated through buttons on the Contributor grid. For example, you can configure an extension to print to Excel.

Using Contributor Applications


A Contributor application is an Analyst plan that is made available to users on the Web through
the Contributor Administration Console. A Contributor application consists of a series of linked
cubes that can be used for data entry by many people at the same time.

Cubes
A cube is similar to a spreadsheet. A cube always contains rows and columns and usually at least
one other page, making it multidimensional. It is used to collect data. Cells in cubes can contain
entered data or calculations.

Dimensions
The rows, columns, and pages of a cube are created from dimensions. Dimensions are lists of related
items, such as Profit and Loss items, products, customers, cost centers, and months. Dimensions
also contain all the calculations. One dimension can be used by many cubes.

e.Lists
The structure of an application is based on an e.List. An e.List is a kind of dimension that contains
a hierarchical structure that typically reflects the structure of the organization. For example, it may


include cost centers and profit centers. There is one e.List per application, and each e.List item is assigned to a user, group, or role. There are two types of users: planners and reviewers. A planner
enters and submits data to be reviewed by a reviewer. There may be several layers of reviewer
depending on the structure of the e.List.

Access Tables and Saved Selections


Access to cubes is managed by e.List item, using saved selections and access tables. A saved selection
is a collection of dimension items that is assigned to e.List items using access tables. This means
that users can view only data that is relevant to them. For example, you may want to show only
travel expense items to one user and entertainment expense items to another user where both items
are held in the same dimension.
Using access tables, you assign different levels of access to e.List items, saved selections, dimension
items and cubes.
For more information, see "Managing User Access to Data" (p. 111).
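
The following Python sketch is an informal illustration of the access table concept described above. It is not how the Contributor Administration Console stores or evaluates access tables; the access level names (Write, Read, Hidden) are intended to match the terms used in Chapter 7, and the default level and all data shown are assumptions for the example.

    # Illustrative sketch only: models the concept of an access table,
    # not how the Contributor Administration Console stores one.

    # A saved selection is a named collection of dimension items.
    saved_selections = {
        "Travel Expenses": {"Airfare", "Hotels", "Meals"},
        "Entertainment Expenses": {"Client Events", "Gifts"},
    }

    # An access table assigns an access level to a saved selection for an
    # e.List item (level names here are assumed from Chapter 7 terminology).
    access_table = {
        ("Sales - North", "Travel Expenses"): "Write",
        ("Sales - North", "Entertainment Expenses"): "Hidden",
        ("Sales - South", "Travel Expenses"): "Read",
    }

    def access_level(e_list_item, dimension_item, default="Write"):
        """Return the access level one e.List item has for one dimension item."""
        for (item, selection), level in access_table.items():
            if item == e_list_item and dimension_item in saved_selections[selection]:
                return level
        return default  # the default level is an assumption for this sketch

    print(access_level("Sales - North", "Hotels"))         # Write
    print(access_level("Sales - North", "Client Events"))  # Hidden
    print(access_level("Sales - South", "Airfare"))        # Read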

D-Links
Cubes are linked by a series of D-Links in Analyst. A D-Link copies information in and out of
cubes, and sometimes to and from ASCII or text files.

Managing Contributor Applications


There is a single configuration process for all Contributor applications in an installation.
You can use application folders to organize your applications into related groups. When you have
created them, you can use the application folders to assign job servers, job server clusters and access
rights to these groups of applications.

Multiple Administrators
You can secure individual elements of the Administration Console and therefore allow multiple
administrators to access different parts of the Contributor application at the same time. For example,
you can give rights to a specific user to create and configure applications on a specific datastore.
You can choose to cascade rights to all applications on a datastore, or restrict rights to specific
applications. Contributor administrators have access only to those applications and operations that
they have rights for.

Moving Data
Administrators can use administration links to move data quickly and easily between applications
without having to publish, reducing the need for large applications. You can have several small,
focused applications. Smaller e.List structures provide quicker reconciliation times. Also, the need
for cut-down models and access tables is reduced.
Administration links give you the following benefits:


● You can achieve a matrix e.List structure.


For example, you can have a Company model where Human Resources reports into Country,
and this can be linked to a Corporate model where Country reports into Human Resources.

● You can import data from Cognos 8 data sources.


Using administration links, you can import data into Cognos 8 Planning from packages that were modeled and published in Framework Manager.

System Links
Administrators can set up links so that Web client users and Contributor for Excel users can move
data between Contributor cubes in the same or different applications. System links are run from
the target application.

Automating Contributor Tasks Using Macros


You can group related tasks into a single macro to run them in sequence. For example, you can
group the following tasks: Load Import Data, Prepare Import, and Go to Production. Macros can
be run in the Administration Console, from Cognos Connection, used in events created in Event
Studio, or by using external scheduling tools.
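
As an informal sketch of how an external scheduling tool might run such a macro, the following Python wrapper shells out to a macro executor. The executable path and arguments shown are placeholders only, not documented Cognos command line syntax; see "Run a Macro using Command Line" (p. 224) and "Run a Macro using Batch File" (p. 224) for the supported options.

    # Hypothetical wrapper that an external scheduling tool could call to run
    # a Contributor macro. The executable path and arguments are placeholders,
    # not documented Cognos command-line syntax.
    import subprocess
    import sys

    MACRO_EXECUTOR = r"C:\placeholder\path\ContributorMacroExecutor.exe"  # placeholder

    def run_macro(macro_name):
        """Run one named Contributor macro and return its exit code."""
        result = subprocess.run([MACRO_EXECUTOR, macro_name])
        return result.returncode

    if __name__ == "__main__":
        # For example, a macro grouping Load Import Data, Prepare Import,
        # and Go to Production into one scheduled task.
        sys.exit(run_macro("Nightly Load and Go to Production"))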

Publishing Data
Three publish layouts are available.
The table-only layout is designed to give users greater flexibility in reporting on Cognos 8 Planning
data, and for use as a data source for other applications. It is used with the Generate Framework
Manager Model extension, and the Generate Transformer Model extension.
The incremental publish layout publishes only e.List items that contain changed data. Users can
schedule an incremental publish using a macro or through Cognos Connection and Event Studio.
You can achieve near real-time publishing by closely scheduling incremental publishes.
The view layout, as supported in Contributor and Analyst version 7.2, is compatible with previous
Cognos Planning data solutions.
Data is always published to a separate publish datastore.

Creating a Contributor Application


Creating a Contributor application involves the following tasks:
❑ developing the plan in Analyst

❑ designing the e.List

❑ configuring rights

❑ creating the application


❑ creating the production application

❑ running jobs

❑ testing the application in the Web site

Developing the Plan in Analyst


Create a business model for planning, budgeting, and forecasting using Analyst. This step is typically
performed by Analyst Model Builders.
You can specifically design the Analyst model to be optimized for Contributor (p. 337). Part of the
model development process involves establishing how the Contributor application is to be used in
the organization. This includes looking at who contributes data and reviews data. This information
is used to create an e.List.
For more information, see the Analyst User Guide.

Designing the e.List


An e.List is a very important part of the Contributor application. It specifies how the application
is distributed to end users, the hierarchy of the application, and security.
An e.List has a hierarchical structure that typically reflects the structure of an organization. The
dimension that represents the e.List is created in Analyst. The file containing e.List data is imported
into the Contributor Administration Console.
For more information, see "The e.List" (p. 89).

Assigning Rights
Rights determine the actions that users can perform, such as viewing, saving, and submitting data. For example, you may want to allow a planner to view data but not save or submit it, or to make and save changes but not submit them.
The rights a user can have are also affected by the view and review depth, set in the e.List window,
and the Reviewer edit setting in the Application Options window. A user can have directly assigned rights or inherited rights.
You can set up rights directly in the Administration Console after you create the Contributor
application, or you can create and maintain the rights in an external system, and import them.
For more information, see "The e.List" (p. 89).

Creating the Application


After the model is created and tested in Analyst, create the Contributor application using the
Administration Console. The application creation wizard helps you create a Contributor application.
After the application is created, you can
● configure the application


This establishes how the application appears and behaves in the Web browser.

● import the e.List data and rights

● restrict what users can see and do using saved selections and access tables.
For example, you may want to hide salary details from some users.

● import data from other applications

After you set up the application, run the Production process.

Creating the Production Application


There can be two versions of a Contributor application: development and production.
When you run the Go to Production process (p. 239), the development application becomes the
production application. The previous production application, if it existed, is archived and a new
development application is established. At this stage, the current production application and the
new development application are the same.
The production version is the live version of the application. It is the application that is online and
that users are working on.
Having two versions of an application means that you can make changes to the application without
having to take it offline, reducing the time that users are offline to as little as a minute. This is the
time taken to integrate the new e.List items into the hierarchy and set the correct workflow states
(p. 297).
Run the Go to Production process to formally commit a set of changes. When the Go to Production
process is complete, jobs run to ensure that all the data is up to date using a process named
reconciliation (p. 52). Reconciliation happens on the server side or, if the user tries to view or edit
the application before the e.List item is processed, reconciliation may happen on the client.

Running Jobs
A job is an administration task that runs on job servers and is monitored by the Administration
Console. You can start the process and monitor its progress. All jobs can be run while the application
is online.
Using the Monitoring Console (p. 59) in the Administration tree, you can manage and monitor the
progress of jobs in Contributor applications.
An example of a job is reconcile, which ensures that the structure of the e.List item data is up to
date, if required. This job is created and runs after Go to Production runs.
For more information, see "Jobs" (p. 47).

Testing the Web Site


To test the Web site, run Go to Production and log on as a user with rights to the Contributor
application. You should be able to view the application in a web browser.


The Administrator
Administration can be divided into separate functions depending on your business needs. A Planning
Rights Administrator assigns administrative access to Contributor applications and to functions
within applications. Administrators have access only to those applications and operations that they
have rights for. In addition, multiple administrators can access different parts of the Contributor
application at the same time.
You can restrict administrative access on a per-application basis so that, for example, an administrator who can access only database maintenance in application A can still create applications in application B.
Administrators see only those applications that they have rights to, and only those functions within
those applications.
Depending on the rights assigned to them, administrators can
● assign functional rights to other administrators

● add job servers and job server clusters to the Planning Store

● create and configure a Contributor application

● create new e.List items

● make changes to the e.List hierarchy

● assign rights to e.List items

● import actual data to the application

● amend workflow states from the Administration Console where required

● monitor the progress of jobs

The Planner
Planners are responsible for entering data into the Contributor application using the Web browser,
or Contributor for Excel. This data is referred to as a contribution. Planners edit data only in the
selection assigned to them by the administrator. They cannot make structural changes to the
application. After data is entered, the planner can either save or submit the data. Submitted data
is forwarded to a reviewer and cannot be edited further by the planner unless the reviewer rejects
it.
A planner can be responsible for more than one e.List item and can view each e.List item individually
or view all e.List items in a single view, if configured by the administrator.

The Reviewer
Reviewers are responsible for approving contributions submitted by one or more planners. Reviewers
can view data and see the status of all submissions they are responsible for managing at any stage
in the planning and review cycle. Reviewers can edit contributions if they have appropriate rights.
After data is submitted, the reviewer has the following options:
● reject the data if they are not satisfied with it


Typically, a reviewer sends an email to the planner to give the reason for rejection.

● accept the data


When a complete set of data is viewed and considered satisfactory, it can be submitted to the
next reviewer in the e.List hierarchy.

● edit the data, if allowed

After the reviewer takes over editorial control of a contribution, the planner is no longer the owner
of the contribution. The reviewer has the right to submit it.
Any user can be both a planner and a reviewer for the same e.List item. When users have both roles,
they can view their review items and contribution items in the same Web page.
In addition, reviewers can annotate any changes they make to a Contribution e.List item (p. 287).

The Toolbar
The following functions are available on the Administration Console toolbar.

Email: Sends email to users defined in an application using your default email tool.

Save: Saves changes to the development application package.

Help: Shows the Contributor Administration online help. You can also use the F1 key.

Go to Production: Starts the Go to Production process. Go to Production can be automated.

Set Online: Makes the application visible in a Web browser. Set Online can be automated.

Set Offline: Prevents the application from being accessed in a Web browser. Set Offline can be automated.

Reset: Resets the development application to the production application, clearing any changes that were made since you last ran Go to Production.

Chapter 2: Security

Cognos 8 security is designed to meet the need for security in various situations. You can use it in
everything from a proof of concept application where security is rarely enabled to a large scale
enterprise deployment.
The security model can be easily integrated with the existing security infrastructure in your
organization. It is built on top of one or more third-party authentication providers. You use the
providers to define and maintain users, groups, and roles, and to control the authentication process.
Each authentication provider known to Cognos 8 is referred to as a namespace.
In addition to the namespaces that represent the third-party authentication providers, Cognos 8
has its own namespace named Cognos. The Cognos namespace makes it easier to manage security
policies and deploy applications.
For more information, see the Cognos 8 Security and Administration Guide.

Cognos Namespace
The Cognos namespace is the Cognos 8 built-in namespace. It contains the Cognos objects, such
as groups, roles, data sources, distribution lists, and contacts.
During the content store initialization, built-in and predefined security entries are created in this
namespace. You must modify the initial security settings for those entries and for the Cognos
namespace immediately after installing and configuring Cognos 8.
You can rename the Cognos namespace using Cognos Configuration, but you cannot delete it. The
namespace is always active.
When you set security in Cognos 8, you may want to use the Cognos namespace to create groups
and roles that are specific to Cognos 8. In this namespace, you can also create security policies that
indirectly reference the third-party security entries so that Cognos 8 can be more easily deployed
from one installation to another.
The Cognos namespace always exists in Cognos 8, but the use of the Cognos groups and roles it
contains is optional. The groups and roles created in the Cognos namespace repackage the users,
groups, and roles that exist in the authentication providers to optimize their use in the Cognos 8
environment. For example, in the Cognos namespace, you can create a group named HR Managers
and add to it specific users and groups from your corporate IT and HR organizations defined in
your authentication provider. Later, you can set access permissions for the HR Managers group to
entries in Cognos 8.


Authentication Providers
User authentication in Cognos 8 is managed by third-party authentication providers. Authentication
providers define users, groups, and roles used for authentication. User names, IDs, passwords,
regional settings, personal preferences are some examples of information stored in the providers.
If you set up authentication for Cognos 8, users must provide valid credentials, such as user ID and
password, at logon time. In the Cognos 8 environment, authentication providers are also referred to as namespaces, and they are represented by namespace entries in the user interface.
Cognos 8 does not replicate the users, groups, and roles defined in your authentication provider.
However, you can reference them in Cognos 8 when you set access permissions to reports and other
content. They can also become members of Cognos groups and roles.
The following authentication providers are supported in this release:
● Active Directory Server

● Cognos Series 7

● eTrust SiteMinder

● LDAP

● NTLM

● SAP

You configure authentication providers using Cognos Configuration. For more information, see
the Installation and Configuration Guide.

Multiple Namespaces
If multiple namespaces are configured for your system, at the start of a session you must select one
namespace that you want to use. However, this does not prevent you from logging on to other
namespaces later in the session. For example, if you set access permissions, you may want to reference
entries from different namespaces. To log on to a different namespace, you do not have to log out
of the namespace you are currently using. You can be logged on to multiple namespaces
simultaneously.
Your primary logon is the namespace and the credentials that you use to log on at the beginning
of the session. The namespaces that you log on to later in the session and the credentials that you
use become your secondary logons.
When you delete one of the namespaces, you can log on using another namespace. If you delete all
namespaces except for the Cognos namespace, you are not prompted to log on. If anonymous access
is enabled, you are automatically logged on as an anonymous user. If anonymous access is not
enabled, you cannot access the Cognos Connection logon page. In this situation, use Cognos
Configuration to enable anonymous access.


Deleting or Restoring Unconfigured Namespaces


You can preserve namespaces and all their contents in the content store even if they are no longer
configured for use in Cognos 8. When a namespace is not configured, it is listed as inactive in the
directory tool.
An inactive namespace is one that was configured, but later deleted in Cognos Configuration. The
namespace can be deleted from the content store by members of the System Administrators role.
You cannot log on to an inactive namespace.
If a new version of Cognos 8 detects a previously configured namespace that is no longer used, the
namespace appears in the directory tool as inactive. You can configure the namespace again if you
still require the data. If the namespace is not required, you can delete it.
When you delete a namespace, you also delete all entries in My Folders that are associated with
that namespace, and their contents.
An active namespace cannot be deleted, but can be updated.
To recreate a namespace in Cognos Configuration, you must use the original ID of the namespace.
For information about configuring and recreating namespaces, see the Installation and Configuration
Guide.

Delete an Inactive Namespace


If a namespace was removed from Cognos Configuration and is no longer required, a member of
the System Administrators role can delete it permanently in the directory tool. Deleting a namespace
also deletes all the entries in My Folders that are associated with the namespace.
To access the directory administration tool, you must have execute permissions for the directory
secured feature and traverse permissions for the administration secured function.

Steps
1. In Cognos Connection, in the upper-right corner, click Launch, Cognos Administration.

2. On the Security tab, click Users, Groups, and Roles.

If the namespace you want to delete does not have a check mark in the Active column, it is
inactive and can be deleted.

3. In the Actions column, click the delete button.


If the namespace is active, the delete button is not available.

The namespace is permanently deleted. To use the namespace again in Cognos 8, you must add it
using Cognos Configuration.

Users, Groups, and Roles


Users, groups, and roles are created for authentication and authorization purposes. In Cognos 8,
you can use users, groups, and roles created in third-party authentication providers, and groups

and roles created in Cognos 8. The groups and roles created in Cognos 8 are referred to as Cognos
groups and Cognos roles.

Users
A user entry is created and maintained in a third-party authentication provider to uniquely identify a human or a computer account. You cannot create user entries in Cognos 8.


Information about users, such as first and last names, passwords, IDs, locales, and email addresses,
is stored in the authentication providers. However, this may not be all the information required by
Cognos 8. For example, it does not specify the location of the users' personal folders, or format
preferences for viewing reports. This additional information about users is stored in Cognos 8, but
when addressed in Cognos 8, the information appears as part of the external namespace.

Access Permissions for Users


Users must have at least traverse permissions for the parent entries of the entries they want to access.
The parent entries include container objects such as folders, packages, groups, roles, and namespaces.
Permissions for users are based on permissions set for individual user accounts and for the
namespaces, groups, and roles to which the users belong. Permissions are also affected by the
membership and ownership properties of the entry.
Cognos 8 supports combined access permissions. When users who belong to more than one group
log on, they have the combined permissions of all the groups to which they belong. This is important
to remember, especially when you are denying access.

Tip: To ensure that a user or group can run reports from a package, but not open the package in
a Cognos studio, grant the user or group execute and traverse permissions on the package.
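
As a rough illustration of combined permissions (a sketch only, not Cognos 8 code), the following Python example merges the permissions granted by each group a user belongs to. It also assumes, for the purpose of the example, that an explicit denial overrides a grant, which is one reason denying access needs particular care.

    # Illustrative sketch of combined access permissions.
    # Each group grants some permissions and may explicitly deny others.
    # Assumption for this example: a denial in any group overrides a grant.

    GROUPS = {
        "Employees":      {"granted": {"read", "traverse"},         "denied": set()},
        "Sales Planners": {"granted": {"read", "write", "execute"}, "denied": set()},
        "Contractors":    {"granted": {"read"},                     "denied": {"write"}},
    }

    def effective_permissions(user_groups):
        """Union of grants across the user's groups, minus anything denied."""
        granted, denied = set(), set()
        for name in user_groups:
            granted |= GROUPS[name]["granted"]
            denied |= GROUPS[name]["denied"]
        return granted - denied

    # A user in Employees and Sales Planners gets the combined grants.
    print(sorted(effective_permissions(["Employees", "Sales Planners"])))
    # ['execute', 'read', 'traverse', 'write']

    # Adding Contractors removes write because of the explicit denial.
    print(sorted(effective_permissions(["Employees", "Sales Planners", "Contractors"])))
    # ['execute', 'read', 'traverse']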

Groups and Roles


Users can become members of groups and roles defined in third-party authentication providers,
and groups and roles defined in Cognos 8. A user can belong to one or more groups or roles. If
users are members of more than one group, their access permissions are merged.

Groups and roles represent collections of users that perform similar functions, or have a similar
status in an organization. Examples of groups are Employees, Developers, or Sales Personnel.
Members of groups can be users and other groups. When users log on, they cannot select a group
they want to use for a session. They always log on with all the permissions associated with the
groups to which they belong.

Roles in Cognos 8 have a similar function to groups. Members of roles can be users, groups, and other roles.
The following diagram shows the structure of groups and roles.


(Diagram: a group can contain users and other groups; a role can contain users, groups, and other roles.)

You create Cognos groups and roles when


● you cannot create groups or roles in your authentication provider

● groups or roles are required that span multiple namespaces

● portable groups and roles that can be deployed are required


In this case, it is best to populate groups and roles in the third-party provider, and then add
those groups and roles to the Cognos groups and roles to which they belong. Otherwise, you
may have trouble managing large lists of users in a group in the Cognos namespace.

● you want to address specific needs of Cognos 8 administration

● you want to avoid cluttering your organization security systems with information used only in
Cognos 8

Cognos 8 Planning Roles


There are two predefined roles for Cognos 8 Planning:
● Planning Rights Administrators
This role enables you to access the Contributor Administration Console, Analyst, and all associated objects in the application for the first time following installation. You can then change the roles, groups, and users who can access the Contributor Administration Console and Analyst.

● Planning Contributor Users


This is the default role for users who want to access the Contributor Web client, Contributor
for Excel, or Analyst. However, anyone can be assigned rights to use the Contributor Web client or Contributor for Excel, regardless of whether they are a member of the Planning Contributor Users role. Analyst users must be members of the Planning Contributor Users role.

Note: You do not have to use these roles; they can be deleted or renamed. If you decide not to use
the predefined roles, you must assign the access permissions and capabilities required by Cognos 8
Planning to other groups, roles, or users.

Capabilities
Capabilities are secured functions and features. If you are an administrator, you set access to the
secured functions and features by granting execute permissions for specified users, groups, or roles.
Users must have at least one capability to be accepted through the Cognos Application Firewall.


The Planning Contributor Users role has the Planning Contributor capability by default. If you do
not want to use this role, you can assign the capability to any groups, users, or roles that you create
to replace this role by giving execute permissions to the appropriate members.
The Planning Rights Administrators role has the Planning Rights Administration capability by
default. To assign this capability to groups, users, or roles, you must give execute permissions to
the appropriate members. You must also give members permissions to traverse the Administration
folder.

Tip: You change capabilities through Cognos Administration by clicking the Security tab. For more
information, see "Securing Functions and Features" in the Administration and Security Guide.

Capabilities Needed to Create Cognos 8 Planning Packages


You can create a Planning Package during the Go to Production process, giving users access to
Cognos 8 studios from the Contributor application and enabling users to report against live
Contributor data using the Planning Data Service. To do this, the Planning Rights Administrators
role must be granted the Directory capability. Members of the System Administrators role are automatically granted this capability, but members of the Planning Rights Administrators role are not.

Setting up Security for a Cognos 8 Planning Installation


You must set up security for a Cognos 8 Planning installation.
To configure security for Cognos 8 Planning, do the following:
❑ Using Cognos Configuration, configure Cognos 8 to use an authentication provider

❑ Using Cognos Administration


● add Contributor Administration Console and Analyst administrators to the Planning Rights
Administrators role

● add Contributor application members and Analyst users to the Planning Contributor Users
role

● enable Planning Roles in Cognos 8

● restrict access to the everyone group

● create additional roles or groups for Cognos 8 Planning (optional)

Note: We recommend that you add groups of users as defined in your authentication
provider to the roles in Cognos 8 Planning, rather than individual users. This means that
changes in group membership are reflected immediately in the roles without having to make
changes in Cognos 8.

❑ To configure Analyst security:


● configure integrated Windows authentication if you want to execute macros without
interaction

● specify a default library


● assign access at object, library, or item level

❑ Using the Contributor Administration Console


● set access rights for Contributor administrators to Contributor administration functions

● set rights for Contributor application users for Contributor applications

Configure Cognos 8 to Use an Authentication Provider


Cognos 8 components can run with two types of access: anonymous and authenticated. By default,
anonymous access is enabled. To use Cognos 8 Planning, you must disable anonymous access so
that users are required to log on. Only authenticated users can access your Cognos 8 Planning
applications.
For authenticated access, you must configure Cognos 8 components to use a namespace associated
with an authentication provider used by your organization. You can also configure multiple
namespaces. At run time, users can choose which namespace they want to use.

Note: If you are using the Generate Transformer Model extension, you must add the Cognos Series
7 namespace. Local authentication export (LAE) files cannot be used.

Steps to Disable Anonymous Access


1. On each Content Manager computer, start Cognos Configuration.

2. In the Explorer window, under Security, Authentication, click Cognos.


The Cognos resource represents the Cognos namespace. For more information, see the
Administration and Security Guide.

3. In the Properties window, ensure that Allow Anonymous Access is set to False.

4. From the File menu, click Save.

Steps to Configure Authentication Providers


1. On each Content Manager computer, start Cognos Configuration.

2. In the Explorer window, under Security, right-click Authentication, and then click New resource,
Namespace.

3. In the Name box, type a name for your authentication namespace.

4. In the Type list, click the appropriate namespace and then click OK.
The new authentication provider resource appears in the Explorer window, under the
Authentication component.

5. In the Properties window, for the Namespace ID property, specify a unique identifier for the
namespace.

6. In the Properties window for Authentication, set Allow session information to be shared between client applications to True.


This enables you to have single signon between multiple clients on the same computer. Note
that you cannot have single signon between a Windows application and a Web client application,
for example, between the Contributor Administration Console and Cognos 8.

7. Specify the values for all other required properties to ensure that Cognos 8 components can
locate and use your existing authentication provider.

8. Test the connection to a new namespace. In the Explorer window, under Authentication,
right-click the new authentication resource and click Test.

9. From the File menu, click Save.

Cognos 8 loads, initializes, and configures the provider libraries for the namespace.
For specific information about configuring each kind of authentication provider, see the Cognos 8
Planning Installation and Configuration Guide.

Add or Remove Members From Planning Rights Administrators and Planning Contributor Users Roles

Using Cognos Administration, add Contributor Administration Console administrators and Analyst
administrators to the Planning Rights Administrators role. Add Contributor application and Analyst
users to the Planning Contributor Users role.

Steps
1. In Cognos Connection, in the upper-right corner, click Cognos Administration.

2. On the Security tab, click Users, Groups, and Roles.

3. Click on the Cognos namespace.

4. In the Actions column, click the properties button for the Planning Rights Administrators or
Planning Contributor Users role.

5. Click the Members tab.

6. To add members, click Add and do the following:


● To choose from listed entries, click the appropriate namespace.

● To search for entries, click the appropriate namespace and then click Search. In the Search
string box, type the phrase you want to search for. For search options, click Edit. Find and
click the entry you want.

● To type the name of entries you want to add, click Type and type the names of groups,
roles, or users using the following format, where a semicolon (;) separates each entry:
namespace/group_name;namespace/role_name;namespace/user_name;
Here is an example (a small parsing sketch follows these steps):
Cognos/Authors;LDAP/scarter;


7. Click the right-arrow button, and when the entries you want appear in the Selected entries box,
click OK.

Tip: To remove entries from the Selected entries list, select them and click Remove. To select
all entries in a list, click the check box in the upper-left corner of the list. To make the user
entries visible, click Show users in the list.

8. Click OK.
For more information, see the Cognos 8 Administration and Security Guide.
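
The entry format shown in step 6 is easy to get wrong when many entries are pasted at once. The following sketch is illustrative only; the Administration Console does not expose this as an API. It simply splits and checks a string in that format before you paste it into the Type box.

```python
# Illustrative only: a small helper that checks the "namespace/name;" entry
# format described in step 6. This is a convenience check, not product code.

def split_entries(entry_string):
    """Split 'Cognos/Authors;LDAP/scarter;' into (namespace, name) pairs."""
    pairs = []
    for item in entry_string.split(";"):
        item = item.strip()
        if not item:
            continue  # a trailing semicolon leaves an empty piece
        if "/" not in item:
            raise ValueError(f"Entry '{item}' is missing the namespace/ prefix")
        namespace, name = item.split("/", 1)
        pairs.append((namespace, name))
    return pairs

print(split_entries("Cognos/Authors;LDAP/scarter;"))
# [('Cognos', 'Authors'), ('LDAP', 'scarter')]
```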

Enabling Planning Roles in Cognos 8


Planning tasks that require access to the Cognos 8 data store, such as running macros, going to
production, and adding job servers, require additional security configuration. Access to the data
store is restricted to certain groups through Cognos Administration. You must have a system
administrator or a user from one of the following groups perform tasks that require the Cognos 8
data store. Optionally, you can add users to the required groups to perform the tasks.

Data Manager Authors (tool: Framework Manager). Only members of the Data Manager Authors
group can import from a Framework Manager data source. You must have a Data Manager Authors
group member perform this task.

Directory Administrators (tool: Cognos Administration, Configuration, Data Source Connections).
You must have a Directory Administrator create a data source named Cognos Planning - Contributor,
with a connection of type Cognos Planning - Contributor, before performing Go to Production.

Report Administrators or Server Administrators (tool: Cognos Administration, Configuration,
Content Administration). A Report Administrators or Server Administrators group member must
publish and run macros in Content Administration.

Restricting Access to the Everyone Group


The Everyone group represents all authenticated users and the Anonymous user account. The
membership of this group is maintained by the product and cannot be viewed or altered.
By default, the Everyone group belongs to several built-in groups and roles in the Cognos namespace.
To restrict access, remove the Everyone group from the System Administrators role and replace it with
authorized groups, roles, or users. Optionally, remove the Everyone group from the Planning
Contributor Users role to restrict access to Contributor plans.


For more information about the Everyone group and the System Administrators role, see "Initial
Security" in the Administration and Security Guide.

Recommendation - Creating Additional Roles or Groups for Contributor


To secure your Contributor applications, you may want to create roles or groups for the following
users:
● Contributor client extensions

● Contributor work offline users


For example, create one work offline role per Contributor application and assign the offline
users to the relevant role. The application administrator must also belong to this role.

● system links

● translated applications

Planning Contributor User Roles


When you assign user rights to Contributor applications, the first time you click User, Group, Role
in the Rights window, a list of all the Users, Groups, and Roles that are members of the Planning
Contributor Users role is displayed. If the Planning Contributor Users role contains a large number
of members directly below it, you can improve performance by creating a smaller number of groups
or roles below the Planning Contributor Users role to act as filters.

Note: Members of roles can be users, groups, and other roles. Groups can contain users and other
groups, but not roles.

Planning Rights Administrator Roles


If you have a large number of administrators, you may wish to create roles or groups for specific
tasks, and then add the individual users to the role or group. For example, a role named Allow System
Links can be granted the right to create system links, and any user added to that role is assigned that right.
For more information about creating groups or roles, see the Administration and Security Guide.

Configuring Access to the Contributor Administration Console


Administrative access can be set for the Cognos 8 Planning environment or the datastore servers
that it contains. It can also be set for the application or publish container objects in the datastore
server, the job server objects in the job server cluster, links, and macros. You can also set rights to
individual functions of these objects.
If you are an administrator with no access rights to an object, you cannot view the details for that
object in the Administration Console or select it. You still see the datastore servers, even if you have
no rights to them, because you may have rights to applications on those servers. If you have no
rights to any object, the Contributor Administration Console closes.


Cascade Rights
If you set rights to operations for the Cognos 8 Planning environment, the datastore server, or the
job server, you are prompted to cascade the rights to the lower levels. Regardless of your response,
when you grant rights to a datastore server or job server cluster, the user automatically inherits the
same rights for any applications, publish containers, or job servers that you subsequently add.

Tips: To always cascade rights without being prompted, on the Access Rights window, click Cascade
rights selection.
Rights that are cascaded are indicated by blue text.
Operations are the functions that can be performed in the Contributor Administration Console.
Initially, a Planning Rights Administrator grants rights so that other Contributor administrators
can perform these operations.
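
As an illustration of the cascading behaviour described above, the following hedged sketch models rights inherited from a parent object (a datastore server or job server cluster) by objects added beneath it. The object names and the parent map are invented for the example; they are not product identifiers.

```python
# A sketch of the inheritance rule described above: rights granted on a parent
# (datastore server or job server cluster) also apply to children added later.

PARENT = {
    "great_outdoors_app": "datastore_server_1",   # application on a datastore server
    "publish_container_1": "datastore_server_1",
    "job_server_A": "cluster_1",
}

GRANTED = {
    ("pat", "datastore_server_1"): {"Development Access"},
    ("pat", "cluster_1"): {"Job Server Access"},
}

def effective_rights(user, obj):
    """Collect rights granted on the object itself and on its ancestors."""
    rights = set()
    while obj is not None:
        rights |= GRANTED.get((user, obj), set())
        obj = PARENT.get(obj)
    return rights

print(effective_rights("pat", "great_outdoors_app"))  # inherits Development Access
```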

Granting Access Rights to Administrators


You can grant rights so that administrators can view who has write access to the development
model, select an application and cube as the source and target of a system or administration link,
add or remove a datastore server, and add a job server cluster.

Session Details Access: You can grant the right to view who has write access to the development
model.

Global Administration Access: You can grant the right to:
● run Go to Production to create the production application
● modify the datastore connection details for applications and publish containers
● set an application online or offline in the Web client
● assign access rights
You can assign access rights to datastores, applications, publish containers, job server clusters,
and job servers.

Links Access: You can secure the ability to create, edit, execute, delete, import, and export
administration links. You can also secure previously created administration links (administration
link instances). You secure administration link instances individually. To locate them, scroll to
the bottom of the Operations tree and look for Link <link name>. You can grant the right to
select an application and cube as the source and target of a system or administration link.


Datastores Access: You can grant the right to add or remove a datastore server.

Application Containers Access: You can grant the right to:
● upgrade or import a Contributor application from an earlier version of Contributor
● link to an existing application
● create an application
● create a script that can be run by a database administrator to create an application. This option
is used when the Generate Scripts option is set to Yes in the Admin Options table.
● remove an application from the Planning environment
● assign or remove an application from an application folder

Publish Container Access: You can grant rights for administering publish containers:
● link to a publish container
● create a publish container
● create a script that is run by a database administrator to create a publish container. This option
is used when the Generate Scripts option is set to Yes in the Admin Options table.
Publish containers are created the first time someone publishes. The administrator must have the
right to create a publish container in order to publish. For publish jobs to be processed, the publish
container must be added to a job server cluster or job server. To modify a publish datastore
connection, you must have the Global Administration right Modify connection document.


Development Access: You can grant the right to perform the following operations in a development
application:
● configure the Web client: navigation, orientation, options, planner-only cubes, and Contributor help
● configure application maintenance options
● import and maintain the e.List and rights
● create and maintain access tables and saved selections
● import data
● synchronize
● set datastore options
● create and maintain translations

Production Access: You can grant the right to perform the following operations on the production
version of the application:
● publish data
● delete annotations
● preview data. This option is important if you want to hide sensitive data.
● manage extensions

Job Server Clusters Access: You can grant the right to add or remove a job server cluster.

Job Server Access: You can grant the right to update job server properties for the Planning
environment. Go to a job server cluster:
● to enable and disable job processing of an application
● to add and remove job servers from a job server cluster
● to add and remove applications and publish containers from the job server
● to update job server properties

Steps
1. Click Access Rights.


2. Click Add.

3. Click the appropriate Namespace and select the user, group, or role.

4. Click the green arrow button and then click OK.

5. Under Name, select the name.

6. Click the operations needed.

Tip: You can filter operations by datastore, application, publish container, job server cluster,
and job server.

7. Click Save.
Access rights apply as soon as the changes are saved.

Access Rights for Macros


A macro consists of one or more macro steps that you select when you create a macro. For example,
a macro that imports data into a cube, named "Import Expenses", might contain the following
macro steps:
● Upload an Import File

● Prepare Import

● Go To Production

You can secure the rights to create, edit, execute, delete, and transfer macros, and the ability to
create individual macro steps.
By default, when a user with the Create Macro right adds a new macro instance, they are granted
all rights to it: edit, execute, delete, and transfer. For other users, access to that instance is
determined by their rights.
After a link or macro is created, only a Planning Rights Administrator can change the instance
rights.
For example, consider a user who is granted create, edit, and execute macro access rights. By default,
this user has all access rights to macros they create. However, they only have edit and execute rights
to those created by other users. A Planning Rights Administrator can subsequently grant or revoke
any rights to those macros for any user.
The Execute Command Line macro step is secured by default. This is to minimize the risk of
unauthorized access to resources.
You can also secure the rights to edit, execute, delete and transfer previously created macro instances,
for example, "Import Expenses".

Rights Needed to Transfer Macros


Transferring a macro enables you to copy steps from one macro to another, add steps to another
macro, and make a copy of an existing macro. To do this, you must have:


● transfer rights for the macro instance being transferred to or from

● edit rights for the target macro

● create macro step rights for all the macro steps that you are transferring

Authentication of Macros
Authentication is based on the security context under which the macros are run. For example, if
the macro contains a Go to Production step, the user specified in the authentication details when
you create a macro must have the rights to run Go to Production. This is separate from the access
rights used to secure the management of macros.

Recommendation - Granting Rights


We recommend that you secure the "Execute Command Line" step to prevent unauthorized access
to the functionality that you can execute from the command line.
Grant the create macro step "Execute Command Line" rights to trusted users only, typically Planning
Rights Administrators who can schedule macros on the Contributor server.
Revoke edit, transfer, and delete rights from all macro instances that contain the "Execute Command
Line" step to prevent unauthorized change by another user. An administrator would have to re-grant
edit, transfer, and delete rights to permit any kind of maintenance changes to the macro.

Set Access Rights for Contributor Macros in Cognos Connection


Contributor macros that are published to Cognos Connection must be secured by the Planning
Rights Administrator or the System Administrator to ensure that only the required users, groups,
and roles have access to execute or schedule macros.
You must be a member of the Planning Rights Administrators or System Administrator roles to
change user, group, and role capabilities.

Steps
1. In Cognos Connection, in the upper-right corner, click Cognos Administration.

2. On the Security tab, click Capabilities.

3. Click the actions button next to the Administration capability and click Set Properties button

. On the Permissions tab, grant the traverse permission to the required users, groups, and
roles and click OK.

4. Click the Administrator capability to show additional functions. Click the actions button next
to Run activities and schedules and click Set Properties. On the Permissions tab, grant execute
and traverse permissions to the required users, groups, and roles and click OK.

5. On the Configuration tab, click Content Administration.

6. Click the Set Properties button on the Administration page. On the Permissions tab, grant
read, execute, and traverse permissions to the required users, groups, and roles.


If required, grant write permission so that the user, group, or role can modify the contents of
this folder. Grant set policy permission if you want the user, group, or role to be able to change
security permissions on this folder.

7. Click OK.

Members of the required users, groups, or roles can now schedule and run Contributor
macros in Cognos Connection.

Assign Scheduler Credentials


All scheduled macros, and some jobs, run under the identity set up in the Cognos scheduler or the
scheduler credentials for a third-party scheduler. This is because macros and jobs that run in the
background cannot prompt the user for authentication information. Scheduler credentials are also
used to look up Contributor email addresses from workflow pages.
Scheduler credentials are associated with an authenticated user, which can include more than one
user logged on to different namespaces. The user, or a group or role that they are a member of, must
have rights granted in the Access Rights window to run macros and jobs, and to look up email addresses.
If there are users from multiple namespaces in your application, you must have scheduler credentials
associated with those namespaces. When the Validate Users job is run, the scheduler credentials
must be associated with all the namespaces you imported users from. If you are logged on to only
one namespace, users that belong to other namespaces are considered invalid.
Macros that are run directly from the Contributor Administration Console through user interaction
run under the currently logged on user account and do not use the scheduler credentials. Macros
that are triggered through events run under the identity used to create the event, not the scheduler
credentials.
Only users who have the Planning Rights Administrator capability can modify the scheduler
credentials. By default, the scheduler credentials are associated with the user who opens the
Contributor Administration Console for the first time, if that user is a Planning Rights Administrator.
If that user is not a Planning Rights Administrator, a message states that the credentials are invalid,
or do not exist, and a Planning Rights Administrator must update the scheduler credentials.
Note that the following Contributor jobs use the scheduler credentials:
● Validate Users

● Reconcile

Steps
1. In the Systems Settings pane, click the Scheduler Credentials tab, and click Update.

2. Click Logon.

Initially, the Logon As button is disabled.

3. Enter the User ID and password to be used as the scheduler credentials and click OK.

4. If your logon is successful, the Logon button is disabled, and the Logon As button is enabled.


This enables you to log on to different namespaces.


You must provide scheduler credentials for all namespaces that users of the applications are
imported from.

After you create and configure Contributor applications, the next step is to configure user access.
You do this by assigning users, roles, or groups to e.List items in the Rights window in the
Contributor Administration Console, either by importing a file, or by manually inserting rights.

Chapter 3: Configuring the Administration Console

When you start the Cognos 8 Planning - Contributor Administration Console for the first time,
you must configure it before you can use it.
Before you can configure the Cognos 8 Planning - Contributor Administration Console, you must
have the Planning Rights Administrator capability, which, by default, is granted to members
of the Planning Rights Administrators role. The Planning Rights Administrator capability can be
granted to any user, group, or role (p. 31).

Steps to Configure the Cognos 8 Planning - Contributor Administration Console


❑ Create the Planning tables in the Planning content store (p. 45).

❑ Add a datastore server (p. 46).

❑ Add job server clusters and job servers (p. 54).

❑ Configure administration security for the Contributor Administration Console.

When you have done these tasks, applications can be created, imported, or upgraded (p. 58).

Creating Planning Tables


The first time the Contributor Administration Console is run, a check is made to see if any Planning
tables exist in the Planning content store. If not, you are prompted to create them.

You can either create the tables by selecting Create and populate tables now, or you can create the
tables using a script that is then run by a Database Administrator (DBA). Use the script option if
you do not have access rights to create tables in the database. To choose the script option, select
Generate table scripts and data files and enter the location where the script is to be saved. The
script is then created automatically and the DBA can run it to create the tables.
If the filesys.ini path was not specified during installation, you must specify it when you create the
Planning tables. You can change this setting if the default path is not used.
If you want to work with a different FileSys.ini, and the associated properties and samples other
than the default, select the Allow use of non-default FileSys.ini check box.
The filesys.ini file is a control file used by Analyst. It contains file paths for the Libs.tab, Users.tab,
and Groups.tab that control the specific library and user setup. You can edit the filesys.ini path by
selecting Tools, Edit FileSys.Ini path.
Planning tables are typically prefixed with P_ and hold information about:
● datastore servers

● Contributor application datastores

● job servers and job server clusters


● security

● macros

● administration links

● jobs

Add a Datastore Server


When you open the Contributor Administration Console for the first time, you must add an
application datastore server.

Steps
1. Right-click Datastores in the Administration tree, and click Add Datastore.

● Tip: You can also modify the existing application datastore connection by clicking the
Configure button on the Datastore server information page (p. 47).

2. Select the Datastore provider.


The options are SQL Server, Oracle, or DB2.

3. Enter the Datastore server name, or click the browse button to list the available servers
(SQL Server only).

4. Enter the information as described in the table below:

Setting Description

Trusted Connection Click to use Windows authentication as the method for logging
on to the datastore. You do not have to specify a separate logon
ID or password. This method is common for SQL Server
datastores and less common, but possible, for Oracle.

Use this account Enter the datastore account that this application will use to
connect. This box is not enabled if you use a trusted
connection.

Password Type the password for the account. This box is not enabled if
you use a trusted connection.

Preview Connection Provides a summary of the datastore server connection details.

Test Connection Mandatory. Click to check the validity of the connection to
the datastore server.

5. If you want to configure advanced settings, click Advanced.


Typically these settings should be left as the default. They may not be supported by all datastore
configurations.
Enter the following information.

Setting Description

Provider Driver Select the appropriate driver for your datastore.

Connection Prefix Specify to customize the connection strings for the needs of the
datastore.

Connection Suffix Specify to customize the connection strings for the needs of the
datastore.
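
To illustrate how the Connection Prefix and Connection Suffix settings customize a connection string, here is a minimal sketch. The keyword examples are assumptions for illustration; the exact string that the Administration Console builds depends on the provider driver.

```python
# Hypothetical illustration: how a connection prefix and suffix wrap the base
# connection string. The keywords below are examples only, not the strings
# that the Administration Console actually generates.

def build_connection_string(base, prefix="", suffix=""):
    parts = [part for part in (prefix, base, suffix) if part]
    return ";".join(parts)

base = "Data Source=planningdb01;Initial Catalog=great_outdoors"
print(build_connection_string(base, suffix="Connect Timeout=60"))
# Data Source=planningdb01;Initial Catalog=great_outdoors;Connect Timeout=60
```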

Datastore Server Information


When you click the datastore server name in the tree, the datastore server information is displayed.
The datastore server connection area displays the connection string that is used to connect to the
datastore server. You can configure the datastore connection, change the datastore server, change
the account details, and test the connection by clicking the Configure button.

Jobs
A job is an administration task that runs on job servers and is monitored by the Administration
Console.
Additional servers can be added to manage applications, speeding up the processing of jobs. You
can run the job and monitor its progress. All jobs can be run while the application is online to Web
clients. This means that you can make changes to the development version of an application while
the production version is live. It reduces the offline time during the Go to Production process.
Each job is split into job items, one job item for each e.List item. If you are running a Publish job
for eight e.List items, eight job items are created. Contributor applications can be added to more
than one job server, or to a job server cluster. When a job is created, job items are run on the
different job servers, speeding up the processing of a job.
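
The following sketch illustrates the idea of splitting a job into one job item per e.List item and spreading the items across the job servers in a cluster. All names are illustrative; this is not how the product's job tables are structured.

```python
# Illustrative sketch: one job item per e.List item, distributed round-robin
# across the job servers in a cluster.

from itertools import cycle

def split_job(job_type, elist_items):
    return [{"job": job_type, "elist_item": item} for item in elist_items]

def distribute(job_items, servers):
    assignment = {server: [] for server in servers}
    for item, server in zip(job_items, cycle(servers)):
        assignment[server].append(item)
    return assignment

items = split_job("publish", [f"region_{n}" for n in range(1, 9)])  # 8 e.List items
work = distribute(items, ["serverA", "serverB"])
print({server: len(tasks) for server, tasks in work.items()})
# {'serverA': 4, 'serverB': 4}
```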

Types of Jobs
The following table describes each type of job:

Job Type Description

Commentary tidy Deletes user annotations and audit annotations, and references to
attached documents (p. 289).


Job Type Description

Cut-down models Cuts down applications (p. 134).

Cut-down tidy Removes any cut-down models that are no longer required.

Export queue tidy Removes obsolete items from the export queue.

Import queue tidy Removes from the import queue model import blocks that are no longer
required.

Inter-app links Transfers data between applications. To run successfully, this job requires
that links are reconciled to e.List items.

Job test Tests the job subsystem using a configurable job item.

Language tidy Cleans up unwanted languages from the datastore. This job is created
and runs after the Go to Production process is run.

Prepare import Processes import data ready for reconciliation (p. 174).

Publish Publishes the data to a view format (p. 255).

Reconcile Ensures that the structure of the e.List item data is up to date. This job
is created and runs after Go to Production is run. For more information,
see "Reconciliation" (p. 52).

Reporting publish Publishes the data to a table-only format (p. 262).

Validate links Updates the validation status of links.

Validate users Checks to see if the owner or editor of an e.List item has the rights to
access the e.List item. For more information, see "Ownership" (p. 91).
The job checks the rights of users and updates the information in the
Web browser. This job is created only if a change is made to the
Contributor model, and runs after Go to Production is run.

Run Order for Jobs


The order in which jobs are run depends on the number of job processors, the hardware used, and
the job polling interval. If there is one job processor, jobs are run in no specific order. If there is
more than one job processor, the first job processor to poll for a job picks a job in no specific order
and runs it. The additional job processors prioritize queued jobs over running jobs and pick one


of the queued jobs to work on in no specific order. If there are enough job processors, all jobs can
be run at the same time.
Because jobs are independent of each other, they do not need to run in a specific order. The exception
is the publish job. The publish job can be started when a reconcile is running, but for it to complete
successfully, all e.List items that are being published must be reconciled.

Actions That Cause Jobs to Run


The following actions in the Administration Console cause jobs to run.

Action Job

Import data (p. 141) prepare import and reconcile *

Running administration links inter-app links, validate links

Publish data (p. 255) publish and reporting publish

Delete annotations (p. 289) annotations tidy

Create a new application reconcile *

Cut-down model options cut-down models and cut-down tidy *

Change planner-only cubes reconcile *

Synchronize with Cognos 8 Planning - Analyst reconcile *

* Triggered by Go to Production
Changes in the following areas do not cause jobs to be run:
● navigation

● orientation

● Contributor help text

● all application settings except for cut-down models

● the backup file location

Securing Jobs
Some jobs run under scheduler credentials. This is because jobs that run in the background cannot
prompt the user for authentication information. Scheduler credentials are associated with an
authenticated session, which can include more than one user logged on to different namespaces.
The following jobs run under scheduler credentials:


● Validate users
When the Validate users job is run, the scheduler credentials must be associated with all the
namespaces you imported users from. If it is logged on to only one namespace, users that belong
to other namespaces are considered invalid.

● Reconcile

● all jobs launched by a scheduled macro

Only users with the Planning Rights Administrator capability can modify the scheduler credentials.
For information about setting the scheduler credentials, see "Assign Scheduler Credentials" (p. 42).

Managing Jobs
You can manage and monitor the progress of running jobs in applications from the Job Management
branch in the Administration tree, or from the Monitoring Console (p. 59).

You can also monitor the progress of jobs triggered by administration links from the Monitor Links
window.
The Job Monitor shows the following information:

Details Description

Status The status is one of the following:


● creating

● ready to run

● queued. The job is waiting to be run. The job may have to wait
for a job server to become available before it can be run.

● running

● complete - the job has finished running successfully

● cancelled - the job has failed and was cancelled. Double-click


on the line to find out more.

Succeeded The number of job items that ran successfully. If all job items succeeded,
All is displayed. A percentage is also shown.

Failed The number of job items that failed, if any.

Total Items The total number of job items that the job is split into. Jobs are
broken down into atoms of work known as job items, enabling a
job such as Publish to be run over different threads.

Estimated Completion An estimated date and time of completion for the job in local time.


Details Description

Average Duration The average interval between the completion of job items.

Next Item The next job item to be run.

Start The date and time when the job started, in the local format.

Last Completion The time when the last job item was completed.

Duration (min) The time in minutes the job task has taken to complete.

Description A description of the job type.
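
As a rough illustration of how the Average Duration and Estimated Completion columns relate, the following sketch projects a completion time from the average interval between job items. This is one plausible calculation, not necessarily the one the Administration Console performs.

```python
# One plausible way to project a completion time from the average interval
# between job items; illustrative only.

from datetime import datetime, timedelta

def estimate_completion(now, total_items, completed_items, average_duration_seconds):
    remaining = total_items - completed_items
    return now + timedelta(seconds=remaining * average_duration_seconds)

print(estimate_completion(datetime(2007, 6, 1, 9, 0), 500, 120, 4.2))
# 2007-06-01 09:26:36
```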

The lower area shows tasks running on job servers.

Details Description

Status Indicates when a job is running.

Start The date and time when the job started on that processor in local
format.

End The date and time when the job stopped running on that processor.

User The user account on the job server.

Process ID The process ID can be used for debugging.

Thread ID The thread ID can be used for debugging.

Job View Refresh Interval (seconds)


You can change the job view refresh interval to a value appropriate for your environment. We recommend
that you do not set it to less than 5 seconds. For most people's needs, 15 seconds is adequate. If
network traffic is an issue, the interval can be longer.

Publish Jobs
The publish process is carried out by the reporting publish job for a table-only layout (p. 262), or
the publish job if a view layout (p. 276) is selected.
To monitor publish jobs in the jobs monitor, select the publish container from the box that is
available at the top of the job monitor.


Cancelled Jobs
If a job is cancelled, you can display information about why it was cancelled by double-clicking
the line.

Pausing Jobs
If you want to pause a running job, you must stop the Job Server. To do this, right-click the job
server or job server cluster name and click Disable Job Processing.

Reconciliation
Reconciliation ensures that the copy of the application that the user uses on the Web is up to date.
For example, all data is imported, new cubes are added, and changed cubes are updated.
Reconciliation takes place after Go to Production runs and a new production application is created.
It also takes place when an administration link or an Analyst to Contributor link to the production
application is run. However, in this case, only the imported data is updated. The application is
reconciled on the job server unless a user tries to access an e.List item before it is reconciled. For
more information, see "The Effect of Changes to the e.List on Reconciliation" (p. 101).
All contribution e.List items are reconciled and aggregated if
● the application was synchronized with Analyst

● changes were made to the Access Tables, Saved Selections, or the e.List that resulted in a different
pattern of No Data cells for contribution e.List items that are common to both the development
and production applications

Note: Changing an access setting to No data, saving the application, and then changing the
access setting to what it was previously also results in reconciliation.

● they have import blocks

Note: If you use the Prepare zero data option, an import data block is created for all e.List
items, so all e.List items are reconciled.

● children were added or removed

● they are new
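
The conditions above can be read as a simple predicate, sketched below with illustrative flag names; they are not fields of any product API.

```python
# The reconciliation conditions listed above, restated as a predicate.
# The flag names are illustrative only.

def needs_reconcile(item):
    return (
        item.get("synchronized_with_analyst", False)
        or item.get("no_data_pattern_changed", False)   # access tables, saved selections, or e.List changes
        or item.get("has_import_block", False)          # always true when Prepare zero data is used
        or item.get("children_added_or_removed", False)
        or item.get("is_new", False)
    )

print(needs_reconcile({"has_import_block": True}))  # True
print(needs_reconcile({}))                          # False
```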

Reconciliation on the Server


During reconciliation on the server, the following process occurs for each e.List item:
● The data block is loaded onto the job server.

● The transformation process occurs


New cubes are added, changes are made to existing cubes such as dimensions added or removed,
and access tables are applied.

● Data is imported.

● Data is saved.


Reconciliation can be performed across multiple processors and job servers. For more information,
see "Manage a Job Server Cluster" (p. 54).

Reconciliation on the Client


If a user attempts to open an e.List item that is not yet reconciled, the e.List item is reconciled on
the user's computer. This may take a few minutes.
In the Web application, the user can determine the state of an e.List item by the workflow icon. If
an icon is enclosed in a red box, the item is out of date and needs reconciling. For more information,
see "Workflow State Definition" (p. 297).

If the Prevent client-side reconciliation check box is selected (p. 79), a user cannot open the e.List
item until the e.List item is reconciled on the server side.

Deleting Jobs
You can delete jobs, but we do not recommend it because it can leave your data in an unstable
state.

Tip: If you want to pause a running job, you can stop the job server. Right-click the name of the
job server or job server cluster and click Disable Job Processing.

Prepare import
If you delete a prepare import job, the next time you run Prepare import, it tidies this up.

Cut-down models
You typically delete a cut-down models job after you cancel the Go to Production process. Clicking
the Go to Production button reruns the cut-down models job. If you delete a cut-down models job
during Go to Production, Go to Production does not run.

Reconciliation
If you delete a reconcile job, all e.List items that were not reconciled stay unreconciled. e.List items
that were already reconciled by the job remain reconciled. When a user attempts to view an
unreconciled e.List item in the grid, if client side reconciliation is allowed, the reconciliation process
takes place on the client. If client side reconciliation is not allowed, users cannot view the e.List
items. Rerun the Go to Production process to trigger a repair reconcile job.

Managing Job Servers


A job server cluster groups the job servers that are used to process administration jobs. You can
stop, start, and monitor all job servers in a cluster at the same time. You can also choose to have
specific objects run on individual servers.
When you create a Contributor application, you are prompted to choose a cluster or server to run
the jobs.


You can add applications and publish containers to multiple job servers. This speeds up the
processing of jobs, such as reconciliation, giving near linear improvements per processor.
The following two scenarios provide examples of how this might work.

Scenario 1
If you publish large amounts of data, you might want to assign the publish container to different
servers than those that are processing the main application. This is because if you assigned the
application container and the publish container to cluster X containing servers A, B, C, and D, it is
possible that a large job, for example, publishing 5000 e.List items, could consume all the resources
in cluster X for some time, preventing other jobs from being processed. In this case, you might
want to assign the publish container to, for example, servers A and B, and the application container
to servers C and D, to enable other jobs such as prepare import and server-side reconciliation to run
at the same time as the publish job. There is no control at job level.

Scenario 2
If you have some applications that are in production and are live, you might want to have one or
more job clusters with your best hardware to monitor these applications to ensure that they are
stable and available. For applications that are in development, you might want to have a different
job cluster containing your less efficient hardware.
For more information, see "Jobs" (p. 47). You can also automate job server management (p. 197).

Manage a Job Server Cluster


Before administration jobs can be processed, you must add a job server cluster to the Administration
Console. A job server cluster manages a group of job servers. For information on managing job
servers, see "Managing Job Servers" (p. 53).

Steps
1. Right-click Job Server Clusters and click Add Job Server Cluster.

2. Type a unique name for the job server cluster.

3. Click Add.
The next step is to add job servers to the job server cluster.

4. Remove a job server cluster by right-clicking it and selecting Delete Job Server Cluster.

5. Disable job processing on a cluster by right-clicking the cluster and selecting Disable Job
Processing.

6. To test communication with the job server, right-click the job server name and click Test. Any
errors are displayed in a message box.

Add a Job Server and Change its Content Store


You must add job servers to a job server cluster so that administration jobs can be processed.


Job servers can exist in only one Planning content store. You can either manually delete the job
server (p. 57) and add it to the new Planning content store, or on the job server, change the content
store that it is associated with.

Steps to Add a Job Server


1. Right-click Job Servers and click Add Job Server.

2. Select a job server from Available Servers.


Computers that are job servers must have the Planning job service option enabled in Cognos
Configuration. For more information, see the Cognos 8 Planning Installation and Configuration
Guide.

3. Enter the Polling interval.


This sets how often the job server polls the database to see if there are any new jobs. The default
is 15 seconds.

4. Enter the Maximum concurrent jobs. The default is -1, which allows one concurrent job per
processor. Typing 0 stops job execution (see the sketch after these steps).

5. Click Add.
You should now add any applications, publish containers, and other objects to either a job
server cluster or an individual job server. Jobs such as reconcile are not run until an application
is added to a job server or job server cluster.

Tip: To modify the properties of a job server, right-click the server name and click Properties.
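
The sketch below, referred to in step 4, shows one way to interpret the Polling interval and Maximum concurrent jobs settings. The fetch_ready_jobs and start_job callbacks are hypothetical; real job servers poll the Planning datastore.

```python
# Illustrative only: how the Polling interval and Maximum concurrent jobs
# settings from steps 3 and 4 could be interpreted. A value of -1 means one
# concurrent job per processor, and 0 stops job execution, as described above.

import os
import time

def effective_concurrency(max_concurrent_jobs):
    if max_concurrent_jobs == -1:
        return os.cpu_count() or 1      # one concurrent job per processor
    return max(max_concurrent_jobs, 0)  # 0 disables job execution

def poll_once(fetch_ready_jobs, start_job, max_concurrent_jobs=-1, polling_interval=15):
    slots = effective_concurrency(max_concurrent_jobs)
    if slots == 0:
        return                          # job execution is stopped
    for job in fetch_ready_jobs(limit=slots):
        start_job(job)                  # hypothetical callbacks, for illustration
    time.sleep(polling_interval)        # wait before polling again

print(effective_concurrency(-1), effective_concurrency(0), effective_concurrency(4))
```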

Steps to Change the Content Store for the Job Server


1. On the job server computer, open Cognos Configuration.

2. Under Data Access, Content Manager, Content Store, change the value for Database server
with port number or instance name.

Add Applications and Other Objects to a Job Server Cluster


Adding an object to a job server cluster means that when a job such as Publish is run, the job is run
by the job servers in the cluster.
You can monitor applications at cluster level and at job server level.
You must add the following objects to run jobs:
● a Contributor application

● an application folder

● View Publish Container


You need this if you are publishing using the view layout.

● Table-only Publish Container


You need this if you are publishing using the Table-only Layout.

● a Planning Content Store

Step
● Click the job server cluster name, and click Add.
The window contains details about the objects monitored by the cluster. It lists only objects
that are directly assigned to the cluster, not those objects that are assigned to individual job
servers.

Tip: Start or stop all job servers in the cluster by right-clicking the cluster name and selecting
Enable Job Processing or Disable Job Processing as required.

Add Objects to a Job Server


You can add objects to individual job servers.

Tip: You can see whether a cluster or individual job server is started from the icons in the
administration tree:

Icon Description

Job server started.

Job server cluster started.

Steps
1. Click the name of the job server, and then click Monitored Applications.

2. Click Add.
Three panes of information appear.

Monitored objects held in the Job Server Cluster: Shows the objects monitored by the cluster that
the job server is in.

Monitored objects held in this Job Server: Lists the objects monitored by the job server. It contains
only details of objects directly assigned to the server. You can assign an application folder and its
contents to a job server or job server cluster as a monitored object. You can also assign each
application within an application folder to a different job server. For more information on
monitoring application folders, see "Monitor Application Folders" (p. 57).

Job Tasks being processed by this Job Server: Monitors the jobs that are being processed by the server:
● data source: If you see N/A by an application folder, this indicates that the folder may have more
than one data source (applications within an application folder can be on more than one datastore).
● application name
● job type: the job being processed (p. 47)
● Process ID: identifies the process used to execute a job task. Used for debugging.
● Thread ID: identifies the thread used to execute the job task. A thread is created for each job
task. Multiple threads can be created per process. The number of threads per process is set in the
Maximum concurrent jobs option. Used for debugging.

3. Right-click the job server name and select Enable Job Processing or Disable Job Processing as
required.
You can automate this process; see "Job Servers (Macro Steps)" (p. 197), "Disable Job
Processing" (p. 197), or "Enable Job Processing" (p. 197).

Remove Job Servers


A Job server can exist in only one Planning Content store at a time. You can either manually delete
the job server from the Planning content store and add it to the new one, or on the job server,
change the content store that it is associated with (p. 55).

Step
● In the Contributor Administration Console, right-click the job server name and click Delete
Job Server.

Monitor Application Folders


An application folder can be assigned to a job server or job cluster as a monitored object. This adds
the applications held in the folder, which are not already being monitored, to the job server or
cluster, allowing a group of applications to be monitored in a single step. If applications are
subsequently added to that application folder, they are not automatically assigned to the job server
or cluster.
You can also remove an application folder from a job server or cluster in a single step. This removes
all the applications contained and monitored in that folder by that job server or cluster. You can


still add individual applications within an application folder to be monitored by different servers
or clusters.
● To add the contents of an application folder to be monitored, select the application folder row
and click Add.

● To add individual applications from within an application folder, select the applications required
and click Add.

Creating, Adding and Upgrading Applications


You can create new applications, add existing applications to the server, or upgrade applications
by right-clicking Applications and selecting one of the following options:

● Create New Application (p. 63).

● Link to existing Applications (p. 58). This adds Contributor applications that exist on the
datastore server to the Administration Console.

● Upgrade Application (p. 329). You can import and upgrade Contributor applications created in
an earlier version of Contributor.

Remove Datastore Definitions and Contributor Applications


When you remove a Contributor application and datastore definition, only the entries from the
Planning content store are removed; the datastores remain on the server. To delete an application
from the server, you must have Database Administrator (DBA) permissions. If you do not, contact
your database administrator.

Step
● In the Administration Console, right-click on the server name, or the application name in the
Administration tree and click Remove Application.

Tip: You can also assign applications to application folders. For more information, see
"Application Folders" (p. 66).

Adding an Existing Application to a Datastore Server


There are several circumstances in which you might add an existing application to a datastore
server:

● During application creation, you selected the Generate datastore scripts and data files option.
In this case, before you can add the application, the script must be run by a database
administrator; see "Running the Script.sql file (DBA Only)" (p. 67).

● You chose to create scripts during an application upgrade

● You want to link to an application that was created in another Planning content store.


To do this, you must first remove the application from the Planning content store table it
currently resides in.

Steps
1. Right-click Applications in the Administration tree and click Link to existing Applications.

2. The Add existing applications window lists the applications that exist for the currently selected
datastore server. Click the application you require.

3. If the application was created in a different Planning content store, add an application ID. This
is used by the Web browser to identify the application. It must be a unique character string.

4. For applications created using the Script.sql file, select the XML package. The location of the
package.xml is the same as the location of the script.

5. Click Add.
Ensure that the application is added to a job server or job server cluster. For more information,
see "Add Objects to a Job Server" (p. 56).

The Monitoring Console


The Monitoring Console enables you to monitor the progress of the following processes running
in the Contributor Administration Console from one location:

● applications (p. 50)

● job server clusters (p. 56)

● administration links (p. 50)

● macros (p. 192)

● deployment (p. 168)

Managing Sessions
Multiple administrators may administer a Cognos 8 Planning - Contributor application at any one
time. However, to prevent data integrity issues, when a change is made to the development model,
it is locked. The lock is dropped when the administrator navigates to a different function or closes
the Administration Console. You can manually remove the lock by clicking the Remove button on
the Session Details area. Use this with caution because it could prevent other users from saving
changes.
You can automate the removal of an application lock. See "Remove Application Lock" (p. 219) for
more information.
The development model can be changed by the following:
● changing navigation


● changing orientation

● selecting grid options

● selecting application options

● selecting planner-only cubes

● creating Contributor help text

● selecting dimensions for publish

● modifying e.List, and rights

● selecting saved selections and access tables

● synchronizing with Cognos 8 Planning - Analyst

● modifying datastore maintenance options

● resetting development to production.

The production model is updated during the Go to Production process. There are checks in place
to ensure that two jobs do not run concurrently. Additionally, when you create a job using the
Administration Console, you are prompted if a valid job of that type already exists and alerted that
this new job may overwrite the existing job.

Example 1
Administration Console 1: User A clicks Navigation in the Administration tree. At this stage, the
development model is not locked.
Administration Console 2: User B clicks e.List in the Administration tree, and makes a change. The
development model is then locked by User B. A further check is made to ensure that no changes
were made to the development application since it was last read by Administration Console 2. If
this is true, User B can continue to edit the e.List and save the changes.
Administration Console 1: User A edits Navigation. User A is informed that User B has the
application locked and is asked whether they want to take the lock. User A takes the lock from
User B. Because User B updated the development model since User A opened the Navigation page,
the development model is no longer up to date. User A is prompted, Navigation is reloaded
with the updated development model, and User A can continue to edit and save changes.

Example 2
Administration Console 1: User A opens Application Options and makes a change.
Administration Console 2: User B opens Orientation and makes a change. User B is prompted that
User A has the lock and User B takes the lock.
Administration Console 1: User A clicks Save. User A is prompted that User B now has the lock
and that changes made by User A will not be saved. The controls on Application Options are
disabled.


Example 3
Administration Console 1: User A opens Grid Options and makes a change.
Administration Console 2: User B opens the Publish Data and selects the detail for a publish. Because
this does not change the development model, there is no need for a lock to be taken. Both User A
and User B can work on the application concurrently.

Sending Email
You can send email to a user defined in an application using your default email program.

Steps
1. Click the application containing the user you are sending an email to.

2. Click Send email in the toolbar.


The first time you do this, you may be asked to supply your mail provider connection details.
Refer to your mail provider documentation.

3. Choose to send an email to All Users or All Active Users, as defined in the application.

4. Click on the group of users you want to send email to, then click Mail. This opens up a new
email message in your standard email tool.

5. Create and send the email.

You then return to the Send email dialog box.

Chapter 4: Creating a Cognos 8 Planning - Contributor Application

To create a Cognos 8 Planning - Contributor application, you perform the following steps:
❑ If you have DBA rights to the datastore server, create a Contributor application using the
Application Wizard.
If you do not have DBA rights to the datastore server, create a script using the Application
Wizard, then send the script to the DBA who will run the script. After the script is run, add the
application to the Administration Console, "Adding an Existing Application to a Datastore
Server" (p. 58).
These processes create a development application. A development application is not seen by
the users; it is simply the application that you work on in the Administration Console.
This means you can make changes to the application without having to take it offline, reducing
the amount of time that users are offline to as little as a minute. This is the period of time taken
to integrate the new e.List items into the hierarchy and set the correct workflow states.

❑ Modify the application.


You can set the order in which the users are asked to go to each cube, select the dimensions that
make up the rows, columns, and pages, add and modify the e.List, import users, define access
rights, and import data.

❑ Run the Go to Production process, see "The Go to Production Process" (p. 239).
This activates a wizard which creates a production application. This makes the application
available to end-users.

Creating a Contributor Application


Use the Application Wizard to create a Contributor Application.

Steps
1. In the Administration tree, under the name of the datastore server where the application is to
be created, right-click Applications.

2. Select Create New Application from the menu.


A check is made to see if you are logged on with appropriate rights. If you are not, you are
prompted to log on. Click Next.


3. Select a library.
This is the Analyst library that your application will be based on. The D-Cube library list is for
information only and tells you which D-Cubes the selected library contains. Click Next.

4. Select the e.List.


The e.List is a dimension that is used to determine the hierarchical structure of an application.
For more information, see "The e.List" (p. 89).
When you select an e.List, the right-hand list tells you which D-Cubes in the application contain
this e.List.

5. Click Next.
The Administration Console checks the library to ensure that it can create the application. If
there are any errors or warnings, you can view them and save them to a text file for further
investigation. If there are errors, the wizard terminates. You can continue to create the
application if there are only warnings.

6. A window listing the statistics for the application will be displayed. This is for information
only. See "Model Details" (p. 66) for more information.
You can print the details on this window.

7. Enter application details as follows:

Detail Description

Application Display Defaults to the Analyst library name, but you can change this during
Name application creation. After an application is created, this cannot be
changed. There are no character restrictions. The maximum length
is 250 characters.

Datastore Name The name of the datastore application that contains the Contributor
application database tables.
This defaults to the Analyst library name, stripping out special
characters. Only the following characters are allowed: lowercase
letters a to z and numeric characters. No punctuation is allowed
except for underscore. Maximum 30 characters (18 for DB2 OS390)
SQL Server only: Reserved keyword words are not allowed, see
your SQL Server documentation for more information.

Application ID This is used by the Web browser to identify the application. Only
the following characters are allowed: lowercase letters a to z and
numeric characters. No punctuation is allowed except for
underscore.

64 Contributor
Chapter 4: Creating a Cognos 8 Planning - Contributor Application

Detail Description

Location of datastore Enter the file path where the SQL Server datastore files will be
files created on the administration server. You can browse the data
server file structure for file locations. Oracle and DB2 users do not
see this box; the location of datastore files is determined by your
datastore structure.

Location of datastore Enter a location for the datastore backup files. The file location
backup files must exist prior to creating an application and should be a different
location than the datastore location.

Create and populate This option creates the Contributor application and adds it to the
datastore now Administration Console tree. You only have this option if you have
appropriate DBA rights.

Generate datastore Select this option if you want to create a datastore script and data
scripts and data files files which can be used to create an application at a later stage. This
option is mandatory if you do not have DBA rights. To create an
application at a later date, the script must be run by a DBA and the
application added to the datastore. After the application is added
and you click any of the branches, you are prompted to select
the package.xml file.

Save scripts in this folder Enter or browse for a file location to save datastore scripts and data
files to.

Advanced Oracle applications
Specify the following options:
● Tablespace used for data (defaults to USERS)

● Tablespace used for indexes (defaults to INDX)

● Tablespace used for BLOBS (Binary Large Objects) (defaults to USERS)

● Temporary tablespace (defaults to TEMP). This can be changed. This tablespace is used
for any automatically created publish datastores.

These tablespaces must exist.

DB2 UDB
Specify the tablespace name for data, indexes and BLOBS. It defaults
to USERSPACE1. Customized tablespace names can be used.
Tablespaces need to be at least 8000 pages.


8. Select the job cluster or job server to run the jobs for this application.

9. Click Next and then Finish. The application creation progress is displayed.
If you generated datastore scripts and data files, send the script to the DBA. After your DBA
has run the script, you can add the application to the Administration Console; see "Creating, Adding
and Upgrading Applications" (p. 58).

Application Folders
You can use application folders to organize your applications into related groups. You can assign
job server clusters (p. 56) and access rights to groups of applications, making it easier to administer
multiple applications at the same time. For example, you can add or remove a group of applications
from a job server or job server cluster. An application folder can contain applications that exist on
more than one datastore.
When you assign an application to an application folder, it moves from under the Applications
branch of the tree to the relevant application folders.

Tips
● To remove an application from an application folder, right-click the application and select
Remove application from folder. The application is moved under Applications. If that was the
only application in the folder, the folder is removed from the Administration Console.

● To move an application between folders, you must first remove it from the original application
folder.

● Selecting Remove application removes the application from the Administration Console.

Steps
1. Before you create an application folder, at least one application must exist.
You must also have the right to assign applications to application folders.

2. Right-click the application name and click Assign Application to an Application Folder.

3. Click Create a new Application Folder and add the Application, or Assign the Application to
an existing Application Folder.

4. If creating a new folder, enter an appropriate folder name.

5. If you selected Assign application to an existing folder, select the folder name from the drop
down list.

6. Click Assign.

Model Details
The model details window in the Application Wizard displays information about the application.


Exceeding the numbers listed below may not be a problem for your application, but it could slow down the end
user, and a redesign of the model in Analyst could help.

Warning Number Description

Number of Cubes 10 This is an optimum number of cubes that can be
displayed in the Contributor Administration Console.

Number of D-Links 25 A large number of D-Links is not necessarily a problem.
However, greater than 25 D-Links could lead to some
specific performance issues that may be difficult to
predict. This may slow both runtime performance and
initialization times to an unacceptable level.

Total Number of Cells in 500,000 A large number of cells in the application will lead to
Application (per e.List slice) performance problems unless the model builder is able
to use no data settings in access tables to create e.List
specific models that are considerably smaller. Under
certain circumstances it is possible to distribute very
large models with Contributor, particularly if bandwidth
and server capacity is not an issue.

Largest Cube 200,000 This restriction is similar to the Total Number of Cells
in Application. A large single cube can lead to
performance problems at runtime, for example,
breakback and data entry can become slow, unless the
cube is cut-down using no data settings in access tables.

Total Number of D-List 2,500 A very large number of dimension items can cause the
Items in Application model definition to be very large. See also below.

Largest Dimension 1,000 The Contributor Web application is not designed to
carry a large number of dimension items across the rows
and/or columns. The application is optimized for views
that fit onto a single window. When lists are large, No
Data settings and cut-down models can be used to
reduce large lists down to a size that is more manageable
for each e.List item. There may also be usability issues
when manipulating large lists in the Administration
Console.

Running the Script.sql file (DBA Only)


The DBA runs script.sql file using SQL, Oracle or DB2 as appropriate.


Database To run the script, do the following:

SQL Server open and run the file using SQL Server Query Analyzer.

Oracle using SQL*Plus, at the prompt, enter @c:\script.sql.

DB2 run the script.sql using Command Center.
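
For example, an Oracle DBA might run the script from a SQL*Plus session similar to the following.
The user name, connection string, and file path shown are placeholders only, not values from your
installation:

sqlplus planning_owner@production_db
SQL> @c:\script.sql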

After you have run the script successfully, add the application to the Contributor Administration
Console. For more information, see "Adding an Existing Application to a Datastore Server" (p. 58).

Application Information
When you click the application name, the following application information is displayed:

Detail Description

Application Display Name The name of the application as displayed in the Administration
Console.

Datastore Name The name of the application datastore.

Application ID A unique identifier used to identify the application in the Web
client.

Library Name The name of the library in Analyst that is used to create the
application.

Library Number The number of the library in Analyst.

The e.List The name of the dimension that is used as the e.List
placeholder.

Configuring the Contributor Application


You can set options for the Contributor application:
● set global Web Client settings

● set the order in which the users are asked to go to each cube (p. 69)

● select the cube dimensions that make up the rows, columns, and pages of the cubes (p. 70)

● configure grid settings (p. 70)

● configure application settings (p. 72)


● designate which cubes can be viewed only by the planner (p. 75)

● create text for the users to see in the cube or Web client (p. 76)

Configure Global Web Client Settings


Configure Web client settings that apply to all Contributor applications. These settings can only
be applied by the Planning Rights Administrator.

Steps
1. In the Administration tree, click System Settings, and Web Client Settings.

2. To enable the Contributor applications to be deployed automatically over the Web, select Allow
automatic Cab downloads and installations.
Cab format is the compressed format in which the Contributor applications are stored.

3. To modify the separator that is used between names in emails sent from Contributor applications,
enter the separator in the Email character separator box.

4. To restrict the size of attached files:

● In the Attached Documents area, select Limit Document Size.

● Enter an amount (in megabytes) for the Maximum Document Size (MBs).

5. In the Allowable Attachment Types box, either remove a selected file type by clicking
Remove, or click Add to add a new allowable attachment type. At the end of the list of file types,
enter a label name and the file type extension in each box. Make sure you include an asterisk (*)
with the file type extension, for example, *.xls.

Note: Changes made to the Attached Documents settings take effect almost immediately and
without the need to perform a Go To Production.

Set the Cube Order for an Application


Set the order in which the users are prompted to go to each cube. We recommend that you set the
order to be the same order in which the links run in the Cognos 8 Planning - Analyst model. Users
switch between cubes by clicking tabs.
You can provide instructions to be shown for each cube, see (p. 76).

Steps
1. In the appropriate application, click Development, Web-Client Configuration, and then
Navigation.

2. In the Set Cube Order box, click each cube name and move as required using the arrow keys.

3. Click Save.
The changes are visible to users after you run the Go to Production process.


Set the Order of Axes


Select the cube dimensions that make up the rows, columns, and pages of the cubes. You can also
nest dimensions.

Steps
1. In the appropriate application, click Development, Web-Client Configuration, and then
Orientation.

2. Click the tab for the cube that you want to modify.

3. Click a dimension and move it using the arrows, to set it as a page, row, or column. Repeat
with the other dimensions if required.

4. If you want to create nested (merged) dimensions, place two dimensions under either Row or
Column.
Planners cannot change nested settings, even if they reslice.

5. Click Save.
The changes are visible to users after you run the Go to Production process.
If cubes that have dimensions defined as pages have dimension items with different access
settings, such as Read and Write, the cube opens with the first writable page selected by default.
If all items have the same access setting, the cube opens with the first selected page as created
in Analyst.
If a user moves from one cube to another with the same dimension, the cube opens to the same
item selected in the previous cube.

Change Grid Options


Change grid options to affect the way the Web application appears and behaves.

Steps
1. In the appropriate application, click Development, Web-Client Configuration, and then Grid
Options.

2. Set the options you want.

3. Click Save.
Changes will be applied to the production application after running the Go to Production
process.


Option Description

Set Breakback When breakback is enabled and data is entered into a calculated cell,
Option data in other cells is automatically calculated from this data.
For example, you can distribute a total annual salary over twelve months
to calculate a monthly payment. For more information, see the Analyst
User Guide. All cubes have breakback selected by default.

Allow Multi e. If the Allow Multi e.List Item Views option is on, the user can select a
List Item Views multi e.List item view, or a single e.List item view. This means they can
edit or view all contributions they are responsible for in one window.
The default value is off because a large amount of memory may be needed
to open a multi-e.List item view.

Allow Slice and If the Allow Slice and Dice option is on, users can swap a row or column
Dice with a page or swap a page with a column or row heading. The default
value is on.

Recalculate After If the Recalculate After Every Cell Change option is on and a user types
Every Cell data into the application, the data is recalculated as soon as the focus
Change moves from the cell. The default is Off, meaning that data is calculated
when pressing Enter.

Select Color for You can specify the color of data in the grid for the following situations:
Changed Values Saved data: the color of data with no change. The default is black.

Typed data not entered: the color of data that is typed but not entered.
The default is green.

Data entered but not saved: the color of data entered in the current session
but not saved. The default is blue.
The default colors are different in Analyst where the color of data that
is entered but not saved is red, not blue, and where detail/total is blue/
black, not normal/bold.

Possible unexpected results example


You may get unexpected results if you select this option with breakback on. For example, with
Breakback on and Recalculate After Every Cell Change on, if you type 240,000
in the total and then type 40,000 in July, the total changes to 260,000 and the remaining months
have 20,000.


With Breakback On and Recalculate After Every Cell Change Off, press Enter. The total holds as
240,000 and the remaining months have 18,182.
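
To trace the arithmetic, assuming a twelve-month Months dimension and an even starting spread:
with Recalculate After Every Cell Change on, entering 240,000 in the total spreads 240,000 / 12 =
20,000 to each month, and typing 40,000 in July then triggers a further recalculation, so the total
becomes (11 x 20,000) + 40,000 = 260,000. With Recalculate After Every Cell Change off, both
values are processed together when you press Enter, so the total holds at 240,000 and breakback
spreads the remainder over the other eleven months: (240,000 - 40,000) / 11 = 18,182 (rounded).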

Change Application Options


Change Application Options to affect the operation of the production application.

Steps
1. In the appropriate application, click Development, Web-Client Configuration, and then
Application Options.

2. Set the options you want.

3. Click Save.
Changes will be applied to the production application after running the Go to Production
process.

Option Description

History Tracking Use the History Tracking option to track actions performed by users.
When you select Action time stamps and errors and Full debug
information with data, information is recorded in a datastore table
named history.
Choose one of the following:
● No History
Does not track changes

● Action time stamps and errors


Tracks the times of users’ actions, including saves, submissions,
and rejections. This option is selected by default.

● Full debug information with data


Tracks the times of users’ actions and records a copy of the data
at the point when each action takes place. Because this option
consumes a lot of memory, use it only for debugging purposes.


Cut-down Models If cut-down models (p. 134) are used, a different model definition is
produced for each e.List item. This can significantly speed up
transmission to the client for models with large e.Lists. However, if
used inappropriately, it could slow down performance.
Choose one of the following:
● No cut-down models

● For each aggregate e.List item (review level model definition)

● For every e.List item

For more information about these options, see "Cut-down Model
Options" (p. 135).

Allow Reviewer Edit If the Allow Reviewer Edit option is on, users with review and submit
rights to an e.List item can edit an e.List item up to the review depth
level, which is assigned in the e.List window.

Allow Bouncing If the Allow Bouncing option is on, someone with appropriate rights
can take ownership of an e.List item by clicking the Edit or Annotate
button while the item is being edited or reviewed by another owner.

If the Allow Bouncing option is off, ownership of an e.List item can
only be taken when it is not being edited or reviewed by another user.
When a user has ownership taken away from them, they are issued a
warning and cannot save or submit their data to the server. They can
save to file by right-clicking in the grid.

Prompt to Send When this option is selected, user 1 is prompted to send an email to
Email When User user 2 when user 1 takes ownership of an e.List item from user 2. The
Takes Ownership email is copied to other people who have submit or save rights.

Use Client-side If the Use Client-side Cache option is on, model definitions and data
Cache blocks are cached on the client computer so that they do not have to
be downloaded repeatedly from the server. This provides a huge
reduction in the network bandwidth required and is invisible to the
user.
This is not possible on client computers where a security policy prevents
saves to the hard disk. When the user requests the data by opening the
grid, a mechanism checks to see whether the data is cached on the client
machine and whether that data changed.


Prevent Offline If the Prevent Offline Working option is on, users cannot work offline.
Working Offline working is possible only when the cache on the client computer can
be used, but the client-side cache does not have to be enabled.
For more information, see "Working Offline" (p. 86).

Prompt to Send Reviewers are prompted to send an email message to current owners
Email on Reject of contribution e.List items when they reject an item. The email is sent
to the person who submitted the e.List item, and copied to other people
who have submit or save rights and people who have assigned rights
for the review e.List item, that is the rights are not inherited through
the hierarchy.

Prompt to Send The user is prompted to email all immediate reviewers and copy (cc)
Email on Save all immediate owners when they save an item.

Prompt to Send The user is prompted to email all immediate reviewers and copy (cc)
Email on Submit all immediate owners when they submit an item.

Web Client Status You can change the interval of time that the server is polled to refresh
Refresh Rate the Web client status. Increasing the refresh interval decreases the
(minutes) amount of Web traffic. This may be desirable if there are a lot of clients,
but it also reduces the visibility of the data that the user gets on the
workflow state.

Record Audit This records actions taken in the Web client, such as typing, copying
Annotations and pasting data, and importing files. In addition, system link history
is stored as an annotation on the cube that was targeted by the link.
When a link is run, an annotation is created in the open e.List item. If the link is
rerun, the same annotation is updated. A history dialog box shows all
history related to the links that apply to the open e.List items.
If enabled, users can view audit annotations for any cells for which they
have at least view access.
This option can greatly increase the size of the application datastore,
and should be used with care. It is Off by default.

Annotations Import If a user imports a text file into the Web grid, this option determines
Threshold whether each row imported in a single transaction is recorded separately,
or all rows imported are recorded in a single entry. If the threshold is
set to 0, all rows imported in a single transaction are recorded as a
single entry.


Annotations Paste If a user copies and pastes data into the Web grid, this option determines
Threshold whether each row pasted in a single transaction is recorded separately,
or all rows pasted are recorded in a single entry. If the threshold is set
to 0, all rows pasted in a single transaction are recorded as a single
entry.

Display Audit Hides or shows audit annotations in the Web Client.
Annotations in the
Web Client

Allow bouncing example


For example, if the Allow bouncing option is disabled, user 1 with edit rights to an e.List item
cannot edit while user 2 is editing. After user 2 stops editing, user 1 can edit after refreshing the
Web client status.
If the Web client status refresh rate is set to 0, user 1 can edit only if they close the grid and reopen
it. If the refresh rate is 0, the status is refreshed only when the user takes an action that connects
to the server. The default interval is five minutes, and the minimum interval is one minute. More
frequent polling has a negative effect on Web traffic over the network.
Automatic polling also refreshes the passport. If the Web client is inactive for half the passport
duration value, automatic polling ceases. For example, if default values are used, then after 1800
seconds of inactivity the Web client polling will cease and after an additional 3600 seconds, the
passport expires. Any further Web client activity by the user requires re-authentication.
If you must ensure that the passport duration value is respected exactly, for example, to comply
with financial regulations, set the refresh rate to 0 (zero). The passport then expires after
the configured duration of inactivity.

Create Planner-Only Cubes


Planner-only cubes are cubes that are seen only by the planner, or a reviewer with reviewer edit
rights to an e.List item. The data in a planner-only cube is fed into cubes that are seen by both
planners and reviewers.
You make a cube planner-only to reduce the amount of data a reviewer must view, and reduce the
amount of data that is aggregated, speeding up the aggregation process.

Steps
1. In the appropriate application, click Development, Web-Client Configuration, and then Planner
Only Cubes.

2. Select the cubes you want to make planner-only.

3. Click the Save button on the toolbar.


Creating General Messages and Cube Instructions


You can create instructions that are shown to users in the Contributor application. An instruction
is text that appears when a user clicks Instructions on the Contributor application window.
Cube instructions can be different for every cube. You can create a brief one-line instruction that
appears below the cube tab name in the Web browser. You can also create a detailed HTML
formatted set of instructions, which the user views by clicking Help.
Use the full path to reference an image, so that it appears to all users.

Steps
1. In the appropriate application, click Development, Web-Client Configuration, and then
Contributor Help Text.

2. In the Enter Instructions text box, type instructions, using HTML tags if required; a sample of
HTML-formatted instructions is shown after these steps.
You can create links to other Web pages; for more information, see "Creating Hypertext
Links" (p. 377) and "Customizing Cognos 8 Planning - Contributor Help" (p. 375).

3. In the Contributor Help Text window, click the cube name tab.

4. In the one-line box, type instructions if required.


This must be text only, with no additional formatting. We suggest that you use fewer than 100
characters to ensure that the text is visible.

5. In the larger text box, type either HTML formatted text, or plain text, up to 3000 characters.
For more information, see "Customizing Cognos 8 Planning - Contributor Help" (p. 375).

6. Click Save.
Changes are applied to the production application after running the Go to Production process.
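
The following is a minimal sketch of HTML-formatted instructions. The server name, image path,
and target URL are placeholders only; replace them with locations that all users can reach:

<p>Enter your departmental forecast and submit it by the end of the month.</p>
<p><img src="http://server_name/cognos8/images/forecast_guidelines.gif" alt="Forecast guidelines"></p>
<p>See the <a href="http://server_name/planning/guidelines.html">planning guidelines</a> for account definitions.</p>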

Maintaining the Contributor Application


In the Application Maintenance window, you can configure application properties. In particular,
you can
● save application XML for diagnostic purposes (p. 77)

● show information about the application, such as the number of cubes, and the number of
D-Links (p. 77)

● set Admin Options (p. 77)

● select dimensions for publishing (p. 79)

● set Go to Production Options (p. 79)


Save Application XML for Support


Details about the Cognos 8 Planning - Contributor application are held in XML format. If you
have a problem with your Contributor application, you can save the XML in the current state, and
send the file to your technical support.
You can automate uploading the Development application XML by using a macro, see "Upload a
Development Model" (p. 207) for more information.

Steps
1. In the appropriate application, click Development, Application Maintenance, and then
Application XML.

2. Click the Application Type: Development or Production.

3. Enter the XML file name.

4. Click Save XML to File.

5. If the file location changed, click Save.

View Application Details


The Application Details window provides information about the number of items in the application.
For more information on this window, see "Model Details" (p. 66).

Admin Options
You can configure import and publish options on the Development, Application Maintenance,
Admin Options window.
These options should only be configured by Database Administrators and are only available to users
with DBA rights. They can also be set directly in the datastore.
You do not need to run the Go to Production process for these options to apply; they are applied
as soon as you save.

Option Description

Datastore Version For information only.
Number

Import Block Size The number of rows that are passed to the calculation engine at a time
during import. The default is -1, which means all rows are passed at once.

Import Location This is a temporary file location. Files are not deleted, but they may be
overwritten. When you import files, they are copied to this location on
the server.


Import Options You can specify command line parameters for BCP (the bulk copy utility
for SQL Server applications) or SQL Loader (the import tool for
Oracle applications).
A BCP example is:
-B 10000
This parameter sets import text files to be uploaded in batch sizes of
10,000. Other possible parameters are [-h "load hints"] and [-a packet
size].
A SQL Loader example is:
DIRECT=TRUE
This parameter affects the number of lines that are uploaded and may
speed up the import process considerably, but it may require a lot of
memory.
When importing data, ensure that the database code page parameter
reflects the underlying data being imported. For example, when importing
Western European language data, use the Windows code page for Western
European, 1252. The parameter to use is -C=-C1252. For non-Western
European data, verify with your specific database documentation what
code page to use to ensure that it imports correctly.

Publish Options If a foreign locale is used, the CODE PAGE parameter can be set here.
You can also set BCP options here.

Generate Scripts Set this option to Yes to generate a script when any actions are performed
that require DDL commands to be run in the datastore, such as publishing data
and synchronizing with Analyst.

Table-only Publish Set this option to Yes if you want a full publish to occur after a model
Post-GTP change. The Reporting job detects if the model changes are incompatible
with the publish schema or link definition and performs a full publish to
correct the incompatibilities. If this option is set to No and errors are
detected, incremental publish is disabled.

Act as System Link Set this option to Yes if you want to allow the use of this application as
Source a source for a System Link.

Display Warning Setting this option to Yes displays a warning message if you select the
Message on Zero Zero Data option when importing data; see "Steps to Prepare the Import
Data File" (p. 175).


Base Language This option determines the language in which the Contributor application is
displayed if the user has not specified a preference in Cognos Connection.
For information about translating applications see "Translating
Applications into Different Languages" (p. 183).

Scripts Creation Path This option sets the default location for script creation on the Contributor
Administration server.

Select Dimensions for Publish


You can select the dimension to use when linking to another resource, such as Cognos 8 Planning
- Analyst. Because fewer records are published, the process is quicker.
Dimensions for Publish apply only for the View publish layout, and not the Table-only publish
layout. Dimensions for Publish in the Table-only layout are selected when you select cubes, or a
default dimension is used.
If a dimension is selected, that dimension is expanded while publishing to the datastore to incorporate
one column for each dimension item. For example, if a dimension named Months is selected, twelve
columns are published, one for each month, rather than one generic Months column. This
significantly reduces the number of rows that are published and enables differing data types per
column.
By default, None is selected. This means that none of the dimensions are expanded while publishing
and only one column exists for each dimension.
If you use data dimensions for publishing, select them before you run the Go to Production process.
You can then publish the production version of the application.
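
The following sketch illustrates the difference for a cube that contains Accounts and Months
dimensions; the column and item names are examples only and do not reflect the exact column
names generated in your publish datastore.

With no dimension for publish selected (one generic value column):
e.List Item, Accounts, Months, Value
CEU, Salaries, Jan, 10000
CEU, Salaries, Feb, 10000
...

With Months selected as the dimension for publish (one column per month):
e.List Item, Accounts, Jan, Feb, ..., Dec
CEU, Salaries, 10000, 10000, ..., 10000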

Steps
1. In the appropriate application, click Development, Application Maintenance, and then
Dimensions for Publish.

2. Click a tab to select a cube.

3. Click Select Dimension and click the dimension name to use as a data dimension.

4. Click Save.

Tip: Click Preview to view the data columns that will be published, either with a data dimension
selected or without.

Set Go to Production Options


You can set options that must be set prior to creating the production application.


Option Description

Prevent Client-side This option prevents client side reconciliation.
Reconciliation Typically, this option is used to prevent reconciliation on client machines
without much RAM. However, we do not recommend it.
When a production application is created, an online job runs that updates the
data blocks sequentially using the reconciliation process, see
"Reconciliation" (p. 52). Typically this happens on the job server. However,
if an end user requests an updated model before the reconciliation process
takes place, they are warned that the model is out of date and client side
reconciliation occurs. If the Prevent Client Side Reconciliation option is
selected, a message appears and users cannot access their e.List items until
their e.List item is reconciled.
If a user is editing either offline or online and this option is selected, when Go
to Production is run, they lose their changes. They can save the data by
right-clicking and saving to file.

Copy This option enables you to specify whether the publish setting in the e.List
Development e. window should overwrite the settings in the Publish view layout e.List window.
List Item Publish If you import an e.List file with publish settings, or you edit the publish settings
Setting to in the e.List window, this option is automatically selected, and any publish
Production settings you made are carried over to the production application, overwriting
Application any settings made in the production application. Clear this option if you do
not want to overwrite the settings in the production application.
This option is only applied if changes were made to the Contributor application
since the last time Go to Production was run.

Planning Package When you set Go To Production Options, you must name the planning
Settings package. Optionally, you can include a screen tip and a description for the
package.
Application access is restricted by the e.List. By default, when you create a
package, Overwrite the package access rights at the next Go To Production
is selected and the package access rights are based on the e.List. If, in Cognos
Connection, you make changes manually to the package access rights and
want your modifications to remain after the next Go to Production, you must
clear this check box. For information about the Go to Production process,
see "The Go to Production Process" (p. 239).

Datastore Options
Datastore options enable you to view the datastore tables that are associated with cubes in the
application, and perform datastore backups.


In Datastore Maintenance, you set the Datastore Backup location and perform ad-hoc datastore
backups. You can also view information about datastore objects, and correct translation problems.

Option Description

Datastore Datastore backup backs up the entire application datastore, including the
Backup production application, immediately. This does not back up the publish datastore,
nor the CM datastore.
If you are using SQL Server, you can browse for a location. Oracle backups can
only be made to a location on the administration server, or machines with
access to the administration server. For DB2 UDB applications, we recommend
that you back up manually. Refer to your provider documentation.
Restoring the application means that any contributions entered since the backup
was made will be lost. It is best practice to make backups when there is unlikely
to be much activity and you can stop the Web application from running while
the backup is being made.

Important: Depending on your organization's datastore policy you may not be
able to perform some datastore maintenance functions. If you do not have DBA
permissions, contact your DBA.

Publish Set the location for datastore container files created during publish.
Container Files

Datastore This table displays Model Objects. In this instance, Model Objects are cubes
Names and their associated datastore tables.
Datastore Object Name lists the import datastore table names. If you click
Display row counts, the number of rows in each datastore table is displayed,
enabling you to see if, for example, there is any data in the export table without
having to look in the datastore manually. This may take a few minutes to
appear.
Import datastore tables are prefixed with im and import errors are prefixed
with ie.

Translation When a translation is created, a row is created in the Language table in the
Maintenance datastore, and some information is added to the model XML about the
translation. If the information in the model XML and the information in the
database get out of step, you cannot run Go to Production. This may happen
if you have a problem in the Administration Console, or network problems.
The Datastore language tab box lists the rows that exist in the Language table
in the application datastore. The Model language table box lists the languages
that exist in the Model XML. If the languages listed in the two boxes are
different, click Synchronize.


Tablespace The Tablespace window displays the tablespace options that were chosen during
application creation.
Tablespace Options:
● Data

● Index

● Blobs

It also displays the temporary tablespace for the current Contributor application.
This window is only visible if the application runs on Oracle or DB2 UDB.

Chapter 5: The Cognos 8 Planning - Contributor Web Application

To make a Cognos 8 Planning - Contributor application available to users, you must set up a
Cognos 8 Web site. This is described in the Cognos 8 Planning Installation and Configuration
Guide.
Users can access Contributor via the Web, or using Contributor for Excel. See the Contributor for
Microsoft Excel® Installation Guide for more information.

The Contributor Web Site


When users log on to a Contributor application, they see a graphical overview of all the areas they
are responsible for, and the status of the data.
To start using Contributor, in the tree on the left side of the window, users click an item. In the
table that appears, they click the name of the item.
The Contributor interface has two panes. A banner across the top of the browser provides access
to User Instructions and Application Help.

The Tree
The tree on the left side of the page shows the names of the areas that users are responsible for
contributing to (Contributions) and the areas that they are responsible for reviewing (Reviews).
Both appear in a hierarchical form. Depending on their rights, users may see either one of these
branches, or both. When the user clicks an item in the tree, a table with the details for the item
appears on the right side of the window.
Each item in the tree has an icon that indicates the current workflow state. For more information,
see "Workflow State Definition" (p. 297).

The Table
The table gives information such as the workflow state of the item, the current owner, the reviewer,
and when it last changed.
When users open a contribution, they can view or enter data depending on their rights and the state
of the data.
Data that users can edit has a white background. Read-only data has a pale gray background.
Data can be edited only if the icon indicates that it has a workflow state of Not started or Work
in progress.
Users can annotate data (p. 287).


Users can also reject contributions by clicking a reject button in the table.
If Contributor for Excel is installed, users can open their e.List items in Excel from the Contributor
Web application.

Set Web Site Language


You can set the language of the Contributor web site by assigning a translation to a user, group,
or role, or in user-defined properties in Cognos Configuration. For more information on creating
and assigning a translation, see "Translating Applications into Different Languages" (p. 183). You
can also set the language preference in Cognos Connection (see the Cognos 8 Administration and
Security Guide) or select the desired language during installation (see the Installation and
Configuration Guide).
If multiple languages have been assigned to a user, the language precedence for the Contributor web
site is as follows:
● translation assigned to a user, group, or role

● translation assigned by user specified preference for Product Language defined in Cognos
Configuration

● language selected in user preferences in Cognos Connection

Contributor for Microsoft Excel


Client users use Contributor for Excel to view and edit Contributor data using Excel, giving them
the benefit of Excel formatting and Contributor linking functionality. They can do the following:
● Add bar charts and other graphs created from Contributor data.

● Create dynamic calculations from Contributor data.

● Create a calculation in Excel and link it to a Contributor cell.


As you update this calculation, you can choose whether to update the value in the Contributor
cell.

● Reuse custom calculations and formatting by saving the workbook as a template.

● Resize the worksheet so you can see more or less data on a page.

● Save data as an Excel workbook and work locally without a connection to the network.

For more information, see the Contributor for Excel documentation.


Accessing Contributor Applications


You access Contributor applications from the Cognos Connection portal.

Steps
1. Type the following URL in the address bar of the browser:
http://server_name/cognos8

2. Click the Contributor link.


If users have access to just one application, they are taken directly to that application. If they
have access to more than one, they are presented with a list to choose from.

3. You can return to the Cognos Connection portal by clicking the Cognos Connection link on
the top right hand side of the page.

Tip: You can copy the universal resource location of a Contributor cell to the clipboard ready
to be used by other applications. This enables you to link directly to the cell from another
application. In order for the link to work, the Contributor application must be available on
the computer and the user must have appropriate rights.

Configure Contributor Web Client Security Settings


Contributor Web Client uses signed and scripted ActiveX controls. Ensure that each user’s security
settings for the Local Intranet zone are set to medium to accept these controls.

Steps
1. In Internet Explorer, select Tools, Internet Options, Security, Custom Level.

2. Under Reset custom settings, select Medium from the list, and then click Reset.

3. Click OK.

The Contributor Administration Console uses Microsoft Internet Explorer security settings to
communicate with the Web server. This may cause users to be prompted for multiple logons. To
prevent multiple logon prompts, ensure that each user’s browser security settings are set to one of
the automatic logon options.

Steps
1. In Internet Explorer, select Tools, Internet Options, Security, Local intranet, Custom Level.

2. Under User Authentication, select Automatic logon with current username and password, or
Automatic logon only in Intranet zone.

3. Click OK.


How to Link to Earlier Versions of Contributor Applications


If you need to link to earlier versions of Contributor Applications, the Web sites for these applications
must be configured as described in the same version of the Contributor Administration Guide.

Independent Web Applications


You create an independent Web application if you want to customize an individual Web site for
an application. For example, you may want to use your own images.

Steps
1. Create a new directory and copy the webcontent files to a sub-directory of the new directory,
for example:
\\server\customweb\webcontent\

2. Set up the virtual directory alias to point to the new parent directory.
The virtual directories that you need to create are:

Alias Location Permission

cognos8 c8_location/webcontent Read

cognos8/cgi-bin c8_location/cgi-bin Execute

For more information about configuring the Web Server, see the Cognos 8 Planning Installation
and Configuration Guide.

Working Offline
Working offline means that users can continue to work in situations when they are not connected
to the network.
Users can work offline only if the Prevent off-line working option is cleared (p. 72), and if they are
a user, or belong to the group or role associated with offline users. Offline working is possible only
when cache on the client computer can be used. When an e.List item is taken offline, the data is
stored in the offline store on the user's computer, see (p. 88). When the user wants to do some
work, they open up the offline application and work and save data as normal. The application
appears to work just like the grid in the Web application, except there is no submit button.
Working offline should not be the standard working practice because reviewers cannot view the
current data and planners cannot receive updates when new data is imported. Ideally, users should
bring offline data online as soon as possible to keep the data changes visible.
A user can work offline, save their data, and end their session. If another user logs on to the same
computer as a different user, the new user cannot see the first user’s data.


Only a single e.List item or a standard multi-e.List item view can be worked with offline.
Working offline does not give access to everything in the cache and does not provide private e.List
item save.
If there are multiple users with edit rights assigned to the e.List item, another user with appropriate
rights can edit the e.List item. The user who checked out the e.List item cannot check in the changes.
If the administrator makes more than one set of changes to the application before the user attempts
to check in the edited e.List item, the user cannot check in the changes. They can save the changes
to a .csv file. When running the Go to Production process, the administrator is warned which users
will be terminated if they proceed. If the administrator made just one set of changes, the user can
check in the changes successfully.
When you bring offline data online, numbers that were changed and saved in the offline application
show as changed in the online grid.
Annotations made in the offline application remain editable and can be deleted after being brought
online, until the e.List item is saved.
When you take an e.List item offline, you cannot work online on other e.List items until you bring
the offline data online.

Example
A user is working offline and changes a cell. The administrator makes the same cell read only and
runs Go to Production. When the user comes back online, they see that the cell is now read only
and are unable to make changes. However, the cell will contain the value they entered in the offline
data.

Steps to Work Offline


When you are not connected to the network, you can work offline on a specific e.List item or a
standard multi-e.List item.

Steps
1. In the Contributor application, open the e.List item.

2. Click the Set Off-Line button.


This closes down the browser and opens up the offline application.
Other users can see that the e.List item was taken offline for editing.

Tip: From the front window of the Contributor application, in the Last Changed column, click
the down arrow for the appropriate cell. Status information appears on the lower half of the
window.

3. To go online, click the Set On-Line button.


The view that you took offline is opened with the current offline data.


The Offline Store


When an e.List item is taken offline, the data is stored on the user's computer in files called offlinestore.at
(allocation table) and offlinestore.store (object store).
By default, the client computer cache is cleared after twelve days of inactivity. You can change this
using Cognos Configuration.

Chapter 6: Managing User Access to Applications

You manage access to Contributor applications through the e.List and Rights windows.
The e.List defines the hierarchical structure of an application. It is used to determine who can enter
data, who can submit data, who can read data and so on.
Users are secured by Cognos 8 security (p. 27).
Rights are defined by assigning users, roles, and groups to e.List items, and then by giving each
e.List item and user, group, or role pairing an access level of Read, Write, Submit, or Review.

The e.List
The e.List is used to determine who can enter data, who can only read data, who data is hidden
from, and so on.
An e.List is a dimension with a hierarchical structure that typically reflects the structure of the
organization. An e.List contains items such as departments in a company, for example, Sales,
Marketing, and Development. Each department may be divided into several teams, for example
Sales may be divided into an Internal Sales and an External Sales team.
Planners are assigned to items in the lowest level in the hierarchy. Reviewers are assigned to items
in the parent levels.
All Departments
    Operations
        Customer Service
        Production & Distribution
        Procurement
        Sales
    Corporate
        Human Resources
        Finance
        Marketing
        IS&T

In the example shown, All Departments, Operations, and Corporate are reviewer e.List items. Any
users assigned to these e.List items with rights greater than view are reviewers. Customer Service,
Production & Distribution, Procurement, Sales, Human Resources, Finance, Marketing, and IS&T
are contribution e.List items, and any users assigned to these items with rights higher than view are
planners.
An e.List is created in two steps:
● The dimension that represents the e.List is created in Analyst.

● The file containing e.List data is imported into the Administration Console as a text file (p. 95).

The following example shows a simple e.List created in Analyst.


All Departments is the parent of Operations and Corporate. The Analyst calculation for All
Departments is
+Operations+Corporate
Operations is the parent of Customer Service and Production and Distribution. The calculation for
Operations is
+{Customer Service}+{Production and Distribution}
The D-List used in Analyst to represent the e.List does not need to reflect the full hierarchy of the
e.List, but it must contain at least a parent and a child, and we recommend that it contains at least
one review item and two children so that weighted averages and priorities can be tested.
There are some circumstances when it helps to use the full e.List in the Analyst model, bearing in
mind that you still must import the e.List into the Administration Console as a text file or Excel
Worksheet.
For example:
● If you need to bring Contributor data back into Analyst for more analysis.

● When data already present in Analyst is needed in Contributor.

● If you are using Analyst simply as a tool for staging and tidying up external data.
If you export data from an Analyst D-Cube with the e.List into a Contributor cube, ensure that
the e.List item names in Analyst exactly match the Contributor e.List id.

The e.List should have a valid hierarchy. It is best practice not to have too many levels in the
hierarchy. Avoid having more than 20 child items assigned to a parent. This improves performance
and aggregation speed.


Multiple Owners of e.List Items


You can assign more than one user, group, or role to an e.List item through the rights window. For
example, do this when
● multiple e.List item owners share responsibility for a contribution, that is, they are job sharing

● an e.List item owner wants to delegate responsibility for completing a submission

● submission needs to be completed by a number of users sequentially

● the usual e.List item owner is absent, so a substitute user needs to make the submission

You may want to consider creating a group or role to contain multiple users, rather than assigning
multiple users, groups or roles to an e.List item. This allows users to be changed in the authentication
provider without you having to run the Go to Production process for these changes to be reflected
in the Contributor application. For more information about users, groups, and roles, see "Users,
Groups, and Roles" (p. 29).
There can be multiple owners of review e.List items as well as contribution e.List items.
Any users with edit rights to an e.List item may edit the contribution e.List item when
● The e.List item is in a Not started state, no changes have been made to it

● The e.List item is in a Work in progress state, changes have been made, but they either have
not been submitted to the reviewer, or have been submitted but have been rejected back to the
planner

● The e.List item has been taken offline for editing by another user, group, or role

Anyone with appropriate rights can take control of an e.List item from another user who may be
editing it. If they try to do this, they will receive a warning.

Ownership
An owner of an e.List item is a user, group, or role that has rights greater than view. These rights
may be directly assigned, or may be inherited.
If more than one user, group, or role is assigned to an e.List item with rights greater than view, the
first one in the import file is the initial owner of the e.List item in the Contributor application. For
more information, see "Reordering Rights" (p. 108).

Unowned Items
If an e.List item has not been opened for edit, it is unowned. After it is opened for edit, the user,
group, or role that opened it is the owner.

Current Owner
The current owner is shown in the Contributor application and is the user, group, or role who is
editing or last opened an e.List item for edit. However, after they have opened the e.List item, they
can then choose to edit it, depending on the settings.
Someone can become the current owner by taking ownership of an e.List from another user.


Note: After subsequent Go to Productions, the current owner is the last user, group, or role to have
edited the e.List item. The current owner is not reset.

Import e.List and Rights


You can automate the import of the e.List and rights. For more information, see "Import e.List
and Rights" (p. 206).
Before importing, you must create import files in the correct formats. Note that the default files
supplied with the samples are examples only. For more information, see "e.List File Formats" (p. 95),
and "Rights File Formats" (p. 106).
If you have an e.List with more than 3000 items, they will not be displayed in a hierarchical format,
but in a flat format. Typically, you get the best results if you show fewer than 1000 e.List items in
a hierarchical format. You can set the Maximum e.List items to display as hierarchy under System
Settings.

Steps
1. Click Development, e.List and Rights and then click either e.List, or Rights.

2. Click the Import button.


If no users, groups, or roles have been imported, you cannot import rights.

3. In the appropriate tab, type the name of the source file, or browse for it.

4. Click Import.

5. If your file contains a header row, click the First row contains column headers box. If you
browse for files, the header row is automatically detected.

6. Click Delete undefined items, to delete existing e.List items, or rights that are not included in
the file that is to be imported. For more information, see "Delete Undefined Items
Option" (p. 94).

7. Click Trim leading and trailing whitespace to remove extra spaces at the beginning and end of
text strings on import.

8. Click Quoted strings to remove quotation marks on import.

9. Click Escape Character, and enter a character, if required.

10. Click the File Type.


If you are importing an Excel file, enter the name of the worksheet (within the Excel workbook)
in the box next to the Excel Worksheet option.
You can have e.Lists, and rights in one Excel file, but on separate worksheets. Ensure you name
each worksheet separately in Excel.

11. Click OK and then Save.


File Import Failure Message


If the import of your e.List and rights fails, for example, the structure of the e.List is incorrect, or
the import process itself fails, an error message appears. When errors are reported, some rows of
the file are not imported. You can view the rows that have errors. If the problem is caused by the
structure or contents of the file, correct the file and import again. If the failure was caused by an
application error, try to manually add an item. If this works, try re-importing the files again.
If the CamObjectName (name of user, group, or role) is not found during the import of the rights
file, a log file is created with the message <ERROR - INVALID CAM OBJECT> on the row of the
CamObjectName that is not found. Amend the import file to ensure that the correct CamObjectName
is referenced, or add the missing user to the namespace. The same INVALID CAM OBJECT error
is also reported if you are not logged into the namespace specified in the file. To ensure success,
log into the namespace using File, Logon As.
If you still have a failure, contact Technical Support. You may be asked to supply a log file; this
can be opened through the Tools menu by selecting Show local log file. The file is named
PlanningErrorLog.csv.
The following table displays some of the error and warning messages that you may receive when
importing the e.List and rights.

Error or Warning Message Description

WARNING Circular reference at row x column This occurs if e.List item a is the
y EListItemName parent of e.List item b and e.List item
b is the parent of e.List item a.

WARNING Duplicate e-mail address at row x Duplicate e-mail addresses do not
column y emailaddress cause technical problems.

ERROR Duplicate item at row x column y Duplicate items are not allowed. This
Username row will not be imported.

WARNING Duplicate item caption at row x Duplicate captions are not
column y UserCaption recommended.

ERROR Duplicate user logon at row x column This row is not imported.
y Userlogon

ERROR Empty item name at row x column y e.List item names are mandatory in
the e.List import file and the rights
import file.

ERROR Import table empty No rows are imported. Note that if
you import a file with just column
headings, it overwrites any existing
items and leaves them blank.


ERROR Invalid cell at row x column y This occurs if an expected parameter
parameter is incorrect, misspelled or missing.

ERROR Invalid characters (ASCII control See "Illegal Characters" (p. 387).
characters not permitted) at row x
column y Illegal character z

WARNING Invalid parent name at row x column This message appears if an e.List item
y does not have a valid parent. The e.
List item is still imported, but it does
not have a parent (and so is not part
of the hierarchy).

ERROR Item name too long (maximum 100 Item names and captions have a limit
characters) at row x column y of 100 characters.

ERROR Mandatory columns missing A mandatory column is missing.


columnheadername

WARNING Review depth greater than view depth Review depth cannot be greater than
at row x column y view depth. View depth is increased
to review depth.

Search for Items


You can search for items in the e.List and Rights windows. You select a column to search in, then
type the first few characters that you want to find. You cannot do a wild card search.

For example, if you want to find 001 New York, typing York will not find this string. Instead, type
001 New.

Steps
1. Select the column to search in, for example, Item Display Name.

2. Type the character string you want to search for.

3. Click Find.

Delete Undefined Items Option


This option deletes any existing items that are not included in the import files. Selecting Delete undefined
items when importing e.Lists and rights only has an effect when the e.List and rights tables are
already populated.
If you import an updated e.List from which e.List items have been removed, any children of a removed
item become top-level items (they are not removed).


If you delete e.List items from the e.List window, any children of these items are also removed.
You can delete the entire e.List and rights by importing files containing only headings and selecting
the Delete undefined items check box. However, if you import a file that is completely blank (no
headings), you receive warnings that compulsory columns are missing, and the e.List or rights
remain unchanged.

e.List File Formats


The e.List can be in either Excel worksheet (xls) format or in text file format, and you can specify
the delimiter.
The use of column headings is optional. If you use column headings, the columns can be in any
order. The column heading names must be the same as the column headings listed below. If no
column headings are used, the column order shown below must be used.

Column heading in import file    Display Name         Status

EListItemName (p. 96)            Item Id              Compulsory

EListItemParentName (p. 96)      Not applicable       Compulsory

EListItemCaption (p. 96)         Item Display Name    Compulsory, but may be left blank

EListItemOrder (p. 96)           Not applicable       Optional

EListItemViewDepth (p. 96)       View Depth           Optional

EListItemReviewDepth (p. 97)     Review Depth         Optional

EListItemIsPublished (p. 97)     Publish              Optional

Example e.List File


The following table provides an example of an e.List file.

EListItemName  EListItemParentName  EListItemCaption   EListItemOrder  EListItemViewDepth  EListItemReviewDepth  EListItemIsPublished

ALL            ALL                  All Sales Regions  1               -1                  -1                    YES
AMX            ALL                  Americas           2               1                   1                     YES
EAX            ALL                  Asia Pacific       3               1                   1                     YES
EUR            ALL                  Europe             4               1                   1                     YES
CEU            EUR                  Central Europe     5               0                   0                     YES
NEU            EUR                  Northern Europe    6               0                   0                     YES
SEU            EUR                  Southern Europe    7               0                   0                     YES
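
If you maintain the e.List outside Excel, the same rows can be written as a delimited text file. The following is a minimal sketch, assuming a tab-delimited import file with the column headings described in this section; the file name and the two sample rows (taken from the example above) are illustrative only, and the delimiter should match whatever you select in the import dialog.

  import csv

  rows = [
      ["EListItemName", "EListItemParentName", "EListItemCaption",
       "EListItemOrder", "EListItemViewDepth", "EListItemReviewDepth",
       "EListItemIsPublished"],
      ["ALL", "ALL", "All Sales Regions", "1", "-1", "-1", "YES"],
      ["AMX", "ALL", "Americas", "2", "1", "1", "YES"],
  ]

  # Write a tab-delimited e.List import file. For the depth columns, -1 means
  # all levels and 0 means no levels.
  with open("elist_import.txt", "w", newline="", encoding="utf-8") as out:
      csv.writer(out, delimiter="\t").writerows(rows)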

EListItemName
A unique identifier for each e.List item. This is an editable box and is case sensitive.
The following constraints apply:
● Must not be empty.

● Must contain no more than 100 characters.

● Must not contain control characters, that is, below ASCII code 32, see "Illegal
Characters" (p. 387).

● Must be unique. Although the name is case sensitive, uniqueness is not: two names that differ
only in case are treated as duplicates.

This name is used when publishing data.

EListItemParentName
This column identifies which e.List item is the parent by referring to the e.List item name. The top
reviewer item refers to its own e.List item name as the parent name. This is case sensitive.

EListItemCaption
The name of the e.List item as it appears in the Contributor application.
The following constraints apply:
● May be empty (but will give a warning).

● Must not be more than 100 characters long (will be truncated, with a warning).

● Must not contain control characters (below ASCII code 32).

● May contain duplicates (will give a warning).

EListItemOrder
This is the order in which the e.List items appear in the application. This is optional; the default is
the order in the file.

EListItemViewDepth
The View depth column indicates how far down a hierarchy a user can view the submissions of
planners and reviewers.


The following values may be used:


● -1 indicates all descendant hierarchy levels.

● 0 indicates no hierarchy levels.

● 1...n where n is a whole number.

Defaults to 1.

EListItemReviewDepth
The Review depth column indicates how far down a hierarchy a reviewer can reject, annotate and
edit (if they have appropriate rights) contributions and reject and annotate submissions of reviewers.
The following values may be used:
● -1 indicates all descendant hierarchy levels.

● 0 indicates no hierarchy levels.

● 1...n where n is a whole number.

Defaults to 1.

EListItemIsPublished
This indicates whether an e.List Item will be published. Possible values are Yes, Y, No, N (not case
sensitive).
Defaults to No.

Export the e.List and Rights


You can export the e.List and rights as tab separated files, with or without column headings. This
enables you to update them in an external system such as Excel.

Steps
1. In the Administration Console tree, click the application name, Development, e.List and Rights
and then either e.List, or Rights.

2. Click Export.

3. Enter or browse for the filename and location under e.List, or Rights, for example: c:\temp\
export_rights.txt.

4. Select the Include column headings box if you want the column headings exported.

5. Select the Export box for each file you want to export.

6. Click OK.
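
After exporting, the tab-separated file can be opened in Excel or read programmatically before you modify and re-import it. The following is a minimal sketch, assuming the rights were exported with column headings to the example path shown in step 3 and that the headings match those described in "Rights File Formats" (p. 106).

  import csv

  # Read the exported rights into dictionaries keyed by the column headings,
  # ready to be reviewed or edited before re-importing.
  with open(r"c:\temp\export_rights.txt", newline="", encoding="utf-8") as src:
      rights = list(csv.DictReader(src, delimiter="\t"))

  for row in rights:
      print(row["EListItemName"], row["CamObjectName"], row["EListItemUserRights"])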


Managing the e.List


The e.List table is populated with e.List data when you import an e.List.
Changes to the e.List are applied to the production application after you run the Go to Production
process (p. 239).
To manage the e.List, you can:
● Insert new e.List items (p. 98).

● Manually reorder e.List items in the hierarchy (p. 99).

● Define whether data will be published to a datastore or not (p. 98).

● Delete an e.List item (p. 100).

● Preview an e.List item (p. 100).

● Find items in an e.List (p. 94).

Insert New e.List items


When you add new e.List items, we recommend that you modify the file used to import the e.List
items and import the changed file.
When this is not possible, you can manually insert new e.List items as described in the following
steps.

Steps
1. In the application tree, click Development, e.List and Rights and then click e.List.

2. Expand the parent e.List item and click in the e.List table where you want to insert the new
item. The new item appears above the item you clicked.

3. Click Insert and enter the information as follows:

Item Display Name
    The name of the e.List item (typically a business location) as it is displayed to the user in the
    Web browser.

Item Id
    The name of the e.List item when imported from an external resource such as a general ledger
    system or datastore. This may be a code or a name independent of the e.List item name.
    You can edit the Item Id, but it must be unique within the application. This name is used when
    publishing data.

Publish
    Indicates whether this e.List item is to be published to a datastore or not. This only applies to
    the View Layout, not the Table Layout. For more information, see "Selecting e.List Items to Be
    Published" (p. 257). Click either Yes or No.

View Depth
    Indicates how far down a hierarchy a user can view submissions from planners and reviewers.
    To assign view depth, click the View Depth cell of the appropriate e.List item, and then select
    or enter a number. To allow the reviewer to view all levels below, click All. To prevent the
    reviewer from viewing the levels below, click None.
    Note: When importing the e.List, All and None are represented by -1 and 0 respectively.
    For more information, see "View Depth Example" (p. 102).

Review Depth
    Indicates how far down a hierarchy a reviewer can reject (or edit if allowed) submissions from
    planners and reviewers. Note that this setting is also influenced by the user's rights and whether
    Reviewer Edit is allowed in Application Options (p. 72).
    To assign review depth, click the Review Depth cell of the appropriate e.List item, and then
    select the required review depth. To allow the reviewer to review all levels below, click All. To
    prevent the reviewer from reviewing the levels below, click None.
    Note: When importing the e.List, All and None are represented by -1 and 0 respectively.
    For more information, see "Review Depth Example" (p. 102).

4. To apply changes, click Save.

Manually Reordering e.List Items in the Hierarchy


You can manually reorder e.List items.
When you change the order of the e.List, making a planner a reviewer for example, a warning is
issued and any data that was entered with the user as a planner is lost when you run Go to
Production.


If a contribution e.List item becomes a review e.List item, the rights of a user assigned to that e.List
item are changed to the equivalent review rights as shown in the following table.

Contribution e.List item Review e.List item

View View

Edit Review

Submit Submit

Steps
1. Click the e.List item.

2. Use arrows to move it to the required position.

The up and down arrows change the order of items, and the left and right arrows demote and
promote items in the e.List.

Deleting an e.List item


You can delete e.List items manually.
If you delete an e.List item, any data associated with this item is deleted when you run Go to
Production. When you save changes, a dialog box is displayed. This warns you of any data you
could lose, and you have the option to continue, and so lose the data, or cancel the deletion. If you
delete a review e.List item, all the items that make up the review e.List item are also deleted.
You cannot select and delete multiple e.List items at the same level, you can only delete one branch
of the e.List at a time.

Step
● In the e.List screen, click the e.List item and then click Delete and Save.

Previewing an e.List item


You can show individual e.List items in a pop-up box as they would be seen in the Web browser,
so you can see orientation, and the effects of selection.

Step
● In the e.List window, click an item and then click Preview.


The Effect of Changes to the e.List on Reconciliation


If contribution or review e.List items have been added, deleted or moved, reconciliation takes place
after running the Go to Production process. Reconciliation is only required for the affected e.List
items. For more information, see "Reconciliation" (p. 52).
If e.List items have been added, new contribution or review e.List items and existing parents of new
e.List items are reconciled.
If e.List items have been deleted, parents of the deleted e.List items are reconciled. Note that the
actual e.List items are removed during the Go to Production process.
If e.List items have been moved, ancestor review e.List items of any moved e.List item are reconciled
(re-aggregated). This covers both review e.List items which were ancestors of the moved e.List item
in the old production application and review e.List items which are now ancestors of the moved e.
List item in the new production application. Note that if e.List items are simply reordered within
a hierarchy branch, no reconciliation is needed.
Moving an e.List item may also impact the pattern of No Data cells through saved selections and
access tables. Renaming an e.List item does not require reconciliation, but may impact the pattern
of No Data cells if saved selections based on names are used. If the pattern of No Data cells is
affected, reconciliation of all e.List items is required (data block transformation of contribution e.
List items and re-aggregation of review e.List items). Adding or deleting e.List items does not affect
the pattern of No Data cells.

Rights by e.List Item


Rights by e.List item displays the users, groups, and roles that are assigned to the selected e.List
item, and their rights. It can be displayed by selecting items in the e.List window, or from the Rights
window and clicking Rights Summary.
A table with the following information is displayed:

e.List Item Display Name
    The e.List item display name.

User, group, or role
    The user, group, or role assigned to the e.List item.

Rights
    The level of rights that the user, group, or role has to the e.List item. See "Rights" (p. 103) for
    more information.

Inherit from
    If the rights have been directly assigned to the user, group, or role, this cell will be blank. If the
    rights have been inherited, this indicates the name of the e.List item the rights have been
    inherited from.

You can save the information on this screen to a text file by clicking Save to file and entering a file
name and location.

You can also print the screen by clicking Print.


Review Depth Example


The Review Depth column indicates how far down a hierarchy a reviewer can reject, edit or submit
submissions from planners and reviewers, depending on their rights (p. 103). The review depth must
be less than, or equal to the view depth.
For example, you are the owner of the e.List item named Country A, which contains the regions
R1, R2, and R3. Each region has two cost centers, C1 and C2, and within each cost center are
three divisions: D1, D2, and D3.

● If the review depth for Country A is 1, then the owner can only review the regions (the children
of your e.List item).

● If the review depth for Country A is 2, then the owner can only review the regions and the cost
centers.

● If the review depth for Country A is 3, then the owner can review the regions, cost centers, and
divisions.

● If the review depth for Country A is All (-1 in the import file), then the owner can review all
preceding e.List items. You can only set the review depth at All (-1) if the view depth is also
set at All (-1).

View Depth Example


The View Depth column in the e.List section indicates how far down a hierarchy a user can view
planners' and reviewers' submissions.
For example, you are the owner of the e.List item named Country A, which contains the regions
R1, R2, and R3. In each region are two cost centers, C1 and C2, and within each cost center are
three divisions: D1, D2, and D3.

● If the view depth for Country A is 1, then the owner can only view the regions (the children of
your e.List item).

● If the view depth for Country A is 2, then the owner can only view the regions and the cost
centers.


● If the view depth for Country A is 3, then the owner can view the regions, cost centers, and
divisions.

● If the view depth for Country A is All (-1 in the import file), then the owner can view all
preceding e.List items.

Rights by User
Rights by User displays the level of rights for a user to an e.List item. It can be displayed by selecting
items from the Rights screen and clicking Rights Summary.
A table with the following information is displayed:

User, group, or role
    The name of the user, group, or role assigned to the e.List item.

e.List item Display Name
    The e.List item display name.

Rights
    The level of rights that a user has to the e.List item. For more information, see "Rights" (p. 103).

Inherit from
    If the rights have been directly assigned, this cell will be blank. If the rights have been inherited,
    this indicates the name of the e.List item the rights have been inherited from.

You can save the information to a text file by clicking Save to file and entering a file name and
location.

Rights
Rights for planners are determined by the setting in the rights screen. By assigning rights, you can
configure user roles in the Administration Console, determining whether users can view, edit, review,
and submit.
Typically, you import a rights file. But you can also manually insert rights, and modify or delete
existing rights. If you want to make changes to the rights file, we recommend that you export the
file to ensure you have correct information, modify this file using an external tool such as Excel
and then import the file again.
You can assign more than one user, group, or role to an e.List item. For more information, see
"Multiple Owners of e.List Items" (p. 91).
Rights for reviewers are determined by the following settings:
● The rights setting.


● The view and review depth setting. Review depth gives the right to reject (or edit if reviewer
edit is on) to a specified depth. This is set in the e.List screen.

● The Allow Reviewer Edit option in the Application Options screen (p. 72).

● If a reviewer has two different levels of rights for the same e.List item, the higher level of rights
applies. Rights may be assigned directly or inherited, as in the following example:

If a reviewer is assigned submit rights to a parent e.List item that has a review depth of 1, and
reviewer edit is allowed, the reviewer has the right to view, edit, reject, and submit the child e.List
item. These are inherited rights.
The reviewer is also directly assigned to the child item with view rights. These are declared, or
directly assigned, rights.
You can only directly assign one set of rights to a user for a specific e.List item. If you insert a
duplicate record you receive a warning, and the rights that appear lower down in the rights table
are deleted.

Tip: If you specify more than one reviewer for an e.List item, in the workflow page for the
Contributor application, an email link named email all is displayed. If you specify one reviewer,
the name chosen in the User, Group, or Role column is displayed, and you should ensure that a
descriptive name for the user, group or role is chosen.

Submit Rights
An e.List item can have no user with submit rights (that is, no user is assigned or has resolved rights
through Reviewer depth to the e.List item). If this is the case, contributions are not submitted and
the item and its parents in the hierarchy cannot be locked.
We recommend that the administrator reviews the rights screen to ensure every e.List item has at
least one user with submit rights (who may be a reviewer with appropriate rights).

Inherited Rights
If a reviewer is assigned submit rights to a parent e.List item that has a review depth of 1, and
reviewer edit is allowed, the reviewer has the right to view, edit (contribution e.List items only),
reject, and submit the child e.List item. These are inherited rights.
The following tables explain what rights mean when they are assigned to planners and to reviewers.
They also explain how the rights can be affected by different settings.

Actions Allowed for Review e.List Items


The following actions are allowed for Review e.List items.


Submit
    With reviewer edit on (p. 72), reviewers can
    ● edit, submit, and reject contribution e.List items if the e.List item they are assigned to has
      sufficient review depth
    ● submit or reject child review e.List items if the e.List item they are assigned to has sufficient
      review depth
    ● submit their own review e.List item
    ● annotate their own review e.List item and children to review depth

    With reviewer edit off, reviewers can
    ● submit their own review e.List item
    ● reject children to review depth
    ● annotate their own review e.List item and children to review depth

    Note: When reviewer edit is off, reviewers cannot edit contribution items.

Review
    With reviewer edit on, reviewers can
    ● edit contribution items if the e.List item has sufficient review depth
    ● submit and reject child e.List items if the e.List item they are assigned to has sufficient
      review depth
    ● annotate their own review e.List item and children to review depth

    With reviewer edit off, the reviewer cannot edit or submit any e.List items, but can
    ● reject child e.List items if the e.List item they are assigned to has sufficient review depth
    ● annotate their own review e.List item and children to review depth

View
    View assigned e.List items and children to view depth. Cannot annotate, reject, edit, or submit.

Actions Allowed for Contribution e.List items


The following actions are allowed for Contribution e.List items.

Submit
    View, edit and save, submit, and annotate assigned e.List items.

Edit
    View and edit assigned contribution e.List items. Can annotate. Cannot submit.

View
    View assigned e.List items. Cannot annotate.

Rights File Formats


The rights can be in Excel Worksheet (xls) and text file format.
The use of column headings is optional. If you use column headings, the columns can be in any
order. The column heading names must be the same as the column headings listed below and are
case sensitive. If no column headings are used, the order shown below must be used.

EListItemName (displayed as Item ID)
    Identifies the e.List item that you are setting rights for. This must match an e.List item id in
    the e.List import file and is case sensitive.

CamObjectName (displayed as User, Group, Role)
    The display name of the user, group, or role as it appears in Cognos 8.

EListItemUserRights (displayed as Rights)
    See "EListItemUserRights" (p. 106).

CamNamespaceName (displayed as Namespace)
    The display name of the security namespace as it appears in Cognos 8.

CamObjectType (displayed as CAM Object Type)
    Either User, Group, or Role.

EListItemUserRights
The following rights can be used. These are not case sensitive.

View
    View rights only. Applies to contribution and review e.List items.

Edit
    View and edit, but cannot submit. Applies to contribution e.List items.

Review
    View; submit contribution e.List items (not review e.List items) if they have sufficient reviewer
    depth rights; reject and save to edit depth. Applies to reviewer e.List items.

Submit
    View, save changes, submit. Applies to contribution and review e.List items.


If the import file specifies Review for a contribution e.List item, or Edit for a review e.List item, the
Administration Console changes the setting on import so that Review becomes Edit for a planner
and Edit becomes Review for a reviewer.
For more detail, see "Rights by User" (p. 103).
You can assign more than one user to an e.List item. If more than one user is assigned to an e.List
item with rights higher than View, the user that is first in the import file is the initial owner of the
e.List item in the Contributor Web application. When you insert rights manually, they are appended
to the bottom of the Rights table and it is not possible to reorder the rights at e.List level. The only
way to reorder the rights at e.List level is to export the file, delete the existing rights, modify the
import file and import the new file.

Note: If you have imported a rights file containing three columns, and you import a new rights file
containing only the first two columns, any new rights added will take the default value of submit,
and any rights that existed in both the old and the new files will remain unchanged. If the new
rights file contains three columns, any rights existing in both the old and the new files are overwritten
with the new rights.
For more information, see "Multiple Owners of e.List Items" (p. 91) and "Rights by e.List
Item" (p. 101).

Example Rights file

EListItemName  CamObjectName               EListItemUserRights  CamNamespaceName  CamObjectType

Finance        Mary                        SUBMIT               ntlm              User
Production     Planning Contributor Users  EDIT                 Cognos            Role
Marketing      Administrator               VIEW                 ntlm              User
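
The same example can be produced as a tab-delimited text file rather than an Excel worksheet. The following is a minimal sketch; the file name is illustrative, and the rows simply reproduce the example table above using the five column headings listed earlier.

  import csv

  rows = [
      ["EListItemName", "CamObjectName", "EListItemUserRights",
       "CamNamespaceName", "CamObjectType"],
      ["Finance", "Mary", "SUBMIT", "ntlm", "User"],
      ["Production", "Planning Contributor Users", "EDIT", "Cognos", "Role"],
      ["Marketing", "Administrator", "VIEW", "ntlm", "User"],
  ]

  # Write a tab-delimited rights import file matching the example table.
  with open("rights_import.txt", "w", newline="", encoding="utf-8") as out:
      csv.writer(out, delimiter="\t").writerows(rows)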

Modify Rights Manually


In the Rights window, you can specify whether users can view, save, submit and so on.
Typically, you assign rights to users by importing a rights file (p. 92), but it is also possible to
manually insert rights, and to modify or delete existing rights. You can also export a rights file that
can be modified and imported again if necessary. For a detailed description of each level of rights,
see "Rights" (p. 103).
The rights a user may have are also affected by the view and review depth settings (e.List window)
and the Reviewer edit setting (Application Options window) (p. 72). A user may have directly
assigned rights, or inherited rights. You can see a summary of the rights per user and per e.List item
by selecting a line and then clicking the Rights Summary (p. 103).
You can assign more than one user to an e.List item, see "Multiple Owners of e.List Items" (p. 91)
for more information.

Steps
1. In the tree for the application, click Development, e.List and Rights and then Rights.

2. Click Insert.


A blank line is inserted into the Rights table.

3. Select the e.List item by clicking in the Item Display Name cell.

4. Select the User, group, or role.


● Click the name you want to add.
If the name is not displayed, you can add a role or group to the filter.

Tip: You can choose to display the complete list of users, groups, or roles by selecting Show
all descendants. Depending on the number of items in the list, this may take a while to
display. If this box is not selected, only the direct members of the group or role are shown.

● Click the browse button.

● Click the appropriate namespace.

● Select the group, or role. Any users who are members of the group or role that you select
will be added to the list.

● Click the green arrow button and click OK.


You can now select the user, group, or role that has rights to the e.List item.

5. Select the rights by clicking the Rights cell. If the e.List item is a Review item, you can choose
View, Review, or Submit. If the e.List item is a contribution item, the rights you can select are
View, Edit, or Submit. For more information, see "Actions Allowed for Contribution e.List
items" (p. 105).

The Item Id is the external identifier for the e.List item and is the key for importing.

6. To order the rights by hierarchy, click Order by hierarchy.


If you assign multiple users to an e.List item, the user that is highest in the Rights table is the
current owner (p. 91) when the Go to Production process is run.
You can search for rights in this window. For more information, see "Search for Items" (p. 94).

Reordering Rights
If more than one user, group, or role is assigned to an e.List item with rights higher than View, the
user, group, or role that is first in the import file is the initial owner of the e.List item in the
Contributor application. When you insert rights manually, they are appended to the bottom of the
Rights table and it is not possible to reorder the rights at e.List level. The only way to reorder the
rights at e.List level is to export the file, modify the import file and import the new file.

Viewing Rights
You can view rights listed by e.List item and rights listed by user by selecting one or more lines in
the Rights table and clicking Rights Summary.
You can print and save to file the rights by e.List (p. 101) and rights by user (p. 103).


Validating Users, Groups and Roles in the Application Model and Database
You can validate users, groups, and roles that are used by the application model and database
against the Cognos 8 namespace.
The validate function checks name information used by the Contributor Administration Console
against the Cognos 8 namespace. If any names have been changed or removed, you can update the
information used by the Contributor Administration Console to match the namespace.
If there are only changed items and no invalid items, only the database table is updated, and no
cut-down models job runs during Go to Production.
If there are both invalid items and changes, the model is also updated and a cut-down models job
runs during the next Go to Production.

Steps
1. In the tree for the application, click Development, e.List and Rights and then Rights.

2. Click Validate.

3. If there are invalid or out-of-date users, groups, or roles, and you want to update them, click
Update or click Cancel.

Chapter 7: Managing User Access to Data

You control access to cells in cubes, whole cubes, and assumption cubes using access tables. Saved
selections are groups of dimension items that support and simplify access tables.
For example, in an Overheads dimension, you might want to show only those items relating to
travel expenses. This allows you to show users only those items that are relevant to them.
You define access for contribution e.List items, but access is automatically derived for review e.List
items.
The key difference between using saved selections and defining access directly in access tables is
that saved selections created on dimensions within an application are dynamic. That is, they change
when definitions in the dimension upon which they are made are changed (when an application is
synchronized following changes to the Cognos 8 Planning - Analyst model).
Imagine the following scenario:
You have a dimension that contains:
● Product 1

● Product 2

● Total Products (sum of all)

A saved selection is made which is the enlargement of the "Total Products" subtotal.

If a change is made to the Analyst model which modifies the dimension to now contain:
● Product 1

● Product 2

● Product 3

● Total Products (sum of all)

The saved selection, which is the enlargement of the "Total Products" subtotal, now includes all
three products without any change being made to it. In other words, it is dynamic and changes as
the definitions in the application change following synchronization.

Saved Selections
When you create saved selections, you name and save selections of items from a dimension. A
selection is a collection of dimension items, and could be lists of:
● Products sold by a particular outlet

● Product/Customer combinations


● Channel/Market combinations

● Employee lists

● Range of months for a forecast

Once you have created a saved selection, you can set levels of access to this item. For more
information, see "Creating Access Tables" (p. 119).
You cannot explicitly define an access table on a review e.List item. If you create a saved selection
on the dimension selected as the e.List, you cannot select any review e.List items.

Steps
1. In the application tree, click Development, Access Tables and Selections, and Saved Selections.

2. Click New in the Saved Selections form, and enter details as shown below:

Selection name
    Enter a name for the selection.

Dimension
    Click this box to show a list of dimensions, then click one.

3. To edit the selection rules, click in the Selection Name or Dimension column of the saved
selection, then click Edit. This opens the Dimension Selection Editor, see "Editing Saved
Selections" (p. 112) for more information.

Editing Saved Selections


In the Dimension Selection Editor, you edit and refine saved selections.
The steps to edit a saved selection are:
● Open the Edit window for the selection you will be editing.

● Choose the items you want to show from the Show list box.

● Make your first selection.

● Refine your selection by making a second selection if required.

Steps
1. Open the Edit window.

In Saved Selections, click on the selection you are going to edit, then click Edit.

2. Choose the items you want to show.


The default settings show all the items in the dimension or e.List. In the case of long and
complex lists, you may want to narrow down the dimension items that are displayed by
selecting an item under Show:

All
    Displays the full list of items in the dimension.

Detail
    Only detail items are displayed, that is, all dimension items except for calculations.

Calculated
    Only those items that are calculated are displayed.

Filter
    Shows a selection of items based on a filter that you define. The filter works on the dimension
    item name only, not on the values contained in the cells.
    Select = to select items that equal the criterion, or <> to select items that do not equal the
    criterion. Use ? to represent any single character and * to represent any series of characters;
    * must not be used as the first character in a string.
    For example, enter = O* with Case Sensitive selected to show all items beginning with a
    capital O, or = 015* to filter on all dimension item names beginning with 015. You can make
    the filter case sensitive by selecting the Case Sensitive box. (The wildcard behaviour is
    illustrated in the sketch after these steps.)

First Selection
    Displays the results of the first selection.

Second Selection
    Displays the results of the second selection.

Results Selection
    Displays the results of the first and second selections.

The results are displayed under Show. A check mark indicates either a first or second selection,
or a result (=).

3. To make a selection, click one of the following options from the First Selection list box:

All
    Selects all items in the dimension. This is useful when used in combination with a second
    selection, for example: First Selection: All; Except check box: selected; Second Selection:
    Filter = 9*. This selects all items except those beginning with 9.

Detail
    Selects all detail items (all dimension items that are not calculations). The benefit of using
    this option is that if the list of detail items changes, the saved selection is updated automatically.

Calculate
    Selects all calculated items. If the list of calculated items changes, the saved selection is
    updated automatically.

List of items
    Click the items to be selected, then move them to the First Selection list box by clicking the
    right arrow. If the list changes, the saved selection must be updated.

Enlarge
    Includes all items that make up a calculated item, either directly or indirectly. Click one or
    more calculated items and then move them to the First Selection list box by clicking the right
    arrow.

Filter
    Shows a selection based on a text search criterion. For more information, see "Editing Saved
    Selections" (p. 112).

If this is a simple saved selection, click OK to close the Edit Saved Selection window and then
click Save to save the selection.

4. Create a selection rule:


● Select one of the options:

Except All items selected in the first selection, except those selected in the
second selection.

Union The union of all items included in both selections.

Intersect Items that are the same in both the first selection and second
selections.

● Make a selection from the Second Selections list box.

The results are displayed under Show. Click OK.

● Click Save.
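
The wildcard behaviour of the Filter option described in step 2 can be approximated outside the product. The following is a minimal sketch using Python's fnmatch module purely to illustrate the ? and * semantics and the Case Sensitive option; it is not how the Dimension Selection Editor itself evaluates filters, and the item names are invented.

  import fnmatch

  names = ["015 London", "015 Paris", "Overheads", "Output"]

  # "= O*" with Case Sensitive selected: items beginning with a capital O.
  matches_o = [n for n in names if fnmatch.fnmatchcase(n, "O*")]

  # "= 015*": items beginning with 015 ("?" would stand for any single character).
  matches_015 = [n for n in names if fnmatch.fnmatchcase(n, "015*")]

  # "<>" selects the complement: items that do not match the criterion.
  not_015 = [n for n in names if not fnmatch.fnmatchcase(n, "015*")]

  print(matches_o)    # ['Overheads', 'Output']
  print(matches_015)  # ['015 London', '015 Paris']
  print(not_015)      # ['Overheads', 'Output']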

Deleting Saved Selections


You cannot delete a saved selection that is used in an access table. It must be removed from the
access table first.
To delete a saved selection:

● In the saved selections window, click the selection and then click Delete.

Access Tables
Create access tables to determine the level of access users have to cubes, saved selections, and
dimension items. Access tables can reduce the volume of data a user has to download, especially
when used in conjunction with cut-down models and the No Data setting. For more information,
see "Cut-down Models" (p. 134).
You can set access levels for an entire cube (contribution or assumption cubes) or for specific
selections of cells in a cube (contribution cubes only).
For entire cubes, you can choose Write, Hidden, or Read for contribution cubes and Hidden or
Read for assumption cubes (p. 116). Access set at cube level applies to all planners.
Access to specific selections of cells is controlled using access tables. Do this by choosing one or
more dimensions, and defining access to sets of items in these dimensions.

If you need cube-level access to vary by planner, select the Include e.List option. You must also
include one of the other dimensions of the cube (preferably the smallest), and select All items for
this dimension when creating the access table.
Access tables using more than two dimensions (this includes the e.List) should be avoided where
possible. This is because when you perform an action in the Administration Console that makes
use of the access tables, the system needs to resolve the access tables in order to determine what
access level applies to each cell, and which cells have data. If an access table is very large, this can
slow down the system considerably. For more information, see "Large Access Tables" (p. 125).
It is not possible to create planner-specific views of assumption cubes. If this is required, you should
convert the assumption cube to a contribution cube in Analyst by adding the placeholder e.List.
Then you should move any assumption data present in the Analyst D-Cube into Contributor using
Analyst<>Contributor links (p. 347).
You cannot explicitly define an access table on a review e.List item. If you create a saved selection
on the dimension selected as the e.List, you cannot select any review e.List items.

Access Tables and Cubes


There are two types of cube in Contributor, assumption cubes and all other cubes. The access level
you can set depends on the type of cube.

Assumption Cubes
An Assumption Cubes contains data that is moved into the Contributor application on application
creation and on synchronize.
● They do not contain the e.List, therefore data applies to all e.List items.

● They are not writeable. The default level is read-only.

● You can only set access levels to a whole assumptions cubes.

Other Cubes
These are all other cubes used in Contributor.
● They must contain the e.List.

● They are writeable by default, but can also be set to be read-only, contain no data, or be hidden.

● They may contain imported data.

● They are usually used for data entry.

● You can set access to selections of cubes, whole cubes, and dimension items.

● They can be set to be planner-only cubes (p. 75), which hides the cube from the reviewer.

When you create an access table, you select one or more dimensions, and define access to sets of
items in these dimensions. By default, access tables include the e.List, so you can vary access setting
by planner. You can opt not to include an e.List in an access table, in which case the setting applies
to all planners, for example, you might want to make a budget version read-only for everyone. You
cannot have planner specific access settings for assumption cubes. This is because they do not
contain the e.List.

Rules for Access Tables


You can apply rules when setting access levels for cubes, saved selections, and dimension items.
You can set the following levels of access for cubes, saved selections, and dimension items.
● Write

● Read

● Hidden

● No Data

Access rules are resolved in the order in which they appear in the table. If more than one rule is
applied to an item, the last access rule assigned is given priority. For example, you might want to
set all items to No Data, and then subsequently set individual items to Read, Write, or Hidden.
If you have defined more than one access table for a cube, the access setting that will apply is the
lowest level of access amongst all the access tables, for example, a hidden access setting has priority
over write.
No two access tables that control the same dimension can be applied to the same cube.
You receive a warning if you create an access table that contains more than two dimensions as this
can slow down the Administration machine if you import large amounts of data.
If no access levels are set, the following defaults apply:
● All cubes apart from assumptions cubes have a global access level of Write.

● Assumption cubes (cubes used to bring data into an application) have a global access level of
Read.

Selecting access levels of Read, Write, and Hidden has no effect on the way links or importing
data work. The No Data access level does affect links and importing data.
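
The two precedence rules above (the last rule wins within a single access table; the lowest, most restrictive level wins across access tables) can be illustrated with a short sketch. This is only an illustration of the documented behaviour, not the product's implementation, and the ranking of levels from least to most restrictive (Write, Read, Hidden, No Data) is an assumption made for the example.

  # Illustration only: within one access table the last matching rule wins;
  # across access tables the most restrictive level wins. The ranking below
  # (Write < Read < Hidden < No Data) is an assumption for this example.
  RESTRICTIVENESS = {"Write": 0, "Read": 1, "Hidden": 2, "No Data": 3}

  def resolve_table(rules, item):
      """Return the level set by the last rule in the table that matches the item."""
      level = "Write"  # default when no rule is set
      for selection, rule_level in rules:
          if item in selection:
              level = rule_level
      return level

  def resolve_across_tables(tables, item):
      """Return the most restrictive level among all access tables for the item."""
      levels = [resolve_table(rules, item) for rules in tables]
      return max(levels, key=lambda level: RESTRICTIVENESS[level])

  table_a = [({"Jan", "Feb", "Mar"}, "No Data"), ({"Jan"}, "Write")]
  table_b = [({"Jan", "Feb"}, "Read")]

  print(resolve_table(table_a, "Jan"))                     # Write (later rule wins)
  print(resolve_across_tables([table_a, table_b], "Jan"))  # Read (most restrictive)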

Access Level Definitions


You can set the access level to write, read, hidden, or no data.


Write
This is the default for all cells in a planner’s model. Write access means that users with appropriate
rights can write to this item, provided the e.List item is not locked (the locked state occurs when
data is submitted to a reviewer).
This option cannot be set on assumption cubes.
You can breakback from a writable calculation unless all detail items used by the calculation are
set to read or hidden.
If a calculation uses other calculated items, and these calculated items are set to read or hidden,
breakback is possible unless all the items used by the calculation, detail or calculated, are set to
read.

Read
Cells marked as read are visible but cannot be changed by the planner. For example, a planner
cannot type into any read-only cells, paste will miss out read-only cells, and breakback will treat
read-only detail cells as held. However, the values of read-only calculated cells can still change:
read-only calculated items are recalculated (they are not treated as held). This extends to
breakback: if a writeable calculation, for example Grand Total, uses some read-only calculated
items such as Total Group A, a planner can breakback from the writable calculation (Grand Total)
through the read-only calculations (Total Group A). This is only possible when at least some of the
items feeding a calculation are writeable. It is never possible to change a cell that is a D-Link target.
Detail cells targeted by D-Links are read-only as normal and will be treated as held by breakback.
Calculated cells targeted by D-Links are also read-only, but these cells cannot be changed by
forward calculation or breakback due to planner entry.
Read is the default value for the following:
● All cells in assumption cubes.

● All cells targeted by D-Links. These can never be changed directly by the planner.

● Calculated cells in cubes for which breakback is disabled.

● Calculated items are read-only when none of their precedent items are writable. For example,
a subtotal will automatically be Read-only if all items summed by the subtotal are read-only
(whether they are read-only due to access tables or D-Link targets, or due to submission of
contribution e.List items).

● Calculated items are set to read-only when breakback is not possible because of the type of
calculation: in particular, the result and outputs of BiFs, and constant calculations are always
read-only.

● A planner can never change read-only detail cells, but read-only totals are not held.

Hidden
Hidden cells are not visible to a planner, but otherwise they are treated in the same way as read-only
cells. For example, breakback does not target hidden detail cells, but goes through hidden calculated
cells if at least some of the detail cells the calculation uses are writeable. Hidden calculated cells are
recalculated, and so on. This means that intermediate calculations, for example, in a cube, or even
entire intermediate calculation cubes, can be hidden without affecting model calculation integrity.
If all cells in a cube are hidden for a particular planner, the cube is removed entirely in the Web
browser view, but the data is still downloaded to planners.

Note: You cannot breakback over hidden detail cells.

No Data
No Data cells do not contain any data. When used by calculations they are assumed to contain
zero.
No Data access settings, regardless of whether cut-down models are used, can affect:
● The volume of data processed in memory, which in turn affects calculation speed. The calculation
and link engines do not process No Data cells where possible, so No Data areas generally reduce
memory requirements and speed up recalculation.

● The data block size, which in turn affects download and upload speed (when opening the grid and
when saving or submitting) and can reduce network traffic. This also affects the speed of aggregation
when data is saved, and hence reduces the load on the run time server components.

No Data access settings used in conjunction with cut-down models can affect the model definition
size. It can improve download speed on opening and reduce network traffic, but also increases the
time it takes to run the Go to Production process.

Updating No Data Access Settings


When you run Go to Production and changes have been made to access tables that result in a
different pattern of No Data cells, a reconcile job is run for all e.List items. This process updates
the contribution e.List item data blocks and reaggregates all of the review e.List items.
In order to understand how No Data access settings affects memory use and calculation speed, it
is necessary to describe how the access settings are applied. The process is as follows:
● All access tables are resolved.

● Items which are entirely No Data for all cubes in the model are identified.

● Items are removed from dimensions where possible (see restrictions below).

As a rough guide, each e.List item is approximately 1 KB (assuming roughly one user per e.List
item), and each dimension item is between 100 and 250 bytes. The e.List item is larger because it
contains extra information.

How No Data Access Setting Affects Data Block Size


Data blocks are used for persistent storage of the data in the model. The data block contains data
for each cube for a particular e.List item, and only includes items for which there is data in that
cube. The restrictions relating to cut-down dimensions that apply when working with the model
in memory do not affect the persistent data blocks. These are cut-down to the maximum extent
possible to reduce the data block size.
The data blocks are created during the reconciliation process.


Reducing the data block size affects download and upload speed (when opening the grid and when
saving or submitting) and can reduce network traffic. It also affects the speed of aggregation up the
e.List when contribution data is saved, and hence reduces the load on the run time server components.

Creating Access Tables


Create Access Tables to determine the level of access that users have to cubes, saved selections, and
dimension items.
The access tables window is divided up into the following areas:

Cubes with Access Tables Assign access levels to dimensions and saved selections either
using the Administration Console, or by importing simple access
tables.

Cubes Without Access Tables Assign an access level to a whole cube, if it has no individual
access tables.

Assumption Cubes Set access levels for the whole cube (you cannot create individual
access tables for assumption cubes).

Note: If you create an access table after you have imported data, the entire import queue is deleted.
Making changes to access tables, e.List items or saved selections that affect the pattern of no data
in a cube can also result in data loss, see "Changes to Access Tables That Cause a Reconcile Job
to Be Run" (p. 133) for more information.
Before you can set access levels, you must first have imported an e.List.

Steps
1. In the application's tree, click Development, Access Tables and Selections, and Access Tables.

2. You can do any of the following:


● Assign access levels to dimensions and saved selections. You can either create rule based
access tables using the Administration Console, or you can import access tables created in
external applications.

● Assign an access level to a whole cube, if it has no individual access tables. For more
information, see "Cubes Without Access Tables" (p. 122).

● Set access levels for the whole cube (you cannot create individual access tables for
assumption cubes). For more information, see "Assumption Cubes" (p. 122).

Cubes with Access Tables


In the Cubes with Access Tables section you click the dimensions that you want to set access for,
then you click the cubes that this access level applies to. You can choose to make the access table
applicable to any or all these cubes by selecting the relevant cubes in Candidate Cubes.


You can choose to make the access table applicable to all or part of the e.List. If you click Include
e.List, you can select which parts of the e.List the access table is applicable to. If you do not include
this option, it will apply to all parts of the e.List.
The default value for cubes with access tables is write.
You can also import access tables, see "Importing Access Tables" (p. 122).

Steps to Create Access Tables


1. In the application's tree, click Development, Access Tables and Selections, and then Access
Tables.

2. Select one or more dimensions in Available Dimensions.


To select more than one dimension, hold down the CTRL key and then click the dimensions.
The list of Candidate Cubes shows which cubes contain all the selected dimensions. If you have
selected more than one dimension, only those cubes that contain all these dimensions can be
selected. Note that the more dimensions you include in an access table, the bigger the access
table will be. Large access tables can slow the system down considerably, see Large Access
Tables for more information.

3. Select one or more cubes from the Candidate Cubes list. Normally you will apply an access
table to all candidate cubes.

4. Ensure that Create rule based access table is selected (this is the default).

5. Choose whether to include the e.List. The default is for the e.List not to be included, meaning
that the access settings apply across the whole e.List.

Note: If you create access level rules with the e.List included, then clear the Include e.List option
and save, and subsequently decide to include the e.List again, you must re-enter any e.List-specific
access settings.

6. Click Add. This adds your selection to the list of access tables.

Tip: In the access tables list, you can edit the name of the access table.

7. Select one or more of the rows that you have just added to the access tables list and click Edit.
The next step is to assign access to dimension items or to saved selections. For more information,
see "Editing Access Tables".
After you have edited the access table, you should save. If there is data in the import data queue,
you will receive a warning that the import data queue will be deleted. You may also receive a
warning such as:
Saving these changes will require a reconcile job to run next time you Go to Production. Do
you want to continue?
Reconciliation ensures that the copy of the application that the user accesses on the Web is up
to date. If you click Yes, the changes are saved and when you run Go to Production, a
reconciliation job is created. If you click No, the changes to the access table window are
discarded.


Change the Cubes to Which an Access Table Applies


You can choose specific cubes to which you want the access tables to apply.

Steps
1. Select the access table that you want to change.

2. Click the Cubes button.

3. Check those cubes you want the access table to apply to.

Editing Access Tables


When you edit an access table, you set access levels for selections and for combinations of dimension
items.
To reach the access table editor, you must first create an access table. For more information, see
Creating Access Tables.

Steps
1. Select the access level, and then click the saved selection, or dimension items from each list,
and e.List items (if included).

2. In any one list of saved selections and dimension items, you can click either one saved selection,
or a combination of dimension items. An <<ALL>> selection applies a rule to all items in the
dimension or e.List.

3. Click Add to create the access rule.

4. Repeat until you have created all the rules for the access table.
These access rules are resolved in the order in which they were assigned. If more than one rule
is applied to an item, the last access rule assigned is given priority and will apply. Use the arrows
to change the order in which access rules apply. If no rules are set, an access level of Write
applies.

Warning
Once you have created access level rules for an access table, if you decide to remove or include the
e.List again, you will lose any rules that you have set for this table and will have to reset them. See
Rules for Access Tables for more information.

Changes to Access Tables


If you edit access tables, and the change would make the memory usage of the model significantly
greater (for example, adding another dimension) you may find that although the access table saves
correctly, the next time you open the Administration Console, it may fail to load properly. If this
happens, you can do the following:


Note that the Reset Development to Production button, used in step 2 below, removes all changes
made since the last time Go to Production was run.

Steps
1. Check the log file. Click the Tools, Show Local Log File menu to see if it shows an "Out of
memory" message.

2. If this happens, you can work around it by clicking the Reset Development to Production
button in the toolbar and reapplying any changes made.

Cubes Without Access Tables


The Cubes without Access Tables section allows you to set the access levels for whole cubes that
do not have any access tables defined for them. The default access level for a cube is Write.
See Rules for Access Tables for information on the access levels and priorities.

Steps
1. In Access Tables, click in the appropriate Access Level cell.

2. Click a new value from the list.


If you set the global access level for cubes without access tables from Write to a different level,
such as Read, and then subsequently create and access table for this cube, See Cubes with Access
Tables, the global setting for the cube is reset to Write.

Assumption Cubes
Any assumption cubes in the application are listed in the Assumption Cubes section. The default
access value for assumption cubes is Read, and the other available value is Hidden. You can change
the default access value.

Steps
1. Click in the appropriate Access Level cell.

2. Click either Read or Hidden from the list. The default level is Read.
Assumption cubes contain data that is moved into the Contributor application when you run
the Go to Production process and when you synchronize. They do not contain the e.List.

Importing Access Tables


You can import access tables that already exist into cubes. They can be in Excel worksheet, comma
delimited, tab delimited, or custom delimited format.
For information on the format of access tables, see "Format of Imported Access Tables" (p. 123).


You can also automate the import of Access Tables. See "Import Access Table" (p. 204) for more
information.

Steps
1. Open the Access Tables window by clicking Development, Access Tables and Selections, and
Access Tables.

2. In the Access Table window, click one or more dimensions in Available Dimensions. The
Candidate Cubes list shows which cubes contain all the selected dimensions with no conflicting
access tables. If you have selected more than one dimension, only those cubes that contain all
these dimensions are selectable.

3. Select one or more cubes from the Candidate Cubes list. Normally, you apply an access table
to all candidate cubes.

4. Select the Import Access Table option, and Include e.List if required.

5. Click Add. Once you have clicked the Add button, you cannot change whether a table is rule
based or imported. There is a check mark to indicate if you have selected Import access table
in the access table grid.

6. Click Import. The Import Access Table dialog box is displayed.

Note: You can use this dialog box to set the base access level without importing an access table.

7. Enter the file name and location for the access table import file.

8. Select the First row contains box names box if required.

9. Select the file format. If the file format is Excel Worksheet, enter the name of the worksheet
containing the access table.

10. Select the Import option.

11. Delete undefined settings: If an access table file has previously been imported for the access
table and you are importing a new one, existing settings are updated with the new settings
specified. Any previous settings that do not exist in the new file are kept, unless Delete
undefined settings is checked, in which case they are deleted.

12. Select the Base access level. This is the default level that is applied to any undefined items. No
Data is the default access level. See "Access Level Definitions" (p. 116) for more information.

13. Click OK.

14. To view the access table, click View. You can print the access table or save it to a file. You can
also export the access table; see "Exporting Access Tables" (p. 125).

Format of Imported Access Tables


The import access table file can be in any of the following formats: Excel worksheet, comma
delimited, tab delimited, or custom delimited. The file contains the following columns:


● A column for every dimension that the access table applies to (mandatory). The names of
dimension items must be identical in spelling and case to the names in the Analyst model.

● A column containing e.List items. If omitted, the access level applies to the whole e.List.

● A column containing access levels (optional). The following access levels can be set: Hidden,
Read, Write, No Data (these are not case sensitive). If omitted, a default of Write applies.

Note that you cannot import saved selections or rule-based access tables. Each line of the access
table (excluding the headings, if used) contains the following information:
dimension item name (from dimension a) [tab] dimension item name (from dimension b) [tab] e.List item (if e.List included) [tab] AccessLevel

Column Order
The required order for the dimension columns in an access table import file with no column headers
is the same as the dimension order of the access table, which you can see in the Access table setup
window. For example, if the window shows the order of dimensions as Versions and Channels, the
third column in the import file would be the e.List, and the fourth column would contain the access
levels.
If the import file contains column headings, the columns can be in any order.

Column Headings
The use of column headings in the import file is optional. If column headings are used they should
be:

Name of the Dimension: must be the same spelling and case as in the Analyst model.

Name of e.List: must be the same spelling and case as in the Analyst model.

AccessLevel: as shown. The default is Write.

Column headings are the first row in the file.
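For illustration, here is a minimal Python sketch that writes a small tab-delimited import file with a heading row. The dimension, e.List, and item names (Versions, Channels, Stores, and so on) are hypothetical examples; in a real file they must match the spelling and case used in your Analyst model and e.List.

# Sketch only: write a tab-delimited access table import file with headings.
# All dimension and item names below are hypothetical examples.
import csv

rows = [
    # Versions           Channels      Stores (e.List)  AccessLevel
    ("Budget version 1", "Retail",     "High Street",   "Write"),
    ("Budget version 1", "Mail order", "Telesales",     "Read"),
    ("Budget version 2", "Discount",   "Superstores",   "Hidden"),
]

with open("access_table_import.txt", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(["Versions", "Channels", "Stores", "AccessLevel"])  # heading row
    writer.writerows(rows)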

Viewing Imported Access Tables


You can view access tables that you have imported.


You cannot edit an imported access table within the Administration Console. To make changes,
you should edit the source file and import again.

Steps
1. Click Development, Access Tables and Selections, and Access Table to open the access tables
window.

2. Click the access table that you want to view and click View.

3. The View button is not enabled for access tables created in the Administration Console
(rule-based tables). To view rule-based access tables, click Edit.

Exporting Access Tables


You can export access tables in a format that can be imported again. You can export both simple
and rule-based access tables. Once you have exported a rule-based access table, you can import it
again, but it is imported as a simple access table that you cannot edit in the Administration
Console.

Steps
1. Open the access tables window (click Development, Access Tables and Selections and then
Access Table) and click the access table to be exported.

2. Click Export.

3. Enter or browse for a file location and file name. You can export to a text file in tab-separated
format.

4. If you want to include column headings, check Add column headings.

5. Click OK to create the file and Close to close the dialog box.

Large Access Tables


It is important to use small access tables where possible rather than a single multi-dimensional access
table. In general, it is far easier to understand and maintain several small access tables than a single
large one. Replacing a large access table with small access tables may improve the performance of
the Administration Console. However, in some cases access to items in one dimension cannot be
defined without reference to the items of another dimension and so a multi-dimensional access table
must be used.
The following issues are associated with using large access tables:
● They can cause substantial performance problems in the Administration Console

● They can increase the physical size of the cut-down model so that the benefit of using cut-down
models is lost. This is because the access table needs to be resolved.

● If cut-down models are not used, resolving access tables on client machines can cause
performance problems.


Administration Console Performance Issues


When you perform an action in the Administration Console that makes use of the access tables,
the system resolves the access tables in order to determine what access level applies to each cell,
and which cells contain data. A large access table can slow down the system considerably. This is
because a check is made whenever you load the development application in the Administration
Console, and whenever you save, to see whether the pattern of access has changed to or from No
Data. This check is made primarily to determine whether a reconciliation job is required on running
Go to Production, and which e.List items must be reconciled. For example, if you are importing
data, a check is run to see if any changes have been made to the Access Tables, Saved Selections or
the e.List that result in a different pattern of No Data cells. Another example is when you run Go
to Production and the cut-down models job is run (if set on). The cut-down models process uses
the information from access tables to determine what information each user will get.

Memory Needed to Resolve an Access Table


The amount of memory needed to resolve an access table is the product of the number of items in
each of its dimensions, multiplied by four bytes, so the more dimensions you include in an access
table, the more memory is required. If you use more access tables with fewer dimensions in each,
the memory requirements are reduced considerably (this is demonstrated in Example 1). If the
memory needed to resolve an access table is more than 2 GB, it fails on the server, because 2 GB is
the maximum memory that can be addressed by the operating system. For example, an access table
that includes an e.List of 1500 items, a products list of 1500 items, and 250 channels would exceed
this limit (1500 x 1500 x 250 x 4 = 2,250,000,000 bytes).
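The arithmetic can be checked with a short Python sketch (illustrative only; the four-bytes-per-cell figure and the 2 GB limit are the values quoted above):

# Memory needed to resolve an access table: product of the item counts in each
# dimension, multiplied by 4 bytes per resolved cell.
def access_table_memory_bytes(*dimension_sizes):
    total_bytes = 4
    for size in dimension_sizes:
        total_bytes *= size
    return total_bytes

needed = access_table_memory_bytes(1500, 1500, 250)   # e.List, products, channels
print(needed)                                          # 2250000000 bytes
print(needed > 2 * 1024 ** 3)                          # True: exceeds the 2 GB limit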

Impact of Large Access Tables on Cut-down Models


Using large access tables can have a negative impact on the physical size of the individual cut-down
model definitions that are downloaded to the client computer.
The cut-down model definition contains the information about the individual e.List item or review
e.List item and its immediate children. It also contains the relevant part of the resolved access table
and is cut-down to the necessary D-List items needed to control access. Taking the example shown
in the preceding paragraph, the resolved access table will contain the products list of 1500 items
and the channel with 250 items. Multiplied by four bytes, this may increase the size of the cut-down
model definition by as much as 1.5 Mb (this could be less, for example, if only some of the channels
apply).

Examples
These examples set access to the following dimensions in a Revenue Plan cube:
The e.List (named Stores) contains these saved selections:
● High street

● Superstores

● Telesales Centers

These saved selections are subsets of the total (All). There are 1200 items in Stores.


Channels contains these saved selections:


● Retail

● Discount

● Mail Order

These saved selections are subsets of the total (All). There are 12 items in Channels.
Products contains these saved selections:
● Sanders

● Drills

Note that not all items are included in these saved selections. There are 400 items in Products.

Example 1
This example shows two different ways of setting access where available Channels vary by store
and product selection is the same for all stores and channels. Tables A and B achieve this in the
most efficient way. See the following calculations:
The size of Access Table A is:
● 12 (Channels) x 1200 (Stores) x 4 (bytes) = 57.6 KB

The size of Access Table B is:


● 400 (Products) x 4 (bytes) = 1.6 KB

The total size of the two access tables is 59.2 KB.


The size of the Access Table C is:
● 400 x 12 x 1200 x 4 = 23.04 MB

This means that Access table C takes 22.98 MB more memory to resolve than tables A and B
together.
Using two separate access tables is also easier to maintain. For example, if High Street stores started
selling through the Mail order channel, using two tables, you just add one line to Access table A.
But for access table C, you must add three lines.

A. Access Table for Channels

Access Level Channel Store

No Data All All

Write Retail High Street

Write Retail Superstores

Write Discount Superstores

Write Mail order Telesales

B. Access Tables for Products

Access Level Product

Write All

Read Sanders

Hidden Drills

C. Combined Access Table (Not Recommended)

Access Level Product Channel Store

No Data All All All

Write All Retail High Street

Read Sanders Retail High Street

Hidden Drills Retail High Street

Write All Retail Superstore

Read Sanders Retail Superstore

Hidden Drills Retail Superstore

Write All Discount Superstore

Read Sanders Discount Superstore

Hidden Drills Discount Superstore

Write All Mail order Telesales

Read Sanders Mail order Telesales

Hidden Drills Mail order Telesales


Example 2
This shows two different ways of setting access where products vary by store and channels vary by
store.
In this case, the e.List (Store) must be included in both access tables. Superstores can write to drills.
Note that it is not necessary to put the Write, Drills (Retail/Discount), Superstores lines into the
access tables; you can leave them out. They were added for illustrative purposes.
The size of Access Table D is:
● 12 (Channels) x 1200 (Stores) x 4 (bytes) = 57.6 KB

The size of Access Table E is:


● 400 (Products) x 1200 (Stores) x 4 (bytes) = 1.92 MB

The size of the Access Table F is:


● 400 x 12 x 1200 x 4 = 23.04 MB

So even with the additional dimension in Access Table E, the combined total of Tables D and E of
1.98 MB is still 21.06 MB less than Access Table F.
Note that although tables D and E are separate, they interact. The channels that are shown are
dependent on which product is viewed, and the e.List item that is selected.

Access Tables for Channels

Access Level Channel Store

No Data All All

Write Retail High Street

Write Retail Superstores

Write Discount Superstores

Write Mail order Telesales

Access Tables for Products

Access Level Products Store

No Data All All

Write All High Street

Read Sanders High Street

Hidden Drills High Street

Write All Telesales

Read Sanders Telesales

Hidden Drills Telesales

Write All Superstores

Read Sanders Superstores

Write Drills Superstores

Combined Access Tables (Not Recommended)

Access Level Product Channel Store

No Data All All All

Write All Retail High Street

Read Sanders Retail High Street

Hidden Drills Retail High Street

Write All Retail Superstore

Read Sanders Retail Superstore

Write Drills Retail Superstore

Write All Discount Superstore

Read Sanders Discount Superstore

Write Drills Discount Superstore

Write All Mail order Telesales

Read Sanders Mail order Telesales

Hidden Drills Mail order Telesales


● Telesales: read-only access to Sanders for the Mail Order channel.

● Superstores: writeable access to Drills for the Retail and Discount channels.

Example 3: An Access Table that Cannot be Split


In this example, the product selection varies by Store and Channel.
Superstores can write to sanders for the discount channel only.

Access Level Products Channels Store

No Data All All All

Write All Retail High Street

Read Sanders Retail High Street

Hidden Drills Retail High Street

Write All Retail Superstores

Read Sanders Retail Superstores

Hidden Drills Retail Superstores

Write All Mail Order Telesales

Read Sanders Mail Order Telesales

Hidden Drills Mail Order Telesales

Write All Discount Superstores

Read Sanders Discount Superstores

Write Drills Discount Superstores

You do not need to include the Write, Drills, Discount, Superstores line. This is included for
illustrative purposes.

Multiple Access Tables


No two access tables that control the same dimension can be applied to the same cube, but you can
have multiple access tables using different dimensions that apply to the same cube. Where this is the
case, a planner will get the lowest level of access amongst all the access tables.

Example
In the Versions dimension, item Budget version 1 is writable for the planner and item Budget version
2 is read-only. In the Expenses dimension, the item Telephone is writable and the item Donations
is hidden. The planner will get the following resolved access:

Budget version 1 Budget version 2

Telephone Write Read

Donations Hidden Hidden
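The following Python sketch illustrates how the lowest access level wins when several single-dimension access tables apply to the same cube. The ranking used here (Write above Read above Hidden above No Data) is an assumption for illustration; see "Access Level Definitions" (p. 116) for the product's definitions.

# Sketch only: resolve a planner's access as the lowest level across the
# per-dimension access tables from the example above.
RANK = {"Write": 3, "Read": 2, "Hidden": 1, "No Data": 0}

versions = {"Budget version 1": "Write", "Budget version 2": "Read"}
expenses = {"Telephone": "Write", "Donations": "Hidden"}

def resolved(*levels):
    return min(levels, key=RANK.get)

for expense, e_level in expenses.items():
    for version, v_level in versions.items():
        print(expense, version, resolved(e_level, v_level))
# Telephone resolves to Write/Read; Donations is Hidden for both versions.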

When access to the cells of a cube needs to be controlled using more than one dimension (as in the
example above), you must decide whether to use multiple access tables (one for each dimension)
or one access table using all the dimensions. Choose multiple access tables wherever possible; in
general, they are much easier to understand and maintain. Use a multi-dimensional access table
only in circumstances where access to items in one dimension cannot be defined without reference
to the items of another dimension.
Conflicting access tables are not allowed; that is, you cannot apply multiple access tables to one
cube using the same dimension. For example, if you have applied an access table for the dimension
Months to a cube, you cannot apply another access table using Months, nor one that uses Months
and Versions, and so on. After choosing the dimensions for an access table, your choice of cubes
to which the access table can be applied is limited to those cubes that contain the chosen dimensions.


Changes to Access Tables That Cause a Reconcile Job to Be Run


Changes to access tables may or may not cause a reconcile job to be run, depending on whether
they impact the pattern of No Data cells in a model.
When access tables are changed, the system determines whether there is any impact on the pattern
of No Data cells in a model. If the system determines that there is no impact, then no reconciliation
takes place.
If however the system determines that there is an impact, then all e.List items must be reconciled.
All contribution e.List item data blocks are updated, and all review e.List items are re-aggregated.
If an access table definition is changed, the system compares the resolved pattern of No Data cells
before and after the changes. If the pattern of No Data cells is not identical, then reconciliation of
all e.List items is required. If the pattern of No Data cells is identical, then no reconciliation is
required.
This comparison is only made for e.List items shared by the development application and the current
production application. So, the addition or deletion of e.List items in itself will not require
reconciliation of all e.List items.
If an entire access table is added or deleted, then reconciliation of all e.List items is required.
If the set of cubes an access table applies to is changed, then reconciliation of all e.List items is
required.
Changes to the e.List which affect saved selections used in access tables can also cause reconciliation
to happen. For more information, see "The Effect of Changes to the e.List on Reconciliation" (p. 101).
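A minimal sketch of that decision, using hypothetical cell coordinates, might look like this:

# Sketch only: reconciliation of all e.List items is required when the resolved
# pattern of No Data cells differs before and after the access table change.
no_data_before = {("Jan", "Drills"), ("Feb", "Drills")}
no_data_after = {("Jan", "Drills")}

reconcile_all = no_data_before != no_data_after
print(reconcile_all)   # True: the pattern changed, so all e.List items reconcile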

Changes to Access Tables and Saved Selections and the Effect on Reconciliation
When access tables are changed, the system determines whether there is any impact on the pattern
of No Data cells in a model.
If an impact is determined, all e.List items are reconciled. Access tables can be changed indirectly
by changing a saved selection used by an access table, or by making certain changes to the e.List if
the e.List is used in a saved selection used by an access table. Note that changes made to other
dimensions may impact access tables via saved selections, but these changes are introduced via
synchronize which always requires full reconciliation.
The cases where changes to the e.List affect saved selections are:
● A saved selection on the e.List uses Filter and existing contribution items are renamed in the e.List.

● A saved selection on the e.List uses Enlarge (of a review e.List item) and existing contribution
items are moved between review items in the e.List.

This only applies to e.List items shared by the development application and the current production
application. So, the addition or deletion of e.List items in itself will not require reconciliation of all
e.List items.


Access Tables and Import Data


Any changes made to existing saved selections or the e.List that result in a different pattern of No Data
cells for contribution e.List items that are common to both the development and production
applications result in the import queue being deleted.
Creating access tables after you have imported data results in the import queue being deleted.

Access Levels and Contributor Data Entry


With the exception of the No Data access setting, access settings affect only planner data entry;
they do not affect D-Links, import, or data to be published:
● D-Links may target Hidden or Read-only cells.

● Import may target Hidden or Read-only cells. All valid data present in an ASCII file is imported
into a cube. To limit the selection of cells targeted you should cut-down the ASCII file to contain
only the required data. Data in the source ASCII file that does not match an item in a cube is
not imported and is reported as an error. Import cannot target formula cells in a cube. Import
data should not target No Data cells.

● Published data can include cells that are marked as Hidden, as well as Read-only or Writeable.

Force to Zero
In Analyst, the calculation option Force to Zero forces calculations in other dimensions to return
zero. Contributor interprets this option differently, effectively as Force to No Data. This can cause
items to disappear from the Contributor grid.
If you do not want such items to disappear in Contributor, you should remove the Force to Zero
setting in Analyst.

Reviewer Access Levels


Access for reviewers cannot be defined explicitly. Access for any review e.List item is automatically
derived from access settings applied to the planners below the particular review e.List item.

Cut-down Models
Cut-down models are customized copies of the master Contributor model definition that have been
cut-down to include only the specific elements required for a particular e.List item. Note that the
e.List is also cut-down.
Cut-down models can reduce substantially the size of the model that the Web client has to download
when there are large dimensions containing hundreds or thousands of items, of which only a few
are required for each planner.
However, the cut-down model process significantly increases the amount of time it takes to run the
Go to Production process.
The process of creating the cut-down model for a particular e.List item is as follows:


● All access tables are resolved.

● Items which are entirely No Data for all cubes in the model are identified.

● Items are removed from dimensions where possible, for more information, see "Restrictions
to Cutting Down Dimensions" (p. 137).

● The cut-down model definition is saved in the datastore.

When Does the Cut-down Models Process Happen?


The cut-down model process is triggered in the following circumstances:
● The first time Go to Production is run and one of the cut-down model options has been
selected.

● Changes have been saved to the Contributor model, one of the cut-down model options has
been selected, and Go to Production is run.

Cut-down models are not created if no changes have been saved to the Contributor model. A change
is any action where you press the Save button. A notable exception to this rule is changes to existing
translations. If the only change you make is to an existing translation, the cut-down model process
will not be triggered.
Importing data does not change the Contributor model. This means that you can import data and
run the Go to Production process without causing the cut-down model process to be triggered.
When changes are saved to the Contributor model, the package GUID in the model (a unique
identifier that is used to reference objects) is also changed, causing cut-down models jobs to be
created. If no changes have been made, the GUID does not change so there is no need for cut-down
models to be run.

Limitations
The cut-down model process can cause the runtime load on the server to be adversely affected.
Without cut-down models there is a single model definition which can be cached in memory on the
server, reducing the number of calls to the datastore. When cut-down model definitions are used
there are too many of them to cache in memory on the server. As a result, the particular model
definition must be retrieved from the datastore every time.
Even if cut-down models are not being used, the same process of cutting down the dimensions
happens anyway when the model definition is loaded. The benefits of using No Data access settings
to reduce memory requirements and decrease block size apply regardless of whether cut-down
models are being used. See "Restrictions to Cutting Down Dimensions" (p. 137) for more information.

Cut-down Model Options


The following options are available for cut-down models:
● No cut-down models (default).

● For each aggregate e.List item (p. 136).


● For every e.List item (p. 136).

Create Cut-down Model Definition for Each Aggregate e.List Item (Review Level Model Definition)
In review-level model definitions, separate model definitions are produced for each review e.List
item and its immediate children. In this case all contribution e.List items below a particular review
item use the same model definition.
Because review-level model definitions require considerably fewer model definitions, they take less
time to produce or recreate. They should be used when the selections are not small subsets of those
required at parent level, or when it would take too long to produce or recreate the planner-specific
model definitions - typically with e.Lists with thousands of items. This option is a compromise
between no cut-down models and fully cut-down models.

Create Cut-down Model Definition for Every e.List Item


In planner-specific model definitions, all required model definitions are individually produced:
● One for each contribution e.List item.

● One for each review e.List item with its immediate children (extra model definitions for the
individual review e.List items are not required).

● One for each multi-e.List item "my contributions" view (where a planner has responsibility for
multiple e.List items). Model definitions are not produced where a planner owns all the children
of a particular review e.List item. The review and children model definition will be used instead.

The benefit of creating a cut-down model definition for every e.List item is that performance is
optimized for each planner. But it may take some time to produce or recreate the model definitions.
This option should be used when the appropriate selections for the children of one review e.List
item are small subsets of the selections required for the parent review e.List item.

Cut-down Models and Translation


A cut-down model is only created for the base language. When a user wants to view their slice of
the Contributor model in the Web client, the language that they see is determined at the point when
they ask to see the model. This means that the language can be changed without having to run Go
to Production.

Cut-down Models and Access Tables


When a planner opens the grid for a Contributor application the Web client receives two pieces of
data from the server. The first is the model definition (also referred to as the XML package) and
the second is the data block that contains the values that will populate the grid. Together these are
referred to as the model.
Access tables control which cells in a model are Writeable, Read-only, Hidden, or contain No Data.
Cut-down models are customized copies of the master model definition that have been cut-down
to include only the specific elements required for a particular e.List item.


Access tables must be carefully considered when setting up cut-down models. Potentially, the
overhead in terms of model size and memory usage for using access tables can be higher than the
benefit gained from using cut-down models.
When models are large, you should use access tables along with cut-down models so that the size
of the model to be downloaded to each client is reduced.
Cut-down model options are set in the Application Options window, see "Change Application
Options" (p. 72).

Restrictions to Cutting Down Dimensions


There are restrictions involved when cutting down dimensions.
Certain types of dimension are never cut down:
● Timescales (cutting down timescales would affect the result of BiF calculations).

● The data dimension of the source cube for an accumulation link (that is, the dimension which
contains D-List format items that are treated as if they were dimensions of the source cube).

● The data dimension of the target cube for a lookup link (that is, the dimension which contains
D-List format items that are treated as dimensions of the target cube).

● A dimension used in an assumption cube.

● A dimension that is also used as a D-List format.

Certain items will not be removed:


● Items are not removed if they are used in a calculation that is not a simple sum, unless the
calculation itself is also being removed.

● Items that are the weighting for a weighted average are not removed unless the average is also
removed.

The level of cut-down applied per dimension is the resolved level across all cubes. This is why it is
impossible to cut down a dimension that is used in both an assumption cube and a contribution
cube, because the entire dimension is required for the assumption cube. Where the same dimension
occurs in two or more contribution cubes with different access tables, it will only be cut-down to
remove items that are not required in any cube. As a result, there are cases where dimensions are
not cut-down as much as might be expected, resulting in greater memory usage. However, there
are ways in which to structure the model to avoid this situation:

Example 1
If you have a dimension that cannot be cut down because it is used by an assumption cube, you
can create an identical dimension to substitute into the assumption cube, leaving the dimension in
the other contribution cubes free to be cut down.

Example 2
If the assumption cube is causing the problem, an alternative is to add the e.List to the assumption
cube and apply access settings to this cube so that the dimension can be cut down.

Estimating Model and Data Block Size


The size of the model definition XML is primarily dependent on the total number of items in the
dimensions of the model, and is not dependent on the number of data cells. As a rough rule of
thumb, the master model definition can be between 100 bytes and 250 bytes per dimension item.
e.List items are generally larger as they contain more information--approximately 1 KB if there is
roughly one user per e.List item. Typically most of the other information in a model definition is
small in relation to the dimension items. The only exception is that the data for assumption cubes
is stored in the model--approximately 10 bytes per data cell. If the assumption cubes are large,
allow for this when estimating the model XML size.
If cut-down models are used, the same rule still applies for each cut-down model, but the number
of items in the dimensions will be reduced as a result of access tables. Even without access tables
the e.List will be cut-down.
The size of the XML data block for a particular e.List item is proportional to the number of dense
data cells in all the cubes for that item. However, this is very hard to estimate, because it depends
on the proportion of cells that contain non-zero data, and also the pattern of how these cells are
spread through the cube. Also certain data values take less space than others (small integers are
packed more efficiently than large integers or floating point values). As an approximate upper
bound, a data block should not be much larger than 16 bytes per cell in all the cubes.
Please note these are rough estimates.
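The rules of thumb above can be turned into a rough estimator, shown here as a Python sketch. The constants are the approximate figures quoted in this section, and the example item counts passed in at the end are arbitrary illustrations, not values from a real model.

# Rough size estimates only, based on the rules of thumb above.
def estimate_model_definition_bytes(dimension_items, elist_items, assumption_cells=0):
    return (dimension_items * 250      # 100-250 bytes per dimension item (upper figure)
            + elist_items * 1024       # roughly 1 KB per e.List item
            + assumption_cells * 10)   # about 10 bytes per assumption cube data cell

def estimate_data_block_upper_bytes(dense_cells):
    return dense_cells * 16            # upper bound of about 16 bytes per cell

print(estimate_model_definition_bytes(10_000, 1_200))   # about 3.7 MB
print(estimate_data_block_upper_bytes(50_000))          # about 0.8 MB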

Cut-down Model Example


In the e.List shown, Total and Div1 to Div10 are review e.List items, and CC1a to CC10j are
contribution e.List items (all owned by different users):

The model is an Employee Plan cube with an Employees dimension. Each cost center (CC) has 100
of its own employees with no access to other employees. You would use an access table to give each
CC write access to the appropriate 100 employees, with no data access to the rest.
With no cut-down models, each planner will receive a model definition including a 10,000-item
employee list, which is large in size (approximately 2.5 MB). Only one model definition needs to
be produced and updated.


With planner-specific model definitions, each planner’s model definition will contain only the
required 100 items from the Employees dimension (approximately 25 KB). Each dimension item is
around 250 bytes. One hundred and eleven model definitions must be produced and updated.
With review-level model definitions, the dimension definition downloaded to each planner will
contain 1,000 items--thus this element of the model definition will be ten times larger than it needs
to be, but still ten times smaller than the full version (approximately 250 KB). Eleven model definitions
must be produced and updated.
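As a quick check of these figures, the following sketch applies the roughly 250-bytes-per-dimension-item rule from the previous section to the three options:

# Approximate Employees-dimension size per model definition for each option.
BYTES_PER_ITEM = 250
for option, items, copies in [("no cut-down", 10_000, 1),
                              ("review-level", 1_000, 11),
                              ("planner-specific", 100, 111)]:
    size_kb = items * BYTES_PER_ITEM / 1000
    print(f"{option}: about {size_kb:g} KB each, {copies} definition(s) to maintain")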
To decide which cut-down method to use, consider these factors:
● The application structure itself.

● Whether bandwidth is an issue (are there many dial-up connections?)

● Time taken to produce and re-create the cut-down models.

● Number of e.List items.

● e.List hierarchy, for example, with review-level model definitions it may be sensible to reduce
the number of contribution e.List items per review e.List item by introducing dummy review
e.List items to reduce the size of the model definitions.

Chapter 8: Managing Data

The following types of data can be imported into and exported from Cognos 8 Planning -
Contributor.

Type of data: Data in other Contributor applications and cubes
Function: Administration links (p. 145) in the Administration Console
Target version: production or development

Type of data: Data in other Contributor applications and cubes
Function: System links (p. 142) in the Administration Console. Executed in the Web client using the Get Data extension.
Target version: production

Type of data: Data in Contributor applications, Analyst libraries, macros, and administration links
Function: Export a Contributor model or model with data, administration links, macros, and Analyst libraries and import them into a target environment or to Cognos support (p. 168)
Target version: development, test, or production

Type of data: Text files
Function: Import Data (p. 172) in the Administration Console
Target version: development

Type of data: Cognos 8 Business intelligence data sources, including SAP BW
Function: Import from Cognos Package using Administration links (p. 145)
Target version: development

Type of data: Text files and Contributor cubes
Function: Local links in the Web client. See the Cognos 8 Planning - Contributor Browser User Guide.
Target version: production

Type of data: Data in Analyst
Function: Analyst>Contributor links in Analyst. See the Analyst User Guide.
Target version: production or development

If you are moving data between Contributor cubes and not making model changes, use an
administration link to move data into the production application. The data is processed using an
activate process, so you do not have to run Go to Production. Note that there is no option to back up
the datastore when targeting the production application. If you are importing data from Cognos 8
packages, you can target only the development application.


Administrators can also set up links that are run from a Web client session enabling Web client
users to move data between a cube in a source application and a cube in a target application. For
more information, see "Administration and System Links" (p. 142).
When you import data into a Contributor application, the data is first put into an import queue.
There are two import queues, one for the development version of an application and one for the
production version of an application. The import queues are independent of each other and contain
the data in import blocks that are applied to an e.List item during a reconcile job.
For each e.List item in an application, there is a model import block. The data from importing data,
administration links, Analyst>Contributor, or Contributor>Contributor links is placed there, ready
to be moved into the cube by a reconcile job (p. 52). For links that target the development
application, the reconcile job is created during the Go to Production process. For links that target
the production application, an activate process creates a reconcile job.

Important: Be aware that if two reconcile jobs are run while users are working offline, the users
will be unable to bring the data online. See "Editor Lagging " (p. 251) for more information.
Because you can have multiple cube import blocks per cube, you can run administration links and
Analyst>Contributor links as well as import data concurrently.
Note that a model import block is represented by a row in the import queue table in the application
datastore. An individual cube import block cannot be seen in the datastore.

Understanding Administration, System, and Local Links


Administrators can move data between Contributor cubes and applications using administration
and system links. Administrators can also import data from Cognos 8 packages using Administration
links. Web client users can move data between Contributor cubes using local links.

Administration and System Links


Administrators can create and run administration links to move large amounts of data between a
source application and a target application and from a Cognos 8 package to a Contributor
application.
Administrators can create system links to allow users to move small amounts of data between a
cube in a source application and a cube in a target application. These links are run from a Web
client session by the Web client user.

Note: The Get Data extension must be configured before you can create a system link. For more
information about configuring the Get Data extensions, see "Configure Client Extensions" (p. 303).
The difference between administration and system links is described in the following table.


Administration Links: Run in the Contributor Administration Console and using macros by the administrator.
System Links: Run on the Contributor Web client by the Contributor application user (but created by the administrator in the Contributor Administration Console).

Administration Links: Designed to move large amounts of data and can be scheduled.
System Links: Designed to move small amounts of data on an ad hoc basis.

Administration Links: Run on the job servers.
System Links: Run on the Web client computer.

Administration Links: Stored in the Content Manager datastore.
System Links: Stored with the target application.

Administration Links: When moving data between Contributor applications, can contain multiple elements (sub-links), enabling a single link to have many cubes as the source and target.
System Links: Can contain only one element, and as a result can contain only one source and target cube.

Administration Links: Can map an e.List dimension to a non-e.List dimension, enabling you to move data between applications that do not share an e.List.
System Links: Can only map an e.List to an e.List dimension.

Administration Links: Can run a link to a locked e.List item.
System Links: Cannot run a link to a locked e.List item.

Administration Links: When moving data between Contributor applications, can be tuned for optimal performance.
System Links: Cannot be tuned for optimal performance.

Administration Links: Can be sourced from Cognos Packages.
System Links: Cannot be sourced from Cognos Packages.

Local Links
Local links allow Web client users to load data into the Contributor application from external data
sources, and from the active Contributor grid. You create and run local links in the Web client
using the Get Data client extension. For best performance, we recommend that users import into
one e.List item at a time from external sources.
Local links are similar to system links, except for the following differences:
● Local links are created in the Web client, and not the Contributor Administration Console.

● Local links can be used to import data from external data sources.

● In local links, users can only import data from tabs in the active Contributor grid (system links
can import data from source cubes to which the user has no access rights).


Using Links to Move Data Between Cubes and Applications


The kind of link you use depends on your role and what you want to do.
Web client users (planners) can move data into the Web client from external sources, or from the
active Contributor grid, using local links. For best performance, we recommend that users import
data into one e.List item at a time from external sources.

Note: The Get Data extension "Configure Client Extensions" (p. 303) must be configured before
users can create a local link.
Web client users can also move data between cubes for one e.List item at a time, using system links
created by the administrator.
Administrators move data from one production application to another, or to development
applications by using administration links. The administration link process uses the job system and
so enables you to move large amounts of data. It is quicker to move data into the production
application if you have no model changes to make. This is because if you move data into the
development application, you must run Go to Production before the data is available to the Web
client.
If you run more than one link to the same application, and the same cell is targeted, the most recent
value is returned.
Administrators can also move smaller amounts of information from Analyst to Contributor using
Analyst >Contributor links. This process does not use the job system. We recommend that you use
the @SliceUpdate macro to split one large link (across the entire e.List) into smaller links that deal
with smaller numbers of e.List items at a time. A slice update sample is available on the Cognos
Global Customer Services Web site.
For more information, see the Cognos 8 Planning - Analyst User Guide.

Using Links in Model Design


If you are designing new models, you can use administration and system links to improve
performance and make security maintenance easier.
Instead of creating one large model targeted at many Web client users, and then controlling what
they can see through access tables, you can create several smaller models, each targeted at a smaller
specific user group and link the models together.
Smaller models mean there is less need for access tables to control what users see. Go to Production
times are typically faster because you can have a shorter e.List. The number of cut-down models
can also be reduced, shortening the processing time.
The following examples show how you can use links in your model design.

Cascaded Models
Using administration links, you can create several small models that contain a high level of detail,
targeted at regional managers, and roll them up into a larger application with less detail so that the
top executives see only the numbers that they are interested in.
For example, you can have America, Asia, and Europe models rolling up into a Corporate model.


Matrix Management
Using administration links, you can create models that allow data to roll up both on a regional and
departmental basis, with approvals from both organization structures.
For example, you can have a Company model where Human Resources reports into Country, and
this can be linked into a Corporate model where Country reports into Human Resources.

Enhanced Security
Administration links allow you to separate cubes into applications by purpose. For example, you
can have a sales forecasting application, a travel planning application, and a salary planning
application. This separation of duty can improve security maintenance. For example, an application
containing a salary plan model may require many access tables to specify who can view the cube;
separating the cube and its access tables into a dedicated application can simplify this.

Administration Links
If you are an administrator, you can use administration links to copy data between Contributor
applications without having to publish data first. You can also use administration links to import
data from Cognos 8 data sources such as Oracle data stores, SQL Server data stores, or SAP BW. If
importing from Cognos 8 data sources, you must first create a Framework Manager model and
publish it as a package to Cognos Connection. For more information, see "Importing Data from
Cognos 8 Data Sources" (p. 161).
If you are importing data from a Contributor application, you must have sufficient access rights to
select applications as the source and target of a link. If you are importing from Cognos 8 data
sources, you must have the rights to select applications as the target of a link, and be able to access
the source package in Cognos Connection. For more information, see "Configuring Access to the
Contributor Administration Console" (p. 36). If you have appropriate rights, you can secure the
ability to create, edit, execute, delete, import, and export administration links. You can also secure
previously created administration links (administration link instances).
Because data can be moved around easily, you can create smaller applications. Smaller applications
can improve performance because shorter e.Lists have quicker reconciliation times. Additionally,
smaller applications usually do not need as many access tables and cut-down models, so the time
taken to run the Go to Production is reduced. You can tune administration links for optimal
performance. For more information, see "Tuning Administration Links" (p. 154).
For more information see "Cut-down Models and Access Tables" (p. 136).
The source application must be a production application, which means Go to Production must be
run at least once. Also, all e.List items in the source application must be reconciled; otherwise the
link will not run. The target application can be either the production application or the development
application. An e.List must be defined. You can map an e.List dimension to a non-e.List dimension
to move data between applications that do not share an e.List. This is not possible in a system link.
Administration links are similar to D-Links defined between Analyst D-Cubes, except that look-up
links, and Fill, Add, and Subtract modes are not supported. Administration links are also similar
to system links. Administrators set up system links between applications that can be run on the


Web client by end-users using the Get Data extension. System links also have to have a source e.
List mapped to a target e.List. Unlike administration links, a single system link can run from only
one source cube in one application to one target cube in one application. You can, however, set up
multiple system links.
Administrators set up a series of elements that define sub-links from the production versions of
applications to either the development or production versions of target applications.
If the elements are grouped together into a single link so they can be run at the same time, you can
move data simultaneously between multiple applications. For example, you may want to move data
between the following applications:
● Sales > Profit and Loss

● Marketing > Profit and Loss

● Personnel > Profit and Loss

Administration links do not run unless one or more Job servers are monitoring the Planning Content
Store. For information about adding the Planning Content Store to a Job server, see "Add
Applications and Other Objects to a Job Server Cluster" (p. 55).

Order of Link Elements


The order of elements in a link is important if individual elements in a link target the same
application, cube, and e.List item. The order matters because of the way the data load works. When
the administration link is run, each element of the link creates an individual cube import block for
each e.List item that it is targeting.
For example, if three link elements all target two e.List items in the Expenses cube in the Sales
application, six cube import blocks are created.
● When link element one is run, two cube import blocks are created, one for each e.List item.

● When link element two is run, it targets the same e.List items, and overlaps some of the cells
targeted by link element one. A further two cube import blocks are created.

● When link element three is run, it also targets the same e.List items, and two cube import blocks
are created.

Any cells that are updated by link element three that overlap elements two and one take the value
from element three. Each e.List item in the target application can have only one Model Import
Block (MIB) per application state type. The MIBs are stored in the import queue. There could be
one for development and one for production. Each MIB can hold many cube import blocks (CIB).
CIBs are inserted into the MIB in chronological order. There is no specific precedence related to
the source of the data. Each link element has its own CIB, as does each Analyst link, plus another
CIB for the relational import.

Note: Where multiple link elements exist in a link, the CIBs for those links will be in the order that
the link elements are defined in the link.
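A simple way to picture this ordering is the following sketch, in which each cube import block is applied in turn and later blocks win on any overlapping cells (cell names and values are hypothetical):

# Sketch only: CIBs are applied in the order the link elements are defined,
# so the last element to touch a cell supplies its final value.
cibs = [
    {"cell_a": 10, "cell_b": 20},    # link element one
    {"cell_a": 15},                  # link element two overlaps cell_a
    {"cell_b": 99, "cell_c": 5},     # link element three overlaps cell_b
]

model_import_block = {}
for cib in cibs:
    model_import_block.update(cib)   # later blocks overwrite overlapping cells

print(model_import_block)            # {'cell_a': 15, 'cell_b': 99, 'cell_c': 5}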


Link Mode
Administration links run in Substitute mode. This means that data in cells in the target area of the
D-Cube are replaced by the transferred data. If no data is found in the source for a particular cell,
the data in that cell is left unchanged.
If data is imported into a read-only cell that is a target of a D-Link, the D-Link will override the
current import value.
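Substitute mode behaves like the following sketch (cell coordinates and values are hypothetical):

# Sketch only: source values replace matching target cells; target cells with
# no corresponding source data are left unchanged.
target = {("Jan", "Sales"): 100, ("Feb", "Sales"): 200, ("Mar", "Sales"): 300}
source = {("Jan", "Sales"): 150}     # no data for Feb or Mar in the source

for cell, value in source.items():
    target[cell] = value             # transferred data replaces the target cell

print(target)   # {('Jan', 'Sales'): 150, ('Feb', 'Sales'): 200, ('Mar', 'Sales'): 300}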

Link Order
The order in which you run links is important. For example, if you run the Analyst>Contributor
link before the administration link, the Analyst>Contributor link is applied to the cube first.
However, if you run the same Analyst > Contributor Link again after the administration link, the
first Analyst > Contributor link is overwritten, and the second Analyst > Contributor link is run
after the administration link.
Analyst>Contributor links are activated automatically after every administration link if run using
the Analyst user interface rather than macros.

Running Import Data and Links more than Once


You can import data into specific cells only once before running Go to Production. If you run
import data to the same cells twice, the cells affected by earlier cube import blocks are overwritten.
You can run multiple administration links and Analyst > Contributor links before running Go to
Production. If you are targeting the production version of the application with an administration
link, you do not need to run Go to Production. A reconcile job is triggered by an activation process.
If the links target different cells, separate cube import blocks are created. If the links target the same
cells, the link-affected cells from the first link are overwritten.

Making Changes to the Development Application After a Link or Import Data Process
Some changes to a development application may affect a link or import data in the following ways:
● If the cubes that you are importing data into have changed, you may have to re-create the source
files and go through the complete import data process, or re-create and re-execute links. If
changes were made that do not affect the cube that you are importing data into, you need only
to rerun Prepare Import.

● Any changes made to the Access Tables, Saved Selections, or the e.List that result in a different
pattern of No Data cells for contribution e.List items that are common to both the development
and production applications result in the import queue being deleted.

● Creating access tables after prepare import has run causes the import queue to be deleted.

● If the items are mapped manually, the link must be updated or recreated after any changes to
dimension items.


Create an Administration Link


Create an administration link to copy data between cubes in Contributor applications, or to import
data from Cognos 8 data sources. If you are importing data from Cognos 8 data sources, you must
first create and publish a Cognos package containing the data you want to import. For more
information, see "Importing Data from Cognos 8 Data Sources" (p. 161).
Administration links will not run unless one or more Job servers are monitoring the Planning
Content Store. For information about adding the Planning Content Store to a Job server, see "Add
Applications and Other Objects to a Job Server Cluster" (p. 55).
You can copy commentary between applications using administration links.
When an administration link is run, a copy of the source file (annotation or attached document) is
attached to the target and becomes a new commentary (annotation or attached document).

Note: Any changes to the source file are not reflected in the target unless the administration link is
rerun.

Steps to Create Links Between Cubes


1. Click Administration Links, Manage Links.

2. Under the Administration Links pane, choose whether to add a link or edit an existing one:

● To add a link, click New.

● To edit a link, click Edit.

If the link definition specifies an application that no longer exists, the Select Link Source/Target
dialog box appears. Select a different source application, target application, or both, and then
click OK.
If you chose an application with an incompatible model structure, a message appears indicating
that the selected application is invalid and that the editor is empty. Close the editor, click Edit,
and then select a different application. Type a brief description of the source and target of the
link element.

3. Enter or edit the name and description of the link.


Both can have up to 250 characters. Link names must be unique and must not be empty or
consist only of spaces.

4. Choose Contributor Application as the data source type.

5. To tune administration link performance, click the Advanced button to adjust the number of
e.List items that load in a single batch for both the source and target.

Note: This is not needed if the data source is a Cognos Package.


For more information on tuning batch sizes, see "Tuning Administration Links" (p. 154).

6. To use the standard configuration, click OK.


The Administration Link-Element dialog box appears.

7. Select the source application and cube.


The source application must be a production application. You can preview the dimensions of
a cube in the right pane.

8. Select the target application and a target cube.


The application can be either the production or development application.

9. Click Map to map source dimensions to a target dimension manually (p. 152), or click Map All
to map dimensions with the same name. You need at least one set of matching dimensions in
order to use the Map All feature.

The mapped dimension pairs now appear in the lower set of Map source to target dimensions
lists. A single line connects paired dimensions.

Tips:
● Double-click the connecting line (or either dimension) to confirm that the items in the
dimensions are mapped correctly.

● To edit the properties of a mapped dimension, click the source, target, or line between the
source and target dimension names and click edit.

● To remove a map, click the map and click Clear. Clear all created maps by clicking Clear
All.

10. In the Additional Options window of the Administration Link-Element dialog box, you can
choose to include annotations or attached documents. Do one of the following:

● To include only Annotations, click Include Annotations.

● To include only Attached Documents, click Include Attached Documents.

11. Click Finish when you are done configuring the link element.

12. If you want to add a new element, click Yes. To return to the main Administration Links
window, click No.

Note: If you add a new element, it must match the source used in the original link.
Both actions save the current element. You can change the order in which the elements are run
using the arrow buttons. For more information, see "Order of Link Elements" (p. 146).

13. If you want to execute the link, click Execute.

If you want to monitor the progress of an administration link, under Administration links, click
Monitor Links. For more information, see "Jobs" (p. 47).

Tip: If you receive an error message stating that the batch sizes are too large to load data, you
need to adjust the batch sizes. For more information, see "Tuning Administration Links" (p. 154).
To automate this process, see "Execute Administration Link" (p. 216).

Note: Applications defined in a link may no longer be available since the administrator last
created or modified the link. An application becomes invalid when the application ID is changed


because the application was transferred from a development environment to a production environment.

Steps to Create Links with Cognos Package as the Source


1. Click Administration Links, Manage Links.

2. Under the Administration Links pane, choose whether to add a link or edit an existing one:

● To add a link with Cognos Package as the source, click New.

● To edit a link, click Edit.

If the link definition specifies an application or package that no longer exists, the Select Link
Source/Target dialog box appears. Select a different source package, target application, or both,
and then click OK.
If you chose a package with an incompatible model structure, a message appears indicating
that the selected package is invalid and that the editor is empty. Close the editor, click Edit,
and then select a different application. Type a brief description of the source and target of the
link element.

3. In the Administration Link Properties dialog box, enter or edit the name and description of the
link.
Both can have up to 250 characters. Link names must be unique, must not be empty, and must
not consist only of spaces.

Select Cognos Package as the Data Source Type.


By default, the Administration Link will run a prepare import job to process import data ready
for reconciliation. Click the Advanced button and deselect Run Prepare Import Job to change
the default setting.

4. In the Select a Cognos Package as the Link Source dialog box, browse for a Cognos Package
in Cognos Connection by clicking the ellipsis button.
If you select a package that was not published from Framework Manager, an error message
appears stating that the package cannot be used as a source for an Administration Link.

5. Click a Query Subject.

6. Select the available Query Items in the Query Subject and move them to the Selected Query
Items pane.

Select the Display preview of selected query item check box to preview the Query Items. The
preview works only with Query Items that have not yet been selected, and helps you select
the correct Query Items.

7. Click OK to bring the Query Items into the link.

8. In the Administration Link-Element dialog box, select the target application and a target cube.
The target application must be the development application.


9. Click Map to map source dimensions to a target dimension manually (p. 152), or click Map All
to map dimensions with the same name. You need at least one set of matching dimensions in
order to use the Map All feature.

The mapped dimension pairs now appear in the lower set of Map source to target dimensions
lists. A single line connects paired dimensions.

Tips:
● Double-click the connecting line (or either dimension) to confirm that the items in the
dimensions are mapped correctly.

● To edit the properties of a mapped dimension, click the source, target, or line between the
source and target dimension names, and then click Edit.

● To remove a map, click the map and click Clear. Clear all created maps by clicking Clear
All.

10. If you want to select the columns containing the data, click Mark Data.

Note: Mark Data is not available once you have mapped your data.

11. In the Administration Link - Element dialog box, click Next to pick unmapped source Query
Items and unmapped target dimension items.

12. In the Additional Options window of the Administration Link-Element dialog box, you can
choose to include annotations or attached documents. Do one of the following:

● To include only Annotations, click Include Annotations.

● To include only Attached Documents, click Include Attached Documents.

13. Click Finish when you are done configuring the link element.

14. If you want to add a new element, click Yes. To return to the main Administration Links
window, click No.

Note: If you add a new element, it must match the source used in the original link.
Both actions save the current element. You can change the order in which the elements are run
using the arrow buttons. For more information, see "Order of Link Elements" (p. 146).

15. If you want to execute the link, click Execute.

If you want to monitor the progress of an administration link, under Administration links, click
Monitor Links. For more information, see "Jobs" (p. 47).
To automate this process, see "Execute Administration Link" (p. 216).

Note: Applications defined in a link may no longer be available since the administrator last
created or modified the link. An application becomes invalid when the following occurs:
● The application ID is changed because the application was transferred from a development
environment to a production environment.

● When changing the package or target application, you chose a package with an incompatible
model structure.


Map Dimensions Manually


Manually mapping dimensions may bring performance improvements for some links. A manually
mapped link filters the data at the source, so less data is moved. Auto-mapped links do not perform
such filtering, so it is possible that more data is moved than if the same link used manual mapping.

Steps
1. Click Map.

The Map Items dialog box appears. Any matching dimension items are highlighted.
If a source dimension does not map to any target dimension, it can be treated as an extra source
dimension. If the items in the source and target dimensions do not match, a manual map is
required. For example, if the source item is Jan-03 and the target item is 1-03, a manual map
is required.
If items in a source or target of the manually mapped link are added, the link must be manually
updated to account for the new items in order to correctly run the load.

2. If you want to map items based on capitalization, click Case Sensitive.

3. If you want to include calculated items (shown in bold), click Calculated items.

4. If matching dimensions are highlighted, click OK to accept them.

The Map Items dialog box closes and returns you to the Map Source to Target dialog box.

5. If some unmatched items remain in the Map Items dialog box, click Manually Map, select a
source dimension and target dimension, click Add, and click OK.

Note: It is okay to have unmapped items.


The matching pairs of dimensions move under the unmapped dimension fields to the mapped
dimension fields and a line connects the two dimensions.
If you have a long list of dimension items to map, you can filter them based on the first characters
in the item name.

Note: This filter applies only to items that appear in the Dimension Items list. It does not affect
what is loaded into the target.

6. In the Filter box, type the character you want to filter with.
Only the items that begin with that character appear.

Tip: To remove the filter, delete the character in the Filter box.

7. In the Map Items dialog box, click Substring.

The Select Substring dialog box appears with the longest item name in the dimension list.
When you use a substring, all the items that match the substring are rolled up into one item.
For example, if you have dimension items named Budget 1, Budget 2, and Budget 3 and you
apply the substring BUD, all three items are rolled into one dimension item to be loaded into
the target dimension. A sketch illustrating this rollup follows these steps.


Note: Unlike filtering by characters, using a substring applies to what is included in the load
as well as what is viewed in the Dimension Items list. You can use a substring when mapping
dimensions manually or automatically.

8. Click in the Substring box to place bars at the beginning and end of the substring. If the substring
appears at the front of the string, place a single bar at the end of the substring.
To remove the bar, right-click it.

9. Click OK. The dimension items are now filtered by the number of characters you selected.
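
The effect of a substring can be illustrated outside the product. The following minimal Python sketch
is not Contributor code; it simply groups invented dimension item names by a leading substring and
sums their values, mirroring the rollup described in step 7. The item names, values, and substring
length are made up for illustration; Contributor performs this rollup itself when the link runs.

# Sketch only (not Contributor code): roll up items that share a leading
# substring, the way the substring map combines Budget 1, Budget 2, and
# Budget 3 into a single target item.
from collections import defaultdict

source_rows = [
    ("Budget 1", 100.0),
    ("Budget 2", 250.0),
    ("Budget 3", 75.0),
    ("Actual 1", 300.0),
]

substring_length = 3  # comparable to placing the bar after the first three characters

rolled_up = defaultdict(float)
for item, value in source_rows:
    rolled_up[item[:substring_length]] += value

print(dict(rolled_up))  # {'Bud': 425.0, 'Act': 300.0}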

View Items in a Dimension


You can preview the items that appear in a dimension.

Steps
1. Select either a source or target dimension.

2. Click the preview button.

Remove a Dimension
You can remove a selected dimension.

Steps
1. In the Map Source to Target dialog box, click the source dimension you want to remove.

2. Click the remove button.


This removes the description designation from a row or column. The row or column is now
treated as values.

Running Administration Links


Administration links are run using the job system and are scalable. They can be automated using
macros (p. 216).
When a link targets the Development application, you must run Go to Production so that users can
access the linked data. The data is moved into the prepared import blocks. You can see the results
of this in the Import Data, Prepared Import Blocks window (p. 174).
When a link targets the Production application, there is no need to run Go to Production. An
administration link job is created when the link is executed. At the job end, an activate process is
called. This moves the data into the import production queue and creates a snapshot of the data at
the time the link was executed. Then a reconcile job is triggered, which updates the e.List items
with the new data.


Exporting and Importing Administration Links


You can export administration links from one application and import them into another using the
Administration Console or the Deployment Wizard (p. 168). To import administration links using
the Administration Console, the source and target applications must still exist, and the metadata
must be unchanged. In addition, the application IDs must remain the same.
The process is as follows.
❑ Export the administration links from the Contributor applications. You can only export one
link at a time.

❑ Back up and remove the Contributor applications from the current Planning Content Store and
add them to the new Planning Content Store.

❑ Import the administration links into the new Planning Content Store.

Steps
1. Click Administration Links, and Manage Links.

2. Click the administration link and then click Export.

3. Select the name and location, and click Save. The administration link is saved with a .cal
extension.

4. To import an administration link, click the Import button, and select the administration link
file.
It is given an edit status of UNKNOWN. Check the administration link to ensure that the
source and target cubes are available, and all dimensions are mapped.

Note: When importing an administration link created using Cognos Planning version 7.3 SP3
or earlier, the source and target batch size setting is 1, which loads one source and one target
e.List item into each batch. This was the default behavior of previous versions of Cognos Planning.
For more information, see "Tuning Administration Links" (p. 154).

Tuning Administration Links


Administration link performance can vary for a number of reasons, such as the e.List length of both
the target and source, the size of the target and source, and the complexity of the link. Link
performance may be slower on certain configurations, even when large data volumes are not present.
For more information, see "Variables That Affect Performance of Administration Links" (p. 155).
If necessary, you can tune administration links to get the best performance. e.List items are loaded
into memory in batches. By adjusting the batch sizes for both the source and target e.List items that
are loaded to process the administration link, you may improve performance. This is particularly
true when there is a link with a many-to-many relationship (p. 155), because these typically need a
lot of processing power.

Note: You cannot tune administration links that use Cognos Packages as their source. This is because
Cognos Packages do not load from e.List items.


A batch is a set of data to be transferred and can include data from more than one e.List item.
A batch can also target multiple e.List items.

Important: When changes occur to a model you should evaluate whether you need to retune the
administration link.

When Should You Tune an Administration Link?


Many administration links do not require tuning adjustments. This is because the default settings
work well for many administration link scenarios. To see if your administration link needs tuning,
we recommend that you create and run the administration link using the default settings, and then
adjust the batch size limits if performance becomes an issue. For more information, see "Determine
Optimal Batch Size" (p. 156).
Tuning administration links affects only the time taken to move data, not the time taken to
incorporate the data into the target application via a reconcile job. Links that target the development
application spend all their time moving data through the inter_app_links job. These links do not
trigger reconcile jobs, so if they are taking a long time, you should consider tuning them.
Links targeting the production application spend time in two actions:
● moving data via an inter_app_links job.

● incorporating that data into the target application via a reconcile job

To determine whether or not tuning the administration link will be beneficial, review the amount
of time it takes to move data versus any time spent on the reconciliation.
You can see the inter_app_links job in the Monitor links window, and the reconcile job in the Job
Management window of the target application.

Tip: If you are running multiple administration links that target the same application, consider
targeting the development application and running Go to Production. This means that reconciliation
is run once instead of multiple times. Alternatively, instead of having multiple links, you can have
multiple link elements from different applications in the same link targeting the production
application. In this case, reconciliation is run only once.

Variables That Affect Performance of Administration Links


Many factors play a role in determining optimal performance for running administration links.

Types of Link
Where the source and target applications share the same e.List, and each source e.List item is mapped
to its matching target e.List item, the link has a one-to-one relationship. The amount of effort
required to run this link is determined by the number of mappings between e.List items.
Links that have a single source e.List item targeting multiple e.List items have a one-to-many
relationship. The effort required to run this link is determined by the number of target e.List items.
Links where many source e.List items target a single e.List item have a many-to-one relationship.
The effort required to run this kind of link is determined by the number of source e.List items.
Links where multiple source e.List items are mapped to multiple e.List items have a many-to-many
relationship. These links typically need a lot of processing power, because the effort needed to run


them is calculated by the number of source e.List items multiplied by the number of target e.List
items. You may get the most benefit from tuning these links.

Number of Processors
The number of processors and the amount of available RAM directly affect performance.
If any of your servers in the job server cluster have more than 4 CPUs available, we recommend
that you increase the Job Item Count multiplier per machine setting in the
epInterAppLinkResources.xml file (<install_location>\cognos\c8\bin). The default setting is 4 CPUs
per job server. However, having fewer than 4 CPUs does not negatively affect performance.
The file is installed as read-only. We recommend that you back up the file and make it writable
before changing the CPU number. You must make the same change to the file on all servers in the
cluster.

Note: This setting only affects the administration link performance.
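
Backing up the file and clearing its read-only flag can be scripted before you edit the setting. The
following Python sketch is one possible approach, not part of the product; the install path is an
assumption, so substitute your own <install_location>, and repeat the change on every server in the
cluster.

# Sketch only: back up epInterAppLinkResources.xml and make it writable so
# the Job Item Count multiplier can be edited. The path is an assumed
# default install location; this script does not change any setting by itself.
import os
import shutil
import stat

resource_file = r"C:\Program Files\cognos\c8\bin\epInterAppLinkResources.xml"

shutil.copy2(resource_file, resource_file + ".bak")    # keep a backup copy
os.chmod(resource_file, stat.S_IREAD | stat.S_IWRITE)  # clear the read-only flag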

Model Changes
Changes to the model affect how the administration link performs. If you tune an administration
link and it shows improved performance and then a change occurs in the model, the optimization
may become invalid. This is because the change can affect the overall shape of the administration
link (e.List length, cube size, and so on) that the tuning was based on.

Determine Optimal Batch Size


Adjust the number of source and target e.List items processed at one time to optimize performance
of the administration link.

Steps
1. While the administration link runs, monitor the memory utilization on the least powerful server
in the job server cluster that the administration links run.

2. Adjust the batch size for both the source and target and rerun the administration link. We
suggest that you increase the source batch size where possible before increasing the target batch
size.
● For the source, if there are 150 source e.List items, try entering 75. If that does not work,
try 50 and so on.

● For the target, divide the number of e.List items by the number of physical processors
multiplied by 2.

For example, 2250 e.List items / (14 processors * 2) gives a figure of 80. If this is too large,
try multiplying the number of processors by 4, and so on. A sketch of this calculation follows
these steps.

The values for Limit To must be whole numbers greater than zero for the tuning settings to be
valid.

3. Monitor the memory utilization on the same server to see if it has improved.


4. If not, adjust the numbers and run the administration link again.
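
The arithmetic in step 2 can be written as a short calculation. The following Python sketch is
illustrative only and calls no Contributor API; the e.List item and processor counts are the example
figures from step 2, and the function names are invented for the sketch.

# Illustrative calculation only: suggest starting batch sizes using the
# heuristics from step 2.
def suggested_source_batch(source_elist_items):
    # Start by trying roughly half of the source e.List items.
    return max(1, source_elist_items // 2)

def suggested_target_batch(target_elist_items, physical_processors, multiplier=2):
    # Divide the target e.List items by (processors * multiplier); if the
    # result is still too large, retry with a larger multiplier (4, 8, ...).
    return max(1, target_elist_items // (physical_processors * multiplier))

print(suggested_source_batch(150))       # 75
print(suggested_target_batch(2250, 14))  # 80, that is 2250 / (14 * 2) rounded down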

Set Source Batch Size


You can set the number of source e.List items that are processed at one time. By default, all applicable
e.List items are processed into the relevant target(s). The default setting is No Limit for newly
created administration links, which loads all the source e.List items in one batch. This means that
each e.List item is read once and then grouped and loaded into the targets. This setting may work
well, but if you have a large model, you may need to reduce the setting.

Note: When importing an administration link created using Cognos Planning version 7.3 SP3 or
earlier, the source and target batch size setting is 1, which loads only one source e.List item at a
time. This was the behavior of the previous versions of Cognos Planning.

Steps
1. In the Create New Link dialog box, click Advanced.

2. If you want to load all e.List items at once, ensure that No Limit is selected.

3. If you want to divide your loads into batches, type a number into the Limit To box.

Note: The values for the Limit To box must be whole numbers greater than zero for the tuning
settings to be valid.

4. If the performance is acceptable, leave the Source Batch Size as no limit. If you get errors, reduce
the size.

Set Target Batch Size


You can set the number of target e.List items that are processed in one batch. The default setting
is 1 for newly created administration links, which loads the source batch into one e.List item at a
time.

Steps
1. In the Create New Link dialog box, click Advanced.

2. If you want to target all e.List items at once, select No Limit.

3. If you want to divide your loads into batches, enter a number into the Limit to box.

4. Click OK.
You now need to configure the link element (p. 148).

Tuning Existing Administration Links


Administration links created using Cognos Planning version 7.3 SP3 or earlier can be imported
(p. 154) and reused. These administration links processed each source and target item one at a time.
Now, source and target items are processed in batches and those items are held in memory, reducing
the number of transfers that occur.


When a previously created administration link is imported, the source and target batch size is set
to 1, which loads one source and target item into a batch for processing. We recommend that you
change the source batch size to No Limit, which is the default value for any newly created
administration link. By adjusting this setting you should see performance gains. You can then try
to adjust the batch size settings to further improve performance.

Troubleshooting Tuning Settings


An administration link will fail if the source and/or target batch sizes are too large because too
much data will be loaded into memory. If you receive the following error message, set the source
and target batch sizes to a smaller number and rerun the administration link.
Failed to load source data. You could try setting the source batch size to less than X so that
fewer source e.List items are loaded at the same time. You could try reducing the target batch
size to less than X so that fewer target e.List items are processed at the same time.
(where X is the recommended batch size)

Troubleshooting Remote Call Time-Out


When you build or modify links in the Contributor Administration Console, queries are executed
to get metadata into the Contributor Administration Console for display. On particularly large
queries, you may receive a message similar to the following:
502 - Bad Gateway URL: http://localhost:80/cognos8/cgi-bin/cognos.cgi,
SOAP action: http://developer.cognos.com/schemas/
planningAdministrationConsoleService/1~~HTTP/1.1
502 Bad Gateway~~Content-Length: 252~~Content-Type: text/html~~Server:
Microsoft-IIS/6.0~~~~~~

We recommend that you increase the Remote Call Time-out in Seconds setting to 7200 seconds in
the epAdminLinksResources.xml file, located at <install_location>\cognos\c8\bin. The file is installed
as read-only. We recommend that you back up the file and make it writable before editing it.
After changing this setting, the Planning service needs to be stopped and restarted on the machine
that is building or modifying the link.

System Links
Administrators can set up links that are run from a Web client session so that Web client users can
move data between a cube in a source application to a cube in a target application. A system link
is a pull link, rather than a push link.
A system link can target hidden, read-only, and writable cells.
System links move data from one source cube in one application to one target cube in one application.
System links are stored with the application, whereas administration links are stored in a separate
datastore. The target for system links must be in the production version of the application, whereas
the target for administration links can be in the production or development version of the application.
You cannot map an e.List dimension to an ordinary dimension in a system link, unlike in an
administration link. This is for performance reasons. If many e.List items must be loaded for a link,
this potentially takes a lot of resources. An administration link can run across job servers and is


scalable, so resources are usually not a problem. But a system link runs on the Web client computer
and is not scalable. If you must map an e.List dimension to an ordinary dimension, use an
administration link. A target e.List item can have only one source e.List item mapped to it, but one
source e.List item can be mapped to many e.List items.

To create a link, administrators must be granted the access rights System link as source and
System link as target for the relevant applications. In addition, the Admin options setting Act as
system link source must be set to Yes for source applications. For more information, see "Admin
Options" (p. 77). If this setting is not enabled, you can still create links using the source, but the
Web user cannot run the link. You assign the link to an e.List item in the target application.
To run the link, the user must have write access to the e.List item that the link is assigned to. They
do not require rights to the source cube. Links are executed on the client computer through the Get
Data client extension. For more information, see "Configure Client Extensions" (p. 303). They can
be run only from the target application. Client users cannot edit system links.

If the Get Data extension is configured and enabled for a user, group, or role, that user, group, or
role has rights to run system links and local links.
The Go to Production process does not have to be run after you set up a system link.
The history of system link actions is stored as an annotation for cubes and targeted e.List items, if
enabled. When a system link is run, a new annotation is created for that link in the open e.List item.
If the link is executed again by the same user or another user, the same annotation is updated. In
addition, a separate history dialog shows all history related to the links that apply to the open
e.List items.

Create a System Link


Before you can create a system link, you must configure the Get Data client extension. The Get
Data extension makes system links available to users in the Web client. Users can also create local
links.
Applications defined in a link may no longer be available since the administrator last created or
modified the link. An application becomes invalid when the following occurs:
● The application ID is changed because the application was transferred from a development
environment to a production environment.

● When changing the source or target application, you chose an application with an incompatible
model structure.

To use an application as a source for a System Link, you must first set Act as system link source in
Admin Options to Yes. For more information, see "Admin Options" (p. 77).
You can copy commentary, such as file attachments or user annotations, between applications using
system links.

Steps
1. Click Production, System links.

2. Choose whether to add a system link or edit an existing one:


● To add a link, click New.

● To edit a link, click Edit.

3. If the link definition specifies an application that no longer exists, the Select Link Source/Target
dialog box appears. If this happens, select a different source application, target application, or
both, and then click OK.
If you chose an application with an incompatible model structure, a message appears indicating
that the selected application is invalid and that the editor is empty. Close the editor, click Edit,
and then select a different application.

4. Type a descriptive name for the system link.

5. Select a Source Application, and a Source Cube.


The source application must be a production application, which means it contains an e.List
and Go to Production was run.

6. Select a Target Cube.

7. Map the source dimensions to the target dimensions manually (p. 152), or click Map All to map
dimensions with the same name.
You must have at least one set of matching dimensions to use Map All.
The mapped dimension pairs move to the fields below, and a line connects the two. This line
signifies that these dimensions are a matched pair.

Note: The Substring option is unavailable to system links on the e.List dimensions because you
cannot have multiple sources or multiple targets due to the potentially large number of nodes
that would need to be downloaded to the client in order to execute the System Link.

8. In the Additional Options window of the Administration Link-Element dialog box, you can
choose to include annotations or attached documents. Do one of the following:

● To include only Annotations, click Include Annotations.

● To include only Attached Documents, click Include Attached Documents.

9. In the System Link dialog box, click Finish.

Run a System Link


When a system link has been created and is available to the user in the Web client, it can be run.

Steps
1. Open the e.List item and click the take ownership button if necessary.

2. Click File, Get Data.

3. Click the System Links tab, select the link, and click Run.

4. To view the history of the link, click History.


Tip: If the Get Data extension is reset, all settings and data are lost and all System Links created
for the application are deleted.

Importing Data from Cognos 8 Data Sources


You can import data into Cognos 8 Planning - Analyst and Cognos 8 Planning - Contributor from
any data source that can be published as a Cognos 8 package.
For more information about supported data sources, visit the Cognos Global Customer Services
Web site (http://support.cognos.com).
There are additional considerations when importing SAP BW data into Cognos 8 Planning. For
more information, see "Working with SAP BW Data" (p. 166).
For information on Cognos 8 Planning configuration requirements for SAP BW, see the Cognos 8
Planning - Installation and Configuration Guide.
You must have Framework Manager installed. If you are working with SAP BW data, you must
install the SAP gateway functions. For more information, see the Cognos 8 Planning - Installation
and Configuration Guide.
Importing data from Cognos 8 data sources involves the following tasks.
❑ In Cognos Connection, create a data source connection (p. 161).

❑ In Framework Manager, create a new project and import the metadata into the project (p. 164).

❑ In Framework Manager, model the source. See the Framework Manager User Guide for more
information.

❑ Create and publish the Cognos package to Cognos Connection (p. 165).

❑ If importing into a Contributor application, in the Contributor Administration Console, create
and run an administration link.

Tip: You can create and schedule macros that run administration links.

❑ If importing into an Analyst model, choose one of the following options:


● Select a Cognos package as a source in a D-List Import.

● Select a Cognos package as a source in a D-Link.

● Select a Cognos package as a source in an A-Table, or import a Cognos package as a source
in an A-Table.

You can also automate the import of Cognos packages using the @DListItemImportCognosPackage
macro.

Create a Data Source Connection


When you create a data source, you must provide the information required to connect to the
database. This information is provided in the form of a connection string.


You can include authentication information for the database in the data source connection by
creating a signon. Users need not enter database authentication information each time the connection
is used because the authentication information is encrypted and stored on the server. The signon
produced when you create a data source is available to the Everyone group. Later, you can modify
who can use the signon or create more signons.

New Data Sources


You can create data sources in the portal or in Framework Manager. Because they are stored on
the server, data sources appear in both places, regardless of where they were created. Existing data
source connections can be edited only in the portal.
If you are an administrator, you can set up all required data sources before models are created in
Framework Manager so that all connections are available in the Framework Manager Import
wizard.
Data sources are stored in the Cognos namespace and must have unique names. For example, you
cannot use the same name for a data source and a group.

Physical Connections
A data source defines the physical connection to a database. A data source connection specifies the
parameters needed to connect to a database, such as the location of the database and the timeout
duration.

Note: The schema name in the connection string for an Oracle database is case-sensitive. If the
schema name is typed incorrectly, you cannot run queries.

Required Permissions
Before creating data sources, you must have write permissions to the folder where you want to save
the data source and to the Cognos namespace. You must also have execute permissions for the Data
Source Connections secured feature.

Steps to Create a Connection


1. Open Cognos Connection.

2. In the upper-right corner, click Launch, Cognos Administration.

3. On the Configuration tab, click Data Source Connections.

4. Click the New Data Source button.

5. In the name and description page, type a unique name for the connection and, if you want, a
description and screen tip, and then click Next.

6. In the connection page, click the type of database to which you want to connect, select an
isolation level, and then click Next.

Note: For SAP BW data sources, the isolation level is read-only.

Note: For Cognos Planning - Contributor 7.3 data sources, select Cognos Planning - Series 7.
For Cognos 8 Planning - Contributor data sources, select Cognos Planning - Contributor.


The connection string page for the selected database appears.

7. Enter any parameters that make up the connection string, and specify any other settings, such
as a signon or a timeout.
One of the following options may apply depending on the data source to which you are
connecting:
● If you are connecting to a Cognos cube, you must enter the full path and file name for the
cube. An example for a local cube is C:\cubes\Great Outdoors Company.mdc. An example
for a cube on your network is \\servername\cubes\Great Outdoors Company.mdc.

● If you are connecting to a password protected PowerCube, click Cube Password, and then
type the password in the Password and Confirm Password boxes.

● If you are connecting to an ODBC data source, the connection string is generated from the
name you enter in the ODBC data source box and any signon information. The data source
name is an ODBC DSN that has already been set up. You can include additional connection
string parameters in the ODBC connect string box. These parameters are appended to the
generated connection string.

● If you are connecting to a Microsoft Analysis Services data source, select an option in the
Language box. If you selected Microsoft Analysis Services 2005, you must specify an instance
name in the Named instance box because you can have more than one instance on each server.

● If you use a Microsoft Active Directory namespace and you want to support single signon
with Microsoft SQL Server or Microsoft Analysis Server, select An External Namespace,
and select the Active Directory namespace. For more information about configuring an
Active Directory namespace, see the Cognos 8 Planning Installation and Configuration
Guide.

● If you selected Cognos Planning - Series 7, you must specify the Planning Administration
Domain ID and the namespace.

● If you selected Other Type as the data source type, you must build the connection string
manually.

Tip: To test whether parameters are correct, click Test. If prompted, type a user ID and password
or select a signon, and then click OK. If you are testing an ODBC connection to a User DSN,
you must be logged on as the creator of the DSN for the test to succeed.

8. Click Finish.

The data source appears in Data Source Connections on the Configuration tab, and can be
selected when using the Import wizard in Framework Manager.


Create a Framework Manager Project and Import Metadata


A project is a set of models, packages, and related information for maintaining and sharing model
information.

Steps
1. From the Windows Start menu, click Programs, Cognos 8, Framework Manager.

2. In the Framework Manager Welcome page, click Create a new project, and specify a name and
location.
You can add the new project to a source control repository. For more information, see the
Framework Manager Help.

3. In the Select Language page, click the design language for the project.

You cannot change the language after you click OK, but you can add other languages.

Note: If an SAP BW server does not support the selected language, it uses the content locale
mapping in Cognos Configuration. If a mapping is not defined, Framework Manager uses the
default language of the SAP BW server.

4. In the metadata source page, select Data Sources.

5. Select a data source connection and click Next.


If the data source connection you want is not listed, you must first create it (p. 161).

6. Select the check boxes for the tables and query subjects you want to import.

Tip: For usability, create a package that exposes only what is required.

7. Specify how the import should handle duplicate object names.


Choose either to import and create a unique name, or not to import. If you choose to create a
unique name, the imported object appears with a number. For example, you see QuerySubject
and QuerySubject1 in your project.

8. If you want to import system objects, select the Show System Objects check box, and then select
the system objects that you want to import.

9. Specify the criteria to use to create relationships and click Import.


For more information, see the Framework Manager User Guide.

10. Click Next and then Finish.

Note: You save the project file (.cpf) and all related XML files in a single folder. When you
save a project with a different name or format, ensure that you save the project in a separate
folder.


Create and Publish the Cognos Package


You create and publish a package to make the data available to Cognos 8 Planning.

Steps to Create a Package


1. Click the Packages folder, and from the Actions menu, click Create, Package.

2. In the Provide Name page, type the name for the package and, if you want, a description and
screen tip, and click Next.

3. Specify whether you are including objects from existing packages or from the project and then
specify which objects you want to include.

4. Choose whether to use the default access permissions for the package:

● To accept the default access permissions, click Finish.

● To set the access permissions, click Next, specify who has access to the package, and click
Next.

You can add users, groups, or roles. See the Framework Manager User Guide for more
information.

5. Move the language to be included in the package to the Selected Languages box, and click
Next.

6. Move the sets of data source functions you want available in the package to the Selected function
sets box.
If the function set for your data source vendor is not available, make sure that it was added to
the project.

7. Click Finish and choose whether to publish the package.

Steps to Publish a Package


1. Select the package you want to publish.

2. From the Actions menu, click Package, Publish Packages.

3. Choose where to publish the package:

● To publish the package to the report server, click Cognos 8 Content Store. Click Public
Folders to publish the package to the public folder. You can create folders in the public
folder also. Click My Folders to create your own folder and publish the package to it.

● To publish the package to a network location, click Location on the network.

4. To enable model versioning when publishing to the Cognos 8 Content Store, select the Enable
model versioning check box and type the number of model versions of the package to retain.

Tip: To delete all but the most recently published version on the server, select the Delete all
previous model versions check box.


5. If you want to externalize query subjects, select the Generate the files for externalized query
subjects check box.

6. By default, the package is verified for errors before it is published. If you do not want to verify
your model prior to publishing, clear the Verify the package before publishing check box.

7. Click Publish.
If you chose to externalize query subjects, Framework Manager lists which files were created.

8. Click Finish.

Working with SAP BW Data


The SAP BW model is an OLAP source and is optimized for reporting rather than for high volume
access that is sometimes required for planning activities. To efficiently access data for Cognos 8
Planning, create a detailed fact query subject that will access fact data at a detail level suitable for
use with Cognos 8 Planning.

Tip: If you have OpenHub, you can use it to generate a text file or database table from SAP BW.
You can then manually create a Framework Manager model and Cognos Package from the tables
and then import the package into Planning using an Administration Link, D-Link, or D-List import.
For Cognos products to be able to access SAP BW as a data source, the user accounts used to connect
to SAP must have specific permissions. These permissions are required for the OLAP interface to
SAP BW and are therefore relevant to both reporting and planning activities.
For more information about guidelines for working with SAP BW data, see the Framework Manager
User Guide.
For more information about access permissions for modelling and reporting access, see the Cognos
8 Planning Installation and Configuration Guide.
For information about setting up your environment to work with SAP BW and Planning, see the
Cognos 8 Planning Installation and Configuration Guide.

Create a Detailed Fact Query Subject


The detailed fact query subject is a model query subject based on database query subjects and
calculations. The relational folder is where the SAP star schema is imported to. The detailed fact
query subject is the logical representation of the fact table and the query subjects in the relational
folder are the physical representation of the SAP fact table. We recommend that you do not modify
the contents of the relational folder, unless advised by customer support.

Steps
1. In Framework Manager, click the Key Figures dimension.

2. From the Tools menu, click Create Detailed Fact Query Subject.

3. In the metadata wizard, select the data source you want to use.

You can create a new data source by clicking the New button and specifying SAP BW for
Planning as the type.


4. Click OK.
Framework Manager creates a model query subject named Detailed_Key_Figures and a separate
folder containing references to the relational objects. The references to the relational objects
are the physical layer.

5. Create the package.

Note: Packages that contain the Detailed_Key_Figures query subject are accessible or supported
by the report authoring tools, such as Query Studio and Report Studio, only if those objects are
hidden by doing the following:

● In the Define Objects screen, click the down arrow and choose Hide Component and
Children.

● Click Detailed_Key_Figures and Relational_Objects.

6. Publish the package.

Recommendation - Query Items


It is a common requirement to concatenate two or more fields from a data source when creating
D-Lists in Analyst. When importing D-Lists from a Cognos Package, you perform the concatenation
in Framework Manager by creating a new query item. The query item can then be included in the
published package and imported into D-Lists and used in D-Links.
When working with SAP BW, you can use a concatenated query item to build a D-List in Analyst.
However, when you create a link, either in Analyst or Contributor, then the concatenated query
item cannot be used. Instead, use one of the underlying query items for the source and use a substring
on the target dimension.
When applying a filter in Framework Manager, you specify how it is used by selecting a usage
value. To see the filtered data when publishing a package in Planning, select Always or Optional.
See the Framework Manager User Guide for more information.

Recommendation - Hierarchy
These recommendations will help improve performance when working with the SAP BW import
process.
● Use manageably sized dimensions when importing SAP BW data. This is because Planning relies
on lookups against the SAP BW hierarchies during the import process, so larger hierarchies
slow down the import process. This may require modelling in SAP BW since it is at a higher
level of detail than the Planning process requires.

● Where possible, take data from the lowest level in the BW hierarchies. This is because data is
taken from the fact table level and aggregated to the level selected in the Planning link. The
further up the hierarchy that members are mapped into Planning, the more aggregations are
needed to be recreated during the import process. This may require modelling in SAP BW since
it is at a higher level of detail than the Planning process requires.


Recommendation - Hiding the Dimension Key Field


When working with SAP BW data, the Dimension Key field for any dimension should be hidden
in the Model (not the package) - both for the OLAP and Detailed Fact Query Subject access before
the package is published. It is not intended for direct use from within Cognos Planning.

Working with Packages


To avoid a high number of Query Subjects and Query Items when working with and creating
packages in Planning, you should make them as specific as possible so they contain only objects
that are useful to a Planning user.
Using a naming convention may also be useful, like using Planning as a prefix for your packages.
For advanced users, you could also create a single package that holds all of the source objects.

Troubleshooting Detailed Fact Query Subject Memory Usage


When executing administration links that use the Detailed Fact Query Subject, one of the internal
Cognos components builds temporary files in the Temp folder under the Cognos installation
directory. The temporary files are deleted after the query completes, but these files can be large,
depending on how much data is being retrieved. If the drive that contains the Temp folder does not
have enough space to contain the temporary files, the query will fail and you will receive the following
error:
Error Message: DM-DBM-0402 COGQF driver reported the following:~~~~COGQF failed to execute
query - check logon / credential path~~~~DM-DBM-0402 COGQF driver reported the
following:~~~~RQP-DEF-0177 An error occurred while performing operation 'sqlOpenResult'
status='-28'.~~UDA-SQL-0114 The cursor supplied to the operation "sqlOpenResult" is
inactive.~~UDA-SOR-0005 Unable to write the file.~~~~~~DM-DBM-0402 COGQF driver reported the
following:~~~~

Make at least 2 MB of disk space available on the installation location’s drive. If you still receive
the error, make more space available.

Deploying the Planning Environment and Viewing the Status of Deployments
You can export or import complete models, macros, administration links, or Analyst libraries with
or without the associated data from Cognos Planning - Contributor or Analyst. You deploy a model
by exporting it from one environment and importing it into another.

Export a Model
You can export a model structure, with or without data, to move between development, test, and
production environments or to send a model with or without data to Cognos Customer Support.


When you export a model, Cognos 8 Reports, Events, or Framework Manager Models associated
with the Planning Network are not exported.
The model structure and data are exported to the deployment directory location set in Cognos
Configuration.
You can back up an application by exporting it, but we do not recommend this as a substitute for
database backup.

Steps
1. From the Tools menu, click Deployment and then click one of the following:

● Export Model

● Export Model and Data

2. In the Welcome to the Export Deployment Wizard page, click Next.

3. Choose the objects you want to export and click Next.


Selecting a top level object will select all the children of that object.

4. Type a new name for the export, or choose a name from existing deployments and click Finish.

5. Click OK.

The export request starts on the server.

You can view the progress of the export in the Monitoring Console on the Planning Network
Transfer tab.
To transfer the deployment to a new environment, move the export folder from the source
deployment directory location to the deployment directory location for the target environment.
Compress the export folder to transfer your export to Cognos support.
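
Any archiving tool can be used to compress the export folder. The following Python sketch shows
one possible approach; the deployment directory and export name are placeholders rather than values
read from your configuration.

# Sketch only: zip a deployment export folder for transfer. Both values are
# placeholders; use your own deployment directory and export name.
import os
import shutil

deployment_dir = r"C:\Program Files\cognos\c8\deployment"
export_name = "MyModelExport"

# Creates MyModelExport.zip in the deployment directory, containing the
# export folder of the same name.
shutil.make_archive(os.path.join(deployment_dir, export_name), "zip",
                    root_dir=deployment_dir, base_dir=export_name)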

Import a Model
You can import a model or object to move an application into a test or production environment.
Models for import must be in the deployment directory location set in Cognos Configuration.
You can import macros, administration links, applications, Analyst libraries, and security rights
from the source Planning Content Store that were exported during a previous deployment. You
can select exported objects for import or import an entire model. If a model was exported with
data, then the data will be used during an import.
Administration links and macros can be imported even if they reference an application that is not
in the target destination. If imported with a related application, macros and administration links
are automatically mapped to the target application.
Through the import process, you can change the target datastore and security for your model. The
Deployment wizard will attempt to map security settings for users, groups, and roles. If you are
using different namespaces or changing user, group, or role mappings, you may have to complete
some of the mapping manually.


The security settings for the source will be applied to the user, group, or role you map to. Source
users, groups, and roles can be mapped together or individually to any single target user, group, or
role. When mapping a number of users, groups, or roles, the target maintains the greatest level of
security privileges. Any unmapped items are mapped to Planning Rights Administrator and do not
appear individually as users, groups, or roles in the target.
Application IDs and object names must be unique within a planning network. During the import
process, if duplicate names or IDs are found, you are warned. If you proceed with the import
without changing names and IDs, then any existing applications or objects with common names
or IDs will be overwritten.
To import Contributor applications, you must have at least one configured datastore and the
Planning content store must be added to a job server. A datastore is not required to import Analyst
libraries, macros, or administration links.

Steps
1. From the Tools menu, click Deployment and then click Import.

2. In the Welcome to the Import Deployment Wizard page, click Next.

3. In the Deployment Archive Name page, select a deployment to import and click Next.

4. In the Import Object Selection page, select the objects for import and click Next.
Selecting a top level object selects all the children of that object.

5. In the Namespace Mapping page, select the target namespace for each source namespace, and
click Next.

6. The User Group Role Mapping page contains a tab for each namespace mapping. For each
mapping, assign the correct target user, group, or role to each source by clicking the ellipsis
(…) button.

7. On the Select entries (Navigate) page, in the available entries directory, click the namespace
that contains the target user, group, or role.

8. From the selected entries, select the target user, group, or role and click OK.

9. Complete the user, group, or role mapping for each Namespace mapping. Once you have
completed mapping each source user, group, or role to the target, click Next.

10. For each application or library with a warning next to it in the Object Mapping page, click the
ellipsis (…) button to change the configuration settings.

11. On the Configuration settings page, type new names, IDs and locations of files, and click OK.
For an Oracle or DB2 datastore, you must identify tablespaces for data, indexes, blobs, and a
temporary tablespace.

12. To avoid overwriting macros or administration links, for each object with a warning next to
it in the Object Mapping page, type a new name for the target object directly into the target
column.


13. Optionally, if you are importing a model without data, select the option to automatically go
to production with all imported applications during the import process.

14. If you are overwriting objects, you are prompted to confirm the import. To continue, click
Yes.

15. Click Finish.

16. Click OK.

The import request starts on the server.

You can view the progress of the import in the Monitoring Console on the Deployments tab.
Once the transfer is complete, refresh the console and add any newly created applications to a job
server cluster, see "Add Applications and Other Objects to a Job Server Cluster" (p. 55).

Tip: During the import process, some application options are excluded from the transfer because
they do not apply to the new application location, for example, display names, backup location,
and publish options are excluded. If these options are required, they can be included by modifying
the AdminOptions to exclude during Limited transfer or AdminOptions to exclude
during Full transfer resource values in the <install_location>\bin\epPNHelperResource.xml
file.

View the Status of Existing Deployments


If the export or import deployment request fails, you can view the errors. You can also view the
status of export and import processes currently running on the server through the Monitoring
Console.

Steps
1. From the Tools menu, click Deployment and then click View Status of Previous Exports and
Imports.

2. In the Welcome to the View Existing Deployment Wizard page, click Next.

3. Select the request and click Next.

The log of the deployment request appears. Errors and warnings are shown for failed requests.

Troubleshooting Out of Memory Exception When Exporting During a Deployment


You may receive an "Out of Memory" message when doing an export during a deployment.
On all machines that are running the PlanningAdminConsole service, modify the resource memory
allocation for Java Command Line. From the <install_location>\cognos\c8\bin folder, open the
epPNHelperResources.xml file and lower the memory usage in the following resource line as follows:

Original
<Resource ID="Java command-line options"> <![CDATA[ -Xms1024m -Xmx1024m ]]>


Modified
<Resource ID="Java command-line options"> <![CDATA[ -Xms256m -Xmx256m ]]>

Importing Text Files into Cubes


To import data into cubes, follow this process:
❑ Create the source file (p. 172).

❑ Select the cube and the text file to load into the cube (p. 173).

❑ Load the data into the datastore (p. 174).

❑ Prepare the import data blocks (p. 174).

❑ Run the Go to Production process (p. 239).

Creating the Source File


If the local import file name contains any special characters, such as á or %, the Administration
Console removes or replaces them when determining the remote file name on the administration
server. You can avoid this by specifically configuring the Import Options setting in the Admin
Options window (p. 77).
The file must be in the following format:
● tab separated

● dimensions in the same order as in the Cognos 8 Planning - Analyst cube

● value comes last

● no double quotation marks

One million rows per e.List item per cube is a good size limit.
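
A script can produce a file in this format. The following Python sketch is illustrative only; it writes
two rows in the layout of the sample that follows, and the output file name is a placeholder.

# Sketch only: write an import file that is tab separated, lists the
# dimension items in the same order as the Analyst cube, puts the value
# last, and contains no double quotation marks.
rows = [
    ("Budget Version 1", "Asia Pacific", "Sales", "Nov",
     "613300 Communications: mobile phone", 670),
    ("Budget Version 1", "Asia Pacific", "Sales", "Dec",
     "613100 Communications: line charges", 340),
]

with open("corporate_expenses_import.txt", "w", encoding="utf-8") as out:
    for row in rows:
        out.write("\t".join(str(field) for field in row) + "\n")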

Import Data Source File Sample


The following sample source file is an extract from a tab separated text file that can be used to
import data into the Corporate Expenses cube in the sample go_expenses_contributor library.

Budget Version 1   Asia Pacific   Sales   Nov   613300 Communications: mobile phone   670
Budget Version 1   Asia Pacific   Sales   Nov   613500 Communications: telephone equipment   370
Budget Version 1   Asia Pacific   Sales   Nov   615100 Supplies: computer supplies   680
Budget Version 1   Asia Pacific   Sales   Nov   615300 Supplies: office supplies   300
Budget Version 1   Asia Pacific   Sales   Nov   615400 Supplies: fax & photocopier   350
Budget Version 1   Asia Pacific   Sales   Nov   615500 Supplies: catering   1280
Budget Version 1   Asia Pacific   Sales   Nov   618200 Services: legal   14000
Budget Version 1   Asia Pacific   Sales   Nov   619500 Services: recruitment   8000
Budget Version 1   Asia Pacific   Sales   Dec   613100 Communications: line charges   340
Budget Version 1   Asia Pacific   Sales   Dec   613200 Communications: long distance   450
Budget Version 1   Asia Pacific   Sales   Dec   613300 Communications: mobile phones   670
Budget Version 1   Asia Pacific   Sales   Dec   613500 Communications: telephone equipment   370
Budget Version 1   Asia Pacific   Sales   Dec   615100 Supplies: computer supplies   680
Budget Version 1   Asia Pacific   Sales   Dec   615300 Supplies: office supplies   300
Budget Version 1   Asia Pacific   Sales   Dec   615400 Supplies: fax & photocopier   350
Budget Version 1   Asia Pacific   Sales   Dec   615500 Supplies: catering   1280
Budget Version 1   Asia Pacific   Sales   Dec   618200 Services: legal   14000

Use the preview in the Import Data Copy tab to check that you have the source data in the correct
format.

Select the Cube and Text File to Load into the Cube
The copy process copies the import data file to a file location on the administration server and
specifies the cube that the data is to be loaded into. You can also check that your source file is in
the format expected by the datastore. The administration server destination is specified in Admin
Options (p. 77), but should be modified only by a database administrator.
If your import files are large, it is quicker to manually copy the files to the administration server
destination. If you do this, you must follow the steps described below, but do not click Copy. As
soon as you have specified a valid file and location, the Administration Console registers which file
is to be loaded into a particular cube. You can process only one import file at a time.

Steps
1. Click Development, Import Data for the appropriate application.

2. On the Copy tab, in the Select cube box, click the cube to import into.

3. In the Select text file to load box, enter the text file and its location.

4. In preview, check that the order of columns in the text file matches the order expected by the
datastore.
The header row gives the names of the dimensions taken from the cube, and the final column
(importvalue) contains the value. The rows below the heading contain the data from the text
file.


5. If the data appears to be in the wrong columns, you should rearrange the column order in the
text file and repeat steps 1-5.

6. Unless you want to manually copy the files, click Copy and then repeat steps 2 to 5 until you
have selected all the required cube and text file pairings.

The next step is to load the data.

Load the Data into the Datastore


Load the data into staging tables in the datastore, one table per cube. An import table is created
for each cube during datastore creation. There is a column for each dimension, plus a value column.
If new cubes or new dimensions are added to the Analyst model after an application is created, new
import tables or table columns are created when a synchronize is run and saved.
The cube name associated with the import table is stored in the applicationobject table. The tables
are named im_cubename. Errors are stored in ie_cubename.
This process is not multi-threaded.
You can also automate the loading of import files (p. 203).

Steps
1. In the Import Data window for the appropriate application, click the Load tab.

2. Select the Load check box for each cube that you want to load data for.
Row Count indicates how many rows are currently in the import table from previously loaded
data.

3. Click Delete existing rows if you want previously loaded data to be removed. Otherwise, newly
loaded rows whose names match previously loaded rows replace them, and previously loaded
rows that are not matched remain in the staging table.

4. Click Load.
The next step is to prepare the data (p. 174).
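Purely as an illustration, you can confirm how many rows reached the staging tables by querying them directly; the im_cubename and ie_cubename names follow the convention described above, while the ODBC connection details and cube name below are assumptions for a SQL Server datastore.

import pyodbc

# Assumed connection details; adjust for your own datastore.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=planningdb;DATABASE=contributor_app;Trusted_Connection=yes;"
)
cursor = conn.cursor()
# im_<cubename> is the import staging table; ie_<cubename> holds rejected rows.
for table in ("im_corporateexpenses", "ie_corporateexpenses"):
    cursor.execute("SELECT COUNT(*) FROM " + table)
    print(table, cursor.fetchone()[0])
conn.close()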

Prepare the Import Data Blocks


To prepare the import data blocks, the data is taken from the import staging tables per cube, per
e.List item. The import staging table is cleared. The calculation engine validates the data and converts
it into import blocks, and errors are written to ie_cubename.
The import data block contains just the data required for an individual e.List item. Data for other
e.List items, data targeting No Data cells or formula items, and data not matching any items are
removed.
The process of converting data into import blocks uses the job architecture to run on multiple
computers and processors. It does not conflict with other online jobs for the application. Progress
logs are shown.


If you are importing a large file, you can run a test to check that the import file is valid and avoid
time-consuming problems. When you test, a prepare job is created for only the first e.List item for the
selected cube in the import table. Any errors are listed, such as extra dimensions, columns in the
wrong order, and invalid e.List items.
If you have more than one Planning Administration Console service, you must load data into Import
Tables (im_cubename) prior to running a Prepare Import job.
You can also automate the preparing of import files (p. 203).

Prepared Data Blocks


The Prepared Data Blocks tab displays the e.List items that have import data blocks prepared. You
must wait for the prepare import job to run before there are any prepared data blocks. The number
of data cells prepared per cube is listed for each e.List item.

Deleting the Import Queue


If you decide you do not want to proceed with the import of data, you can click the Delete import
queue button.

Go to Production Process
If the Prepare Import process was not run, no data is imported when Go to Production is run. To
prepare import data blocks, you must cancel Go to Production and return to the Import window.

Steps to Test the Import File


1. In the Import Data window for the appropriate application, click the Prepare tab.

2. In the Prepare column, click the cubes that you want to test.

3. Click Test.

Any errors are listed in the Import errors pane and a prepare import job is created. You can
view the progress of this job in the Job Management window, or in the Monitoring
Console (p. 50). If the test is successful, you can run prepare for all the import data. This
overwrites test data.

Steps to Prepare the Import File


1. In the Import Data window for the appropriate application, click the Prepare tab.

2. In the Prepare column, click the cubes you are importing data into.

3. If you want import blocks created for all e.List items, and not just the e.List items that you are
importing data into, click the Zero Data option.
This zeros existing data in the cube before import takes place.

4. Click Display Row Counts to show the number of rows in the text file being imported.

5. Click Prepare.


If the Admin Option Display warning message on Zero Data is set to yes (p. 77), a warning
message is displayed if the Zero Data option is selected. This is to prevent accidental setting of
this option.

A prepare import job is created. You can view the progress of this job in the Job Management
window.
When you perform a Go To Production, the prepared data will be imported.

Chapter 9: Synchronizing an Application

You use synchronize to update all cubes in an application when the underlying objects in Cognos
8 Planning - Analyst change. Changes include renaming dimensions and adding, deleting, or renaming
dimension items. When you synchronize an application, you re-import the definition of the
cubes in the application from Analyst. Synchronize also brings in new data for assumption cubes
(that is, cubes that do not contain the e.List).
Before making changes to an Analyst model, you should back up the library.
Synchronizing an application means that all e.List items are reconciled, see
"Reconciliation" (p. 52), after the Go to Production process is run. We recommend that you back up
the datastore before synchronizing.

Changes that Result in Loss of Data


The structure of the library is very important to a Contributor application. If changes are made to
the library in Analyst such as deleting a dimension item, and the application is synchronized, all
data associated with that dimension item is lost.
If there is a possibility that data may be lost, you may get a warning message similar to the following:
"Destructive Synchronize detected. If you save the changes, this may result in the loss of data."
If you receive this message, you should back up your application datastore before proceeding.
A synchronize is destructive (that is, results in loss of data) in the following circumstances:
● cube dimensions were added

● cube dimensions were deleted

● cube dimensions were reordered

● detail items from a dimension were deleted

● detail items from a dimension were changed to calculated

See "How to Avoid Loss of Data" (p. 178) for more information.
If you are automating the synchronize process, when a destructive synchronize is detected, you can
choose to stop the process, or continue.


Synchronizing an Application
You can synchronize an application with Analyst to make sure all cubes that are shared with Analyst
are updated.

Steps
1. Select Development, Synchronize with Analyst for the appropriate application. A check is made
to see if you are logged on with appropriate rights. If you are not, you are prompted to log on
via the Cognos Common Logon.

2. If the library or library name has changed, enter or browse for the new name in Library.
The Administration Console checks to see if the e.List selected when creating the application
still exists in the library. If it does not, a warning message appears.

3. Click the Synchronize button to begin the synchronization process.


A list of objects that could change is displayed, for example: which cubes were added, which
cubes were removed, which cubes had their dimensions changed, and whether dimensions were
added, deleted, or substituted.

Click Advanced to see more detailed information.


This displays detailed model information about what has changed, for more information, see
"Model Changes Window" (p. 246).

4. To save the synchronization changes, click Save.


The synchronization does not take place until Go to Production is run. If you decide not to go
ahead with the synchronize, click another part of the Administration Console without saving.
If you save, and then subsequently decide to cancel the synchronization, you can click the Reset
development option. This discards all changes made to the application since the last time the
Go to Production process was run.
You can automate this process; for more information, see "Synchronize" (p. 203).

Generate Scripts
If the Generate Scripts option is set to Yes in Admin Options ("Generate Scripts" (p. 178)), a check
is made to see whether the datastore needs to be restructured, for example, whether columns in tables
must be added or deleted. If so, a script named Synchronize script.sql is generated when synchronize
is run. This script must be run by a database administrator, and it changes the columns in the
ie_cubename table in the same way that synchronize does.

How to Avoid Loss of Data


There are two ways of avoiding loss of data when you add, delete, reorder or substitute dimensions.
The method you choose depends on the size of the e.List.
The first method is to publish the production data using the View Publish Layout publish option.
The procedure for this is listed in the steps below.


An alternative method is to run a Contributor > Contributor link using Analyst. This is a simpler
process, but should only be used on applications with a small e.List.
Data is moved from the Production version of a Contributor source into the Development version
of the Contributor target via Analyst. Once this is complete, you must run the Go to Production
process.

Steps to publish the production data


1. Make the Analyst model change.

2. Synchronize the application.

3. Click the Set offline button to take the system offline.


This takes it offline on the Web client and prevents data changing after publish begins.

4. Publish the data with no data dimension selected (p. 276).

5. Transfer the data from the ev_cubename view into the import staging tables, using a tool such as DTS,
depending on your datastore (a sketch of this transfer follows these steps). Remember to reorder or
change the columns as required.

6. Run Prepare Import (p. 174).

7. Run the Go to Production process (p. 239).

8. Click the Set online button.
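Step 5 is normally done with a bulk tool such as DTS. Purely as a sketch of the same idea, the following copies rows from a publish view into an import staging table with pyodbc; the connection details and cube name are assumptions, and you must confirm that the column order matches what the staging table expects.

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=planningdb;DATABASE=contributor_app;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# Read the published rows from the ev_<cubename> view (example cube name).
cursor.execute("SELECT * FROM ev_corporateexpenses")
rows = cursor.fetchall()

# Reorder or transform the columns here if the view layout differs from the
# layout of the im_<cubename> staging table, then load and run Prepare Import.
if rows:
    placeholders = ", ".join("?" for _ in rows[0])
    cursor.executemany(
        "INSERT INTO im_corporateexpenses VALUES (" + placeholders + ")", rows
    )
    conn.commit()
conn.close()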

Example Synchronization
In the example shown below, in the first item under Restructured Cubes, the order of the dimensions
has changed: the dimension e.List has moved from fourth to second. The preview shows both the new
order of the dimensions and the old order.


If you look at the expanded Product Price and Cost cube, you can see that the dimension e.List was
added to it, and in Compensation Assumptions, the dimension e.List was removed.

Under Dimensions, you can see that a dimension item (18V Trim Saw Drill/Driver Kit) was deleted
from Indoor and Outdoor Products.

Click Advanced to see more detailed information. This provides a detailed description of the
differences between the previous Analyst model and the current model. It lists the cubes and
dimensions that have changed and when you click an item, a detailed breakdown of the changes is
provided. Typically, this information is used for debugging purposes.

Advanced - Model Changes


You can display the Model Changes window to see detailed information that is typically used by
technical support and development in problem solving. This report may take some time to generate.
If you have problems, you may be asked to send a file containing this information.

Steps
1. Click the Advanced button on the Synchronize window, or during Go to Production, by clicking
the Advanced button on the Model Changes window.

2. In the empty box at the bottom of the window, enter a file location and name and click Save.
If you compare the information in the Synchronize preview with the information displayed by
clicking Advanced, you will see that the Model Changes window lists one extra cube (Expenses)
and an extra section named Common Links. Common Links contains details of a D-Link that was
changed as a result of the changes to the Compensation Assumptions cube. The Expenses cube is
listed under common cubes because it is the target of the changed link.


Synchronize Preview

Advanced--Model Changes

Chapter 10: Translating Applications into Different Languages

The Translation branch enables you to translate, from an existing Cognos 8 Planning - Contributor
application, the strings that will appear in the Web client. The translated strings are held in the
Contributor datastore along with the rest of the application and are streamed to the Web client
when the users connect to the application. In addition to creating new language translations, you
can create custom text translations.
There are three roles involved in the translation cycle:
● The model builder who creates the Cognos 8 Planning - Analyst model using Analyst.

● The administrator who uses the Administration Console to create and manage the Contributor
application.

● The translator who translates the Contributor application.

If a translation is upgraded from Cognos Planning version 7.3 or an earlier version, there may be
additional product strings or incompatible strings that require translation. New product strings
and incompatible strings introduced during an upgrade are not automatically filled with the base
language.

Tip: Client extensions must be configured and tested before starting a translation.
When changes, including renaming dimensions or adding dimension links, are made to the Analyst
model, the Contributor application must be synchronized and go to production must be run to
incorporate the changes into the application. Any changes to the Analyst model that the Contributor
application is based on may require changes to the translation.
When the translation is opened in the Translation branch, changes to the base strings are displayed;
however, you cannot see which strings have changed. To see which base strings have
changed, export the translation from the Content Language tab both before and after
synchronizing the application, giving the exports different names. You can then compare
the two files and see what has changed.
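For illustration, assuming both exports were made as tab-delimited text with column headers in the String ID, Context, Base, Translation, Hint layout, a short script can list the base strings that were added or changed; the file names are examples.

def load_base_strings(path):
    # Map String ID to Base string from a tab-delimited translation export.
    strings = {}
    with open(path, encoding="utf-8") as f:
        next(f)  # skip the header row
        for line in f:
            fields = line.rstrip("\n").split("\t")
            if len(fields) >= 3:
                strings[fields[0]] = fields[2]
    return strings

before = load_base_strings("content_before_sync.txt")
after = load_base_strings("content_after_sync.txt")
for string_id, base in after.items():
    if string_id not in before:
        print("new base string", string_id, base)
    elif before[string_id] != base:
        print("changed base string", string_id, base)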

Assigning a Language Version to a User


There are two ways of assigning a language version to a user.
● By user properties defined in Cognos Configuration
If a translation exists in the language specified in the user’s product language preference
properties, the user sees the application in this language.
Use this method for a straightforward translation of the model text strings and application
strings from the base language into another language.


If a user selects a preferred product language that is not a Cognos 8 Planning tier 1 language
and no translation exists, the Contributor Web client will use the model base language for
content strings and the application base language for product strings.
The application base language is configured in the Contributor Administration Console, Admin
Option, for each application.

● By user, group, or role


When creating a translation, the base language of the translation will default to the base language
of the application. The base language of the application defaults to the installation language
of the Contributor Administration Console.
You can assign a translation to a user, group, or role in the Translation branch. This method
is necessary for languages that are not supported by Cognos Configuration, or if you want to
create a translation that takes account of local differences. For example you may have European
French and French Canadian versions, or US English and UK English versions.

Translate the Application


You can create a new translation in the Contributor Administration Console.
Before you can translate the application, you must have translation access rights for the application.

Steps
1. In the Contributor Administration Console, expand the application to be translated so that
you can see the Translations branch.

2. Right-click Translations and click Create New Translation.

The Create New Translation dialog box appears. The system locale tells you which bitmap
fonts and code pages are defaults for the system that the Administration Console is running
on. This should be the same as the Translation Locale; otherwise, the translation may not display
properly.

3. Type a translation name.

4. Select By user specified preference for Product Language, or By User, Group or Role.

Use By user specified preference for Product Language if you are creating a translation that
uses one of the supported locales. Users who have this language specified in their properties
get the translated version of the application, unless they are members of a group or role that
is assigned to a different translation.

Use By User, Group or Role to create a translation in a language that applies only to a specified
user, group, or role.

5. To select users, groups, or roles, click the ellipsis (…) button.

6. Select the required namespace and then the necessary users, groups, or roles, and click the add
button.


7. Click OK.

8. Select a translation locale.


This tells the operating system which code page the Contributor application uses when running
in the new language. A code page ensures that the correct character sets and keyboard layouts
are associated with the Contributor application so that it will appear properly on the user's
computer. For more information, see "System Locale and Code Pages" (p. 190).

9. Select the translation base language from English, French, German, Japanese, and Swedish.

10. Click Create.


The translation is added to the datastore and the strings from the Contributor application are
loaded into the Translations branch.

11. Open the new translation. To do this, click the name of the translation under Translations.
You can now begin translation.

Translate Strings Using the Administration Console


There are two parts to the translation: content language and product language.
● Content language relates to the Analyst model specific information which includes D-Cube
names, D-List item names, e.List items names, and model name.

Note: When you create a translation with Japanese as the base language, the content strings
are not translated. Analyst does not support Japanese characters. To use Japanese in the
Contributor Web Client, you must translate the content strings.

● Product language relates to generic strings such as button names, status bar text, error messages,
menu names, and menu item names. Product Language base strings for a language are the
same for all models.

The Content Language and Product Language tabs separate the translation items into categories.
The total category on the Product Language tab is used when a multi-e.List view is displayed for
all contributions. There are a number of categories that appear on the Product Language tab if
client extensions are installed. These allow you to translate the buttons, wizards and any messages
that the user might see.

The strings are color coded to indicate the status of the string. The colored squares in the Category
column have the following meanings:
● Blue - the translated string cell has not changed in this session.

● Red - the translated string cell has changed in this session.

● White - the translated string cell is empty.

If strings on the Content Language tab are left blank, in the Web application they will default to
the base string. If strings on the Product Language tab are left blank, they will appear blank in the
Web application.


If you do not have the correct system locale set on your computer, we recommend that you export
the file in .xls format, and use Excel to translate the strings. This ensures that the fonts appear
correctly when you are translating. For more information, see "Export Files for Translation" (p. 188).

In Product Language, some of the translatable strings contain parameters that must not be changed,
for example:
%1:currently annotating user% has been annotating %2:e.List
item% since %3:edit start time%.\r\nIf you continue and annotate
then %1% will be unable to save any changes.\r\nDo you still wish
to annotate %2%?

The following are parameters:


● %1:currently annotating user%

● %2:e.List item%

● %3:edit start time%

You cannot add, remove, or edit parameters. However, they can be moved or repeated within the
translation string.
These are some of the formatting codes that are used:

\r carriage return

\n new line

\t tab

For example:
Unable to create email :-\r\n\tTo: %1:to%\r\n\tCc: %2:cc%\r\n\tSubject:
%3:subject%\r\n\tBody:
%4:body%

Text in message boxes wraps automatically so it is not always necessary to use formatting codes.
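Because parameters must survive translation unchanged, it can be worth checking translated strings mechanically. The following sketch assumes a tab-delimited Product Language export with column headers in the String ID, Context, Base, Translation, Hint layout; the file name is an example.

import re

# Parameters appear as %1:description% in the base string and may be reused
# in the short form %1% in the translated string.
PARAM = re.compile(r"%(\d+)(?::[^%]*)?%")

with open("product_language_export.txt", encoding="utf-8") as f:
    next(f)  # skip the header row
    for line in f:
        fields = line.rstrip("\n").split("\t")
        if len(fields) < 4 or not fields[3]:
            continue
        string_id, base, translation = fields[0], fields[2], fields[3]
        missing = set(PARAM.findall(base)) - set(PARAM.findall(translation))
        if missing:
            print(string_id, "is missing parameter(s)", sorted(missing))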

Steps
1. In the Translations branch, click the name of the translation.

2. Click the Content Language tab or Product Language tab.

3. Enter the new strings directly in the Translated string column or into the Edit window. You
can choose to:

● Populate the empty column with the text in the Base string column and then edit the text.
To do this, right-click in the column and click Copy base to translated.

● Click on the first base string that you are going to translate. This will appear in the left
pane of the Edit window. Enter or edit the translated text in the right pane. Populate any
remaining blank cells with base string text if this is needed. To do this, right-click in the
column and click Fill empty with base.


You can clear strings from the Translated string column by right-clicking and selecting Clear
translated.

4. Save the translation.


A warning about empty translation strings applies only to the active tab. The translation will
not appear correctly in the Web application if there are any empty content or product strings.

Exporting and Importing Files for Translation


If you want to translate the Contributor application outside of the Administration Console, you
can use the import and export functions.
You can import and export files in the following formats:
● tab delimited text

● xls
Define an Excel worksheet (import only).

● xml

● other
Define a custom format.

Files that you import must match the format expected by the Administration Console. The best
way to ensure this is to first export a file in the format that you will be editing in. After you have
completed your translation, import the changed file.

Import file format


The format needed for import is:
String ID, Context, Base, Translation, Hint
Where:
String ID: A unique identifier given to an object. This should not be changed.
Context: Groups the parts of the application into sections, for example, Model, D-Cube, Hierarchy. This should not be changed.
Base: The string to be translated, in the language in which the model was first created. This should not be changed.
Translation: The translated string. This can be changed.
Hint: A hint to help the translator.
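As a rough example of editing only the Translation column outside the Administration Console, the sketch below copies the Base string into any empty Translation cell of a tab-delimited export and writes a new file for import; the file names are examples, and it assumes the export includes a header row.

with open("content_language_export.txt", encoding="utf-8") as src, \
     open("content_language_translated.txt", "w", encoding="utf-8") as dst:
    for i, line in enumerate(src):
        fields = line.rstrip("\n").split("\t")
        # Skip the header row; otherwise fill an empty Translation column (index 3)
        # with the Base string (index 2), leaving all other columns untouched.
        if i > 0 and len(fields) >= 4 and not fields[3]:
            fields[3] = fields[2]
        dst.write("\t".join(fields) + "\n")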

Export Files for Translation


To translate the Contributor application outside of the Administration Console, you can export a
file for translation.

Steps
1. Open the translation and click either the Content Language tab or the Product Language tab.
Only one tab can be exported at a time.

2. Click Export.

3. Enter a file name and location.

4. Select the required file format. If you click Other, you must enter a custom delimiter, for example,
";".

5. Select Include column headers if you want to include a header row. This is useful for files that
you may be opening in a tool such as Excel. It lets you see which text contains the translated
strings.
If you are modifying the export file to import it into the Administration Console, you should
edit only the text in the Translation column. If you edit the text in the String ID, Category, or
Base columns, you will have problems importing the file.

6. Click OK.

Import Translated Files


After you have completed your translation, you can import a translated file.

Steps
1. Open the translation that you are going to import the translated file into.

2. Click either the Content Language or the Product Language tab.


Select the same tab that you originally exported the file from.

3. Click Import.

4. Select the appropriate file format.


If you select xls, you should enter the worksheet name in the Excel worksheet box. If you select
Other, enter a delimiter in the Delimiter box.


5. Click the First row contains field names option if applicable.

6. Enter the file name and location in the File name box and click OK.

7. Click Save.

Search for Strings in the Content Language or Product Language Tab
You can search for strings in either the Content Language or Product Language tab using search
criteria.

Steps
1. Click Find.

2. Select the search criteria:

Find what: Enter the text string. Previous searches are saved and you can select them from the list.

Search: Select whether you want to search All Categories or Selected Rows.

Direction: Select the direction for the search, either Up or Down (the default).

Search in: Choose whether you want to search in the translated string or the base string.

3. To begin the search, click Find next.

Translating Help
You can translate Contributor help text in the same way that you translate other strings.

Cube help is located on the Content Language tab under Cube help for <cube name>. The first row
has simple cube help, the second row is detailed cube help that may contain HTML tags. These
tags can be modified.

Instructions are also located on the Content Language tab under Application Headline and
Instructions. The top row is the headline and the bottom line is the instructions.
From the Contributor Administration Console, it is not possible to translate the default help supplied
with the Contributor Web site.


System Locale and Code Pages


Before you begin translation, there are some issues you must consider. When you create a new
translation, you select a translation locale to be associated with the translation. The translation
locale tells the system which code page the Contributor application uses when running in the Web
client. The code page is a table that relates the binary character codes used by a program to keys
on the keyboard or to characters on the display and is used to ensure that text appears in the
appropriate language, as determined by the system locale of the computer. The available system
locales are determined by the installed language groups, which enable programs to show menus
and dialogs in the user’s native language.
When you create the translation, a check is run to ensure that the translation locale ID matches
that of the system locale and the code page. If it does not match, a warning is issued. This is because
the strings are not displayed in the Translations branch in the same way as they are in the Web
client. Any double-byte strings that contain characters that are not contained in the code page will
not be shown in that code page (typically they will appear as "?"). Even though the characters do
not appear correctly, this is only a display problem. The characters are stored correctly internally.
If you do not want to change the system locale to that of the language you are translating to, you
can export a text file containing the client strings and translate the strings using Excel. Excel handles
languages very well and you will see them appear correctly. You should then save the file in .xls
format, import it to the Translations branch, and click Save.

About Fonts
A font has both a name and a set of charsets that it supports. Some fonts support a variety of
charsets from the Unicode range and some do not. Some, such as MS Sans Serif, are not Unicode
fonts and only support the Western code pages.
Japanese has limited support from the set of fonts that are installed by default on a US/English
version of Windows 2000/XP. None of the standard fonts supplied with Windows 2000 support
the Japanese charset.
Once you install the Japanese language pack, however, standard Japanese fonts are installed on
your system (Japanese fonts all have unique names).
For Korean or Japanese translations to appear correctly in the Contributor Web client, you must
set them as the default character set for the operating system.

Tip: You can install fonts for East Asian languages through the Regional and Language Options
in the Windows Control Panel.

Chapter 11: Automating Tasks Using Macros

You can automate common tasks that are performed in Contributor Administration Console by
using macros. Formerly known as Automation Scripts and created by the Contributor Automation
tool, the automation of tasks is now integrated with Contributor Administration Console using
macros and macro steps. This makes it easier to maintain and use automated tasks.
Macros, which are made up of macro steps, can be
● run interactively within Contributor Administration Console

● triggered by events, such as the submission of data

● scheduled to run in Cognos Connection

● run from the Windows command line interface

● run from Windows scripts and batch files via a batch scheduling tool

Macros are stored in the Planning content store. Macros and macro steps can be exported and
imported again as an XML file.
❑ Create a new macro, see (p. 192).

❑ Run a macro from the Administration Console (p. 221), from Cognos Connection (p. 221), as a
Cognos 8 Event (p. 222), from the command line (p. 224), or as a batch file (p. 224).

❑ Set access rights for macros, see (p. 40).

❑ Troubleshoot macro errors, see (p. 225).

Common Tasks to Automate


Using the Contributor Administration Console, you can group related tasks into a single macro to
execute them in sequence. Macros can be run in the Administration Console, or by using external
scheduling tools.
For example, you can automatically
● import an e.List and rights from files

● run administration links

● import simple (non-rule based) access tables

● synchronize with Analyst

● publish only changed data

● generate a Framework Manager model


The following are examples of tasks in the Administration Console that are performed frequently
and are often automated using either Macros or a batch scheduler tool.

Import and Process Data: Upload Import File (repeated for each cube), Prepare Import, Go To Production

Update Application with External Changes: Synchronize, Import e.List, Import Rights, Import Access Table, Go to Production

Month End (Move data to other systems): Set an Application Offline, Execute Administration Link, Publish

Creating a Macro
Create macros using the Macros tool in Contributor Administration Console. Macros are used to
perform tasks automatically. Macros are stored in the Planning content store.
❑ Create a new macro (p. 192)

❑ Create a macro step (p. 193) or transfer macro steps from another macro (p. 196)

Create a New Macro


Create a new macro to create an automated task. Macros published to Cognos Connection can be
run or scheduled from the Content Administration area of Cognos Administration or used to create
an event or job.

Steps
1. In the Contributor Administration tree, click the Macros icon .

2. Click New.

3. In the Macro Name box, type the name of the new macro. Select the Publish to Cognos
Connection check box to make the macro accessible in Cognos Connection, and click OK.

The new macro appears in the Macros box. The Edit State is set to Incomplete because no steps
are added yet, and the Run State is set to Ready.


Tip: Rename or delete a macro by selecting the macro in the Macros list and clicking Rename
or Delete.

Create a Macro Step


Use macros to group and run a number of macro steps in a specified sequence. A macro step holds
the parameters for its task. For example, you might want to run the following tasks to import some
data: Wait for Any Jobs, Load Import Data, Prepare Import, and Go to Production.
Automation scripts used in the Contributor 7.2 Automation tool are matched to the macros used
in the current Contributor Administration Console.

Note: The Automation scripts created in version 7.2 and earlier cannot be used as macro steps.

The following list shows, for each function to automate, the type of macro step and the name of the
corresponding 7.2 automation script (N/A where no 7.2 script exists).

Job Servers
Add a container to a job server: Add Monitored Job Object (p. 197); 7.2 script: AddMonitoredApplications.xml
Stop a job server at a scheduled time: Disable Job Processing (p. 197); 7.2 script: StopApplicationServer.xml
Start a job server at a scheduled time: Enable Job Processing (p. 197); 7.2 script: StartApplicationServer.xml
Generate a report on jobs in a container: Job Doctor (p. 198); 7.2 script: JobDoctor.xml
Remove a container from a job server: Remove Monitored Job Object (p. 198); 7.2 script: RemoveMonitoredApplications.xml
Control the maximum number of jobs that can run on a job server: Set Max Concurrent Job Tasks (p. 199); 7.2 script: SetMaxConcurrentJobTasksForApplicationServer.xml
Set how often a job server checks for jobs: Set Polling Interval for Job Server (p. 199); 7.2 script: SetPollingIntervalForApplicationServer.xml
Schedule other macro steps to allow any jobs to finish before other jobs start: Wait for Any Jobs (p. 199); 7.2 script: WaitForAnyJobs.xml

Development
Take the development Contributor application and create a production application: Go to Production (p. 200); 7.2 script: AutomatedGoToProduction.xml
Move data from import staging tables: Prepare Import (p. 203); 7.2 script: AutomatedPrepareImport.xml
Load data into the datastore staging table: Upload Import File (p. 202); 7.2 script: UploadImportFile.xml
Synchronize Analyst and Contributor: Synchronize (p. 203); 7.2 script: N/A
Run an Analyst macro: Execute Analyst Macro (p. 204); 7.2 script: N/A
Import access tables into a Contributor application: Import Access Table (p. 204); 7.2 script: N/A
Import an e.List into a Contributor application: Import e.List (p. 206); 7.2 script: N/A
Import rights into a Contributor application: Import Rights (p. 206); 7.2 script: N/A
Load the development model XML into a development application: Upload a Development Model (p. 207); 7.2 script: UploadDevelopmentModel.xml

Production
Publish data collected by Contributor to a datastore (using default parameters): Publish - View Layout (p. 208); 7.2 script: N/A
Publish data collected by Contributor to a datastore (configuring all parameters): Publish - View Layout - Advanced (p. 209); 7.2 script: AutomatedPublish.xml
Publish data collected by Contributor to a datastore for reporting purposes: Publish - Table Only Layout (p. 211); 7.2 script: N/A
Publish only changed data for e.List items: Publish - Incremental Publish (p. 213); 7.2 script: N/A
Delete user or audit commentary: Delete Commentary (p. 214); 7.2 script: DeleteAnnotations.xml
Run an Admin Extension in a Contributor application: Execute an Admin Extension (p. 215); 7.2 script: ExecuteAdminExtension.xml
Set a Contributor application online or offline: Set an Application Online or Offline (p. 216); 7.2 script: UpdateWebClientBarrier.xml

Administrator Links
Run an Administration Link in a Contributor application: Execute an Administration Link (p. 216); 7.2 script: N/A

Macros
Generate a debugging report for macros: Macro Doctor (p. 217); 7.2 script: N/A
Test a macro: Macro Test (p. 217); 7.2 script: Test.xml
Run a macro: Execute Macro (p. 218); 7.2 script: N/A
Run a program from the command line: Execute Command Line (p. 218); 7.2 script: N/A
Import a macro: Import Macros from Folder (p. 218); 7.2 script: N/A
Export a macro: Export Macros to Folder (p. 219); 7.2 script: N/A

Session
Remove an application lock: Remove Application Lock (p. 219); 7.2 script: N/A

Steps
1. In the Macro Steps area, click New.

The Select new macro step type dialog box appears.

2. Click the type of macro step you want to add.

3. Click OK.
A dialog box appears with the parameters relevant to the type of macro step you selected.

4. Review each parameter and change it as required. For information about the parameters, see
the topic for the type of macro step you are adding in step 2.

5. Click Validate. This checks the validity of the parameters.

6. If you want to add other steps to the macro, click New and repeat steps 2 to 5 for each macro
step.

7. When you are done, click OK.


The macro steps are added to the list of macro steps contained in the macro.

Tips: To edit or delete a macro step, click the macro step and click Edit or Delete. To reorder
macro steps, click the macro step and then click the Move Up or Move Down button.

Transferring Macros and Macro Steps


You can copy steps from one macro to another, create a backup copy of your macro, add steps to
another macro, and make a copy of an existing macro.

Steps
1. Click the Macros icon in the tree.

2. In the Macros list, select the Macro and click Transfer.

3. Configure the following properties:

Direction
From: Exports a macro step from the existing macro.
To: Imports a macro step into the existing macro.
Delete Source: Deletes the macro step from the original macro.

Copy From
Other Macro: Copies the selected macro step to another macro.
New Macro: Creates a new macro that includes the selected macro step.
(self): Makes a copy of the selected macro step in the existing macro.
Files in Folder: Copies the selected macro step to a folder, which is used for backup purposes.

Publish to Cognos Connection: Publishes the macro to Cognos Connection for use in Event Studio.

Select Steps
All Steps: Selects all macro steps in the macro.
Specify Steps: Specifies which macro steps to include in the transfer.


4. Click OK.

Job Servers (Macro Steps)


Job servers can be managed using macro steps. Macro steps can do things such as enable or disable
job processing and set polling intervals.
For more information about managing job servers, see "Manage a Job Server Cluster" (p. 54).

Add Monitored Job Object


Use this macro step to automate the addition of a container to a job server. It works with the
Remove Monitored Job Object macro step. You can schedule the addition and removal of job
objects from job servers at a time that is appropriate for your business.
In Contributor, an application can run on more than one job server. The administrator adds a job
server to Contributor Administration Console, and then adds applications to the job server. The
administrator can start and stop job servers from Administration Console, but cannot start and
stop individual applications.
The following table describes the relevant parameters.

Parameter Task

Macro Step Name The name of the macro step.

Job Server or Cluster Browse to the Job Server or Cluster that you want to start by clicking
the Browse button and selecting the correct server name.

Job Container The container you want to monitor.

Disable Job Processing


Use this macro step to stop a job server at a scheduled time.
The following table describes the relevant parameters.

Parameter Description

Macro Step Name The name of the macro step.

Job Server or Cluster The job server or cluster that you want to stop.

Enable Job Processing


Use this macro step to start a job server at a scheduled time.
The following table describes the relevant parameters.


Parameter Description

Macro Step Name The name of the macro step.

Job Server or Cluster The job server or cluster that you want to start.

Job Doctor
Use this macro step to generate a report on jobs in a container. The report is in XHTML format.
It is typically used on the advice of Technical Support and can be used to help debug problems with
Contributor jobs.

Tip: Adding a Wait for Any Jobs macro step before the Job Doctor macro step ensures that all jobs
are complete before moving on to this macro step. For more information on the Wait for Any Jobs
macro step, see "Wait for Any Jobs" (p. 199).
The following table describes the relevant parameters.

Macro Step Name: The name of the macro step.

Job Container: The container you want to report on.

Include contents of Admin History: Whether to include the adminhistory table information in the report.
Although this is useful information, it can slow down report generation and make the output quite large.

Report file name (Enter Local Application Server Path): A path and file name for the XHTML report.

Remove Monitored Job Object


Use this macro step to automate the removal of a container from a job server. It is the converse
macro step to the Add Monitored Job Object macro step. You can schedule the addition and removal
of applications from job servers at a time that is appropriate for your business.
In Contributor, an application can run on more than one job server. The administrator adds a job
server to Contributor Administration Console, and then adds applications to the job server. The
administrator can start and stop job servers from Administration Console, but cannot start and
stop individual applications.
The following table describes the relevant parameters.

Parameter Task

Macro Step Name The name of the macro step.


Job Server or Cluster Browse to the Job Server or Cluster that you want to stop by clicking
the Browse button and selecting the correct server name.

Job Container The container you want to stop monitoring.

Set Max Concurrent Job Tasks


Use this macro step to control the maximum number of jobs that can run on a job server. This is
useful if the computer is slow or you must run other non-Contributor applications on the server at
the same time.
The following table describes the relevant parameters.

Macro Step Name: The name of the macro step.

Job Server or Cluster: The job server or cluster that you want to limit job tasks on.

Maximum number of Job Tasks: The maximum number of job tasks allowed. This should be no more
than the number of physical processors that you have on the computer.

Set Polling Interval for Job Server


Use this macro step to set how often a job server checks for jobs. The default is 15 seconds. Use
this macro step to control the amount of resources used by the job service.
The following table describes the relevant parameters.

Parameter Description

Macro Step Name The name of the macro step.

Job Server or Cluster The job server or cluster that you want to set polling interval for.

Polling Interval Set the frequency with which a job server looks to see if there are any
jobs to do. This is measured in seconds. The default is 15 seconds.

Wait for Any Jobs


Use this macro step to ensure jobs are completed before allowing processing to continue. This macro
step is especially useful when combining it with other macro steps that may not wait for all jobs to
run before beginning, such as the Go To Production macro step.


You can monitor the jobs in a container.


The following table describes the relevant parameters.

Parameter Task

Macro Step Name The name of the macro step.

Job Container The container you want to monitor.

Job Timeout (in minutes) A time period after which the macro terminates with an error if the
jobs are not completed. The default is 1440 minutes, which is one
day.

Development (Macro Steps)

Go to Production
The Go to Production process takes the development Contributor application and creates the
production application, making it available to users on the Web client. A new development
application is established. Use this macro step to automate the Go to Production process.
Before you can run Go to Production, the application must at least have an e.List and users defined.
You can run Go to Production without setting any rights, but no one can view the application on
the Web client. However, you can preview the application by selecting Production, Preview in the
Administration Console.
When you start Go to Production, job status is checked. If jobs are running or queued, the Go To
Production macro step will wait for them to complete. During the automated Go to Production
process, the following checks are completed.
● A check is made to see if there are any jobs running.

● If necessary, a job is created to ensure that all e.List items are reconciled if they are not already.

● A check is made to see if a Cut-down models job is required. If it is required, the job is created
and run.
For more information on Go To Production, see "The Go to Production Process" (p. 239).

The following table describes the relevant parameters.

Macro Step Name: The name of the macro step.

Application: The name of the Contributor application datastore that the macro step is being run against.

Create Planning package: Publish the Planning package to Cognos Connection.

Reset e.List item states
Minimum e.List item state: One of the following minimum workflow states: Not Started, Work in Progress,
or Locked. If you set the minimum workflow state to Work in Progress, any e.List items that have a
workflow state of Not Started are reset to Work in Progress. The default is Not Started, which means
no change takes place.
Maximum e.List item state: One of the following maximum workflow states: Not Started, Work in Progress,
or Locked. The state must be greater than or the same as the minimum e.List item state. If you set
Work in Progress, and e.List items are Locked, the e.List items are reset from Locked to Work in
Progress. The default is Locked, which means no change takes place.
Skip Top Level e.List Items: Does not reset top level e.List items.

Jobs
Job Timeout (in minutes): A time period after which the macro step terminates if the preexisting jobs
are not completed. If jobs are running, Go to Production will wait for them to finish. The default is
1440 minutes, which is one day.
Wait for jobs after Go to Production: After the Go to Production process, complete all jobs before
moving on to the next macro step.

Validation Report: Indicates problems with parameters set in the Go to Production process. Click
Validate to recheck the validation report status.

Import Data
Importing data into cubes requires the following process.


● Create the source file.

● Select the cube and text file to load into the cube.

● Load data from the text files into the datastore staging tables.

● Prepare the import data blocks.

● Run the Go to Production process.


For more information on importing data, see "Managing Data" (p. 141).
Two macro steps are provided for the import process: Upload Import File and Prepare Import.

Upload an Import File


Use this macro step to load data from a text file into the datastore staging table. You can load only
one file at a time using this macro step.
An import table is created for each cube during datastore creation. There is a column for each
dimension, plus a value column. If new cubes or new dimensions are added to the Analyst model
after an application is created, new import tables or columns in the tables are added after a
synchronize is run and saved.
The cube name associated with the import table is stored in the application object table. The tables
are named im_cubename and ie_cubename. Errors are stored in ie_cubename.
Files can be loaded from any location that your bulk load engine supports.
The following table describes the relevant parameters.

Macro Step Name: The name of the macro step.

Application: The name of the Contributor application datastore that the macro step is being run against.

File to Upload (Enter Local Application Server Path): The name and location of the file to be loaded.
This can be a UNC location such as \\server\share\file.txt.

Target Cube Name: The name of the cube that the data is to be imported into.

Remove existing data in import table: Whether existing rows are to be deleted. Selecting this option
removes previously loaded data. Clearing this option when the names of previously loaded data match
the newly loaded data causes the new data to replace the old. Previously loaded data that is not
matched remains in the staging table.


Prepare Import
Use this macro step to take the data from the import staging tables per cube, per e.List item. The
calculation engine validates the data and converts it into import blocks. Errors are written to
ie_cubename.
The import data block contains only the data required for an individual e.List item. Data targeting
No Data cells or formula items and data not matching any items is removed.
The process of converting data into import blocks uses the Job architecture to run on multiple
computers and processes. It will not conflict with other online jobs for the application.
The following table describes the relevant parameters.

Macro Step Name: The name of the macro step.

Application: The name of the Contributor application datastore that the macro step is being run against.

Cube Details
Cubes to Prepare: The names of the cubes that you are going to prepare import data for.
Cubes to Zero: The names of the cubes in which existing data is zeroed before the import takes place.

Job Timeout (in minutes): A time period after which the macro step terminates if the job is not
completed. If the job does not succeed, an error appears. The default is 1440 minutes.

Synchronize
Use this macro step to automate the synchronize function for the application.
For more information on synchronizing an application, see "Synchronizing an Application" (p. 177).
The following table describes the relevant parameters.

Parameter Description

Macro Step Name The name of the macro step.

Application The name of the Contributor application datastore that the macro step
is being run against.

Analyst Library Name The name of the Analyst Library to synchronize. Either use library name
already specified for the application or select other library name.


Save Changes if Destructive: A destructive synchronize removes dimensional items or cubes and results
in data loss.
When you run the synchronize process from the Contributor
Administration Console, you can preview the changes and decide whether
to save the synchronization. When running synchronize using macros,
it is not possible to preview the changes before saving the synchronize.
Instead, use the Save changes if destructive option to control whether
the synchronization is saved or canceled when data would be lost.
A synchronize is considered destructive in the following circumstances:
● cube dimensions added

● cube dimensions deleted

● cube dimensions substituted

● cube dimensions reordered

● detail items from a dimension deleted

● detail items from a dimension changed to calculated

For more information, see "Changes that Result in Loss of Data" (p. 177).

Execute Analyst Macro


Use this macro step to execute an Analyst Macro. For more information on Analyst macros, see
the Analyst User Guide.
The following table describes the relevant parameters.

Parameter Description

Macro Step Name The name of the macro step.

Analyst Macro The Analyst Library containing the macro you want to run and the
specific macro.

Import Access Table


Use this macro step to import Access Tables into your application.
For more information on Access Tables, see (p. 114).
The following table describes the relevant parameters.


Macro Step Name: The name of the macro step.

Application: The name of the Contributor application datastore that the macro step is being run against.

Base access level: Choose from No data, Hidden, Read, or Write (the default).

Access Table Name (import only): The access table name.

Access Table File Path (Enter Local Application Server Path): The path for the access table file.

Trim leading and trailing whitespace: Whether you want to remove leading and trailing whitespace in the file.

First row is heading: Whether the first row is used as the header row.

Delete Undefined items: If an access table file was previously imported for the access table, and you
are importing a new one, existing settings are updated with the new specified settings. Select this
check box to delete settings that do not exist in the new file. If the check box is cleared, previous
settings are retained.

File Type:
Text File: Whether you are importing a text file.
Quoted Strings: Whether the file has quoted strings.
Delimiter: The type of delimiter the file uses.
Excel File: Whether you are importing an Excel file. Enter the worksheet location.

Import e.List and Rights

Import e.List
For more information on the e.List, see "Managing User Access to Applications" (p. 89).
Use this macro step to import an e.List into your application.
The following table describes the relevant parameters.

Macro Step Name: The name of the macro step.

Application: The name of the Contributor application datastore that the macro step is being run against.

e.List File Path (Enter Local Application Server Path): The location of the e.List file.

Trim leading and trailing whitespace: Whether you want to remove leading and trailing whitespace in the file.

First row is heading: Whether the first row is used as the header row.

Delete Undefined items: If an e.List was previously imported and you are importing a new e.List, existing
settings are updated with the new specified settings. Select this check box to delete settings that do
not exist in the new file. If the check box is cleared, previous settings are retained.

File Type:
Text File: Whether you are importing a text file.
Quoted Strings: Whether the file has quoted strings.
Delimiter: The type of delimiter the file uses.
Excel File: Whether you are importing an Excel file. Enter the worksheet location.

Import Rights
Use this macro step to import rights into your application.
The following table describes the relevant parameters.


Macro Step Name: The name of the macro step.

Application: The name of the Contributor application datastore that the macro step is being run against.

Rights File Path (Enter Local Application Server Path): The location of the rights file.

Trim leading and trailing whitespace: Whether you want to remove leading and trailing whitespace in the file.

First row is heading: Whether the first row is used as the header row.

File Type:
Text File: Whether you are importing a text file.
Quoted Strings: Whether the file has quoted strings.
Delimiter: The type of delimiter the file uses.
Excel File: Whether you are importing an Excel file. Enter the worksheet location.

Upload a Development Model


Use this macro step to load the development model XML into a development application. For
information about saving the Development XML, see "Save Application XML for Support" (p. 77).
Remove write access in the Contributor Administration Console from the current user before
running the macro step. This macro step typically is used if you are running parallel test and live
servers and you need to upload the XML from the test server to the live server. On the target server,
there must be an existing application. After the development model has been uploaded, Go to
Production must be run for the model to appear to Web users.
This should be used only on the advice of Technical Support.
The following table describes the relevant parameters.

Parameter Description

Macro Step Name The name of the macro step.

Application Enter the name of the Contributor application datastore that the
macro step is being run against.


Model Definition File Path (Enter Local Application Server Path) The name and location of the model definition file. The model definition file is a description of the entire Contributor application and is in XML format.

Save any generated datastore scripts to file A location for generated datastore scripts to be saved. When Generate Scripts is set to Yes in Admin Options, a check is made to see whether the datastore must be restructured, for example, if tables must be added or deleted; if so, a script is generated. This datastore update script typically must be run by a database administrator (DBA).

Production (Macro Steps)

Publish
Use the Automated Publish process when you need to perform a Contributor publish as a scheduled
task or from the command line as part of a script. A Publish can no longer target the Contributor
transactional datastore.
During the publish process, the published data is exported to a temp directory on the job servers.
A file is created for each e.List item for each cube. After the files are created, they are typically
loaded to the datastore using a bulk load utility (BCP or SQLLDR) and then the temp files are
deleted.
Using the Publish - View Layout - Advanced macro step, you can do an interruptible publish if you
want to use different mechanisms to bulk load data into the target datastore or an external
application. Interruptible publish prevents the temp files from being loaded into the datastore and
deleted. They remain in the temp directory, or you can collate them into a large file per cube. For
collation, each job server that may be involved in the publish job must expose a share that the
computer running the macro step can access. That share must expose the TEMP folder for the user
context of the Planning Service.

Tip: To use interruptible publish, you must select the User-managed option from the How should the data be managed option group.
For more information on publishing, see "Publishing Data" (p. 255).
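
For illustration only, the following Python sketch (not part of the product) shows one way a user-managed publish might collate the per-e.List-item temp files into a single file per cube. The share paths, cube names, and file naming convention are assumptions; substitute the shares exposed by your job servers and the actual temp file names.

import glob
import os

# Assumed UNC shares exposing the Planning Service TEMP folder on each job server.
JOB_SERVER_SHARES = [r"\\jobserver1\PlanningTemp", r"\\jobserver2\PlanningTemp"]
OUTPUT_DIR = r"C:\publish\collated"
CUBES = ["Expenses", "Revenue"]  # illustrative cube names

os.makedirs(OUTPUT_DIR, exist_ok=True)
for cube in CUBES:
    target = os.path.join(OUTPUT_DIR, cube + ".txt")
    with open(target, "wb") as out:
        for share in JOB_SERVER_SHARES:
            # Assumed naming convention: one temp file per e.List item per cube.
            for path in sorted(glob.glob(os.path.join(share, cube + "_*.txt"))):
                with open(path, "rb") as part:
                    out.write(part.read())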

Publish - View Layout


Use this macro step instead of the Publish-View Layout-Advanced macro step when you do not
need to change any of the default parameters set for the Publish-View Layout-Advanced macro
step.
The following table describes the relevant parameters.


Parameter Description

Macro Step Name The name of the macro step.

Select Publish Container Select a container to publish the data to.

Suppress zeros Whether to publish zeros.


Selecting this option suppresses zeros. This can speed up the process
of publishing data substantially, depending on the number of blank
cells.

Cubes to Publish Whether to publish all or some Contribution cubes.

e.List items to Publish Whether to publish all e.List items, use the selection from
Contributor Administration Console, or select individual e.List
items.

Publish - View Layout - Advanced


This macro step automates the view layout publish process. This layout is for historical purposes.
The following table describes the relevant parameters.

Parameter Description

Macro Step Name The name of the macro step.

Select Publish Container Select a container to publish the data to.

Use Plain Number Formats Whether numeric formatting is removed or retained. Selecting this
option removes any numeric formatting. It publishes data to as many
decimal places as needed, up to the limit stored on the computer.
Negative numbers are prefixed by a minus sign. No thousand separators, percent signs, currency
symbols, or other numeric formats defined on the dimension or D-Cube are applied. Plain Number
Format uses the decimal point (.) as the decimal separator.

Data Filters

Suppress zeros Whether to publish zeros.


Selecting this option suppresses zeros. This can speed up the process
of publishing data substantially, depending on the number of blank
cells.


Data access level to publish The access level of data to publish, one of:

● No data

● Hidden (default)

● Read

● Write

This option is additive, so if you select Hidden, data set to Read and
Write is also published, and if you select Read, data set to Write is
also published.

Annotation Filters The type of annotations to be published.

How should the data be managed?

Automatically upload data Whether to load data automatically into the datastore.
to datastore

Remove existing data Selecting this option ensures that a consistent set of data is published.
It publishes data for all the selected cubes, and removes all other
published data in the datastore. Clear this option if you want to leave
existing data. If an e.List item is being republished, it replaces data
for that e.List item with the new data.

Where should the data be published to? Whether to publish to either the default container or an alternate publish container.

User Managed Whether to take control of the published data.

Publish GUIDs not Names (to upload to export tables) Select this item if you are doing a standard publish, because the GUIDs are used to load the data into the publish tables. You may want to use names rather than GUIDs if the data is to be exported to Analyst or external systems.

Should files be collated If Yes, enter location for Local Application Server and enter the share
name to retrieve files from. This share must exist on all machines
that process the job. The same share name is used for all machines.

Cubes to Publish Whether to publish all or specific Contribution cubes.


Stop execution if specified Cubes not found Selecting this option ensures that execution and processing are halted when no Cubes can be identified from the results of the selected settings.

Select e.List items

Stop execution if no e.List items result from settings Selecting this option ensures that execution and processing are halted when no e.List items can be identified from the results of the selected settings.

e.List items to Publish Whether to publish all e.List items, use the selection from Contributor
Administration Console, or select individual e.List items.

Stop execution if specified e.List items not found Selecting this option ensures that execution and processing are halted when no e.List items can be identified from the results of the selected settings.

Apply Filters to e.List items

Publish e.List items changed since Enter or select a date and time for filtering those e.List items that have since changed.

e.List item type One of the following:

● Contributor

● Reviewer

● Reviewer or Contributor (default)

e.List item state Whether to publish e.List items at any state or specify a particular
state to publish.

Job Timeout (in minutes) A time period after which the macro step terminates if the job is not
completed. If the job does not succeed, an error appears. The default
is 1440 minutes, or one day.

Publish - Table Only Layout


This macro step automates the table-only layout publish process. This layout is required for the
Generate Framework Manager Model and Generate Transformer Model extensions (Analyst and
Contributor versions of both).
The following table describes the relevant parameters.


Parameter Description

Macro Step Name The name of the macro step.

Select Publish Container Select a container to publish the data to.

Use persisted parameters for all settings Whether to use the same parameters for all settings each time the macro step is run.

Data Options / Column Options

Create columns with data types based on the 'dimension for publish' Determines the column data types from the model using the selected 'dimension for publish'. This option can minimize the default number of columns, although non-uniform data will not be published. For example, row data will be filtered when a value is inconsistent with the model and column data type.

Only create the following columns This option always publishes the selected columns from the 'dimension for publish'. Use this option to publish only the required data of the selected types. Selecting the numeric, date, and text options ensures that all data (uniform and non-uniform) is published.

Include rollups Selecting this check box includes all items, including calculated items.
Clearing this option only publishes leaf items, and therefore fewer
rows. You can recreate the calculation in your reporting tools by
linking the et and sy tables.

Include zero or blank values Whether to include zero or blank values in the publish. Clearing this option suppresses rows containing all zeros or blanks, which can speed up the process of publishing data substantially, depending on the number of zero or blank cells.

Prefix column names with data type Whether to prefix column names with the data type. Select this option if you wish the column name to be prefixed with the data type to avoid reserved name conflicts.

Table options

Include user annotations Whether to include these annotations in the publish.

Include audit annotations Whether to include these annotations in the publish.

Include attached documents Whether to include attached documents in the publish.


Cubes to publish Whether to publish all or specific Contribution cubes.

Dimensions for publish Whether to use the default dimension for publish or specify a
particular dimension.

Select e.List items

e.List items to Publish Whether to publish all e.List items, use the selection from Contributor
Administration Console, or select individual e.List items.

Job Monitoring

Timeout (in minutes) A time period after which the macro step terminates if the job is not
completed. If the job does not succeed, an error appears. The default
is 1440 minutes.

Description A description for this reporting job.

Publish - Incremental Publish


Use this macro step to publish data so that only the e.List items that contain changed data are
published. If your publish selection contains more than one cube, but values change in only one
cube, the changed e.List items for all the cubes are republished. Before an incremental publish can
be run, the publish schema must be created, either by doing a full publish, selecting the cubes and
e.List items that you want to publish, or by generating and running publish scripts. When the Go
to Production process is run, changes-only publishes are suspended. Model changes that result in
changes to the publish schema may mean that you need to do a full publish of all the selected cubes
and e.List items.

Parameter Description

Macro Step Name The default name of this macro

Where should the data be published to?

Reporting Publish Container Choose either to publish data to the default container or specify
an alternate publish container.

e.List Item Filter Select the check box if you want to publish only submitted e.List items.

Job Monitoring


Timeout (in minutes) A time period after which the macro step terminates if the job is
not completed. If the job does not succeed, an error appears. The
default is 1440 minutes.

Delete Commentary
Use this macro step to delete user or audit annotations and attached documents in a Contributor
application, using date and time, character string, and e.List item name filters.
Contributor applications can be annotated by users in the Web application. There are user and
audit annotations.
User annotations consist of comments per cell, cube (tab in the Web client), and model.
Audit annotations are records of user actions in the Web client, such as typing data, importing files,
and copying and pasting data. They can be enabled or disabled. For more information, see "Delete
Commentary" (p. 289).
When you delete commentary, the following process occurs:
❑ The macro step fetches and unpacks the model definition (this is a description of the entire
Contributor application).

❑ Processes the commentary for each e.List item in turn, deleting the specified comments.

Tip: Adding a Wait for Any Jobs macro step before the Delete Commentary macro step ensures
that all jobs are complete before moving on to this macro step. For more information on the Wait
for Any Jobs macro step, see "Wait for Any Jobs" (p. 199).
The following table describes the relevant parameters.

Parameter Description

Macro Step Name The name of the macro step.

Application The name of the Contributor application datastore that the macro
step is being run against.

Annotation Filters

Include user annotations in the operation Whether user annotations are deleted. It is selected by default.

Include audit annotations in the operation Whether records of user actions are deleted.


Apply date filter Whether to delete commentary by a date filter.


If selected, you must enter the date and time before which all
annotations are deleted. This is the date when annotations were
created, not saved. Dates are in ISO8601 format.

Apply content filter Whether to delete commentary using a content filter. This is off by
default.
If selected, you must enter a character string as a filter.

Important: All commentary containing the specified character pattern is deleted. For example,
if you specify the string pen, annotations containing the words pencil and open are deleted.

e.List items to process You can specify which e.List items to process.


The e.List item name is in the elistitemname column in the e.List
import file or e.List Item Id in Contributor Administration Console.
The names are case sensitive.
The following example contains a mixture of contributor e.List
items (A1 and A2) and a reviewer e.List item (B).
A1;A2;B
Alternatively, you can choose to process all e.List items.

Job Timeout (in minutes) A time period after which the macro step terminates if the job is
not completed. The default is 1440 minutes or one day.

Execute an Admin Extension


Use this macro step to automate the running of the Generate Transformer Model extension.

Note: You cannot automate the running of the Generate Framework Manager Model extension.
To automate the Generate Transformer Model Extension, you must first run the extension using
Contributor Administration Console. This creates valid settings in the application datastore. The
macro then uses these settings.
The following table describes the relevant parameters.

Parameter Description

Macro Step Name The name of the macro step.

Application The name of the Contributor application datastore that the macro step
is being run against.


Admin Extension The Admin Extension that you want to run.

Set an Application Online or Offline


Use this macro step to automate setting the Web application online or offline. Setting the application
offline prevents users from accessing the Web site.
For more information about accessing the Web site, see "Working Offline" (p. 86).
The following table describes the relevant parameters.

Parameter Description

Macro Step Name The name of the macro step.

Application The name of the Contributor application datastore that the macro step
is being run against.

Enable Web Client Barrier Whether to prevent users from accessing the Contributor application on the Web. Select this option to take the application offline; clear it to bring the application back online.

Administrator Links (Macro Steps)

Execute Administration Link


Use this macro step to run an Administration Link automatically. Administration Links copy data
between Contributor applications in the same Planning Store without having to publish data first.

Note: You must first create a valid Administration Link for the application. For more information,
see "Administration Links" (p. 145).

Tip: Adding a Wait for Any Jobs macro step before and after the Execute Administration Link
macro step ensures that all jobs are complete before moving on to this macro step and after running
this macro step. For more information on the Wait for Any Jobs macro step, see "Wait for Any
Jobs" (p. 199).
The following table describes the relevant parameters.

Parameter Description

Macro Step Name The name of the macro step.

Administration Link The Administration Link to run.


Job Timeout (in minutes) A time period after which the macro step terminates if the job is not completed. If the job does not succeed, an error appears. The default is 1440 minutes, or one day.

Macros (Macro Steps)


Macros are automated tasks such as those performed in Contributor Administration Console. You
can automate many macro-specific functions, such as running and importing macros.

Macro Doctor
Use this macro step to generate a report on macros for debugging purposes. In the event of problems
with macros you may be asked by customer support to create and run the Macro Doctor macro.
The Macro Doctor captures information about the macros. It also allows you to see more detail
about the execution steps, and write them to files so that they can be inspected. Those macro step
definitions may be imported into another system, if required.
The following table describes the relevant parameters.

Parameter Description

Macro Step Name The name of the macro step.

Folder for report and Macro Steps (Enter Local Application Server Path) The location where the report and macro step information is created.

Include detailed progress for Macro Steps Whether to include detailed progress information for each macro step. This extra information can often aid the debugging process.

Macro Test
Use this macro step to test whether the macro components are running correctly. When successfully
run, it logs a user-specified message to the Windows Application Event Log.
The following table describes the relevant parameters.

Parameter Description

Macro Step Name The name of the macro step.

Message A message to be logged.


Execute Macro
Use this macro step to run another macro automatically. This macro step is very useful because
you can nest many macros inside one macro. For example, if you have weekly or monthly processes
that share macro steps, such as import and publish, you can use the Execute macro to run them
both.
The following table describes the relevant parameters.

Parameter Description

Macro Step Name The name of the macro step.

Macro The macro to run. Do not select the same macro that you are adding
this macro step to.

Number of times How many times the macro should run. The default is 1.

Execute Command Line


Use this macro step to run any program from the command line.

Important: Appropriate Access Rights need to be granted in order to use this macro step. For more
information, see "Access Rights for Macros" (p. 40).
The following table describes the relevant parameters.

Parameter Description

Macro Step Name The name of the macro step.

Command to Execute The Command to run.

Check for Return Code One of the following:

● Ignore Return Code
Ignores the return code from the program.

● Success Return Code
The macro step fails if the program returns any code other than the one specified. Typically, success is represented by zero.

Import Macros from Folder


Use this macro step to import a macro that has previously been exported.

Tip: You can modify an exported macro step’s XML file with a third-party editor and then import
it again using this macro step.
The following table describes the relevant parameters.


Parameter Description

Macro Step Name The name of the macro step.

Select Source folder The location of the macro to be imported.

Process sub folders One of the following:


● Only direct sub folders

● All sub folders

How should duplicate names be handled One of the following:

● Replace

● Append

● Create different name

Export Macros to Folder


Use this macro step to export a macro to a file location.
The following table describes the relevant parameters.

Parameter Description

Macro Step Name The name of the macro step.

Root folder (Enter Local Application Server Path) The location where the macro is exported to.

Manage Existing Files One of the following:


● Archive
Use this to retain a history of your macros.

● Remove

● Leave

Session (Macro Steps)

Remove Application Lock


Use this macro step to remove an Application Lock. For more information, see "Managing
Sessions" (p. 59).
The following table describes the relevant parameters.


Parameter Description

Macro Step Name The name of the macro step.

Application The name of the Contributor application that is locked.

Running a Macro
There are a number of scheduled and ad hoc methods that can be used to run the macro. You can
run a macro in the following ways.
● "Run a Macro from Administration Console" (p. 221)

● "Run a Macro from Cognos Connection" (p. 221)

● "Run a Macro from a Cognos 8 Event" (p. 222)

● "Run a Macro using Macro Executor" (p. 224)

● "Run a Macro using Command Line " (p. 224)

● "Run a Macro using Batch File" (p. 224)

For more information about the execution location and credentials for Contributor macros, see the
following table.

Type of Macro Execution: Administration Console
Execution Location: Planning Server with Dispatcher Planning Service enabled
Credentials: User logged on to the Contributor Administration Console

Type of Macro Execution: Cognos Connection
Execution Location: Planning Server with Dispatcher Planning Service enabled
Credentials: Cognos Connection credentials

Type of Macro Execution: Cognos 8 Event
Execution Location: Planning Server with Dispatcher Planning Service enabled
Credentials: Cognos Connection credentials used to create the event

Type of Macro Execution: Macro Executor
Execution Location: Local machine where the Macro Executor is running. The machine must have the Planning Server installed.
Credentials: Scheduler Credentials in the System Settings of the Contributor Administration Console

Type of Macro Execution: command line
Execution Location: Local machine where the command line execution is running. The machine must have the Planning Server installed.
Credentials: Scheduler Credentials in the System Settings of the Contributor Administration Console

Type of Macro Execution: batch file
Execution Location: Local machine where the batch file is running. The machine must have the Planning Server installed.
Credentials: Scheduler Credentials in the System Settings of the Contributor Administration Console

Run a Macro from Administration Console


You can run a macro using the Macro tool in Administration Console.

Steps
1. In the Contributor Administration tree, click the Macros icon.

2. In the Macros list, select which macro you want to run and click Execute.
A dialog box appears informing you that the macro is running.

Tips: You can monitor the progress of the macro in the Macro Steps list. You can view any
error messages by clicking Error Details. You can stop a macro by clicking Stop. The macro
will stop before the next macro step begins.

Run a Macro from Cognos Connection


Once published to Cognos Connection, you can run the macro or create a job and use the macro
or job in an Event created in Event Studio.
You must first create a new macro or transfer an existing macro and publish it to Cognos Connection;
see "Creating a Macro" (p. 192).
To secure access to Contributor macros in Cognos Connection, see "Set Access Rights for
Contributor Macros in Cognos Connection" (p. 41).

Steps
1. In Cognos Connection, in the upper-right corner, click Cognos Administration.

2. Click the Configuration tab and then click Content Administration.

3. Click Planning and then click Macros.

4. Set the general properties and permissions, see the Cognos 8 Administration and Security Guide.


5. To run the macro immediately or schedule it to run at a specified time, click Run with options

and select to run now or later. If you select later, choose a day and time to execute the
macro and click OK.

6. To create a recurring schedule to run the macro, click Schedule .

7. Under Frequency, select how often you want the schedule to run.

The Frequency section is dynamic and changes with your selection. Wait until the page is
updated before selecting the frequency.

8. Under Start, select the date and time when you want the schedule to start.

9. Under End, select when you want the schedule to end.

Tip: If you want to create the schedule but not apply it right away, select the Disable the schedule
check box. To later enable the schedule, clear the check box.

10. Click OK.


The macro schedule is created and the macro runs at the next scheduled time.

Run a Macro from a Cognos 8 Event


You can create events that run Contributor macros when specified conditions are met. For example,
you can move data between applications using an Event Studio Agent to trigger a Contributor
macro that uses an administration link.
When you specify an event condition, you describe specific occurrences that an agent must detect
before it performs its tasks. The event condition is a query expression that you create using items
from the package.
Task execution rules specify when a task is performed. By default, a task is performed for new
instances of events and all ongoing instances of events, but you can change this.
You specify the task execution rules separately for each task in the agent.
For more information about creating an event and agent, see the Cognos 8 Business Intelligence
Event Studio User Guide.

Steps

1. In Event Studio, click the Actions menu and then click Specify Event Condition .

2. Create a detail expression, a summary expression, or both by doing the following:


● If you want part of the event condition to apply to values of individual source items, click
the Detail tab and follow step 3.

● If you want part of the event condition to apply to aggregate values, click the Summary
tab and follow step 3.

3. In the Expression box, create a query expression by doing the following:


● Type text or drag items from the source tab.

● Type text or drag operators, summaries, and other mathematical functions from the
functions tab.

Tip: To see the meaning of an icon on the functions tab, click the icon and read the
description in the Information box.

4. If you want to check the event list to ensure that you specified the event condition correctly,
from the Actions menu, click Preview.

5. If you want to know how many event instances there are, from the Actions menu, click Count
Events.

6. From the File menu, click Save As .

7. Specify a name and location for the agent and click OK.

8. In the I want to area, click Add a task.

9. Click Advanced.

10. Click Run a planning macro task.

11. In the Select the planning macro dialog box, specify the task to include in the agent by searching
the folders to find the task you want and clicking the entry.

12. Under Run the planning macro task for the events, review the event status that will cause the
task to be run.

13. From the File menu, click Save .


If you want to add other events, see Cognos 8 Business Intelligence Event Studio User Guide.

14. In the I want to area, click Manage the task execution rules .

15. On the source tab , click one or more data items that uniquely define an event and drag them
to the Specify the event key box.

16. Click Next.

17. On the Select when to perform each task page, do the following:

● In the Tasks box, click the task that the agent will perform for the event statuses you specify.

● Under Perform the selected task for, select one or more event status values.

18. If you want to manage the execution rules for another task, repeat step 4.

19. Click Finish.


The execution rules for each task you selected are set.

Tip: If you want to reset the execution rules for every task in the agent to the default values,
from the Actions menu, click Remove Task Execution Rules. Each task is reset to be performed
for new instances of events and all ongoing instances of events.


20. Save the agent.

Run a Macro using Macro Executor


The Macro Executor is a program that takes the macro step and creates a component that performs
the action. After the component is created, the macro step is passed to it and then run.
MacroExecutor is typically installed to C:\Program Files\Cognos\C8\bin\epMacroExecutor.exe.

Note: Double-clicking epMacroExecutor.exe displays the available Command Line options.

Important: The Macro Executor must be installed on the computer you are trying to run it from.
You cannot execute it remotely.

The Macro Step Plug-in


Each macro step is associated with a macro step plug-in. In addition to running the specified
functionality, it validates the structure of the macro step.

Run a Macro using Command Line


You can run the macro by typing the following from the command line, substituting the appropriate
macro name:
"C:\Program Files\Cognos\C8\bin\epMacroExecutor.exe" MacroName

Important: If the macro name has spaces in it, you must enclose it in quotes.

Run a Macro using Batch File


You can use the Windows built-in Scheduler (Control Panel, Scheduled Tasks).

Important: If jobs are scheduled to start while Go to Production is running for an application, the
job will fail.
The following is an example of a batch file (.bat) that can be used to run the macro:
"C:\Program Files\Cognos\C8\bin\epMacroExecutor.exe"
MacroName
IF ERRORLEVEL 1 GOTO ExceptionDetectedSettingUpLabel
IF ERRORLEVEL 2 GOTO ExceptionDetectedExecutingLabel
ECHO Succeeded
GOTO EndLabel
:ExceptionDetectedSettingUpLabel
ECHO Exception detected in setting up macro
GOTO EndLabel
:ExceptionDetectedExecutingLabel
ECHO Exception detected in executing macro
GOTO EndLabel
:EndLabel
PAUSE
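
If you prefer to drive the executor from a script rather than a batch file, the following Python sketch (not supplied with the product) runs the same command and interprets the return codes in the same way. The install path and macro name are placeholders; substitute your own.

import subprocess

EXECUTOR = r"C:\Program Files\Cognos\C8\bin\epMacroExecutor.exe"
MACRO_NAME = "MacroName"  # substitute your macro name; subprocess handles any spaces

result = subprocess.run([EXECUTOR, MACRO_NAME])
if result.returncode == 0:
    print("Succeeded")
elif result.returncode == 1:
    print("Exception detected in setting up macro")
elif result.returncode == 2:
    print("Exception detected in executing macro")
else:
    print("Unexpected return code: %d" % result.returncode)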

Microsoft Calendar Control


The Macros tool uses Microsoft Calendar Control to allow for date selections.


If you want to enable the use of the calendar graphic for date selection, this calendar control
(MSCAL.OCX) must be installed and registered on the computer on which Administration Console
runs. This control is available in the Microsoft Office suite or from Microsoft Visual Studio.

Date Formats
The format of dates used in the Macro step is as follows:
yyyy-mm-ddThh:nn:ss.ttt+00:00
It is in ISO8601 format, where:
● yyyy = year

● mm = month

● dd = day

● T (signifies time)

● hh = hour

● nn = minutes

● ss = seconds

● ttt = milliseconds

● +00:00 = the time zone offset in hours and minutes, relative to GMT (Greenwich Mean Time),
also known as UTC (Coordinated Universal Time)

Here is an example:
2002-11-13T15:10:31.663+00:00
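
If you need to generate a timestamp in this format, for example for the Apply date filter parameter of the Delete Commentary macro step, the following Python sketch (illustrative only, standard library) produces a UTC timestamp matching the pattern above.

from datetime import datetime, timezone

now = datetime.now(timezone.utc)
# isoformat with a milliseconds timespec matches yyyy-mm-ddThh:nn:ss.ttt+00:00
print(now.isoformat(timespec="milliseconds"))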

Troubleshooting Macros
You can monitor the status of macros within the Contributor Administration Console. If a macro
fails, the Error Details button is enabled. Click this button to display information about why the
macro failed. You can also find error messages in the error log.

Unable to Run Contributor Macros Using a Batch File


If you are running a Contributor macro using a batch file and it does not work, check the following:
❑ Can you run the macro using the Contributor Administration Console?
If not, check that the parameters are correct.

❑ Is the batch file syntax correct?


"C:\Program Files\Cognos\c8\bin\epMacroExecutor.exe" /Macro=pad/pad_name/macro/
macroname


Check that the correct filename is used in the actual macro step. If the file is open when the
batch command is run, the command fails and returns an error code of 1.
Check that the correct case is used for the macro name. The macro name is case sensitive.
Look in the TEMP folder of the Planning Service user context on the machine running the
Planning Service where the macro was executed. If you see the following message in the Error
Description column, your security may not be set up correctly. For more information about
security, see "Security" (p. 27).
"Permission denied"

Note: You can only run macros from the command line on a computer with a server install.

Chapter 12: Data Validations

Data validation is the process of aligning plans with targets by enforcing business rules and policies.
Use the data validation feature in Cognos 8 Planning to define rules that ensure that incoming data
in a Contributor application is in the right format and conforms to existing business rules. Building
data validations involves defining a business rule that specifies the criteria that user input must meet
for the entry to be accepted.
A validation rule represents a single data entry requirement imposed on a range of cells in a single
cube of a model. This requirement is expressed as a rule or boolean formula (true or false) that
identifies invalid data entries when contributors or reviewers attempt to save or submit a plan. A
rule set is a collection of rules that can be associated with e.Lists and fail actions.
Data validation in Cognos 8 Planning has the following benefits:
● You can apply different rules to different e.List items.

● It reduces the number of conditional IF-THEN-ELSE formula flags in an Analyst model.

● It centralizes data validation rule definitions.

Validation Methods
Cognos 8 Planning provides these methods for validating data:
● presence check
Validates input into empty numeric or text cells. This method checks that critical data is present
and was not omitted from the plan. For example, contributors must enter forecast data for
product sales or provide an explanatory note for variances.

● dependencies
Validation of a text cell is based on values in other cells that contain single or compound conditions. For
example, contributors must enter an explanation into a text cell for any capital request that
exceeds $25,000 or for a capital request in the Other category greater than $25,000.

● business rule compliance


Ensures that the data entered conforms to the business rules. For example, the pay range for
new hires must not exceed the top of the pay scale.

● single numeric validations, with variation by e.List item


For example, all Division A profit centers must have a forecast profit margin that is equal to
or exceeds 20% and a profit that exceeds $250,000. All Division B profit centers must have a
forecast profit margin that is equal to or exceeds 15% and a profit that exceeds $0.


Validation Triggers
Validation rules are run on the Contributor Web client or on Contributor for Excel. A rule is
evaluated under one or more of the following conditions:
● automatically, when a contributor saves a plan or when a reviewer submits a plan

● manually, when a contributor or reviewer selects the Validate Data option from the File menu
or the Validate Data toolbar button in the Web client

● manually, when a contributor or reviewer selects the Validate Data option from the Contributor
menu in Excel

When one of these triggers occurs, the rule formula is evaluated to either pass or fail. If any of the
rules in the rule set detects a failure during the evaluation, the rule set is considered to have failed
and the fail action specified in the rule set is performed. The contributor or reviewer may be prevented
from saving or submitting the plan.
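
Conceptually, a rule set behaves like a list of boolean checks over cell values, as in the following Python sketch. This is an illustration only, not the product's evaluation engine; the rule names, cell values, and thresholds are invented for the example.

cells = {"Margin": 0.14, "Profit": 300000}

rules = [
    ("Margin must be between 15% and 18%", lambda c: 0.15 <= c["Margin"] <= 0.18),
    ("Profit must exceed $250,000", lambda c: c["Profit"] > 250000),
]

failures = [name for name, check in rules if not check(cells)]
if failures:
    # The rule set fails if any rule in it fails; the configured fail action
    # (Message Only, Restrict Submit, or Restrict Save and Submit) then applies.
    print("Validation failed: " + "; ".join(failures))
else:
    print("Validation passed")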

Setting Up Data Validation


You administer and maintain data validation in the Contributor Administration Console. The Data
Validations branch under the Applications node includes specific components that are required at
each stage of the data validation workflow.

Important: Rules that were built in versions prior to Cognos Planning 8.2 are incompatible with
the current product version and must be redefined as follows.
The process flow for defining validations is as follows:
❑ Plan how the rule applies to a reviewer, contributor, or both a reviewer and contributor (p. 229)
For reviewers, define validation rules against post-aggregate D-Lists. Because reviewers are
managing the aggregate of their contributors, the post-aggregate calculations, which are results
from all e.List items, are applicable only to the reviewer.
For contributors, define validation rules against pre-aggregation D-Lists (before the e.List).
For contributors and reviewers, define validation rules against post-aggregate calculations.

❑ Synchronize the Contributor applications with the Analyst models


Ensure that all cubes in an application are updated when the underlying objects in Analyst
change. Changes to the model may include renaming dimensions or adding, deleting, or renaming
dimension items. By synchronizing, you can import from Analyst the updated cube definition
in the application.

❑ Define one or more rules (p. 233)

In the Data Validations, Rules folder, use the Validation Rule wizard to define the validation
rules. Specify the rule message that appears when data fails validation, so that contributors or
reviewers can react to the failed entry. You also specify the D-Cube to which the validation
applies, the measures dimension whose items are used to define the rule formula, and the scope
or target range for validation.


❑ Define one or more rule sets (p. 235)


After you create a rule set, you must add at least one rule to the rule set.

In the Data Validations, Rule Sets folder, you can create rule sets by adding one or more rules,
and assigning fail actions. A rule set applies to a single data validation process.

❑ Associate the rule sets with groups of e.List items (p. 237)

In the Data Validations, Rule Set e.List Items folder, associate the rules sets with the groups of
e.list items. Specify the roles, such as contributors, reviewers, or contributors and reviewers
and their subordinates, to which the rule set applies.

❑ Run the Go to Production process


Perform this process to make the application, including its new business rules and data format
constraints, available to users on the Web client and Contributor for Excel.

The Impact of Aggregation on Validation Rules


Before creating a validation rule, it is important to understand how the aggregation of totals works
for a Contributor application. For Contributor models, data is stored according to the e.List item.
For a contributor, only one e.List item exists. That means calculations that occur after the e.List
item must include only their own e.List item.
For a reviewer, the data is an aggregation of all the e.List items that roll up into that reviewer.
Therefore, the model may contain sums of the values for the e.List items (pre-aggregation), and
other calculations that are based on the sum of the values after the e.List items are aggregated
(post-aggregation).
Because an e.List item for a reviewer is an aggregation of multiple e.Lists, the calculations that
appear in the D-Lists after the e.List dimension must include data from all the e.List items that roll
up into the e.List item for that reviewer. These calculations change when the lower-level e.List items
save data. We recommend aggregating the values of the calculations before the e.List dimension
(pre-aggregation calculations) because the post-aggregation calculations are recalculated when data
changes for one of the e.List items that is aggregated. The reason is that the pre-aggregation
calculations belong within a single e.List item and the data from other e.List items does not affect
the calculation result.

Example - In Analyst, Setting Up the D-Cube


To use pre-aggregation calculations for a dimension, you must ensure that the order of the dimension
list in a D-Cube is correct. When a planner sets up a D-Cube, D-Lists are chosen in the order that
uses the calculation D-List first and the aggregation D-Lists last.

In the following example, the first dimension, RollupTest Slots, holds data values. The second
dimension, RollupTest FirstDim, occurs prior to the e.List item, so its calculations are
pre-aggregation. The next dimension is the e.List. The final dimension, RollupTest LastDim, is
post-aggregation because it occurs after the e.List.


As shown next, the RollupTest FirstDim D-List includes the Conditional item, which is a test for
data input greater than 50,000, with a default calculation option of Force to Zero for the Flag
Value. That means it will not calculate a value for this aggregate.

The next example shows the RollupTest LastDim D-List that includes Conditional1 items that test
for LinkedValue greater than 50,000.


The following graphic shows the RollupTest LastDim D-List that includes Conditional2 items that
test for InputValue greater than 50,000.

The D-Cube that is built with RollupTest FirstDim and RollupTest LastDim shows how the values
for cells are calculated.


When the underlying calculations are defined in Analyst, you can view the outcome in the
Contributor Web client. Suppose that in the Contributor Web client there are three e.List items:
A1 Profit-center and A2 Profit-center, which roll up into A Region.
For A1 Profit-center, because the pre-aggregate test is based on a conditional value of 50,000, only
one input cell passes the test. The Conditional cell tests against the Input Value item, so the
Conditional values are 0 and 1. Conditional1 tests LinkedValue and Conditional2 tests InputValue.
The post-aggregate items are Text1 and Text2.

For A2 Profit-center, you can see how the pre-aggregation (LinkedValue and InputValue) and
post-aggregation (Text1 and Text2) tests change.

The Reviewer e.List item shows that the Flag Value is not present. The Force to Zero option in
Analyst suppressed this from the reviewer e.List item because no data was present.

The Conditional values are sums of the values from the Contributor e.List items. For A1 Profit-center,
the two values are 0 and 1. For A2 Profit-center, the values are 1 and 1. The aggregation adds the
values of these flags, giving the reviewer values of 1 and 2.

Conditional1 and Conditional2 do not have their values added because the calculation is recalculated
against the aggregation total. Note that the Conditional1 value shows the test failing in the first
row for the reviewer. Text1 and Text2 reveal that the formatted D-List items appear based on the
recalculated Conditional1 and Conditional2 fields.
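
The following numeric sketch (illustrative only; the contributor values are invented, and the 50,000 threshold comes from the example) contrasts the two behaviours: a pre-aggregation flag is calculated per contributor and then summed for the reviewer, whereas a post-aggregation flag is recalculated against the reviewer's aggregated total.

THRESHOLD = 50000
contributors = {"A1 Profit-center": 40000, "A2 Profit-center": 60000}

def flag(value):
    # 1 if the value exceeds the threshold, otherwise 0
    return 1 if value > THRESHOLD else 0

# Pre-aggregation: flag each contributor, then the reviewer sees the sum of the flags.
pre_agg_flags = {name: flag(v) for name, v in contributors.items()}
reviewer_pre_agg = sum(pre_agg_flags.values())        # 0 + 1 = 1

# Post-aggregation: aggregate first, then recalculate the flag on the total.
reviewer_total = sum(contributors.values())           # 100000
reviewer_post_agg = flag(reviewer_total)              # recalculated: 1

print(pre_agg_flags, reviewer_pre_agg, reviewer_post_agg)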


Define a Validation Rule


To support your business decisions, your contributors must enter data that is valid and accurate.
Validation rules give you a simple way to build business logic that checks the validity of data before
a plan is saved or submitted.
Rules also include an error message that appears when the rule returns a value of false. The text
strings in error messages, names of the validation rules, and rule set names, can be translated through
the Translation node in the Development branch in the Administration Console. Use the Content
Language tab, which handles the translation of model strings, to specify your settings. Use the
Product Language tab to translate all fixed parts of the validations, such as the following message:
Submit is not allowed due to one or more blocking validation errors

When creating a rule, you must also specify the cell range (scope) that is subject to validation.
A validation rule contains a formula or expression that evaluates the data in one or more cells in
the grid and returns a value of true or false. If you require complex expressions that are
cross-dimensional or deeply nested, we recommend that you first construct them in Cognos 8
Planning - Analyst.
● Do not create contradicting rules for the same target range because they will prevent contributors
or reviewers from saving the plan.

● If an entry does not conform to a rule, ensure that you provide explicit instructions in your
message. For example, instead of stating invalid entry, state the message as Capital costs
greater than $25,000 must be pre-approved.

● Consider which items are visible and editable for the contributor. Cells that are readable can
cause a validation error, but hidden or no data cells do not impact validation rules and cannot
cause a validation error.

You can use saved selections to specify the data that you want validated. You can name and save
a collection of items from a dimension using the Access Tables and Selections node under the
Development branch in the Administration Console. Saved selections are dynamic in that the items
in the selection change when an application is synchronized following changes to the Analyst model.
Hidden and empty cells are not validated when the rule set is run.

Steps
1. Open the Administration Console.

2. In the Administration tree, expand Datastores, DatastoreServerName, Applications,


ApplicationName, Development, and the Data Validations folder.


3. Click the Rules folder.

4. Click New.

The New Validation Rule wizard appears.

5. On the Welcome page, click Next.

6. On the Validation Rule Options page, do the following:

● In the Rule Name box, type a unique name that distinguishes the validation rule from
others.
No blanks or special characters, such as apostrophes (’), colons (:), question marks (?), and
quotation marks ("), are allowed.

● In the Rule Message box, type the error text message that you want the contributor or
reviewer to see if the validation fails.
We recommend that a message is included in the rule to facilitate data entry. The message
should contain meaningful information that helps the contributor or reviewer enter the
correct data.

7. Click Next.

8. On the Validation Rule Cubes page, select the D-Cube against which the rule is applied, and
click Next.
Assumptions cubes do not appear in the list of available D-Cubes.

9. On the Validation Rule Dimension Selection page, select a measures dimension in the D-Cube
whose items are used to create the boolean formula for the rule, and click Next.
A rule expression is defined against a specific dimension in the selected D-Cube. All dimensions
of the cube, except the e.List, are listed.

10. On the Validation Rule Expression page, build the business logic by defining a rule formula
that evaluates to either true or false, and click Next.

11. Under Available components select items from the specified dimension in the D-Cube that you
want to use to define your rule expression and then click the arrow to move them to the
Expression definition box. Use the IF statement, AND/OR boolean operators, or logical
comparison operators, such as =, <>, and <=. It is not necessary to begin the validation rule
formula with an IF function. You can use any boolean condition expression.
For example, one of your divisions is facing competitive pricing and has a minimum and
maximum margin target. Corporate marketing wants their average forecast margin to be greater
than or equal to 15% and less than or equal to 18%. In this case, the rule expression is defined
as (Margin >= 0.15) AND (Margin <= 0.18).

12. On the Validation Rule Scope page, select the cell range or D-Cube slice that you want to
validate, and click Next.
You can specify the range by selecting items from each dimension in the cube. Items can also
include saved selections.


All dimensions in the selected D-Cube are available with the exception of the measures and
e.List dimensions. Note that because the <ALL> item includes aggregates as well as details, it is
not an optimal data item to include in a rule.

13. Click Finish. If you want to change or review your settings, click Back.

The rule is automatically saved and associated with the model. It is now available for inclusion in
a rule set.

Define or Edit a Rule Set


You can add rules to rule sets. Each rule set is associated with an action that occurs if the data entry
fails validation. You can use more than one rule set for each validation process on a cube, with
multiple fail actions or messages.
You must create one or more rules before you can define a rule set.

Steps
1. Open the Administration Console.

2. In the Administration tree, expand Datastores, DatastoreServerName, Applications,


ApplicationName, Development, and the Data Validations folder.

3. Click the Rule Sets folder, and choose whether to create a new rule set or edit an existing one:

● To create a new rule set, click New.

● To edit an existing rule set, click the rule set that you want to change, and then click Edit.

4. In the Rule Set Name box, type a unique name for the rule set that distinguishes it from the
others.

5. In the Fail Action box, specify one of the following types of action to be triggered when one
or more rules in the rule set fails validation:

● To show only the rule message and take no action, click Message Only.

● To show the rule message and restrict contributors or reviewers from submitting the plan,
click Restrict Submit.

● To show the rule message and prevent contributors or reviewers from either saving or
submitting the plan, click Restrict Save and Submit.

Important: Use caution when applying this setting; it is not considered a best practice. If one
or more rules fail, contributors or reviewers cannot save the plan. To close the plan, the rules
must be resolved to a value of true, which may not be possible for the user to achieve.

6. In the upper rule grid, select the rule or rules that you want to include in the rule set, and click
Add.
The selected rule or rules appears in the lower grid.

Tip: You can remove a rule from the rule set using the Remove button.


7. Click OK.

8. Click the Save button to save the rule set.

You can now associate the rule set to e.List items (p. 237).

Edit a Validation Rule


You can modify a rule to better reflect the constraints placed on data entry for a particular D-Cube.

Steps
1. Open the Administration Console.

2. In the Administration tree, expand Datastores, DatastoreServerName, Applications,


ApplicationName , Development, and the Data Validations folder.

3. Click the Rules folder.

4. Select a rule that you want to change, and click Edit.

5. Choose how you want to modify the rule, and click OK.

6. If you want to rename the rule to something more obvious, in the Name box, type the new
name.

7. If you want to change the message to reflect the new constraints or limitations, in the Message
box, type the new message.

8. If the constraint on data entry changed, do the following:

● Click the ellipses button next to the Expression box.

● In the Edit Validation Rule Expression dialog box, define the new boolean expression used
to evaluate data entry and click OK.

● Under Available components, select items from the specified dimension in the D-Cube that
you want to use to define your rule expression, and then click the arrow to move them to
the Expression definition box. Use the IF statement, AND/OR boolean operators, or logical
comparison operators, such as =, <>, and <=. It is not necessary to begin the validation rule
formula with an IF function. You can use any boolean condition expression.

9. If you want to change the slice in the D-Cube that is subject to validation, under Scope, click
the new items under each dimension in the D-Cube.

10. Click OK to save your changes.


Associate Rule Sets to e.List Items


To specify who can enter data, who can only read data, and from whom data is hidden, you can
associate a rule set to an e.List dimension or to one or more of its items. For example, an e.List
item can be a cost center or sales division.

Steps
1. Open the Administration Console.

2. In the Administration tree, expand Datastores, DatastoreServerName, Applications,


ApplicationName ,Development, and the Data Validations folder.

3. Click the Rule Set e.List Items folder.

The Validation Rule Set grid shows all the available rule sets.

4. Associate a rule set to an e.List item as follows:

● Under Validation Rule Set, click a rule set.

Tip: Press Ctrl + click to select multiple rule sets.

● Under E.List Item Name, click the e.List item to which you want to apply the rules, and
click Add. You can also select ALL, All DETAIL, ALL AGGREGATE, or any e.List saved
selections.

The By Rule Set tab is filtered by rule sets and is sorted by all the rule sets and their
associated e.List items. The By e.List Item tab is filtered by e.List items that are associated
with the current rule sets.

5. Click the Save button to save the associations to the e.List.

Chapter 13: The Go to Production Process

Use Go to Production to formally commit a set of changes. Any issues, such as invalid editors, an
invalid e.List, or destructive model changes, are reported by the Go to Production process.
The Go to Production process can be automated (p. 200) so that you can schedule it to run during
slow periods.
Go to production consists of the following stages:
● pre-production

● go to production

● post-production

The crucial stage is the go to production stage where the old production application is replaced by
the incoming development application.
Until the go to production stage, all Web client users can use the old production application as
normal. Immediately after the go to production stage, a new production application exists that Web
client users then use. After the go to production stage, Web client users attempting to open a model
have access only to the new production application. However, if users are already viewing or editing
a model from the old production application at the time of the go to production stage, client-side
reconciliation is required.
In the go to production stage, the old production model definition is replaced by a new production
model definition (the incoming development model definition). Many development changes have
no effect on the structure or content of production data blocks and affect only the production model
definitions. An example is everything appearing within the Web-Client Configuration branch in
the Contributor Administration Console. If these are the only changes made, the production
application is fully updated when the go to production stage is complete.
Other development changes require a new production model definition and require the production
data blocks to be updated. For example, synchronizing with the Analyst model can change the
structure and content of data blocks in many ways. If changes that affect data blocks have been
made, the go to production stage fully updates the production model definitions as normal, but the
data blocks are updated by a subsequent reconciliation process.
If an import is performed, an Analyst to Contributor D-Link is run, or if an administration link is
run in the development application, then import data blocks are created. If there are import data
blocks at the point of the go to production stage, these import data blocks are moved into the new
production application. After the go to production stage, the import data blocks are combined with
the production data blocks, which are also handled by the reconciliation process. After this, the
import data blocks are removed from the development application.
In summary, the go to production stage replaces the old production model definition with a new
one, and moves any import data blocks into production. If import data blocks or changes that affect


production data blocks are made, the production data blocks are updated by a reconciliation process
that follows Go to Production.
During the go to production stage, the application is taken offline temporarily to ensure data integrity.
The new e.List item workflow states are determined to correctly process any e.List hierarchy changes.
As soon as those changes are applied, the application goes online again and the post-production
processes are started. This offline period is typically so short that it is transparent to users, but it
can sometimes exceed one minute.

Planning Packages
In Cognos 8, a package is a folder in Cognos Connection. You can open the package in a studio
to view its content. A Planning package is a lightweight package that contains only the connection
information to the cubes in the Planning application. The D-List and D-List item metadata are
extracted from the Planning application at run-time.
To access Contributor applications, you must select the option to create the package when you run
Go to Production. This option also gives users access to Cognos 8 studios from the Contributor
application if they have the studios installed and enables users to report against live Contributor
data using the Planning Data Service (p. 304).
You may choose not to create a package if you just want to publish the data and create a PowerCube
or a Framework Manager model using the extensions. This saves time because Go to Production
finishes more quickly.
To create a Planning Package, you must have the Directory capability. This is not part of the Planning
Rights Administrator role, but it is part of the Security Administrator role. For more information,
see "Capabilities Needed to Create Cognos 8 Planning Packages" (p. 32).
The Planning Package is created with the same display name as the Contributor application by
default, and a data source connection named Cognos 8 Planning - Contributor is created in
Framework Manager. You can configure the name of the Planning Package, and add a screen tip
and description. For more information, see "Set Go to Production Options" (p. 79).
The security on the Planning Package is as follows:
● The Planning Rights Administrator role is granted administrative access to the package.

● All the users who have access to the application are added as users of this package.

● The user who is logged on to the console when performing the Go to Production is the user
who creates the package. Therefore, that user is given administrative access to the package.
This user is not necessarily a planning administrator because they could have been granted only
Go to Production permission by a planning administrator.

If you remove an application from the console, any corresponding planning package in Cognos
Connection is disabled. The package will be hidden from the users and will appear with a locked
icon to administrators. This allows administrators to maintain an application while making it appear
offline to users. When the application is re-added in the console, the corresponding planning package
is re-enabled.


Reconciliation
The reconciliation process ensures that the copy of the application used on the Web is up to
date. For example, all data is imported, new cubes are added, and changed cubes are updated. For
more information, see "Reconciliation" (p. 52).
The first time Go to Production is run for an application, all e.List items are reconciled. Subsequently,
only some changes require e.List items to be reconciled. Reconciliation can take some
time, depending on the size of the e.List. If you are making changes that require reconciliation,
check that you made all required changes before running Go to Production.

The Production Application


When you first create a Contributor application, there is only a development version. The production
version is created only after Go to Production is run. The production version of the application is
the version that users see on the Web. Changes made in the development application apply only after
Go to Production is run.
The production version of a Contributor application consists of one or more model definitions,
and one data block for each contribution or review e.List item.
Before you can run Go to Production, the Contributor application must contain at least an e.List.
You can run the Go to Production process without setting any rights, but then no one can view the
application on the Web. You can preview the application prior to setting any rights by selecting
Production, Preview in the Administration Console.
When you start Go to Production, job status is checked. If jobs are running or queued, you cannot
run Go to Production. A message appears prompting you to go to the Job Management window.
For more information, see "Managing Jobs" (p. 50).
If there are discrepancies in information about translations in the datastore table and the model
XML, you cannot run Go to Production. For more information, see "Datastore Options" (p. 80).

Model Definition
A model definition is a self-contained definition of the model. It holds definitions of the dimensions,
cubes, and D-Links of the model, as set up in Cognos 8 Planning - Analyst. It also holds details of
modifications applied in the Contributor Administration Console. This includes configuration
details, such as navigation order for cubes, options such as whether reviewer edit or slice and dice
are allowed, and Planning Instructions. A model definition also includes e.List details and access
table definitions. It also contains all assumptions cube data, because this does not vary by e.List
item.

Data Block
The data block for an e.List item contains all data relevant to an individual e.List item, except
assumptions cube data (p. 115). It contains one element of data for every cell in the model, except
for any cell identified as No Data by Contributor access tables (p. 118). No Data cells are generally
treated as if they did not exist. This reduces the volume of data that must be downloaded to and
uploaded from clients, speeding up client-side recalculation and server-side aggregation.
When a Web client user opens an e.List item by clicking its name in the status table, a model
definition is opened, and then the appropriate data block is loaded into it. If a multi e.List item
view is opened, more than one data block is loaded. Wherever possible, the model definition and
data block are loaded from the client-side cache if enabled. If the client-side cache does not contain
an up-to-date version of the model definition or the data block, they are downloaded from the
server. Note that data in the data block is not compressed, although compression and decompression
take place on transmission to and from the client.
In addition to a data block, each e.List item also has an annotation block. Various translation tables
exist if multiple languages are used.

Production Tasks
You can do the following to the production version of a Contributor application:
● publish data
You publish the production version of the application. However, you must set dimensions for
publish in the development application and then run Go to Production to apply the changes to
the production version. This is because setting dimensions for publish requires datastore tables
to be restructured (p. 79).

● delete user and audit annotations

● preview the workflow state

● preview the model and data

● configure extensions
Extensions allow you to extend the functionality of Contributor in ways that fulfill business
requirements. For example, an extension can use the existing data in Contributor and export
it to create reports. An extension can also extend printing to different types of formatting.

● run administration links

Cut-down Models and Multiple Languages


If neither cut-down models nor multiple languages are used, there is just one master model definition
for an application.
When cut-down models are used, separate model definitions are produced for individual e.List
items or groups of e.List items, according to the cut-down model option chosen (p. 245).
When multiple languages are used, a separate master model definition exists for each language.
Each master model definition contains just the relevant translated strings to prevent the master
model definition from becoming excessively large.
An application may use multiple languages and cut-down models.


The Development Application


Web client users interact only with the production version of an application. Most application
configuration or administration is performed in the development version of an application. This
has no impact on the production application or the Web client users until you run Go to Production.
In considering the Go to Production process, the following components are relevant:
● The development version of the master model definition

● A set of import data blocks

Development Model Definition


With the exception of data import, any configuration performed in the development application
affects the development model definition. In the Contributor Administration Console, changes are
implemented to the development model definition as soon as they are saved. For example, new
access table definitions are stored in the development model definition when you click the save
button in the main Access Tables window.
You cannot undo individual changes made to the development application. However, you can reset
the entire development application so that it matches the current production application by using
the Reset Development to Production toolbar button.

Import Data Blocks


The final stage of the Contributor import process creates a set of import data blocks. The Prepare
import stage extracts data from the import staging tables and creates one import data block for
each contribution e.List item with import data. Import data blocks contain data for only one e.List
item, and contain only valid data. Invalid data, such as data that does not match any dimension
item, or that targets formula items or No Data cells, is written to an import errors table (ie_cubename).
No import data block is created for e.List items with no import data.
By removing all irrelevant or invalid data, the import data block for an e.List item is kept as small
as possible. This is crucial for the subsequent reconciliation process, particularly for client-side
reconciliation.
Note that if you use the Prepare zero data option, an empty import data block is created for all
e.List items.
Because the Prepare process is performed using the job architecture (a Prepare_Import job), it can
be scaled out and monitored in the normal way. It does not conflict with other jobs for the
application, but it is not possible to run Go to Production until it is complete.
Analyst to Contributor D-Links and administration links that target the development application
also create import data blocks that are brought into the production application by the Go to
Production process. Data from links that target the production application is brought in to the
application by an activate process that triggers a reconcile job. For more information, see
"Reconciliation" (p. 52).


Run Go to Production
You must run Go to Production to commit any changes made to the development application, such
as changing configuration options, importing data, and synchronizing with Analyst.
You must wait for all jobs to stop running before running Go to Production. This includes the
reconcile job.
Before running Go to Production, ensure that:
● an e.List was imported and rights were set

● appropriate data dimensions for publish were set

● the Copy Development e.List item publish setting to production application and Prevent
client-side reconciliation options are set as required
For information about these options see "Go to Production Options Window" (p. 244).

Steps
1. Select the Contributor application.

2. Click the Go to Production button. If this button is not enabled, check that the application has
an e.List. If it does have an e.List, you do not have access rights to run Go to Production.

Go to Production Options Window


Use the Go to Production Options window to specify whether to back up the datastore
(recommended), whether to reset the Workflow State, and whether to show information about
invalid owners and editors. The Invalid owners and editors option is not relevant the first time you
run Go to Production.

Back-up Datastore
This option creates a backup of the development and production application and stores them in
the location specified during application creation or in the Datastore Maintenance window (p. 80).
We recommend that you set this option. If you clear this option, a warning advises you to make a
backup in case of problems. Note that when you automate the Go to Production process, there is
no backup option and you should schedule a backup to be made before running the Go to Production
process.

Create Planning Package


To access Contributor applications from Cognos Connection, you must select
this option when you run Go to Production. This option also gives users access to Cognos 8 studios
from the Contributor application if they have the studios installed, and enables users to report
against live Contributor data using the Planning Data Service (p. 304). For more information, see
"Planning Packages" (p. 240).


Display Invalid Owners and Editors


Use this option to choose whether to show owners and editors who become invalid when you run
Go to Production. This option can add some time to the Go to Production process.

Workflow States
Reset resets the workflow state of the e.List items in the Contributor application.
If required, select one of the following options:
● Not Started
This sets every e.List item back to the state of Not Started.

● Not Started and Work in Progress


This sets all e.List items that are not in a state of Not Started to the state of Work in Progress.
e.List items that are Not Started remain in this state. This option aggregates all data but does
not indicate whether e.List items were modified.

● Work in Progress
This sets all e.List items to a state of Work in Progress, meaning that changes were saved but
not submitted.

● Locked
This locks all e.List items. No changes can be made to locked e.List items, but the data can be
viewed.

Skip top e.List items enables you to reset all but the top e.List items.

Show Changes Window


The Show Changes window contains tabs that show the differences between the development
application that you are running the Go to Production process on and the current production
application. By looking at this information, you can see the effect that Go to Production has on the
production application.

The First Time Go to Production is Run on an Application


The first time you run Go to Production, no changes are shown. If you imported data, a tab shows
import data details (p. 250). You can cancel the Go to Production process at this stage.

After you view Import data details, click Next.


If you have set cut-down model options (p. 72), the cut-down models job monitor is shown until
the cut-down models job has finished running. You can cancel while the cut-down models job is
running, but not after it is complete. The final process is started automatically.

Subsequent Go to Productions
When you run Go to Production more than once, depending on the changes you have made, you
may see the following information:


● model changes

● import data details (p. 250)

● invalid owners and editors (p. 250)

● e.List items to be reconciled (p. 252)

Model Changes Window


The model changes window shows the changes made to the Cognos 8 Planning - Analyst model
since the previous production application was created, for example, which cubes were added,
removed, or had their dimensions changed, and whether dimensions were added, deleted, or
substituted.
Pay particular attention to changes that could affect production data, such as cubes or dimension
items deleted.
Click Advanced to view a detailed description of the differences between the previous Analyst model
and the current model. It lists the cubes and dimensions that changed. When you click an item, a
breakdown of the changes appears. Typically, this information is used for debugging purposes.

Changes to Cubes
When you expand the cubes listed under Common Cubes, the following branches are listed under
each cube: New, Old, and Differences.
New and Old contain the same categories of information and list what was in the old model and
what is in the new model.

Name Description

Dimensions The dimensions in the order in which they were in Analyst.

AccessTables The access tables assigned to the cube.

AccessLevel The base access level for the cube.


BaseFormat The D-Cube format as defined in Analyst. If no format is defined, it defaults to the system default.

UpdateLinks The update links that are associated with the cube.

BreakBackEnabled 1 = breakback enabled for the cube; -1 = breakback disabled for the cube.

MeasuresDimension -1 = no dimension for publish set on the cube; n = the position in the dimension order in Analyst of the dimension for publish, excluding the e.List.

AggregationDimension 1 = the cube has an e.List; -1 = no e.List.

The following window shows the differences for the D-Cube Revenue Plan that result from adding
a dimension:

Name Description

[New] Dimensions Lists the dimensions after the synchronize.

[Old] Dimensions The dimensions before the synchronize.

[New] AggregationDimension A positive number indicates that this cube contains the e.List.

[Old] AggregationDimension -1 indicates that there was no e.List in this cube in the old model.


Changes to Dimensions
When a dimension is changed, three branches are listed under each dimension: New, Old, and
Differences. When you click one of these branches, a table similar to the following appears. It lists
the following details for each dimension item in the New, Old, and Differences windows.

Name Description

Item Guid A unique internal reference for items in a model. When you add a dimension
item, this item is assigned a GUID.

Item Id Internal unique numeric reference.

Name The name of the dimension item.

Parent Index Internal reference that indicates the parent of each item in the hierarchy.

Changes to Links
When a link changes, three branches are shown: New, Old, and Differences.


The New and Old branches show the following information.

Name Description

Source The name of the source D-Cube.

Target The name of the target D-Cube.

Correspondences The matched dimensions if any exist.

Mode The execution mode selected for the link.

Scale The scaling factor applied to the D-Link.

UseRounding The rounding factor used.

RoundingDigit The rounding digit used.

Match names: Link name The matched dimensions.

SourceDimension The source dimension.


SourceSubColumns The source sub column, if it exists.

TargetSubColumns The target sub column, if it exists.

CaseSensitive 1 = on
0 = off

TargetCalculations The target calculation, if it exists.

DumpItem Indicates that data from unmatched source items is assigned to a dump item in the target D-List.

Selection:Dimension name The dimension name where there is no match.

Selection The selection of items.

IsTarget Indicates the target. 0 indicates a selection on the source dimension rather than the target.

When you click the Differences branch, you see an overview of the changes.

Import Data Details Tab


The Import data details tab appears only if you imported data. This window shows the e.List items
that have import data blocks prepared. Within each e.List item, the number of data cells prepared
per cube is listed.

Invalid Owners and Editors Tab


The Invalid owners and editors tab appears only if you selected the Display invalid owners and
editors check box in the Go to Production Options window.
This tab lists users who are currently editing or annotating, and who will become invalid when the
development application becomes the production application.

The Editor column lists the editors who are currently editing the e.List item.

Invalid Editors
An invalid editor is a user who was editing or annotating an e.List item when Go to Production
was run, and, due to a change, can no longer edit or annotate the e.List item. These changes can
be one of the following:
● The e.List item that was being edited or annotated was deleted.


● The rights of a user were changed to View.

● Reviewer edit (p. 72) is prevented.

● The review depths of an e.List item that is being edited by a reviewer or annotated were changed
so that the user no longer has access.

Editor Lagging
Editor lagging lists the Web client users who are editing at the time, either online or offline.

Steps
1. Go to the Production branch of the application, and then click Preview.

2. Right-click on the e.List item and select Properties.

3. Click the Editors tab.


If the e.List item is being worked on offline, a cross is shown next to Edit session connected.
This information matters because, in some circumstances, a user is unable to save their changes.
For example, if a reconcile job runs for the e.List item that is being edited or annotated and
client-side reconciliation is prevented, the user is bounced off the e.List item, losing any changes.
They can save the contents of the grid locally to a text file, and re-import the file. Another example
is when two reconcile jobs run for the e.List item while someone is editing or annotating it (online
or offline); the Prevent client side reconciliation option can be on or off. The user is bounced
off the e.List item, losing any changes, or is unable to bring the data online.
Note that an administration link or an Analyst to Contributor link to a production version of
an application causes a reconcile job to be run to update the e.List items.
Sometimes the Administration Console cannot detect whether a user is still editing or annotating,
although the Administration Console is always aware of the start of an editing session. An
editing session is ended when the user closes the grid, and submits the e.List item. Normally
these actions are detected by the Administration Console. However, if the user loses their
network connection while editing, the Administration Console thinks that the user is still editing.
If the user reconnects to the network and ends the session by closing the grid or submitting the
e.List item, the Administration Console detects that the edit session is ended.

How a User Can Lose Access to an e.List Item


The following changes result in an end user losing access to an e.List item they are currently editing
or annotating:
● A user who was editing or annotating an e.List item when Go to Production was run was
deleted.

● An e.List item that was being edited or annotated when Go to Production was run was deleted.

● The rights of a user who was editing or annotating an e.List item when Go to Production was
run have been changed to View.


● Reviewer edit was prevented and the reviewer was editing an e.List item when Go to Production
was run.

● The review depths of an e.List item that is being edited by a reviewer (with reviewer edit allowed)
or annotated have been changed so that the user no longer has access.

● A reconcile job (p. 52) is run for the e.List item that is being edited or annotated and Client
side reconciliation is prevented.

● Two reconcile jobs have been run for the e.List item while someone is editing or annotating it
(on or offline). Prevent client side reconciliation can be on or off. Note that running an
administration link or an Analyst<>Contributor link causes a reconcile job to run.

● Another user takes ownership of the e.List item while the current user is editing it or annotating.

When users lose access in any of these ways, they receive a warning message and the buttons in the
grid disappear. Users can right-click in the data and save it to a file.

e.List Items to be Reconciled Tab


The Owner column lists the current owners of the e.List items to be reconciled. By default, the
current owner is the first person in the Rights table to be assigned to the e.List item with rights
higher than View. If another owner takes ownership of the e.List item, they become the current
owner (p. 91). If the e.List item has no current owner, it shows System.
The Editor column lists the names of users who are currently editing or annotating the e.List item
to be reconciled. If the e.List item is currently being edited, this is the same name as in the Owner
column. If the e.List item is not being edited, the cell says No Editor.
The Import Data Block column indicates whether there are import data blocks for the e.List item.

Cut-down Models Window


If cut-down models (p. 134) are required, they are generated at this stage so that they are available
to Web client users as soon as the new application is available.
After the cut-down model job runs, the Finish window is displayed and the final step in the Go to
Production process is started automatically.

Finish Window
During the final stage of Go to Production, the following processes occur.

Datastore Backup
If this option was selected, a datastore backup is made after the cut-down models process.

Preproduction Processes
● A master model definition per language is produced.


● New e.List items are added to datastore tables.

● Import data blocks are associated with the production application.

● Error trapping takes place, for example, if there are no e.List items, Go to Production does not
take place.

Go to Production
● The development and production model definitions are unpacked and loaded into memory.

● Two e.List items are reconciled as a test. Most errors in reconciliation occur when the first
e.List items are reconciled.

● The switch-over from the development to the production application is performed. During this
stage, the system takes the application offline temporarily to ensure data integrity. The new
e.List item workflow states are determined during this time to correctly process any e.List hierarchy
changes. As soon as those changes are applied, the application goes online again and the
post-production processes are started. This offline period is typically so short that it is transparent
to users, but it can sometimes exceed one minute.

● The workflow state is refreshed to include new items.

● The new application is put into production.

● The Web application is restarted and users can now submit.

Post Production Tasks


● Any e.List items that were removed are deleted from the datastore.

● Completed jobs that are no longer relevant are removed from the job list, such as a publish of
a previous production application.

A message tells you that you successfully put the development application into production.
Then the following operations are performed:
● Obsolete cut-down models are removed by the cut-down tidy job.

● Old import data blocks are removed.

● A validate_users job is run to check that the current owner or editor of an e.List item can still
access the e.List item.

● Redundant copies of translations from the previous production application are removed by the
language_tidy job.

● If reconciliation is required, it is queued and run as soon as job servers are started and set up
to monitor the application.

Note: If you set the production application offline before running the Go to Production process, it
is offline when the Go to Production finishes running. If the production application is online before
running Go to Production, it is online when Go to Production finishes.

Chapter 14: Publishing Data

You can publish the data collected by Cognos 8 Planning - Contributor to a datastore, either from
the Administration Console, or using the publish macros (p. 208). The data can then be used either
as a source for a data mart or warehouse, or with Cognos 8 studios. The publish process creates a
datastore containing tables and views based on the publish layout and options that you select.

Publish Layouts
Choose from these types of publish layouts: table-only, incremental, and view.
● The table-only layout gives users greater flexibility in reporting on Planning data. The table-only
layout can also be used as a data source for other applications. This layout is required by the
Generate Framework Manager Model Admin extension (p. 308) and the Generate Transformer
Model Admin extension (p. 311).

● The incremental publish layout publishes only the e.List items that contain changed data. Users
can schedule an incremental publish using a macro (p. 213) or through Cognos Connection and
Event Studio. You can achieve near real-time publishing by closely scheduling incremental
publishes.

● The view layout generates views in addition to the export tables. This layout is for historical
purposes.

The Publish Process


Publishing data is a production task that can only be performed after you run Go to Production
and all e.List items that you are publishing are reconciled. You can monitor the progress of the
e.List items being reconciled in the Preview window (p. 295). If you are setting data dimensions for
publish for the view layout, you must do this before you run Go to Production. This is not necessary
for the table-only layout.
When you publish data, the following things happen:
● A publish container is created, if one does not already exist.

● An accurate snapshot is taken of the data at the time a publish is run to ensure a consistent
read.

● A publish job runs. This creates tables in the datastore, depending on the layout and options
selected. For more information, see "Jobs" (p. 47).

You can run a publish at the same time as an administration link (p. 145).


The Publish Data Store Container


You must publish to a separate publish container. This is because the main application container
holds the transactional planning data in compact XML binary large object (blob) format, and must
be backed up on a regular schedule based on the lifecycle of the transactional application.
The publish container contains a snapshot of the planning data in relational form, which has a
different lifecycle and a significantly different storage and performance profile. By having
separate publish containers, you can make use of dedicated job servers for publish datastores. The
two available publish layouts cannot coexist in a single datastore container.

Access Rights Needed for Publishing


You must be granted rights to publish data for the application (p. 37). If this is the first time you
run publish, you can either publish to the default publish container, or create a new publish container.
The default publish container resides on the same datastore server as the application with the same
tablespace settings (for Oracle and DB2 UDB).
For publish to be processed, the publish container must be added to a job server cluster or job server
(p. 56). Therefore you must either have the right to create a publish container, and assign objects
to a job server, or the publish container must be created and added to the job server already (p. 37).

Publish Scripts
You may need to create publish scripts before you can publish data if you do not have DBA rights
to your datastore server.

To generate publish scripts, the Generate Scripts option must be set to Yes in the Admin Options
table (p. 178).
If you attempt to publish but a publish container does not exist, a script is generated. A DBA must
then run the script to create the container. A message indicating the location of the script is shown.
If the publish container does exist, a check is run to see if there are any datastore incompatibilities.
If there are incompatibilities, another script is generated. Incompatibilities occur if you republish
a datastore, and the format of the metadata has changed between publishes. For example, a cube
was added, data dimensions changed, items were added to the Analyst model. There are always
incompatibilities on the first publish, since the metadata tables are not present. You cannot publish
until this script is run to update the datastore.

You can generate a synchronization script manually by clicking the Generate synchronization script
for datastore button.

Warnings
You may receive a warning similar to the following when running a script generated by Table-only
layout publish, when the Generate Scripts option is selected:


"Warning: The table 'annotationobject' has been created but its maximum row size (8658) exceeds
the maximum number of bytes per row (8060). INSERT or UPDATE of a row in this table will fail
if the resulting row length exceeds 8060 bytes."
The table definition allows for a large amount of data to be stored per row. SQL Server generates
a warning to let you know that there is a limit on how much data you can have on a row. If your
annotation data exceeds this limit then your publish will fail. You can reduce the amount of data
by selecting a smaller data dimension or by reducing the amount of data in the system, for example
by using Delete Commentary.

Selecting e.List Items to Be Published


When you publish data, select the e.List items that you want to publish.
Steps
❑ For a View layout, you can import the e.List item publish settings in the e.List import file. If
you do this, select the Copy development e.List item publish settings to production application
option. When you run Go to Production, the publish settings are copied to the e.List items tab.
If you have not imported the publish e.List item settings, you can also set them on the e.List
items tab.

❑ For a Table-only layout, you must set the publish e.List item settings on the e.List items tab.

Reporting Directly From Publish Tables


Cognos Planning published data is stored in standard datastore tables. You can report directly from
these tables, using the Generate Framework Manager Model functionality to help in the process.
You must not run reports during the publish process, as you may get inconsistent results. Also, if
destructive changes have been made to the Planning environment, the publish tables may no longer
match the ones defined in your reporting metadata model.
It is best practice to isolate the business intelligence reports from the source data environments by
creating a reporting datastore. This allows you to add value by bringing in data from other
applications. For example, perhaps there is some data which was optimized out of the Cognos
Planning application but would be useful for your reports.
At its simplest, this datastore could be a straight copy of the publish tables as produced by Cognos
Planning. It could also be a traditional data mart or an extension to your existing data warehouse.
Dimensionally-aware ETL tools such as Cognos Data Manager can also be used to ensure that a
single version of data runs through all your Cognos Planning and business intelligence applications.
If you report directly from Cognos Planning tables, be aware of the following:

Scenario and Version Dimensions


Cognos 8 is usually set up to automatically aggregate data around grouped items. If your report
does not contain all the dimensions from a fact table, then the data for the unspecified dimensions is
aggregated.


Normally, this is desirable behavior, but the Scenario and Version dimensions that are often used
in planning applications are not suited for aggregation. One technique to handle this is to set up a
mandatory filter on your cube tables in Framework Manager, forcing the reporting environment
to either prompt for values whenever the fact table is used, or to have separate filtered query subjects
for each version.

Precalculated Summaries
Be aware of precalculated summary levels in the published tables when using the Table-only publish
layout. You may find that they complicate your data model. You can disable them by clearing the
Include Roll Ups publish option.
If you do not do this, then the data for precalculated summary levels is published into the same
tables as the detail items. If you are using item tables (named it_D-List_name and containing an
unstructured flat list of all items in the hierarchy) this is acceptable. If not, you may have reporting
issues as your queries need dimensional context in order to avoid double counting.
Note also that the publish takes longer to run (there are more data points to write). If you are not
using the item tables, the reporting environment could confuse users because there are separate
hierarchy table aliases for each level in Framework Manager.

Model Changes that Impact the Publish Tables


Cognos Planning models are flexible and change to reflect changes in the business. These changes can
affect reports that are run from planning data. The following topics describe the impact that various
changes to the Planning model have on the publish tables.

Changes to Dimensions

Change: Add items
D-List (not Dimension for Publish): None, as long as the number of levels in the hierarchy remains the same.
Dimension for Publish: New columns are added. Existing SQL still works.

Change: Delete items
D-List (not Dimension for Publish): None, as long as the number of levels in the hierarchy remains the same.
Dimension for Publish: Columns are deleted. Processing referring to these columns must be modified.


Change: Rename items
D-List (not Dimension for Publish): None, unless name filters are used in the BI application.
Dimension for Publish: D-List formatted items are stored in the fact columns as text rather than as a foreign key. As a result, text exported from previously published data may not match this text. A full publish resets the text in the publish tables, but review external datastores where these items have not been normalized.

Change: Add hierarchy levels
D-List (not Dimension for Publish): New columns are created in dimension tables. Existing reports will not fail, but level naming may no longer be correct.
Dimension for Publish: None.

Change: Delete hierarchy levels
D-List (not Dimension for Publish): Columns are deleted in dimension tables.
Dimension for Publish: None.

Change: Reorder items
D-List (not Dimension for Publish): None.
Dimension for Publish: Datastore columns are reordered, but SQL still works.

Change: Refresh items from the datastore
D-List (not Dimension for Publish): None.
Dimension for Publish: SQL still works.

Change: Rename the D-List
D-List (not Dimension for Publish): Dimension table name changes.
Dimension for Publish: None, as long as the D-List is not used in a D-Cube where it is not the dimension for publish.

Changes to D-Lists that are used as D-List Formatted Items


The fact table name changes.

Changes to D-Cubes

Change Effect

Reorder dimension None. The column sequence in the datastore may change but this does
not impact reports.


Add dimension Assuming that the new dimension is not the dimension for publish, data
for all items in the new D-List is automatically summed if no action is
taken.
For most lists this is desirable, but care needs to be taken if the dimension
contains scenarios or versions.

Delete dimension Links to the dimension table are removed from the fact table. Reports
referring to items in that dimension are affected.

Data Dimensions for Publish


Setting data dimensions for publish can reduce the volume of published data considerably.
In both layouts, for each selected D-Cube, you can choose a D-List that is designated as the dimension
for publish. A separate column is created for each data item in the dimension. Selecting a dimension
speeds the publishing process because fewer rows are written to the target database.
Candidate dimensions for publish typically contain formatted D-List items or items by which the
business tracks or measures performance. Such items are often numeric.
For a View layout, set dimensions for publish in Development, Application Maintenance, Dimensions
for Publish. Setting a data dimension is optional only for View layout, and changes to dimensions
for publish apply only after you have run Go to Production. If the dimension that is used as a data
dimension is removed in Cognos 8 Planning - Analyst and the application is synchronized, the
synchronization process handles this.
It is mandatory to select a data dimension for publish for the Table-only layout. If you do not select
one, a default dimension is used. This is a dimension that has formats defined. If you have more
than one dimension with formats, ensure you select the one you require. If you plan to use
Contributor data as a source for Cognos 8 Business Intelligence Studios, the dimension you select
for a cube is the one used as the measure dimension in Cognos 8. The PPDS driver also uses a
dimension for publish. If one is set on the cube, it is used as the measure dimension in Cognos 8.
We recommend that each item of the selected dimension contain uniform data. This means that for
every row, the data is of the same type: numeric, text, or date/time.

Handling Nonuniform Data in Table-only Publish


Slicing the D-Cube along the dimension for publish may result in nonuniform data. For example,
cell data along any item of the dimension for publish may be of mixed types. Because of this, rows
of data for an item may be of different types.

In the table-only layout, where nonuniform data exists and must be preserved, selecting Create
columns with data types based on the "dimensions for publish" automatically creates enough
columns so that no data is excluded. However, if you manually choose the columns to create, only
the data in the format selected is published. For example, selecting the numeric and date/time options


guarantees that only numeric and date/time data are written to the corresponding numeric and date/
time columns; text is excluded. As a result, if the first row of an item is a numeric value, it is stored
in the corresponding numeric column. The remaining data type columns for that item are populated
with null values.
In the view layout, data type uniformity is handled by storing all values in text columns. An
associated fact view (fv) is created using the sum hierarchy to view only numerical information.

Selecting a Dimension for Publish for Reporting


It is important to carefully consider which dimension for publish you select when publishing data
to be reported on. The dimension for publish is the D-List whose items become columns in the fact
table. That is, instead of becoming an ID which links to a hierarchy table, the items in the selected
D-List are converted to actual fact table columns.

Why the Choice of Dimension for Publish is Important


If you are reporting using a non-OLAP reporting tool, the reporting is performed using SQL behind
the scenes. Data is reported in columns, which are sorted, grouped, and summarized in the rows.
This means that you can perform actions such as inter-column calculations and independent
formatting in the columns, but the rows can only be summarized. You can potentially build SQL
reports with intra-row calculations, but they take longer to build and cost more to maintain.
Note that rows and columns here are SQL terms. Columns and rows can be switched around within
a report, and indeed this is often the case in financial reports.
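
To make the column-versus-row point concrete, the following sketch (Python with pyodbc, which is only an assumption about the client) performs an inter-column calculation that is straightforward when Revenue and Cost are items of the dimension for publish and therefore appear as fact-table columns. The table and column names (et_profit, float_revenue_1, float_cost_2) are hypothetical; the float_ prefix and _count suffix follow the naming convention described later in this chapter.

import pyodbc

# Hypothetical connection and names; Revenue and Cost are assumed to be items
# of the dimension for publish, so each has its own fact-table column.
conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=planning01;DATABASE=publish_sales;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# An inter-column calculation (margin) written directly in SQL. If Revenue and
# Cost were rows of an ordinary dimension instead, this would require a
# self-join or pivot, which is why the choice of dimension for publish matters.
cursor.execute(
    "SELECT float_revenue_1 - float_cost_2 AS margin "
    "FROM et_profit"
)
for (margin,) in cursor.fetchall():
    print(margin)

conn.close()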

Selecting Your Dimensions for Publish


The primary calculation D-List is often used as the dimension for publish. However there are
situations where other D-Lists (such as Time or Versions) are more suitable.
Your choice of dimension for publish is driven by a number of factors, and the Planning and BI
designers should work together to select the appropriate one for reporting. The Planning designer
may find it easier to build another cube than to report from an existing cube.
Carefully consider which dimension for publish to use, because if you change it after you have
started to build reports, you may need to rework existing reports.
The following D-List attributes identify a likely dimension for publishing:
● It contains a combination of text, numeric and date items (mixed data types).

● It contains numeric items with different display formats such as ##% and #,###.##.

● Your reports need to do additional calculations between items in the D-List.

● You need to treat some of the D-List fields separately for reporting purposes.

● The dimension for publish impacts the time the Publish takes to run. Even though there are
fewer columns to create, more rows are written to the datastore, and this takes time to write.

Tip: In some circumstances you may not want a dimension for publish. In this case, your publish
table has one row for every combination of dimension items, and you would leave all the processing
and formatting intelligence to the reporting tool. Using the Table-only layout, you must select
a dimension for publish, so to achieve equivalent functionality, add a D-List containing one item
to the cube, and use this D-List as the dimension for publish.

The Table-Only Publish Layout


Using the table-only publish layout, you can
● generate data columns in their native order, which preserves the original order when reporting,
as when you publish to a view layout

● publish detail plan data

● select whether to prefix the dimension for publish column names with their data type to avoid
reserved name conflicts

When using the Generate Framework Manager Model Admin extension (p. 308), the table-only
publish layout must be used.
The following types of tables are created when you publish using the table-only layout.

Table type: Attached Documents
Description: Contains metadata about the attached documents.
Prefix or name: ad_ for cell attached documents; documentobject for tab (cube) and model attached documents.

Table type: Items (p. 263)
Description: Describes the D-List items.
Prefix or name: it_

Table type: Hierarchy (p. 264)
Description: Contains the hierarchy information derived from the D-List, which is published to two associated tables.
Prefix or name: sy_ for the simple hierarchy; cy_ for the calculated hierarchy.

Table type: Export (p. 266)
Description: Contains published D-Cube data.
Prefix or name: et_

Table type: Annotation (p. 267)
Description: Contains annotations, if the option to publish annotations is selected.
Prefix or name: an_ for cell and audit annotations; annotationobject for tab (cube) and model annotations.

Table type: Metadata (p. 270)
Description: Contains metadata about the publish tables.
Prefix or name: P_APPLICATIONCOLUMN, P_APPCOLUMNTYPE, P_APPOBJECTTYPE, P_APPLICATIONOBJECT

Table type: Common (p. 272)
Description: Contains tables used to track when major events occurred in the publish container.
Prefix or name: P_ADMINEVENT, P_ADMINHISTORY, P_CONTAINEROPTION

Table type: Job (p. 272)
Description: Contains tables with information relating to jobs.
Prefix or name: P_JOB, P_JOBITEM, P_JOBITEMSTATETYPE, P_JOBSTATETYPE, P_JOBTASK

Table type: Object locking (p. 273)
Description: A table used to lock objects in the system when they are being processed.
Prefix or name: P_OBJECTLOCK

Table type: Publish parameter
Description: Contains state information related to table-only publish.
Prefix or name: publishparameters

Database Object Names


Database object names are derived from the Planning object names. The maximum lengths of table
and column names are as follows.

Type of Name MS SQLServer IBM DB2 UDB Oracle

Column 128 30 30

Table 128 128 30

Names cannot begin with a number or underscore (_), and can include the following characters (a sketch of these naming rules follows the list):
● a through z

● 0 through 9

● _ (underscore)
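
The exact name derivation that Cognos performs is not documented here, but the following Python sketch illustrates the constraints above and can help predict what a generated name might look like. The et_ prefix and the sample D-Cube name are illustrative only; treat the function as an approximation, not as the product's algorithm.

import re

# Maximum identifier lengths from the table above.
MAX_LENGTH = {
    ("column", "sqlserver"): 128, ("table", "sqlserver"): 128,
    ("column", "db2"): 30,        ("table", "db2"): 128,
    ("column", "oracle"): 30,     ("table", "oracle"): 30,
}

def approximate_object_name(planning_name, kind="table", platform="oracle", prefix="et_"):
    # Approximates a database object name from a Planning object name. This
    # mirrors the documented rules (a-z, 0-9, underscore; no leading digit or
    # underscore; platform length limits) but is NOT the exact algorithm
    # Cognos uses.
    name = prefix + planning_name.lower()
    name = re.sub(r"[^a-z0-9_]", "_", name)  # keep only the allowed characters
    name = re.sub(r"^[0-9_]+", "", name)     # names cannot begin with a digit or underscore
    return name[: MAX_LENGTH[(kind, platform)]]

print(approximate_object_name("Revenue Plan"))  # et_revenue_plan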

Items Tables for the Table-only Layout


One items table is created for each D-List. It contains one row per item. The name of the table is
generated from that of the D-List and the prefix it_.
The items tables have the following columns.


Column Description

itemid Unique identifier for the item

itemname Name of the Item

displayname Display name of the item

disporder Display order specified in Analyst, which is zero-based

itemiid D-List integer identifier for the item, which is used as the primary
key
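
A query against one of these items tables is straightforward. The following Python sketch uses pyodbc, which is only an assumption about the client; the connection string and the D-List name (products) are hypothetical, while the it_ prefix and the column names are those documented above.

import pyodbc

# Hypothetical connection string and D-List name; the it_ prefix and the
# columns itemiid, itemname, displayname, and disporder are as documented above.
conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=planning01;DATABASE=publish_sales;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# List the D-List items in their Analyst display order (disporder is zero-based).
cursor.execute(
    "SELECT itemiid, itemname, displayname "
    "FROM it_products "
    "ORDER BY disporder"
)
for itemiid, itemname, displayname in cursor.fetchall():
    print(itemiid, itemname, displayname)

conn.close()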

Hierarchy Tables for the Table-only Layout


The complete hierarchies are published to the cy_ tables, while the simple summary hierarchies are
available in the sy_ tables.
These tables all have the same format. They contain the following columns for each level of the
hierarchy.

Column Description

levelLevelNumber_guid Globally unique identifier of the item

levelLevelNumber_iid D-List integer that identifies the item

levelLevelNumber_name Item name

levelLevelNumber_displayname Item display name

levelLevelNumber_order Item order in the hierarchy

Simple hierarchy tables are created by the publish table-only layout. They are intended to be used
when there are simple parent-child relationships between D-List items that can be summed. The
purpose of this is to allow a reporting tool to automatically generate summaries for each hierarchy
level, or for use with applications that do not require precalculated data, such as a staging source
for a data warehouse.
In the following examples, D-List items are represented by letters, and the relationships between
items are drawn as lines.
Parent D-List items are calculated from child D-List item dependencies. Leaf D-List items do not
have child D-List item dependencies.
All D-List items have their values shown in parentheses and, in addition, leaf D-List item
codes are shown in curly braces.


D-List Value Description

0 Direct child of a simple sum D-List item.

1 The leaf has multiple parents.

2 The leaf item is part of a sub-hierarchy that has been moved to the
root (no parent).

3 The leaf item is an orphan.

Example 1 - Simple Summaries

The left pane is an example of simple hierarchies with values. The right pane is an example of simple
hierarchies with values and leaf D-List item codes.

Example 2 - Leaf D-List Item with Multiple Parents

In the left pane, [E] has more than one parent, so parentage is assigned to the first parent in the IID
order. In the right pane, [D] becomes a leaf D-List item, and [F] becomes orphaned and is moved
to the root.


Example 3 - Non-Simple Summaries

In the left pane, [P] is the product of [S] and [T]. Leaf D-List items of non-simple summaries are
moved to the root. In the right pane, [P] becomes a leaf D-List item, and [S] and [T] are orphaned
and moved to the root.

Example 4 - Sub-Hierarchy of Non-Simple Summary

In the left pane, [B] is the product of [C] and [E]. [C] has its own simple summary hierarchy. Because
non-simple sums are not included in the hierarchy, in the right pane, [B] becomes a leaf, [E] and
[C] become orphaned and moved to the root, and [C] keeps its sub-hierarchy because it is a simple
sum.

Export Tables For the Table-only Layout


Cell data for the selected cubes is published to the export tables.

If you select the Include Roll Ups option, the export tables contain all the data, including calculated
data.
If you do not select this item, the export tables contain only non-calculated fact data.


Users who report against published data that contains only fact data use the reporting tool to
aggregate the calculated items when grouping with the hierarchical dimensions.
You can control how the export tables (prefix et_) are generated as follows.

● Publish only uniform cube data. When you select the Create Columns With Data Types Based on the
Dimension for Publish option, the data type of each item of the dimension for publish is used
for the columns of the export tables. If individual cell types differ from that of the corresponding
columns, the corresponding cell data is not published and an informative message appears.

● Select only data of the specified types.


When more than one data type is selected, multiple columns appear for each item in the export
tables, one column per data type. For example, if both numeric data and dates are selected,
two columns are created per item in the dimension for publish.

● Include the original formatted numeric and date values, which are stored in the text column.
This is useful when the original format cannot be easily reproduced in the reporting tool
application.

● Publish entire cubes, or publish only leaf data and let the reporting engine perform the rollups.
In this way, you control the level of detail of the information to publish.
The summary hierarchy as specified in the sy_ tables must be used to perform the rollups (see the
sketch after this list). Leaf cells are those that correspond to leaf items of the simple summary hierarchies.
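
A minimal sketch of such a rollup, assuming Python with pyodbc as the client: the cube and D-List names (orders, products), the fact column (float_amount_1), and the join column (products_itemiid) are hypothetical and must be replaced with the names in your own export table; the et_ and sy_ prefixes and the levelN_ hierarchy columns are as documented in this chapter.

import pyodbc

# Hypothetical connection; table and join column names below are illustrative.
conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=planning01;DATABASE=publish_sales;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# et_orders is assumed to hold leaf fact data (Include Roll Ups cleared), and
# sy_products the simple summary hierarchy for the products D-List.
# level0_iid is assumed here to identify the leaf item and level1_name an
# ancestor level; check your own sy_ table for which level columns correspond
# to leaf and summary items.
cursor.execute(
    "SELECT h.level1_name, SUM(f.float_amount_1) AS amount "
    "FROM et_orders AS f "
    "JOIN sy_products AS h ON h.level0_iid = f.products_itemiid "
    "GROUP BY h.level1_name"
)
for level_name, amount in cursor.fetchall():
    print(level_name, amount)

conn.close()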

Data Types Used to Publish Data


The following data types are used for publishing data.

Data type MS SQLServer IBM DB2 UDB Oracle

TEXT VARCHAR(8000) CLOB VARCHAR2(4000)

DATE datetime date timestamp

DOUBLE float float float

INTEGER integer int NUMBER(10)

The prefixes text_, date_, and float_ are used to identify the data types of columns in tables, and
the suffix _[count] is used to guarantee name uniqueness.

Annotations Tables for the Table-only Layout


You can choose to publish user and audit annotations.
Cell and audit annotations are published to the an_cubename table.
Tab and model annotations are published in the annotationobject table.


The an_cubename Table

This table contains cell and audit annotations.


The columns of the an_cubename table are as follows:

Column Description

HierarchyDimensionName The unique identifier (p. 263) of the e.List items for
the coordinates of the cell annotations.

Dimension_DimensionName The unique identifier of the D-List items for the coordinates of the cell annotations.

MeasureDimensionItemName_user_id The last user who updated the annotation.

DimensionItemName_date The last date the annotation was updated.

DimensionItemName_annotation For a cell annotation, the text of the annotation. For an audit annotation, the details of the action performed. The text is prefixed with Audit annotation on … followed by the action.

DimensionItemName_value The cell value at the time of the annotation.

visible Indicates if this row can be reported on.
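
As an illustration, the following Python sketch (pyodbc is an assumption) reads the cell and audit annotations for one cube. The cube name (an_revenue_plan) and the concrete column names are hypothetical, because the real columns embed your own dimension and measure item names as described above; the visible column and the _user_id, _date, and _annotation suffixes are as documented.

import pyodbc

# Hypothetical connection; the annotation table name and its dimension-derived
# column names depend on your model, so adjust them to match your own publish.
conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=planning01;DATABASE=publish_sales;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# Cell and audit annotations for the Revenue Plan cube, newest first.
# revenue_user_id, revenue_date, and revenue_annotation stand in for the
# MeasureDimensionItemName_user_id, DimensionItemName_date, and
# DimensionItemName_annotation columns described above. visible = 1 is assumed
# to mark rows that can be reported on.
cursor.execute(
    "SELECT revenue_user_id, revenue_date, revenue_annotation "
    "FROM an_revenue_plan "
    "WHERE visible = 1 "
    "ORDER BY revenue_date DESC"
)
for user_id, annotation_date, annotation_text in cursor.fetchall():
    print(user_id, annotation_date, annotation_text)

conn.close()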

The annotationobject Table


This table contains tab and model annotations.
The columns of the annotationobject table are as follows.

Column Description

object_id The identifier of the cube or model being annotated.

node_id The e.List item identifier.

user_id The user id of the person who created the annotation.

annotation_date The date and time the annotation was made. They are stored as UTC
+ 00:00.

annotation The text of the annotation.


Attached Document Tables for the Table-only Layout


You can choose to publish some metadata about the attached document. Cell level attached document
metadata is published to the ad_cubename table. Tab and model attached document metadata is
published in the documentobject table.

The ad_ cubename Table

This table contains cell attached document metadata.


The columns of the ad_cubename table are as follows.

Column Description

HierarchyDimensionName The unique identifier (p. 193) of the e.List items for the
coordinates of the cell attached documents.

Dimension_DimensionName The unique identifier of the D-List items for the coordinates
of the cell attached documents.

MeasureDimensionItemName_user_id The last user who updated the attachment of the document.

DimensionItemName_date The last date the attachment of the document was updated.

DimensionItemName_filename The file name of the document that was attached.

DimensionItemName_filesize The file size at the time the document was attached.

DimensionItemName_comment A comment that was entered at the time the document was
attached.

DimensionItemName_value The cell value at the time the document was attached.

visible Indicates if this row can be reported on.

The documentobject Table


This table contains tab and model metadata about attached documents.
The columns of the documentobject table are as follows.

Column Description

node_id The e.List item identifier.

object_id The identifier of the cube or model to which the document is attached.


user_id The user id of the person who attached the document.

document_date The date and time the document was attached.

document_name The file name of the document that was attached.

document_size The file size at the time the document was attached.

document_comment A comment that was entered at the time the document was
attached.

visible Indicates if this row can be reported on.
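
The following hedged sketch lists tab and model attachment metadata from the documentobject table. The column names are those documented above; the assumption that visible = 1 marks reportable rows is not confirmed by this guide.

-- Illustrative only: most recently attached documents first.
SELECT node_id,
       user_id,
       document_name,
       document_size,
       document_date,
       document_comment
FROM   documentobject
WHERE  visible = 1
ORDER  BY document_date DESC;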

Metadata Tables
Metadata about the publish tables is maintained in several tables.

The P_APPLICATIONOBJECT Table

The description of each database object created during a publish operation is maintained in the P_APPLICATIONOBJECT table.
The columns of the P_APPLICATIONOBJECT table are as follows.

Column Description

objectname Name of the object.

displayname Display name of the associated Planning object.

objectid A globally unique reference (GUID) for the object.

objecttypeid The type of object, for example, EXPORT_TABLE, DIMENSION_SIMP_HIER.

datastoretypeid Describes the datastore type: Table.

objectversion An internal version number used for debugging.

lastsaved Used to detect whether the published model is out of date.

libraryid Used to detect whether the published model is out of date.


The P_APPLICATIONCOLUMN Table

The columns of the P_APPLICATIONCOLUMN table are as follows.

Column Description

objectname Name of the object.

columnname Name of the column.

displayname Display name of the associated Planning object.

columnid A globally unique reference (GUID) for the object.

objecttypeid Type of Planning object, such as EXPORT_TABLE, DIMENSION_ITEMS, DIMENSION_SIMP_HIER.

columntypeid -

columnorder The order in which the column appears.

logicaldatatype The type of data, such as epGUID, epTextID.

The P_APPCOLUMNTYPE Table

The types of tables that can exist in Contributor.


The columns are as follows:

Column Description

objecttypeid The object type, such as a FACT_VIEW.

columntypeid -

description A description of the object type.

The P_APPOBJECTTYPE Table

The types of application objects that exist.


The columns are as follows:

Column Description

objecttypeid The object type, such as ANNOTATION_OBJECT.


description A description of the object type.
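
As a hedged sketch, the following query lists the publish tables recorded in the metadata tables, with a readable description of their object type. The join on objecttypeid and the literal EXPORT_TABLE value are assumptions based on the column descriptions above.

-- Illustrative only: list exported fact tables and their object type description.
SELECT o.objectname,
       o.displayname,
       t.description
FROM   P_APPLICATIONOBJECT o
JOIN   P_APPOBJECTTYPE t ON t.objecttypeid = o.objecttypeid
WHERE  o.objecttypeid = 'EXPORT_TABLE';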

The dimensionformats Table


The dimensionformats table contains formatting information for the items of the dimension for publish that is compatible with Cognos 8 Business Intelligence.
The columns of the dimensionformats table are as follows.

Column Description

dimensionid Globally unique identifier of the dimension for publish.

itemguid Globally unique identifier of the item of the dimension for publish.

formattype One of percent, number, or date.

negativesignsymbol String indicating how negative values must be reported.

noofdecimalplaces Number of decimal places for numerical values.

scalingfactor Integer for the scaling factor of numerical values.

zerovaluechars Characters to use for zero or blank values.
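
As a hedged illustration, the following query retrieves the Cognos 8 Business Intelligence-compatible format settings for one dimension for publish. The GUID literal is a placeholder that you would replace with a real dimensionid value.

-- Illustrative only: format settings for a single dimension for publish.
SELECT itemguid,
       formattype,
       negativesignsymbol,
       noofdecimalplaces,
       scalingfactor,
       zerovaluechars
FROM   dimensionformats
WHERE  dimensionid = '<dimension GUID placeholder>';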

Common Tables
Common tables are created so that you can track the history of events in the publish container.
The P_ADMINHISTORY table stores information about when major events occurred to the publish
container.
The P_ADMINEVENTS table contains the IDs and descriptions of the event types used in the
P_ADMINHISTORY table.
The P_CONTAINEROPTION table is used for Oracle and DB2 to store tablespace information
for blob, data, and index.

Job Tables
The following tables are created to support jobs (p. 47).


Table Description

P_JOB Information about the jobs that are running or ran in the application.
This information is used in the Job Management window.

P_JOBITEM Each Job Item is represented by a row in the jobitem table. The state
of the Job Item is also stored. If a problem occurred while running
the Job Item, descriptive text is stored in the failurenote column and
is appended to the failurenote column for the job.

P_JOBITEMSTATETYPE Job item status types: failed, ready, running, succeeded.

P_JOBSTATETYPE Job status types: canceled, complete, creating, queued, ready, running.

P_JOBTASK Where and when the job items ran and the security context it used.

P_JOBTYPE Job types and their implementation program IDs.

Parameters and failurenotes in Job tables are stored as XML LOBs.
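
As a hedged illustration, the following query inspects the failure text recorded for job items. Only the failurenote column is documented here; because it is stored as an XML LOB, your database may require a conversion (for example, a CAST) before the value can be displayed.

-- Illustrative only: job items that recorded a failure.
SELECT failurenote
FROM   P_JOBITEM
WHERE  failurenote IS NOT NULL;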

The P_OBJECTLOCK Table


The P_OBJECTLOCK table supports macros, administration links, and the export queue. It locks
objects in the system when they are being processed and contains information about the objects
being worked on.

Create a Table-only Publish Layout


Before you publish, you must ensure the following:
● you have appropriate access rights (p. 256)

● the Go to Production process has been run (p. 239)

For more information about the table-only publish layout, see "The Table-Only Publish
Layout" (p. 262).

Steps
1. In the application's tree, click Production, Publish, Table-only Layout.

2. Check the cubes you want to publish data from.

● The Dimension column indicates the data dimension for publish that is selected.

● The Annotation Rows column shows the number of annotation rows for a cube when you
click Display row counts.


● The Export Rows column shows the number of rows that are published when you click
Display row counts.

3. Click the e.List Items tab.


You can select or clear individual items, or use the buttons at the top of the table.
This step is not required for assumption cubes because assumption cubes do not contain e.List
items.

4. To set the Publish options and configure the Publish datastore connection, click the Options
tab (p. 274).

5. Click Publish.

6. If you are asked if you want to create a publish container, click OK.

7. Select the job server or job server cluster to monitor the publish container and click Close.
You need the rights to add an application to the job server or job server cluster.

A reporting publish job is queued. You can monitor the progress of the job. For more information,
see "Jobs" (p. 47).

Options for Table-only Publish Layout


In the Publish Options tab, you can set the publish options and configure the publish datastore
connection.

Option Description

Creating a New Publish Container    The first time you attempt to publish data, you can either create the default publish container by clicking Publish, or create a new publish container (p. 282).

Configuring the Publish Datastore Connection    To configure the publish datastore connection, click the Configure button (p. 283).

Create columns with data types based on the 'dimension for publish'    To use the item types from the dimension for publish as the table columns.

Only create the following columns    To manually select the data types that are part of the publish process for each measure. You can choose to publish Numeric, Text, and Date columns. Within the Text column, you can also choose whether to include formatted numeric and date values.


Include Roll Ups    Selecting this check box includes all items, including calculated items. Clearing this option publishes only leaf items, and therefore fewer rows. You can recreate the calculations in your reporting tools by linking the et and sy tables (see the sketch after this table).

Include Zero or Blank Values Clearing this check box means that empty cells are not populated
with zeros or blanks. This can speed up the process of publishing
data substantially, depending on the number of zero or blank cells.

Prefix column names with Select this option if you wish the column name to be prefixed with
data types the data type to avoid reserved name conflicts.

Include User Annotations Selecting this check box publishes cell level user annotations in a
table named an_cubename.

Include Audit Annotations Selecting this check box publishes audit annotations in a table
named an_cubename, in the column annotation_is_edit.

Include Attached Documents Selecting this check box includes information about attached
documents. Information about the attached document such as the
filename, location, and file size are published with the data.
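
The following placeholder sketch illustrates the kind of join referred to for the Include Roll Ups option: rolling published leaf rows up a summary hierarchy in a reporting tool or query. The table and column names (et_expenses, sy_accounts, and so on) are invented for the example; substitute the names generated for your own application, which are documented with the table-only publish layout (p. 262).

-- Placeholder sketch only: aggregate leaf-level values by parent item.
SELECT h.parentitem,
       SUM(e.amount) AS rolled_up_amount
FROM   et_expenses e
JOIN   sy_accounts h ON h.childitem = e.accounts
GROUP  BY h.parentitem;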

Create an Incremental Publish


If you publish data using the Table-only layout, you can use the Incremental Publish feature to publish only those e.List items that have changed since the last time you published. Incremental Publish uses the same infrastructure as the Table-only layout, but tracks which items were altered since the last publish and republishes only those items. You can create macros that run an incremental publish at scheduled intervals, see "Publish - Incremental Publish" (p. 213). This provides a nearly real-time publish: you can publish changed data soon after plans are saved or submitted, and you reduce the need for frequently scheduled full publishes, potentially saving time and resources.
If your publish selection contains more than one cube, but values change in only one cube, the changed e.List items for all the cubes are republished. Because an incremental publish is by definition a changes-only publish, a publish schema must already exist, created either by running a full publish with the cubes and e.List items that you want to publish selected, or by generating and running publish scripts. When you run a Go to Production process, changes-only incremental publishes are suspended. Model changes that result in changes to the publish schema may require you to do a full publish of all the selected cubes and e.List items.

Steps
1. In the application's tree, click Production, Publish, Incremental Publish.


2. To configure the publish container, click Configure.


The incremental updates will be applied to that container.

3. To publish only submitted changes, select Include only submitted items.

Note: If you use this option without changing data in an e.List, the e.Lists without changes are
not included in the publish.

4. Click Publish.

A message indicates whether an Incremental Publish job was initiated or no changes were detected.

The View Publish Layout


The view publish layout as supported in Contributor and Analyst version 7.2 is compatible with
previous Planning data solutions. It is intended for backwards compatibility only.
We recommend that non-Cognos applications currently dependent on the view publish layout be
migrated to use the new table-only publish layout because of the improvements in publish
performance, data storage efficiency and incorporation of best practices.
The following types of publish tables are created when you publish using the view layout:

Table type Description Prefix or Name

Items (p. 277) Describes the D-List items. it_

Hierarchy (p. 277) Used by reporting tools. Records the depth of each item in the dimension hierarchy and display information. hy_, cy_ (calculation hierarchy tables)

Export (p. 278) Contains published D-Cube data. et_

Annotation (p. 278) Contains annotations, if the option to publish annotations is selected. an_ for cell and audit annotations; annotationobject for tab (cube) and model annotations

Metadata (p. 270) Contains metadata about the tables. P_APPLICATIONCOLUMN, P_APPCOLUMNTYPE, P_APPOBJECTTYPE, P_APPLICATIONOBJECT, annotationobject


Common (p. 272) Contains tables used to track when major events occurred in the publish container. P_ADMINEVENT, P_ADMINHISTORY, P_CONTAINEROPTION

Job (p. 272) Contains tables with information relating to jobs. P_JOB, P_JOBITEM, P_JOBITEMSTATETYPE, P_JOBSTATETYPE, P_JOBTASK, P_OBJECTLOCK

Database Object Names


Database object names are limited to 18 lowercase characters, and are derived from the Cognos 8
Planning object names.

Items Tables for the View Layout


One items table is created for each D-List. It contains one row per item. The name of the table is
generated from that of the D-List and the prefix it_.
The items tables have the following columns.

Column Description

itemid Unique identifier for the item.

dimensionid Unique identifier for the D-List.

itemname Name of the item.

displayname Display name of the item.

disporder Display order specified in Analyst, which is zero-based.

Hierarchy Tables for the View Layout


There is no model construct for specifying item hierarchies. Instead, hierarchies are derived from user-specified equations.


Two types of hierarchies are currently supported: complete hierarchies and simple summary hierarchies.
Complete hierarchies are used to produce reports on the entire contents of cubes. Complete
hierarchies are used to organize cube data and are not used to perform rollups and calculations in
the reporting engine. The rules that govern the generation of complete hierarchies in the cy_ tables
are as follows:
● The parent of a given item is the first simple sum that references the item.

● If this sum does not exist, it is the first non-sum calculation that references the item.

● If neither exists, the item is a top-level item.

Simple summary hierarchies are used when only detail items are published and rollups are performed
from the reporting engine. The rules that govern the generation of these hierarchies are as follows:
● The parent of a given item is the first simple sum that references it.

● If there are multiple candidates for the parent of an item, it is assigned to the first parent in iid order, and the other candidate parents are considered to be detail items in the hierarchy.

● In the case where a parent cannot be identified that way and the item is not a simple sum, it is
considered to be a root item.

Simple summary hierarchies are not necessarily complete because not all items in a D-List are necessarily part of the hierarchy.
The starting point for the production of these hierarchies is the graph of item dependencies produced when equations are parsed. This graph specifies all parent/child relationships between items. Because
the simple summary hierarchy is limited to simple sums, sub-hierarchies can be detached from the
main hierarchy and moved to the top.

Export Tables for the View Layout


Cell data for the selected cubes is published to the et_ tables, one row per cell. These tables contain
the coordinate for each cell of the cube and the corresponding cell value. One column per D-List
stores the item GUID along that dimension. An additional column stores the cell value (published
as a blob or varchar depending on the target DBMS). In Cognos 8 Planning, a cell value can contain
a date, a double, or text.

Annotation Tables for the View Layout


You can choose to publish user and audit annotations.
Cell and audit annotations are published to the an_cubename table.
Tab and model annotations are published to the annotationobject table.

The an_cubename View Layout Table

The an_cubename view layout table contains cell and audit annotations.


The columns of the an_cubename table are as follows.

Column Description

Dimension_DimensionName The unique identifier of the D-List items for the coordinates
of the cell annotations.

HierarchyDimensionName The name of the e.List.

annotation_user_gu The globally unique identifier of the last user who updated the
annotation.

annotation_date The date and time the annotation was made, stored as UTC +00:00.

annotation_cell_va The cell value at the time of the annotation.

annotation_is_edit Whether the annotation is editable (0 = no, 1 = yes).

The annotationobject View Layout Table


The annotationobject view layout table contains published tab and model annotations.
The columns of the annotationobject table are as follows.

Column Description

objectguid The globally unique identifier (GUID) of the cube or model being annotated.

nodeguid The GUID of the e.List item being annotated.

user_guid The GUID of the user who created the annotation.

annotation_date The date and time the annotation was made.

annotation The text of the annotation.

Views
An ev_ view is created to provide more user-friendly access to its associated export table (et_ table), which contains cube data. In this view, GUIDs are simply replaced by the display names associated with the D-List items, and export values are cast to varchar when published as blobs.
A fact view (with the fv_ prefix) is created for each cube being published and is limited to numeric values by joining the export values from the et_ table to the items in the hy_ tables for the cube. The rules for deriving this hierarchy are explained earlier.


A complete view (with the cv_ prefix) is created for each cube being published and is built by joining
the export values from the et_ table to the items in the cy_ tables for the cube.
The following views are created in a view publish layout.

View name Description

fv_cubename A view on the cell data for a cube that resolves the star schema linking
to the flattened out hierarchy for a dimension.

ev_cubename A view on the cell data for a cube that resolves the star schema linking
to the items in a dimension.

av_cubename A view on the cell annotations table for a cube that resolves the star
schema.

cv_cubename A complete view created for each cube being published.
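
As a hedged illustration of what a generated ev_ view provides, the following hand-written query joins a hypothetical et_ export table to its it_ items tables to replace GUIDs with display names. The it_ column names are those documented above; the et_ column names are assumptions, and the real ev_ views are generated by the publish job rather than written by hand.

-- Conceptual sketch only: resolve item GUIDs to display names for reporting.
SELECT acct.displayname AS account,
       ver.displayname  AS version,
       et.exportvalue   AS cell_value     -- export value column name is assumed
FROM   et_expenses et
JOIN   it_accounts acct ON acct.itemid = et.accounts   -- join columns assumed
JOIN   it_versions ver  ON ver.itemid  = et.versions;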

Create a View Layout


Before you publish, you must ensure that
● you have appropriate access rights (p. 256)

● the data dimensions for publish are selected, if required (p. 260)

● the e.List items to be published are selected (p. 257)

● Go to Production was run (p. 239) after the data dimensions for publish were selected

For more information about the view layout, see "The View Publish Layout" (p. 276).

Steps
1. In the application tree, click Production, Publish, View Layout.

2. On the Cubes tab, check the cubes you want to publish data from.
● The Dimension column indicates the dimension that is selected.

● The Annotation Rows column shows the number of annotation rows for a cube when you
click Display row counts. Note that only cell annotations are published.

● The Export Rows column shows the number of rows that will be published when you click Display row counts.

3. Click the e.List Items tab to select the e.List items to publish. You cannot publish until you do this.

4. Click the Options tab. This step is optional and enables you to set the Publish options and
configure the Publish datastore connection. For more information, see "Options for View
Layout" (p. 281).


5. Click Publish.

6. If you are asked if you want to create a publish container, click OK.

7. Select the job server or job server cluster to monitor the publish container and click Close.
You need the rights to add an application to the job server or job server cluster.
A publish job is queued. You can monitor the progress of the jobs.

Options for View Layout


In the Publish Options window, you can set the publish options and configure the publish datastore
connection.

Option Description

Creating a New Publish Container    The first time you attempt to publish data, you can either create the default publish container by clicking Publish, or create a new publish container (p. 282).

Configure the Publish Datastore Connection    To configure the publish datastore connection, click the Configure button (p. 283).

Do Not Populate Zero/Null/Empty Data    Ensures that empty cells are not populated with zeros. Selecting this option can substantially speed up the process of publishing data, depending on the number of blank cells.

Publish Only Cells With Writeable Access    Selecting this check box publishes only rows that include at least one cell with write access; rows for which all cells are read-only or hidden are not included. Clearing this check box publishes all cells, including hidden cells, regardless of access levels.

Use Plain Number Formats Selecting this check box removes any numeric formatting for the
purposes of export. It exports to as many decimal places as are
needed, up to the limit stored on the computer. Negative numbers
are prefixed by a minus sign. No thousand separator, percent signs,
currency symbols, or other numeric formats that were applied on
the dimension or D-Cube are used. Plain Number Format uses the
decimal point (.) as the decimal separator.

Remove all data before publishing new data    Selected by default, this option ensures that a consistent set of data is published. It publishes data for all the selected cubes and removes all other published data in the datastore. If this check box is cleared, existing data is left in place unless an e.List item is being republished; in that case, the existing data for that e.List item is removed and replaced with the new data.


Include User Annotations Selecting this check box publishes cell level user annotations in a
table named an_cubename.

Include Audit Annotations Publishes audit annotations to a table named an_cubename, in the
column annotation_is_edit.

Create a Custom Publish Container


Until you publish data for the first time, no publish containers exist. When you create a table-only publish layout (p. 273) or a view publish layout (p. 276), you can either create a default publish container, which uses default naming conventions and is created on the same datastore server as the application datastore, or create a custom publish container.

Type of Publish Publish Container Name

Table-only layout applicationname_table

View Layout applicationname_view

Steps
1. In the Production application, click Publish, and either Table-only Layout or View Layout
depending on the type of publish container you require.

2. Click the Options tab, and then click the Configure button.

3. Select the datastore server where you want the publish container to be created.

4. Click Create New.

5. Click the button next to the Name box.

6. Complete the New publish container dialog box:

Application name    Type a name for the publish datastore. We recommend that you use the name of the current application and append a suffix, such as _view for a view layout or _table for a table-only layout.

Location of datastore files Enter an existing location for the datastore files on
the datastore server. Required only by SQL Server
applications.


7. If you have an Oracle or DB2 UDB application, click Tablespace and then specify the following
configuration options:
● Tablespace used for data

● Tablespace used for indexes

● Tablespace used for blobs

Custom temp tablespaces are supported for Oracle only.

8. Click Create.
If you are prompted to create a script, this must be run by a DBA to create the publish container.
If you are not prompted to create a script, the container is created.
The publish container must be added to a job server or job server cluster so that the publish
jobs are processed.

Configure the Datastore Connection


You must have Modify connection document rights (p. 37) for the datastore server.
You can select and configure the publish container.

Note: Tablespace settings can be configured only when creating a new publish container.

Steps
1. In the Production application, click Publish, and either Table-only Layout or View Layout
depending on the type of publish container you require.

2. Click the Options tab, and then click the Configure button.
If you are prompted to create and generate a script, do the following:

● Click OK and Cancel.

● Click Generate Synchronization Script for Datastore.

● Name and save the script, and pass it to the DBA to run.

● After the script has been run, click the Configure button.

For more information on scripts, see "Publish Scripts" (p. 256).

3. Click the required publish container and click Configure.

4. Configure the following options.


Option Action

Trusted Connection Click to use Windows authentication for the logon method to
the datastore. You do not have to specify a separate logon ID
or password. This method is common for SQL Server
datastores and less common, but possible, for Oracle.

Use this account Enter the datastore account that this application will use to
connect. This box is not enabled if you use a trusted
connection.

Password Type the password for the account. This box is not enabled if
you use a trusted connection.

Preview Connection Click to view a summary of the datastore server connection details.

Test Connection Click to check the validity of the connection to the datastore
server. This is mandatory.

5. If you want to configure advanced settings, click Advanced, and enter the following information.
Typically these settings should be left as the default. They may not be supported by all datastore
configurations.

Provider Driver Select the appropriate driver for your datastore.

Connection Prefix Specify to customize the connection strings for the needs of the
datastore.

Connection Suffix Specify to customize the connection strings for the needs of the
datastore.

6. Click OK.

Remove Unused Publish Containers


You can remove unused publish datastore containers. This does not delete the datastore container; it just removes the reference so that it is not displayed in the Select Publish Datastore Container window.

Steps
1. From the Tools menu, click Maintenance, Validate Publish Containers. A list of publish
containers is displayed.


2. Select the ones you want to remove, and click Delete.

Chapter 15: Commentary

Attached documents and user annotations that are linked to a plan are grouped together and are
named Commentary. The user can view an attached document by browsing the Commentary of
an application.
The Maintenance branch of the production application enables the user to delete user annotations (p. 287), audit annotations (p. 287), and attached documents (p. 290) from the application datastore. The user can also filter what to delete based on a date or the text that an item contains.

Annotations
The two types of annotations are user annotations and audit annotations.

User Annotations
User Annotations consist of the following:
● Annotations per cell.

● Annotations per cube (named tab in the Web client).

● Annotations per model.

Any owner of an e.List item, that is, a user with directly assigned or inherited rights greater than View, can annotate. Users with View rights cannot annotate, but can view annotations.
Annotations include the date/time, user name, and text.
Annotations in cells are indicated by a red dot and are shown as tips when the mouse moves over the dot.
Users can browse and print annotations.

Audit Annotations
Administrators can choose to record user actions. These records are called audit annotations. Audit
annotations can be made visible in the Web application to the user, and can be published.
User actions in the Web client, such as typing data, importing files, and copying and pasting data, can be recorded. Tracking changes through the system is useful for auditing purposes, and when an e.List item has multiple owners and users want to see who made changes.
To control its impact on the size of the application datastore, this feature is configured in Application
Settings.
Users can view audit annotations using the Annotation Browser, if they are visible in the Web client.
For more information see "Change Application Options" (p. 72).


User Annotation Behavior


To annotate a cell, cube (tab), or model in the Contributor application, users open the e.List item and click Take Ownership.

Steps
1. Right-click the cell, tab, or model to be annotated.

2. Click Annotate, then Annotate cell, tab, or model, and then click Add.

3. Type the note and close it by clicking the button in the top right-hand corner.
Users can only annotate a particular cell, cube or the model once in a session (but can annotate
more than one cell or cube in a session). They can edit annotations in that session. An annotation
session is ended by saving.
e.List items can be annotated in any workflow state, for example locked. Making an annotation
does not affect the workflow state.
A user wanting to annotate a contribution e.List item may never bounce the current editor of
the e.List item, irrespective of the status of the Allow Bouncing option (p. 72).
Assumption cubes cannot be annotated. This is because assumption cube data is stored in the
model definition, not in the data block, so there is no ability to make it user specific.

Linking to Web Pages, Files, and Email Addresses From Annotations


In Contributor Applications, users can link to Web pages, files and email addresses from annotations.
When the user views an annotation in the grid by selecting Annotate, Annotate Cell/Tab/Model,
View, the link is activated. Links in annotations are not activated if viewed by moving the mouse
pointer over the red triangle.
Users are limited to 3844 characters per annotation.

Step to Add a Link to a Web Page


● Type a valid URL, for example: http://www.cognos.com in the annotation edit box.
Clicking this link automatically opens the Web page in a new browser window.

Step to Add a Link to an Email Address


● Type the HTML link command mailto: using the following format: mailto:jsmith@company.com
Where "jsmith@company.com" is the email address that the user wants to link to.
Clicking this link opens a new mail message window in the user’s default email application, and enters the email address in the To: box.

Steps to Add a Link to a File


1. Ensure that the file is in a shared network location.


2. Type the HTML link command file: using the following format: file:\\uncdrivename\docs\expenses.xls
Where uncdrivename is the UNC (universal naming convention) name for the drive. Use this
instead of a fixed drive letter such as f:\. This is because a fixed drive letter may not be the same
for the people viewing the annotation.
Only use this method of linking to a file if the user expects the file to be viewed by a small number of people (such as two or three). If the user expects more people to view it, it is better to make the file accessible from a Web site.

Delete Commentary
Administrators can delete commentary in a Contributor application using date and time, character string, and e.List item name filters. See "Deleting Commentary" (p. 289) for more information. You can also automate the deletion of commentary by using the Delete Commentary macro (p. 214).
Users can also delete annotations. See the Contributor Browser User Guide for more information.

Delete Commentary - e.List items


Before you can delete commentary, you must select all the e.List items that you want to delete it from. To do this, use the buttons at the top of the Delete Annotations, e.List Items tab to:
● Select All or Clear All e.List items.

● Select All Children or Clear All Children.

● Select All Planners or Clear All Planners.

You can also click individual items to select or deselect them.

Deleting Commentary
You can delete all commentary in a Contributor application using date and time, character string
and e.List item name filters.

After you specify the filters and the e.List items for the commentary to be deleted, and click Delete commentary, a COMMENTARY_TIDY job is run. The deletion is not seen by Web clients until a
reconcile is run. This enables the commentary to be deleted while the Contributor application is
online. This may be run as a macro (p. 214).
Annotations and saved annotations can be deleted by the creator until the annotation is submitted.

Steps
1. In the Production branch of the application, click Maintenance, Delete Commentary.

2. On the Delete options tab, click the options as required:


● Delete user annotations

● Delete audit annotations

● Delete attached documents


● Apply date filter. Select this option to delete commentary by date. If you select this option, you must select a date from the Delete commentaries before date box; it defaults to today's date at midnight, local time.

● Apply annotation content filter. Select this option to delete commentary by content. For example, if you want to delete annotations containing the word banana, any annotations containing this word are deleted, provided they also conform to the other filters.

3. Click the e.List items tab and click the e.List items that the annotations will be deleted from.

4. Click Delete Commentary.


The Tidy annotations job runs on available job servers. To monitor the progress of the job,
view the Job Management window (p. 50).

Attach Documents
The user can attach many types of files to a cell, cube, or model to help support their planning
process. The types of files that can be attached are configured by the administrator in the Contributor
Administration Console. The attachments are stored in a Planning Application database.
The following default file types are allowed:
● Microsoft Word (.doc)

● Microsoft Excel (.xls)

● Microsoft PowerPoint (.ppt)

● Microsoft Visio (.vsd)

● Microsoft Project (.mpp)

● ZIP Files (.zip)

● RAR Files (.rar)

● Web Documents (.htm, .html)

● Text Files (.txt)

● PDF Files (.pdf)

The administrator can add or remove any required file type from the defaults provided. Executable files (.exe) are not included in the default list for security reasons, but can be added by the Administrator.

Configuring the Attached Documents Properties


In the Contributor Administration Console, the administrator designates what type of files are
allowed and also configures the size limits of an attached document. These settings are set at the
System level and apply to all applications within a specific Planning environment.


Note: The maximum number of attached documents is 500.

Steps
1. In the Administration tree, click System Settings, and Web Client Settings.

2. In the Attached Documents area, click Limit Document Size if you want to restrict the size of attached files.

3. Enter an amount (in megabytes) for the Maximum Document Size (MBs).

4. In Allowable Attachment Types, choose to either remove a selected file type by clicking Remove
or click Add to add a new allowable attachment type.

5. At the end of the list of file types, enter a label name and the file type extension. Make sure that you append an asterisk (*) to the file type extension.

Note: Changes made to the Attached Documents settings take effect almost immediately and
without the need to perform a Go To Production.

Attaching a Document
The user can attach a document to a cell, tab, or model in the Contributor Web application.

Note: The user can also do this in the Contributor for Excel.

Steps
1. In the Contributor workflow window, the user clicks on an available e.List item that they want
to open.

2. In the Contributor grid, the user can either click on the Attached Documents button or right-click
in a cell and select either cell, tab, or model and click Add. The Attach a new document dialog
box appears.

3. In the Source file location, enter either the location, the file, or click the browse button and
browse to the file location.

4. Enter comments into the Comments box. There is a 50-character maximum for this box.

5. Click OK to attach a document.

A red triangle appears in the corner of the cell to which the document is attached. A copy of the
document is attached to the application, not the original file. This is similar to attaching a file to
an email and is not meant to perform as a document management system.

Viewing and Editing Commentary


The user can view an attached document by browsing the Commentary of an application. Attached
documents do not download when the e.List item is opened. They are only downloaded from the
application server when the user selects to view or edit them.


Note: Attached documents are not available when working offline and the user cannot attach a
document while working offline. However, it is possible to see if a document is attached to a cell
while offline.

Steps
1. In the Contributor grid, click the Browse Commentary button or right-click a cell and select
Browse Commentary. An icon also appears in the Contributor workflow window notifying
the user that one or more documents are attached to an e.List item. However, they cannot open
attached documents from the workflow window.

2. In the Commentary Browser dialog box, the user selects the commentary item that they want
to view and then click View Document to open the file. The user can filter the items to just
show user annotations or attached documents. They can also choose whether to view
Commentary for the current page in the grid or Commentary for all pages.

3. To edit commentary, select the commentary item and click Edit Document. The item opens
allowing the user to make changes and save the new version along with the application. The
user will be prompted to update the repository if they made changes to the file.

4. To delete commentary, the user selects the check box for the item that they want to delete and
click Delete.

Note: Only the owner or the Contributor administrator can delete an attached document.

5. The user can print an annotation by selecting the file and clicking Print. To print a document,
open it and print from the associated viewer.

Publishing Attached Documents


Information about attached documents can be published using the Table-Only Layout publish
function. Information such as file size, file name, location, and the user who attached the file is
published.
For more information, see "The Table-Only Publish Layout" (p. 262)

Copy Commentary
Attached documents and user annotations that are linked to a plan are grouped together to form
Commentary. The user can copy commentary between Contributor cubes and applications using
administration, system, and local links.
You can create a link that moves data from multiple sources. If the multiple sources contain
commentary, once the link is run the target will contain all the commentary available from the
sources.
For more information, see "Managing Data" (p. 141).

Note: The user can only copy Commentary using links that contain data.


Breakback Considerations when Moving Commentary


Breakback does not occur when attaching commentary either manually or by using an administration
or system link. Commentary can be attached to a calculated cell without impacting cells making
up that calculation.

Note: Identical documents from different sources are treated as separate documents.

Chapter 16: Previewing the Production Workflow

The Preview window gives you a preview of the production e.List and workflow state and allows
you to view properties of the e.List items. The icons indicate the current status of the data in the
production application. Clicking Refresh enables you to keep track of the status of the icons. For
example, when you have put a development application into production, you can see when e.List
items have been reconciled, see "Reconciliation" (p. 52). In this case, the icon changes from Not started, out of date to Not started, reconciled.
See "Workflow State Definition" (p. 297) for more information.
To preview the data in the production application in the Preview window, expand the Preview tree,
right-click the e.List item and click Preview.
When you preview an e.List item it appears to behave like it does in the Web. For example, you
can right-click in the grid and click Annotate cell, Add and an annotations window appears. You
can type in the annotations window and when you close the window, you can view the annotation
by moving your mouse over the red square. However, after you have closed the Preview, these
changes are not saved.

Note: Any action you perform in Preview has no bearing on the Production application.

Previewing e.List item Properties


Right-click the e.List item and click Properties. A five-tabbed window is displayed containing:
● General

● Owners

● Editors

● Reviewers

● Rights

Preview Properties - General


The General tab contains the following information:

e.List item type Indicates whether it is a contribution or a review e.List item.

e.List item state The workflow state, see "Workflow State Definition" (p. 297)
for more information.


Date state changed This gives the date and time that the workflow state changed
in the following format: yyyy-mm-dd hh:mm:ss.

User who last changed the state The name of the user to have last changed the state.

Number of children The number of items in the next level below this e.List item.

Number of locked children The number of child items that are locked, indicating that
data was submitted.

Number of saved children The number of child items where work has started and been
saved.

Preview Properties - Owners


This Owners tab contains the following information about the owners of the selected e.List item.

Owner name The name of an owner of the e.List item. An owner is a user assigned to an
e.List item with greater than View rights.

E-mail address The e-mail address of the user.

Current Owner This is checked if the owner is the current owner of the e.List item. The
current owner is the last user to have opened an e.List item for editing.

Preview Properties - Editors


The following information is displayed on this tab.
● Editor - the name of the last or current editor, the time they started editing the e.List, and
whether they are working online or offline.

● Annotator - the name of the last or current annotator, and the time they started creating an
annotation.

For information about editing while offline, see "Working Offline" (p. 86).

Preview Properties - Reviewers


This lists the reviewers for the e.List item and their e-mail addresses.

The Data Reviewed indicator shows whether the e.List item was reviewed, and the Data Viewed indicator shows whether the data was viewed.

Preview Properties - Rights


This displays the following information:


e.List item Display Name    The e.List item display name.

User, Group, Role    The user, group, or role assigned to the e.List item (more than one user, group, or role can be assigned to an e.List item).

Rights The level of rights that a user has to the e.List item.

Inherit from If the rights have been directly assigned to the user, this cell will be blank.
If the rights have been inherited, this indicates the name of the e.List item
the rights have been inherited from.

You can print this information, or save it to file.

Workflow State Definition


The workflow state icons indicate the state of data in the Cognos 8 Planning - Contributor
application.

Icon State Contribution Review

Not started    Contribution: The e.List item has not been edited and saved (it may have been edited but the changes not saved). Review: None of the items that make up this e.List item have been edited and saved.

Work in progress    Contribution: The e.List item was edited and saved but not submitted. Review: All items that make up this e.List item have been edited and saved. At least one item has not yet been submitted.

Incomplete    Contribution: Not applicable. Review: Some items that make up this e.List item have not been started. At least one item was started.

Ready    Contribution: Not applicable. Review: All items that make up this e.List item have been submitted and are locked. This item can be submitted for review.

Locked    Contribution: The e.List item was submitted and can no longer be edited. Review: The e.List item was submitted.


Additional Workflow States


In addition to these workflow states, there are additional icons that indicate variations on these
states. The variations are:

Icon Variation Description

Has a current editor/annotator. The e.List item was opened for editing/
annotating. An edit session is ended by the user
closing the grid, or by submitting the e.List item.

Is out of date.    This indicates that the e.List item needs reconciling. This happens when Go to Production was run on the application and the e.List item has not been reconciled. If client-side reconciliation is prevented, the user is unable to view the data until reconciliation has occurred.

Has a current editor/annotator and is out of date.    There is a current editor or annotator, and the data is out of date.

These additional states only appear to the user in the front window of the Contributor application,
not in the grid.

Workflow State Explained


This diagram demonstrates the different workflow states in Contributor.


Each icon represents the state of the e.List item. The lowest level e.List items (for example, labeled
A1 Profit Center) are contribution e.List items, that is, items that you enter data into. The higher
level e.List items are review e.List items, and the state of a review e.List item is affected by the states
of the contribution e.List items that feed into it.

States for Contribution e.List Items


Before data is entered and saved in an e.List item, its state is Not started. After you save an e.List
item, the state becomes Work in progress and remains accessible for more editing. When you submit
an item, the e.List item is Locked and no more changes can be made. The Locked state indicates
that the e.List item is ready for review. A reviewer can review the e.List item in any state, but can
only reject a Locked e.List item. When an e.List item is rejected, it returns to a state of Work in
progress.

States for Review e.List Items


A review e.List item where none of the items that feed into it have been saved has a state of Not
started (see A Region). When at least one of the items that make up a review e.List item is not saved,
and at least one other item is saved, its state is Incomplete (see Total Company, Division X and B
Region). When all the items that make up a review e.List item are saved and at least one is not
submitted, its state is Work in progress (see Division Y and C Region).
After all the e.List items that make up a review e.List item are locked, the state of the review e.List
item is Ready (see D Region) and if acceptable, can be submitted to the next reviewer. After it is
submitted, it becomes Locked (E Region).


Other Icons
Each of the Workflow state icons can have additional indicators that tell you whether the e.List
item is being edited, is out of date, or both. They are a grid, a box or both a grid and a box.
The following show examples of these indicators, but note that they can apply to all workflow
states:
● Has a current editor/annotator. The e.List item was opened for editing/annotating. An edit
session is ended by the user closing the grid, or by submitting the e.List item.

● Is out of date. This indicates that the e.List item needs updating.

● Has a current editor/annotator and the data is out of date.

Chapter 17: Using Cognos 8 Planning - Contributor With Other Cognos Products

Client and admin extensions help Cognos 8 Planning - Contributor to work with other Cognos
products (p. 302).
You can analyze and report on published Contributor data in Cognos 8 Business Intelligence using
the Generate Framework Manager Model admin extension (p. 308). Additionally, the Planning Data
Service provides access to unpublished Contributor data for Cognos 8 Business Intelligence users.
You can use Excel with Contributor, benefiting from the formatting capabilities of Excel (p. 313).
You can take actuals from an Enterprise Resource Planning (ERP) system and combine them with
planning information to perform comparative analysis using Cognos Performance Applications
(p. 315).
The following diagram illustrates the various integration points between Cognos 8 Planning and
other Cognos products.

[Diagram: integration points between Cognos 8 Planning - Contributor, Cognos 8 Planning - Analyst, Cognos 8 Business Intelligence, Cognos 8 Metrics Manager, Cognos Connection, Transformer 7.4, Performance Applications, Cognos 8 Controller, and Cognos Finance, showing published data and real-time data flows to planning, OLAP, and relational sources.]


Client and Admin Extensions


Extensions are tools that provide additional functionality to Cognos 8 Planning - Contributor as well as interoperability with other Cognos products. There are two types of extensions:
● "Client Extensions" (p. 302)

● "Admin Extensions" (p. 303)

All extensions are installed as part of the main Planning installation. For more information, see the
Cognos 8 Planning Installation Guide.

Note: Ensure that both the Administration Console computer and the client computers meet all of
the software and hardware requirements before configuring and running Contributor client and
administration extensions.
For a current list of the software environments supported by Cognos products, see the Cognos
Global Customer Services Web site (http://support.cognos.com).

Client Extensions
Web client users can use client extensions to take advantage of the functionality of Excel (p. 313).
Client extensions are activated through the menu bar in the Contributor grid.
You can control when an extension is available for Web client users by enabling and disabling it
in Contributor Administration Console.

Tip: On the Configure Extensions tab, right-click the extension and click Enable or Disable.

Organize Client Extensions


Use extension groups to organize client extensions. Extension groups appear as items under the
Tools menu on the Contributor grid page. A list of the member extensions appear when you click
the group name. For example, you can create an extension group named Export to organize the
export extensions.

Steps
1. In the Contributor Administration Console application tree, click Production, Extensions, Client
Extensions, and then click the Extension Groups tab.

2. Click Add, and type a name for the new group.


Extension group names must be 15 characters or less and should be meaningful to the Web
client user.

3. Click OK.
The name of the new extension group appears in the Extension Group list.

Tips:

● You can rename an extension group by clicking Edit in the Extension Group dialog box.

● You can reorder extension groups by using the arrow buttons on the Extension Group tab.


Configure Client Extensions


You must configure a client extension before users can use it. An extension can run in the Web
client in a custom or manual mode. In manual mode, extensions run when a user selects the extension
from the Extension Group list under the Tools menu in the Web client. In custom mode, extensions
run automatically in the Web client and do not need to be assigned to an extension group.

Tip: You can reset a client extension back to its original settings by clicking the Reset button. This
resets the configuration back to its original, unconfigured state and all settings and data are lost.

Steps
1. In the Contributor Administration Console application tree, click Production, Extensions, Client
Extensions, and then click the Configure Extensions tab.

2. Click the extension you want, and click Configure.

The Extension Properties dialog box appears.

3. In the Display Name box, type a name for the extension or leave the default name.

4. If the Activation Mode box shows that Manual activation mode is selected, in the Extension
Group box, click the appropriate Extension Group.

5. In the Extension Properties-Users dialog box, click All Users or Selected Users.

6. If you clicked Selected Users, select the check box next to each user who should have access.

7. If you are configuring the Export for Excel extension, in the Location on client for saved
selections box, type the full path of the saved selections folder.

8. Choose whether to enable this extension now by clicking Yes or No.

9. Click Finish.

Admin Extensions
Administrators use admin extensions to generate Framework Manager models, Transformer Models,
and Cognos PowerCubes from Contributor applications. This enables you to report on Contributor
data in Cognos 8 studios, and view data in PowerPlay Series 7.

Run an Admin Extension


You run an Admin extension when you want it to perform its task. Before you can run an Admin
extension, you must first configure it. For information about configuring the individual extensions,
see the following topics:
● "The Generate Framework Manager Model Admin Extension" (p. 308)


● "Generate Transformer Model" (p. 311)

Step
● In the Contributor Administration Console application tree, click Production, Extensions,
Admin Extensions, select the extension, and click Run.

You may be prompted to perform more tasks, depending on the extension.

Integrating with Cognos Business Intelligence Products


Cognos Business Intelligence (BI) users can access unpublished (real-time) and published Contributor
data for analysis and reporting.
For Cognos 8 Business Intelligence users, the Planning Data Service provides access to unpublished
Contributor data, and the Generate Framework Manager Model extension provides access to
(table-only layout) published data. Note that you get better performance when reporting off published
data than off live data. This is because the Planning Data Service has the added overhead of
interpreting the model.
You can also import data from Cognos 8 data sources into Contributor applications and Analyst
models. For more information, see "Importing Data from Cognos 8 Data Sources" (p. 161).

Using Cognos 8 BI with Contributor Unpublished (Real-Time) Data


You can use Cognos 8 Business Intelligence to report on and analyze unpublished (real-time)
Contributor data.
To create a Planning Package, you have two options.

● Select the Create Planning Package option in the Go to Production wizard

● Create a package directly in Framework Manager

To determine which method to choose, consider the following information.

Create the Planning Package in the Go to Production Wizard


The Planning Package that is published in the Go to Production wizard contains all cubes in the
application. Thus, when a user opens this package in Query Studio, Analysis Studio, Report Studio,
or Event Studio, they are presented with metadata for all of the cubes in the application. The user
is free to choose metadata from multiple cubes for use in their reports. However, unless care is
taken, users may inadvertently build queries that attempt to access values from more than one cube,
which results in no data returned to the report.
For more information, see "Planning Packages" (p. 240).

Create the Planning Package directly in Framework Manager


If you create the package in Framework Manager, you can determine how many cubes to expose
in a given package. By default, you get one cube in each package, which prevents users from building


queries that access more than one cube. However, this may result in large numbers of packages in
Cognos Connection which could be difficult to manage.

Framework Manager Project


Before you can create a package using Framework Manager, you must create a Framework Manager
project. The Framework Manager project contains objects that you organize for Business Intelligence
authors according to the business model and business rules of your organization. A package is a
subset of the query subjects and other objects defined in the project.
Framework Manager can use the metadata and data from external data sources to build a project.
To import metadata, you must indicate which sources you want and where they are located. You
then publish the package to the Cognos 8 server so that the authors can use the metadata.
You can create several packages from the same project, with each package meeting different reporting
requirements. Framework Manager models accessing Contributor data are light-weight models
only. A light-weight model and the packages derived from that model contain only the
connection information to the cubes in the Planning application. The D-List and item metadata are
extracted from the Planning application at runtime.

Note: Cross tab reports in any of the Business Intelligence studios do not support text or date-based
measures, including annotations, if configured for display. If a text or date-based measure is selected,
it appears as "--" in the report.

The Measures Dimension


Cognos 8 requires that one of the Contributor D-Lists is used as the measures dimension. A measures
dimension is typically one that contains quantitative data items, such as revenue or headcount. The
default measures dimension is determined by the following rules, illustrated in the sketch after this list:
● The e.List is excluded; the candidates are the remaining dimensions that have defined formats.

● Other than the e.List, if no dimensions have defined formats, then the first dimension is used.

● If only one dimension has defined formats, that dimension is used.

● If more than one dimension has defined formats, the dimension with the lowest priority
calculations is used.

The default measures dimension can be overridden when publishing, by selecting the Dimension in
the Cubes screen. If you republish the data and change the dimension at a later date, be aware that
this may break some saved reports.
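The selection rules above can be restated as a short sketch. This is an illustration only: the Dimension fields and the numeric encoding of calculation priority are assumptions made for this example, not Cognos Planning objects or APIs.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Dimension:
    name: str
    is_elist: bool = False
    has_formats: bool = False
    calculation_priority: int = 0  # lower number = lower-priority calculations (assumed encoding)

def default_measures_dimension(dimensions: List[Dimension]) -> Optional[Dimension]:
    # The e.List is never a candidate for the measures dimension.
    candidates = [d for d in dimensions if not d.is_elist]
    formatted = [d for d in candidates if d.has_formats]
    if not formatted:
        # No dimension has defined formats: fall back to the first dimension.
        return candidates[0] if candidates else None
    if len(formatted) == 1:
        # Exactly one dimension has defined formats: it is used.
        return formatted[0]
    # Several dimensions have defined formats: the one with the
    # lowest-priority calculations is used.
    return min(formatted, key=lambda d: d.calculation_priority)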

Create a Framework Manager Project and Publish a Package


The Framework Manager project contains objects that you organize for Business Intelligence authors
according to the business model and business rules of your organization.

Steps
1. From the Windows Start menu, click Programs, Cognos 8, Framework Manager.

2. In the Framework Manager Welcome page, click Create a new project.


3. In the New Project page, specify a name and location for the project.

4. Optionally, you can add the new project to a source control repository by doing the following:

● Click Repository, and then select the Add to repository check box.

● In the Connection box, click the repository connection.


If you do not have a repository connection defined, you are prompted to create one. For
more information, see the Framework Manager help.

● In the Location in Repository box, browse to a location to add the project and then click
Select.

5. Log on if you are prompted to do so.

6. In the Select Language page, click the design language for the project.

You cannot change the language after you click OK, but you can add other languages.

7. In the metadata source page, select Data Sources.

8. If the data source connection you want is not listed, you must create it (p. 307).
If the Planning Data Service is configured, a data source named Cognos Planning - Contributor
is available. This gives you access to cube (OLAP) data only. If you want to access table data,
you must create a data source that points to these tables.

9. Select the cube that you want to import.


The name of the project is shown. This defaults to the cube name. You can change this.

10. Add security to the package if required. See the Cognos 8 Framework Manager User Guide for
more information.

11. Click Next and then Finish.

Note: You save the project file (.cpf) and all related XML files in a single folder. When you
save a project with a different name or format, ensure that you save the project in a separate
folder.

12. When prompted to open the Publish Wizard, click Yes. This enables you to publish the new
package to Cognos Connection.

13. In the Publish Wizard, choose where to publish the package:


● To publish the package to the Cognos 8 Server for report authors and business authors to
use, click Cognos 8 Content Store.

● To publish the package to a network location, click Location on the network.

14. To enable model versioning when publishing to the Cognos 8 Content Store, select the Enable
model versioning check box.

15. In the Number of model versions to retain box, select the number of model versions of the
package to retain.


Tip: To delete all but the most recently published version on the server, select the Delete all
previous model versions check box.

16. If you want to externalize query subjects, select the Generate the files for externalized query
subjects check box.

17. By default, the package is verified for errors before it is published. If you do not want to verify
your model prior to publishing, clear the Verify the package before publishing check box.

18. Click Publish.


If you chose to externalize query subjects, Framework Manager lists the files that were created.

19. Click Finish.

Note: You can run the Framework Manager Metadata wizard repeatedly to import multiple
cubes into the same Framework Manager project. For more information about creating
Framework Manager projects, see the Framework Manager User Guide.

Create a Data Source Connection


You must create a data source connection if you are creating a Planning Package in Framework
Manager.
When you create a Cognos 8 Planning - Contributor data source, you must provide the information
required to connect to the datastore. This information is provided in the form of a connection string.
Data sources are stored in the Cognos namespace and must have unique names. For example, you
cannot use the same name for a data source and a group.
Before creating data sources, you need write permissions to the folder where you want to save the
data source and to the Cognos namespace. You must also have execute permissions for the
Administration secured function.

Steps
1. In Framework Manager, click the Run Metadata Wizard command from the Action menu.

2. Click Data Sources and Next.

3. Click New and Next.

4. In the name and description page, type a unique name for the data source and, if you want, a
description and screen tip. Select the folder where you want to save it.

5. In the connection page, under Type, click Cognos Planning - Contributor.


The connection string page for the selected database appears.

6. Under External namespace, select the namespace set up previously in Cognos Configuration.

Tip: To test whether parameters are correct, click Test the connection. If prompted, type a user
ID and password or select a signon, and click OK.

7. Click Finish.


The data source appears in the Directory tool in the portal or in the list of data sources in the
Metadata Wizard in Framework Manager.

Tip: To test a data source connection, right-click the data source in the Data Sources folder and
click Test Data Source.

The Generate Framework Manager Model Admin Extension


Generate Framework Manager Model creates a set of Framework Manager models from Cognos
Planning data published in a table-only layout, including a base model and a user model. It also
publishes a package to Cognos Connection.

Base Model
The base model contains the definitions of objects required to access Cognos Planning data published
in a table-only layout. The objects include table definitions (query subjects), dimension information,
security filters, and model query subjects.

User Model
The user model provides a buffer to contain the modifications made by the Framework Manager
modeler. When modifications are made to the Contributor application, or to the Analyst model,
the base model can be updated using Generate Framework Manager Model. Then, the user model
can be synchronized using the synchronize option in Framework Manager.
The synchronization process makes all the modifications to the base model appear in the user model.
This is done by synchronizing the user model with the base model and by reapplying any changes
made to the user model by the modeler to the synchronized user model.
The package published to Cognos Connection is published from the User Model.

Configuring Your Environment


Before you can use Generate Framework Manager Model, you must configure your environment.
Do the following:

Note: It is recommended that you install the Administration components (Analyst and Contributor
Administration Console) on the same machine as the Planning Server components.
❑ Ensure that you can access Cognos Connection

For example, in the address bar of your Web browser, type http://computer_name/cognos8/. A scripted check of this prerequisite and of the publish datastore logon is sketched after this list.

❑ Ensure that you can publish the Cognos Planning data in a table-only layout.

❑ Configure the Publish datastore to use the logon and password of the datastore server, not
Trusted Connection.
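The following is a minimal, hypothetical pre-flight script for the checklist above. The gateway URL, server name, database name, credentials, and the pyodbc dependency are placeholders chosen for this illustration; substitute the values for your own environment.

import urllib.request

import pyodbc  # third-party ODBC wrapper, installed separately

GATEWAY_URL = "http://computer_name/cognos8/"      # replace with your gateway URL
PUBLISH_DSN = (
    "DRIVER={SQL Server};"
    "SERVER=datastore_server;"
    "DATABASE=publish_container;"                  # the table-only publish container
    "UID=publish_user;PWD=publish_password;"       # explicit logon, not a trusted connection
)

def cognos_connection_reachable() -> bool:
    try:
        with urllib.request.urlopen(GATEWAY_URL, timeout=10) as response:
            return response.status == 200
    except OSError:
        return False

def publish_datastore_logon_works() -> bool:
    try:
        connection = pyodbc.connect(PUBLISH_DSN, timeout=10)
        connection.close()
        return True
    except pyodbc.Error:
        return False

if __name__ == "__main__":
    print("Cognos Connection reachable:", cognos_connection_reachable())
    print("Publish datastore logon ok: ", publish_datastore_logon_works())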

Understanding Generate Framework Manager Model


When you generate a Framework Manager model, the following occurs:
● A Cognos 8 data source is created for the Table-only publish container.


● The Framework Manager script player


● creates models and packages from the Table-only publish metadata

● publishes packages

Objects in the Generated Framework Manager Model


The models generated by Generate Framework Manager Model contain the following objects.

Folders
The generated model contains a series of folders, each containing objects of the same type. These folders
are created in two top-level folders: Physical View and Business View. The Physical View folder
contains all the database query subjects, and the Business View folder contains all the dimension
and star schema objects.

Database Query Subjects


Database query subjects are created for all the tables needed to provide access to Cognos Planning
data. The tables included in the model depend on the query subjects selected, and may include
● cube export tables

● dimension item tables

● dimension derived hierarchy tables

● dimension complete hierarchy tables

● annotation tables

Joins
Joins are created between related tables, such as the cube export data tables and derived hierarchy
tables.

Column Usage
The usage attribute of the query items contained in the database query subjects is set to the correct
value: fact, identifier, or attribute.

Security Filters
If the models are generated from a Contributor application, security filters are created for each cube
export data query subject. The filters grant users access to the same e.List items as in the Contributor
application. A security filter is created for every user on every cube export data query subject.
If the models are generated from Analyst, no security filters are created.


Regular Dimensions
For each derived hierarchy and complete hierarchy query subject, a regular dimension object is
created and saved to the Derived Dimensions and Complete Dimensions folders respectively. These
folders are located in the Business View folder.

Measure Dimensions
For each cube export table, a measure dimension object is created. It is stored in a folder that has
the same name as the cube. These folders are located in the Business View folder.

Star Schema Groupings


For each cube in the model, one or more star schema groupings are created. If the derived hierarchy lists
are selected, a star schema grouping is created using the derived dimensions. If the complete hierarchy
lists are selected, a star schema grouping is created using the Complete Dimensions folders.

Data Source
Data source refers to the data source created in the Cognos Connection Portal.

Package
A package contains all the objects in the Framework Manager model. The administrator of the
package is the user generating the model.
In Contributor, the users who have access to the package are the users of the Contributor application.
In Analyst, the only user to have access to the package is the user generating the model.

Objects Created in Cognos Connection


When generating a Framework Manager model using Generate Framework Manager Model, a data
source is added to Cognos Connection. The associated connection string and signon are also created,
if applicable.

Run the Generate Framework Manager Model Admin Extension


Run the Generate Framework Manager Model Admin Extension to create a set of Framework
Manager models from Cognos 8 Planning data.
This extension cannot be automated.
In the appropriate Contributor application, publish data using the Table-only layout. You must
use an explicit logon and password for the publish datastore, not a trusted connection.

Note: This extension uses the last published container.

Steps
1. Click Extensions, Admin Extensions, and double-click Generate Framework Manager Model.

2. Select Create a new Framework Manager Model and click Next.

3. Specify the following Framework Manager Model settings:


● Framework Manager Model Location


Where the Base model and User model are created on the hard disk. Only one model can
exist in any location.

● Package name
The name of the package to be published to the portal. The Package name must not already
exist on the portal.

● Package location
Where the package is stored in Cognos Connection.

● Package screentip

● Package description

4. Select the cubes to be included in the model.

5. Specify the type of data source query subjects to include in the model.

When the package is published, it can be accessed from Cognos 8 studios.


The selections made in this extension are saved for the next time the extension is run and are used
when updating a model.
For troubleshooting information, see "Troubleshooting the Generate Framework Manager Model
Extension" (p. 361).

Update a Framework Manager Model


You can update a Framework Manager model to include changes to the Contributor application.
The new base model is re-imported and any changes you made to the user model are reapplied.

Steps
1. In the appropriate Contributor application, click Extensions, Admin Extensions, and double-click
Generate Framework Manager Model.

2. In Create or update model, select Update an existing Framework Manager Model.

3. Enter the Project Location where the model you want to update is stored.

4. Complete the steps in the wizard.

5. Open Framework Manager and open the User Model.

6. From the Project menu, click Synchronize and then click Run the script from the starting point.

Generate Transformer Model


Use the Generate Transformer Model to generate a Cognos Transformer Model from a table-only
database layout and create Cognos PowerCubes. You can view the PowerCube in PowerPlay Series

7, or publish the PowerCube to a package in Cognos Connection and view its content using any
of the Cognos 8 studios.
Because the PowerCube is based on published data, the Generate Transformer Model extension
generates a single Transformer model for each Contributor model. The extension automatically
extracts the necessary information about your Contributor model from the publish tables and the
application model, and then creates the equivalent model in Transformer. After the Transformer
model is created, you can modify it using the Transformer interface and optionally, publish the
cubes to a Cognos Portal. Generate Transformer Model uses the last publish data source.
With the Generate Transformer Model, you can:
● generate a Transformer model

● generate a Transformer model and a PowerCube (Transformer must be installed to generate a
PowerCube)

● create a PowerCube from an existing Transformer model

Before you can use the Generate Transformer Model Wizard, you must configure your environment.
Do the following:
● If you create PowerCube(s), ensure that you can access Cognos Connection.

For example, in the address bar of your Web browser, type http://computer_name/cognos8/.

● Publish your Cognos 8 Planning data in a table-only layout.

● Before you can create PowerCube(s), you must first have Transformer installed and configured
on the computer where the Planning Server components are installed.

Security Considerations
The Transformer model and PowerCubes generated can only be secured against a Series 7 Namespace.
The name of the namespace in Cognos Configuration must match the name of the Series 7 namespace.
We recommend that the user class for the administrator creating the Transformer model has the
property: Members can view all users and/or user classes in the User Class permissions tab. This
property is set in the administration console of Access Manager Series 7.

Automation
This extension can be automated. It must first be configured. For more information, see "Execute
an Admin Extension " (p. 215).

Steps
1. In the Contributor Administration Console application tree, click Production, Extensions,
Admin Extensions and double-click the Generate Transformer Model extension.

2. Choose whether to generate a Transformer model, a Transformer model and a PowerCube, or just a PowerCube.

3. Specify the locations for the Transformer model and the PowerCubes.


This location can contain only one model. The paths must be located on the Planning server,
and can be UNC paths.

4. Browse to a location for the package in Cognos Connection.

5. You can choose to include security information. To do this you must specify a Series 7
Namespace.

6. Choose the cubes to add to the Transformer model.

7. If you selected Create PowerCube, you can choose to create a Planning Package, enabling you
to view its content using any of the Cognos 8 studios.

8. Click Finish.

Excel and Contributor


In addition to accessing Contributor through the Web, users can access Contributor using
Contributor for Excel. This enables users to apply Excel formatting. Users can also print Contributor
data using Excel formatting, and can export data from Contributor to an Excel file.

Contributor for Excel


Users can use Contributor for Excel to view and edit Contributor data using Excel, getting the
benefit of Excel formatting and Contributor linking functionality. Here are some examples of things
you can do:
● Create bar charts and other graphs from Contributor data.

● Create dynamic calculations from Contributor data.

● Create a calculation in Excel and link it to a Contributor cell.


As you update this calculation, you can choose whether to update the value in the Contributor
cell.

● Reuse custom calculations and formatting by saving the workbook as a template.

● Resize the worksheet so you can see more or less data on a page.

● Save data as an Excel workbook and work locally without a connection to the network.

To use Contributor for Excel, administrators must create a Contributor Web site, and client users
must install Contributor for Excel on their computers. For more information about installation,
see Contributor for Excel Installation Guide.

Design Considerations When Using Contributor for Excel


Because of the formatting and other capabilities of Contributor for Excel, cubes with large
two-dimensional window footprints tend to reduce performance. The two-dimensional window
footprint is not related to e.List model size, which is the total number of cells in an application (per

e.List slice). A two-dimensional window footprint is the number of rows in a cube multiplied by
the number of columns that can currently be viewed on a worksheet.
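As a quick illustration of this arithmetic (the numbers are invented for the example):

rows_in_cube = 500       # rows in the cube
visible_columns = 36     # columns currently visible on the worksheet
window_footprint = rows_in_cube * visible_columns
print(window_footprint)  # 18000 visible cells; this, not total e.List model size, drives formatting cost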
Performance is affected by the size and complexity of Contributor e.List models. Larger and more
complicated models take longer to download than smaller models. Contributor for Excel does not
impose any new limits on size and complexity.
The performance of Contributor for Excel is also affected by cubes containing large numbers of
Contributor cells visible on worksheets at one time. The following actions are affected:
● breakback

● entering a value

● changing multi-dimensional pages

● saving the model

As a result, you may want to use the most relevant data and not all possible data. There are several
ways to limit the two-dimensional window footprints of cubes. You can design compact,
multidimensional cubes. If the model requires a long D-List, consider using access tables to send
only the items needed to different e.List items. Finally, consider using cut-down models as another
way of restricting portions of long D-Lists to some e.List items.

Percentages and Consistency When Using Contributor for Excel


Percentages are usually numbers between 0.00 and 1.00. Contributor permits a cell with a value
not between 0.00 and 1.00 to appear as a percentage by appending the percent (%) character.
Model calculations then divide such a number by 100 to convert it to a percent.
Contributor for Excel initially matches these formatting conventions, including the trailing %
character as custom Excel formatting. However, if users reformat such cells, the true underlying
value can be revealed and may be confusing. For example, reformatting a 5% increase as General
shows the underlying value as 5.00 or 0.05.
Excel accepts several forms of input in a cell with a % format. It converts user input of 8.00, 0.08,
and 8% to a value of .08 and presents it as 8%. To eliminate possible confusion in your Contributor
model, build models in which values are used as consistently as possible in calculations, display,
and input.
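The Excel-style conversion described above can be sketched as follows. This is an illustration of the behavior described in this section, not the exact Contributor or Excel implementation.

def percent_cell_value(user_input: str) -> float:
    # Normalize entries typed into a %-formatted cell.
    text = user_input.strip()
    if text.endswith("%"):
        return float(text[:-1]) / 100                # "8%"  -> 0.08
    value = float(text)
    return value / 100 if value >= 1 else value      # 8.00 -> 0.08, 0.08 stays 0.08

for entry in ("8.00", "0.08", "8%"):
    print(entry, "->", percent_cell_value(entry))    # each prints 0.08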

Specifying the Level of Granularity When Using Contributor for Excel


Some cubes exist so that users can retain more granular data than is actually required for the
centralized planning process.
To improve performance, you may want to remove these cubes and do one of the following:
● Build Excel-based templates that replace such cubes with Excel worksheets linked to Contributor
cells.

● Permit users to decide their own level of granularity and build their own incoming formulas.


Print to Excel
Contributor Web client users can print data using the print formatting options available from Excel.
Print to Excel is the default print behavior for the Web client; its Activation Mode is set to custom.
If Excel is not installed on the Web client computer, the standard print capability
is provided.
To configure the Print to Excel extension, see "Configure Client Extensions" (p. 303).

Export for Excel


Contributor Web client users can use the Export for Excel extension to export data from Contributor
to an Excel file. From there, they can use the Excel formatting and graphing features. Also, Web
client users can export their data to Excel in one easy step using the Current View Only feature.
Web client users can specify a selection of their data to be used for a future export. Saved selections
define specific Contributor data sets to be exported and are stored in a folder on either the user's
computer or on a network. The Contributor administrator must designate this location.
We recommend that the Contributor administrator consult with their network administrator to
determine the best location for the saved selections folder.
If the designated folder location does not already exist on the user's computer or on the network,
it is created the first time the user creates a selection. If the designated folder is on a network, advise
users to give their selections unique names so they do not overwrite other saved selections with the
same name. Users are warned prior to overwriting a selection. If you choose not to designate a
folder, you can export data but cannot save the selection information for a future export.
To configure the Export for Excel extension, see "Configure Client Extensions" (p. 303).

Financial Planning with Cognos Performance Applications and Cognos 8 Planning

You can use Cognos Planning and Cognos Performance Applications together to
● extract actuals from the data warehouse of an Enterprise Resource Planning (ERP) system and
bring them into Cognos Planning, as structural data for the Analyst model and as the initial
figures from the previous planning cycle for Contributor

● return completed planning data to the data warehouse using an ETL tool such as Data Manager
for comparative analysis

● monitor live or published planning data during the planning cycle against current operational
data in the Performance Applications warehouse

The data warehouse extracts, and changes that occur during the planning cycle, are managed using
the Import from IQD wizard. Monitoring is done directly against Contributor data using the
appropriate extensions.

Steps


❑ Preparing Cognos Performance Applications Data for Planning


In the Performance Application, identify the key performance indicators (KPIs) to plan by,
monitor, and report on. Because planning is often performed at a different level from actuals,
you may need to add to the dimensions from the data warehouse. Cognos consultants can help
you in this identification and analysis.
The Import from IQD wizard expects each dimension to have both an ID field and a description
field, each of which must be unique across the dimension (a simple uniqueness check is sketched
after this task list).

❑ Preparing for the Model in Analyst


After the planning measures and dimensions that are available from Cognos Performance
Applications have been identified, the Analyst user designs a model, and identifies any alternate
data sources that are needed for the dimensions and measures. Because Performance Applications
use multiple currencies for reporting, the Analyst user should determine what currency to use
when data is published back into the Performance Applications warehouse.

Note: If you create a D-List using the Import from IQD wizard, you should not add any items
manually. If you do add items manually, these items will be removed every time you refresh
the D-List.
After planning models are designed and sourcing is identified, the solution to integrate the
actuals information with planning information can be implemented using either the mapping
table that is generated during the IQD import or, if the mapping tables are not required, a
Cognos package as a source to populate D-Lists in Analyst.

❑ Preparing e.Lists for Contributor Data


As well as importing D-List data for the Analyst model, you can choose to generate e.Lists
using data from IQD files, or if the data is modeled in Framework Manager and published as
a package, you can also use Contributor Administration Links.
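A simple pre-import check for the ID and description uniqueness requirement noted in the first task might look like the following. The file name and column headings are hypothetical; substitute the fields of your own dimension extract.

import csv
from collections import Counter

def duplicated(values):
    # Return the values that appear more than once.
    return [value for value, count in Counter(values).items() if count > 1]

with open("cost_centre_dimension.csv", newline="", encoding="utf-8") as source:
    rows = list(csv.DictReader(source))

print("Duplicate IDs:         ", duplicated(row["ID"] for row in rows))
print("Duplicate descriptions:", duplicated(row["Description"] for row in rows))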

For more information about financial planning with Cognos performance applications and Cognos
8 Planning, see the Analyst User Guide.

Chapter 18: Example of Using Cognos 8 Planning
with Other Cognos Products

Cognos 8 Planning integrates with all other Cognos 8 Business Intelligence products. For example,
you can create reports on planning data and you can create macros with administration links that
are triggered by events in planning data.
The example in this section shows you some of the ways that Cognos 8 Planning works with Cognos
8 Business Intelligence. It demonstrates just a few of the many ways that you can view and use your
planning data.
The Central Europe region of the Great Outdoors Corporation plans to increase sales in its new
stores by holding promotions. The regional manager, Sébastien Pascal, wants a report delivered to
him the first day of every month that shows the current Central Europe promotions plans compared
to projected costs for the promotions and average monthly store revenue.
To complete this task, you need to create a report on the contributions for Central Europe and use
that report in an event that delivers a scheduled news item to Sébastien Pascal. You require Cognos
8 Business Intelligence products: Framework Manager, Report Studio, and Event Studio.
The example uses the Great Outdoors New Stores sample available on the Cognos Global Customer
Service Web site http://support.cognos.com.
The items created in this example, including the report, event, and PowerCube are included with
the sample download for your reference.

Download and Deploy the Sample


The example requires the Great Outdoors New Stores Sample, including a Planning deployment
archive and a Cognos 8 deployment archive. The sample is available from the Cognos Global
Customer Support Web site and must be imported using the Cognos 8 and Planning deployment
wizards.

Steps to Download the New Stores Sample:


1. Download, from the Documentation page on http://support.cognos.com, the Great Outdoors
New Stores Sample (go_new_stores_sample_en.zip).

Tip: Search the Global Customer Support Web Site for the document type Utility.

2. Save the files go_new_stores_contributor_data.zip and Cognos8_new_stores.zip to the
deployment location set in Cognos Configuration.

Tip: The default location is C:\Program Files\cognos\c8\deployment.


3. In the deployment location, open the compressed go_new_stores_contributor_data.zip file. The
Cognos 8 deployment archive (Cognos8_new_stores.zip) must remain a compressed file.

4. Save PowerCube.zip to <install location>\samples\Planning\Contributor\en\Data\
Data_go_new_stores_contributor. Open the compressed file.

Tip: The default location for the samples folder is C:\Program Files\cognos\c8\samples.

Steps to Deploy Cognos 8 Package


1. In Cognos Administration, import Cognos8_new_stores.zip from the deployment archive.

Tip: On the Configuration tab, click Content Administration and select New Import .

2. Complete the Import wizard to import the package and data source connection.

3. On the Configuration tab, click Data Source Connections, select the properties for the new
data source connection, new_stores_power_cube, and update the location of the PowerCube
on the Connection tab to <install location>\samples\Planning\Contributor\en\Data\
Data_go_new_stores_contributor\store_cost.mdc.

Note: If you do not save the store_cost.mdc PowerCube to C:\Program Files\cognos\c8\samples\
Planning\Contributor\en\Data\Data_go_new_stores_contributor\, then you will have to
reestablish the Administration link to this data source.

Need more help?


● See the Deployment section in the Cognos 8 Administration and Security Guide

Steps to Deploy the Planning Application


1. Import the go_new_stores_contributor application sample into the Contributor Administration
Console using the deployment wizard.

Tip: Click Tools, Refresh Console after the deployment to display the application, administration
link, and macro.

2. Add the go_new_stores_contributor application to a job server cluster.

3. If you saved the store_cost.mdc PowerCube to a location other than the default location, edit
the Administration link data source and target application to map to the data source,
new_stores_power_cube, and the imported planning application, go_new_stores_contributor.

Tip: You do not change the mappings in the administration link.

Need more help?


● "Import a Model" (p. 169)

● "Add Applications and Other Objects to a Job Server Cluster" (p. 55)

● "Create an Administration Link" (p. 148)


Run a Contributor Macro to Import Data


A Contributor macro, Import Average Monthly Revenue, has been created and published to Cognos
Connection. It imports the average monthly revenue for the stores from a PowerCube and then runs
Go to Production for the application.
The historical average monthly revenue for Great Outdoors stores is stored in a PowerCube. The
Administration link in this macro moves the data from a Framework Manager package created
from the PowerCube into the Average Monthly Revenue dimension of the Store Cost cube.

Steps
1. From the Content Administration page on the Configuration tab in Cognos Administration,
click Planning and then click Macros.

2. Click Run with options for the Import Average Monthly Revenue macro, and select to
run now.

Tip: You can view the progress of the macro in the Monitoring Console on the Macros tab.

Need more help?


● "Run a Macro from Cognos Connection" (p. 221)

Create and Publish a Framework Manager Package


To use the go_new_stores_contributor application as the basis for reporting, you must create a
Framework Manager package. After the package is created and published, it can be used to trigger
events and create reports.

Steps
1. Publish the go_new_stores_contributor application using Table-only Layout publish.

Include all cubes and e.List Items in the publish and configure a Publish Datastore. Name the
datastore, go_new_stores_table, and add it to the job server cluster.

Tip: Clear Prefix column names with data type on the Options tab.

You can view the progress of the publish in the Monitoring Console on the Job Server Clusters
tab.

2. Run the Generate Framework Manager Model admin extension. Name the package
new_stores_FM_model and store the Framework Manager Model in <install location>\samples\
Planning\Contributor\en\Data\Data_go_new_stores_contributor.

Select all cubes and the data source query subjects Unformatted lists and Complete hierarchy
lists for the model.

3. In Framework Manager, open the model created by the Framework Manager extension.


4. Create a filter on Measure Dimension Promotions Plan to exclude the ALL RETAILERS D-List
item.

Tip: Double-click on the Measure Dimension Promotions Plan in the Business View and click
the Filters tab. Use Retailer Type and the not like operators.

5. Rename 2eList Level 3 in Complete Dimensions to Regions.

Tip: Planning levels are numbered in Framework Manager. To make them easier to use in
Report Studio, rename the levels to reflect the content.

6. Select the new_stores_FM_model package and publish the package to make it available in
Cognos Connection.


Need more help?


● "Create a Table-only Publish Layout" (p. 273)

● "The Generate Framework Manager Model Admin Extension" (p. 308)

● See the Cognos 8 Framework Manager User Guide

Create a Report
You can now create a report on Planning data to compare the cost of promotions against
the planned promotion value. The report uses a crosstab to compare information by one or more
criteria, and a chart to reveal trends and relationships.
Your final report for budget version 1 will look like this.

Steps to Create the Crosstab


1. In Report Studio, create a new blank report that uses the sample package named
new_stores_FM_model.

2. Create a table (2 columns by 4 rows) to be used as the template for the report.

Tip: Use the tool box to drag a table into the report area.

3. Using a text item, create headings for Budget Version 1 (Central Europe) and Budget Version
2 (Central Europe).

Tip: Drag a text item to the first and the third rows of the first column.

4. Drag a crosstab to the cell in the second row of the first column.


5. Add Central Europe to the rows from the Complete Dimensions.

Tip: Use the Source tab in the Insertable Objects pane.

6. Add the following Promotions Plan data items to the rows:

● Franchise/Corporate

● Month of Promotion

● Retailer Type

7. Add the following Promotions Plan data items to the columns:

● Promotion Costs

● Planned Promotion Value


8. In the Query Explorer, add Average Monthly Revenue to Data Items from New Store Plan and
use the Budget version 1 dimension in the Slicer.

9. Copy and paste the crosstab into the cell in the fourth row of the first column.
Tip: Select the crosstab in the properties pane to create the copy.


10. Create a second query, a copy of the first, and apply it to the second crosstab. Change the Slicer
so that Query 2 applies to Budget version 2.

11. Run the report to view the crosstabs.


Your crosstabs for budget version 1 and budget version 2 will look like this.

Steps to Create a Combination Chart


1. From the tool box, drag a chart to the table cell to the right of the crosstab in the second row.
Select the default combination chart. In the properties pane for the combination chart, change
Query to Query1 to use the Budget Version 1 data.

2. From the Data Items Insertable Objects tab for Query 1, drag the following data items into
the Category (x-axis):

● Central Europe

● Franchise/Corporate


● Month of Promotion

3. Drag the following data items from Query 1 into the Series:

● Promotion Costs

● Planned Promotion Value


For Planned Promotion Value, change the chart type to line.

4. Create a Statistical Maximum Baseline in Properties, Chart Annotations, Baselines.

5. Set Average Monthly Revenue in the baseline properties.

6. Change the Line Styles to dotted red line and rename Statistical Maximum to Average Monthly
Revenue.

7. Copy and paste the combination chart into the cell in the fourth row of the second column and
apply Query 2 to the second chart.


8. Run the report to view what it will look like.

9. Save the report as Central Europe Promotions Report.


Your charts for budget version 1 and budget version 2 will look like this.

Need more help?


● See the Cognos 8 Report Studio User Guide

Create an Event Studio Agent


Create and schedule an Event Studio Agent to deliver the report as a news item on the first day of
every month if the value of the planned promotions is greater than $20,000.

Tip: Create a signon to the data source connection so that users don’t have to enter database
credentials when they run reports or reports are run by an event. For more information, see the
section Create or Modify a Data Source Signon in the Cognos 8 Administration and Security Guide.

Steps
1. Open Event Studio using the package named new_stores_FM_model.

2. Create an agent with an event condition expression for Planned Promotion Value greater than
20,000. The agent is scheduled to run once a month; when the event condition expression returns
a result, the event is triggered.

Tip: The Expression should look like the following:


[Promotions Plan Value]>20000.

Validate the expression, then preview the results to check that the expression returns
a result.

3. Include the Run a report task and select the Central Europe Promotions Report.

4. Include a Publish a news item task.

In the Headline box, type Central Europe New Store Promotions Available.

Under Link to, click Select an entry and select the Central Europe Promotions Report.


5. Select My Folders as the News list locations. The news item will be published to this location.

6. Click Schedule the agent … and select the By Month tab. Schedule the agent for the first day
of every month.

7. Save the event as New Stores Event. In Cognos Connection, click Run with options for
the New Stores Event to run the event now.
View the news item, containing Central Europe Promotions Report, in Cognos Connection on
the My Folders tab. This news item will be created the first day of every month.

Need more help?


● See the Cognos 8 Event Studio User Guide

Chapter 19: Upgrading Cognos 8 Planning -
Contributor

You can upgrade users, user classes, groups, libraries, and applications from previous Cognos
Planning versions.
The upgrade process involves the following tasks:
❑ Plan the upgrade.
For more information, see the Cognos 8 Planning Installation and Configuration Guide.

❑ Design, install, and configure the test environment.


For more information, see the Cognos 8 Planning Installation and Configuration Guide.

❑ Use Cognos Connection to set up security.


For more information, see the Cognos 8 Planning Installation and Configuration Guide and
the Cognos 8 Administration and Security Guide.

❑ Back up and upgrade Analyst security and library files.


● If you want to upgrade all existing libraries, user classes, users, and groups at the same
time, start Analyst and from the File menu, select Administration, Upgrade, Existing File
System, browse to find your existing filesys.ini file, and click Open.

● If you want to upgrade only existing libraries, start Analyst and from the File menu, select
Administration, Upgrade, Existing Libraries, browse to find your existing Libs.Tab file,
and click Open.

● If you want to upgrade only existing user classes, start Analyst and from the File menu,
select Administration, Upgrade, Existing User Classes, browse to find your existing
usersclasses.Tab file and click Open.

● If you want to upgrade existing native security to Access Manager, start Analyst and from
the File menu, select Administration, Upgrade, Existing Native Users and Groups, browse
to find your existing existingusers.Tab file and click Open.

Files are converted automatically to 8.3 format, after which they can no longer be opened in
earlier versions of Analyst.

❑ Upgrade a Cognos 7.3 or later Planning Administration Domain.


This wizard, which is run from the Contributor Administration Console, upgrades Planning
Administration Domain objects into the current Planning content store. Planning objects include
applications, datastores, job server clusters, job servers, macros, and administration links. If
you are upgrading from Cognos Planning 7.3 or later, you must also upgrade the Planning

Administration Domain. This is done separately from the Contributor application upgrade.
For more information, see "Upgrade the Planning Administration Domain" (p. 330).

❑ Upgrade the Contributor application.


● Stop any scheduled scripts from running and ensure that all jobs are complete.

● Start the Contributor Administration Console and run the wizard.


For more information, see "Upgrade Contributor Applications" (p. 332).

● You can also use the wizard to upgrade and migrate Contributor applications from one
datastore provider to another. For example, you can upgrade a 7.2 application to an 8.3
application and migrate from Oracle to SQL Server.

❑ Test the upgrade.


For information, see the Cognos 8 Planning Installation and Configuration Guide.

❑ Install and configure the new production environment.


For information, see the Cognos 8 Planning Installation and Configuration Guide.

❑ Migrate from the test environment to the new production environment.


For information, see the Cognos 8 Planning Installation and Configuration Guide.

When Contributor is upgraded to the current version and tested, and you no longer need the old
namespace, you can use the Deployment wizard to migrate objects from the older namespace to
the Cognos 8 namespace. For more information, see "Deploying the Planning Environment and
Viewing the Status of Deployments" (p. 168).

Upgrade the Planning Administration Domain


When you upgrade from Cognos Planning 7.3 to Cognos 8 Planning version 8.3, you should upgrade
the Planning Administration Domain into the current Planning store.
When you upgrade from Cognos 8 Planning version 8.2 to Cognos 8 Planning version 8.3, you
upgrade the Planning Administration Domain and the associated applications at the same time.
You must upgrade each Planning Administration Domain separately.
Run the Planning Administration Domain Migration wizard from the Contributor Administration
Console.
Before you upgrade the Planning Administration Domain, you must configure at least one datastore
(p. 46), job server cluster (p. 54), and job server (p. 54).

Steps
1. In the Contributor Administration Console, click Tools, Upgrade Planning Administration
Domain.

2. Click Next.


3. Configure the datastore server connection for the datastore server that contains the Planning
Administration Domain.

4. Select the Datastore provider.


The options are SQL Server, Oracle or DB2.

5. Enter the Datastore server name, or click the browse button to list the available servers
(SQL Server only).

6. Enter the information as described in the table below:

Setting Description

Trusted Connection Click to use Windows authentication as the method for logging
on to the datastore. You do not have to specify a separate logon
id or password. This method is common for SQL Server
datastores and less common, but possible, for Oracle.

Use this account Enter the datastore account that this application will use to
connect. This box is not enabled if you use a trusted
connection.

Password Type the password for the account. This box is not enabled if
you use a trusted connection.

Preview Connection Provides a summary of the datastore server connection details.

Test Connection Mandatory. Click to check the validity of the connection to
the datastore server.

7. If you want to configure advanced settings, click Advanced.


Typically these settings should be left as the default. They may not be supported by all datastore
configurations.
Enter the following information.

Setting Description

Provider Driver Select the appropriate driver for your datastore.

Connection Prefix Specify to customize the connection strings for the needs of the
datastore.

Connection Suffix Specify to customize the connection strings for the needs of the
datastore.


8. Select the Planning Administration Domain that you want to upgrade, test the connection, and
click Next.

9. Click the namespace to secure the objects against and click Next.

10. If you want to upgrade existing Planning Administration Domain objects, select the Replace
objects if they exist in the current Planning Store check box.
If you do not select this option and the objects exist, they are not upgraded.

11. Click Next.

12. In the Map Planning Administration Domain Objects page, in the Target column, click the
options that you want.
You must map the job servers and job clusters that are configured in the source Planning
Administration Domain to the upgraded job server and job server clusters so that macros are
upgraded correctly.

13. If you want to choose defaults that apply to all applications, click Map All Applications and
complete the fields.

14. Click Finish.

A log located in the %Temp%\epUpgrade directory notifies you of any warnings that occur while
upgrading the Planning Administration Domain.

Upgrade Contributor Applications


You can upgrade an individual application or multiple applications at once. You may want to
upgrade applications separately if the end of the plan year for each application is different and you
do not want to upgrade an application mid-year.
When you upgrade a Contributor application the wizard does the following:
● creates a new application datastore, or enables you to create a script that can be run by a
database administrator

● shows you if there are users who are working off-line because off-line data cannot be upgraded
due to a new caching file that is used in the current version of Contributor

● automatically updates Contributor translation information in the datastore, requiring no manual


configuration

● upgrades access tables and saved selections requiring no further configuration

● retains all configuration options, except those stated

● saves an import/upgrade log named application_dataStore_ImportLog.txt to your
%temp%\epUpgrade directory

● upgrades the following client extensions: Excel Export (Export for Excel), Client Loader (Get
Data), Excel Print (Print to Excel)


● applies default access rights


For more information, see "Configuring Access to the Contributor Administration
Console" (p. 36).

Before you use the Contributor upgrade wizard, do the following:


● Upgrade your directory server

● Upgrade other Cognos products

● Stop scheduled scripts from running, if appropriate

● Ensure that all jobs are complete

The upgrade wizard does not upgrade the following:


● data in the source application

● Admin extensions
Cognos Planning 7.1 or 7.2 Admin extensions are removed because substantial changes were
made to the extensions for the current version.

● audit information
History table data and Job metadata is not upgraded. When the Go to Production process is
run, cut-down model information is automatically generated after upgrading.

● Contributor for Excel


This is a separate Web client installation. To upgrade, the previous version must be uninstalled
and the new version installed.

● scripts
In Contributor 7.2, you automate Contributor functionality using scripts. This functionality
was replaced by macros. You cannot migrate your 7.2 scripts to macros. For more information,
see "Automating Tasks Using Macros" (p. 191).

● published data
Publish datastores are not upgraded. Prior publish datastores can be retained and are compatible
with 8.3. However, if you need to recreate your publish datastore as part of your 8.3 deployment,
Cognos recommends rebuilding it as UTF-16 to better conform to global business standards
and to ensure easier compatibility with future Cognos releases.
You cannot publish to the Contributor application container. You must publish to a separate
container. We recommend that you compare the results of publishing from earlier versions of
Contributor with publishing in the current version to ensure that the publishing is performing
as required. Any publish scripts must be re-created using the new macro functionality.

● Analyst>Contributor links
When you upgrade applications that contain Analyst>Contributor links, you must open the
link in Analyst and reselect the source and target of the link. For more information, see "Update

a Link from a Computer That Cannot Access the Original Datastore" (p. 352), and the Analyst
User Guide.
Analyst and Contributor macros that use Analyst>Contributor links will fail if you do not
update the source and target of the link.

You should not run multiple versions of Contributor on the same computer.
To upgrade an application, you must have the Planning Rights Administration capability. By default,
this capability is granted to the Planning Rights Administrators role.
Before upgrading an earlier Contributor version to the current version, we recommend that you
install the current version on a separate server and then upgrade each application.
We recommend that you back up the data stores that you intend to upgrade.
For more information, see "Security" (p. 27) and the Cognos 8 Administration and Security Guide.

Steps
1. Under Datastores, click the required datastore, right-click Applications, and click Upgrade
Application.

2. Click Add.

3. Select the Datastore provider.


The options are SQL Server, Oracle or DB2.

4. Enter the Datastore server name, or click the browse button to list the available servers
(SQL Server only).

5. Enter the information as described in the table below:

Setting Description

Trusted Connection Click to use Windows authentication as the method for logging
on to the datastore. You do not have to specify a separate logon
id or password. This method is common for SQL Server
datastores and less common, but possible, for Oracle.

Use this account Enter the datastore account that this application will use to
connect. This box is not enabled if you use a trusted
connection.

Password Type the password for the account. This box is not enabled if
you use a trusted connection.

Preview Connection Provides a summary of the datastore server connection details.

Test Connection Mandatory. Click to check the validity of the connection to
the datastore server.


6. If you want to configure advanced settings, click Advanced.


Typically these settings should be left as the default. They may not be supported by all datastore
configurations.
Enter the following information.

Setting Description

Provider Driver Select the appropriate driver for your datastore.

Connection Prefix Specify to customize the connection strings for the needs of the
datastore.

Connection Suffix Specify to customize the connection strings for the needs of the
datastore.

7. Select the application that you want to upgrade, test the connection, and click Next.

8. Choose whether to create the datastore now and continue upgrading the application (Create
and populate datastore now) or to exit the wizard and create and populate the datastore using
scripts (Generate datastore scripts and data files). Then, in either case, click Next.

9. If you chose to use a script, give the script to your DBA to have the datastore created.
You can later link to the new datastore using the Contributor Administration Console, which
resumes the upgrade wizard.

10. If you chose to create the datastore now, do the following:

● Click the namespace which will secure the upgraded application and click Next.

● Click Finish.

The Upgrade Application(s) page appears with the application that you specified added to
the list of applications that can be upgraded.

● Repeat steps 1 to 10 for each application that you want to upgrade.

● Click Upgrade.

The results of the upgrade show in the Upgrade log for application(s) page.

You must add the application to a job server or job server cluster (p. 50), run Go to Production
(p. 239), and set up the Contributor Web site to enable users to access Contributor applications
(p. 76).

Upgrade Security
If your version 7.2 Planning application was secured using Contributor native security, you can
upgrade directly to a Cognos 8 namespace.


If your Planning application or Planning Administration Domain was secured by a Series 7 namespace
that was administered by Access Manager, you can upgrade your security to a Cognos 8 namespace
using the Contributor Administration Console deployment wizard.
To upgrade your security, you must configure Cognos 8 Planning to use both the Series 7 namespace
that was originally used as well as the namespace to which you are upgrading. In the Contributor
Administration deployment wizard, you must first export the Planning application or the Planning
Administration Domain, and then import the application or domain again. During the import, you
can map the security to your new namespace.
After you have upgraded the security for all of your applications or your Planning Administration
Domain, you can remove the Series 7 namespace from your configuration.
For more information, see "Deploying the Planning Environment and Viewing the Status of
Deployments" (p. 168) and the Installation and Configuration Guide.

Accessing Contributor Applications


Contributor applications are accessed from the Cognos Connection portal, typically http://
servername/cognos8.
For users with bookmarks to the Contributor client, either inform users of the application URL,
or use URL redirection to the new URL.
We recommend that you use the full client installation to deploy the Web application to the Web
client computer rather than the CAB download option if you have limited WAN bandwidth or your
users do not have sufficient rights to their local machine to install the CAB files.
For more information, see the Cognos 8 Planning Installation and Configuration Guide.

Chapter 20: Cognos 8 Planning - Analyst Model
Design Considerations

You can design a Cognos 8 Planning - Analyst model to be used for a Contributor application and
create links between Analyst and Contributor (p. 347).

Designing an Analyst Model for Contributor


When designing an Analyst model that is used to create a Contributor application, the following
things must be considered:
● "Analyst Library Guidelines" (p. 337)

● "D-Cube Restrictions" (p. 338)

● "D-Links" (p. 339)

● "Dimensions" (p. 341)

● "Creating Applications with Very Large Cell Counts" (p. 346)

Analyst Library Guidelines


When you design an Analyst model to be used for a Contributor application, consider whether it
has D-Lists, formats, and A-Tables that can be shared with other Contributor applications. If so,
you can create a common library to contain any D-Lists to be shared between Analyst models. You
then create the main Analyst library, which must contain all D-Cubes, the e.List, and all update
links.
The following restrictions apply:
● A maximum of two libraries can be used per Analyst model.
All objects on which the model depends must be contained in the main library and the common
library.

● Update links must target the specific cubes in the main library.

● The Contributor administrator must have write access to all objects used in the Analyst model.

● D-List names used by the Analyst model must be unique.
This includes D-Lists used as format lists.

● D-Cube names must be unique.

● If a cube consists of D-Lists that are all from the common library, it is an assumption cube.


Because of this, such assumption cubes do not appear when selecting the e.List in the
Administration Console during application creation.

● D-Cubes used as allocation tables must be contained in the main library.

You do not explicitly select the other library. This is the first library other than the template library
that is referenced when looking for dependencies on other objects. If there are references to more
than one other library, errors are reported. It may be necessary to trace dependencies in Analyst to
establish where the reference occurred.

D-Cube Restrictions
D-Cube options can cause problems in Contributor applications.
The following options are not supported in Contributor but do not stop a Contributor application
from working:
● All settings in the D-Cube, Options menu: Widths, Lines, Zeros, Break-back, and Stored Copy

● Integer break-back
This is ignored, producing different results in Contributor if break-back is switched on in
Contributor

● D-Cube Sort

The following D-Cube cell options are ignored: holds, locks, protects, and annotations.

Forward-referenced Weighted Averages


Forward-referenced weighted averages prevent a Contributor application from being created and
synchronized.
If you weight an item by a calculated item, the priority of that calculated item must be lower than
the priorities of subtotals in other dimensions. Or, if all the priorities are equal, the dimension
containing the weighted average must be first in the D-Cube dimension order.
Forward weighted averages are present if an item is weighted by a high priority calculation, and
subtotals in other dimensions are medium or low priority. e.List review items are medium priority
subtotals in a Contributor application, regardless of any priority settings in the Analyst placeholder
e.List. Forward weighted averages are also present if an item is weighted by a medium priority
calculation, and other dimensions containing medium or low priority subtotals appear in the D-Cube
dimension order.
This restriction applies both to assumption and contribution cubes.

Example
In the following example, Sales is calculated as Total Units * Price and Price is weighted by Total
Units.


The weighted average in Price, Q1 is calculated correctly by Contributor if the weightings in cells
Total Units, Jan to Total Units, Mar are already calculated. In other words Q1 must be a higher
priority calculation than Total Units. Or, if equal priority, the time dimension should be later in
the cube than the Sales calculation dimension.
If Total Units is a higher priority than Q1, the cell Price, Q1 is calculated before the weightings in cells Total Units, Jan to Total Units, Mar are calculated. As a result, the weighted average cannot be calculated correctly.
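As an illustration only (the monthly detail values are hypothetical), the weighted average in this example has the form

\text{Price}_{Q1} = \frac{\text{Price}_{Jan}\,\text{Units}_{Jan} + \text{Price}_{Feb}\,\text{Units}_{Feb} + \text{Price}_{Mar}\,\text{Units}_{Mar}}{\text{Units}_{Jan} + \text{Units}_{Feb} + \text{Units}_{Mar}}

where Units stands for Total Units. Because the Total Units cells supply the weights, they must already hold their calculated values before the cell Price, Q1 can be evaluated, which is why the relative calculation priorities and the dimension order matter.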

D-Links
There are several things to consider about D-Links when designing a Contributor model in Analyst.
Target areas of D-Links are automatically protected in Contributor to prevent a D-Link from
overwriting data provided by a planner. You do not have to protect D-Link target areas using access
tables.

D-Links Run Automatically in Contributor


All model update D-Links must be included in D-Cube update lists, but there is no need for update
macros. This is because when a planner clicks on a tab in the Web front end, Contributor checks
whether any D-Links into the cube must be run. They do not need to be run if the source data for
the D-Links is unchanged. If D-Links must be run, Contributor runs all the update D-Links for the
cube in the order they appear in the D-Cube update list before the selected tab appears.
When a planner saves or submits, Contributor checks again to see whether D-Links must be run.
These links are run before data is sent back to the server.
Data in a model can be changed using the Contributor Administration Console through assumption
cubes (after application creation or synchronize), or by importing data. Planners see a changed
application after all changed data blocks are updated through the reconciliation process.
D-Links that are not included in D-Cube update lists are ignored, as are import D-Links, which are D-Links from external sources.

In Contributor, update links are used in the Contributor model only if the Execute check box for
the D-Link is selected.

Special D-Links
Lookup D-Links, internal D-Links, and some break-back D-Links run automatically as relevant
data is changed on a particular tab.
For example, with lookup D-Links, if a planner changes a D-List formatted value in the lookup
target cube and presses Enter, lookup D-Links are run into the cube.


Automatically executing internal D-Links can be useful for solving problems, such as expressing values as a percentage of a total. A lookup link can change its own source data so that the same D-Link
needs to be run again. In Contributor, such internal D-Links run until the source data stops changing
or up to a maximum of 100 times.
A D-Link that targets a subtotal to perform a break-back allocation to detail items is called a break-back D-Link. The detail items can be writable by a planner, although normally another D-Link supplies the weightings for these detail items. The planner can change values in these detail items if they are not supplied by another D-Link. The break-back D-Link runs automatically when the planner presses Enter.

D-Links and Data Types


The Contributor D-Link engine has a formal treatment of data types that Analyst does not have.
Contributor recognizes four data types: number, date, text, and D-List formatted.
Some operations on these data types are not supported in Contributor. For example, the Contributor
link engine does not:
● transfer D-List formatted values into numeric formatted cells

● add numbers to D-List formatted items

● multiply dates

● subtract text from numbers

● add, multiply, or subtract text values or D-List formatted items

All operations on mixed data types are considered invalid by the Contributor link engine with one
exception. You can add numbers to or subtract numbers from dates.
The results obtained in Contributor when performing invalid data type operations depend on the
D-Link mode. Fill puts zeros into the relevant target cells, whereas Substitute leaves the relevant
target cells alone. Add and Subtract effectively behave like Substitute, adding or subtracting zero.
In Analyst, all operations on data types are permitted. The Analyst D-Link engine simply operates
on the underlying values.

Invalid D-Links
Invalid D-Links prevent Contributor applications from being created, and also prevent
synchronization.
D-Links that use the e.List in any way other than the following are invalid:
● A link from an assumption cube, which does not have an e.List, to a target cube that has an e.List, where nothing is selected on the e.List and it is left unmatched.

● A link where the e.List is present in both the source and the target, and the matching is done using match descriptions with the default options: Case Sensitive On, Match Calculated Target Items Off.

The following are invalid D-Links:


● A D-Link from a cube with the e.List that targets an assumption cube.


● D-Cube allocation tables that are not assumption cubes, which means they contain the e.List.

● Allocation tables that contain deleted dimension items but which have not had these items
removed or edited in Analyst.

● A D-Link that needs editing.


Analyst issues a warning such as:
A dimension has been removed from the target cube since the link was last saved. Please edit and resave the link.

● A D-Link that targets the wrong cube.


For example, you include a D-Link in an update list for a particular cube. Then, by editing the
D-Link, you change the target cube of the link, leaving the D-Link in the update list for the
original cube.

D-Links Between Assumption Cubes and Contribution Cubes


D-Links between contribution cubes and from an assumption cube to a contribution cube are
allowed.
D-Links between assumption cubes have no effect in a Contributor application, but do not cause
an error or warning. You should run the D-Link in Analyst before building the model.
D-Links from a contribution cube to an assumption cube are not allowed.

Dimensions
A dimension is also referred to as a D-List in Analyst. Consider the following when designing
dimensions:
● "Dimension Order" (p. 341)

● "Supported BiFs" (p. 343)

● "Format Priorities in Analyst and Contributor" (p. 345)

● "Scaling Differences Between Analyst and Contributor" (p. 346)

Dimension Order
As a general rule, in an Analyst model, choose dimensions in the following order:
1. Calculation D-Lists, such as P&L and Balance Sheet D-Lists.

2. The e.List.

3. Other aggregation D-Lists, such as products, customers, divisions, cost centers, regions, or subsidiaries.

4. Time D-Lists, such as months, quarters, or years. Only one timescale D-List can be chosen in each D-Cube.

5. Control D-Lists, such as Actual/Budget/Variance.


We recommend that the e.List is second in the dimension order, because this affects the size of the
data blocks. The data blocks store all detail cells for each cube, together with any calculated cells
for which the calculation comes earlier in the calculation sequence than the aggregations up the e.List. These calculations are referred to as pre-aggregation calculations. The dimension order is the
primary method for controlling the calculation sequence. As a result, the position of the e.List in
the dimension order affects the number of cells stored in the blocks, and therefore the block size.
In many cases you can choose a different dimension order without affecting the calculations, and
this can be used to minimize the block size.

Example
For example, in the cube Revenue Plan, the dimensions are
● Product Gross Margin

● Indoor and Outdoor Products

● Channels

● e.List

● Months

● Versions

With this order, the calculated items on the dimensions Indoor and Outdoor Products and Channels
are stored on the data blocks.
The dimensions can be reordered as follows without changing the calculation results
● Product Gross Margin

● e.List

● Indoor and Outdoor Products

● Channels

● Months

● Versions

The calculated totals on the products and channels dimensions are no longer stored on the data
blocks. They are recalculated when the data is loaded on the client or during publish. In general,
the e.List is not the first dimension because there is typically one dimension of the cube for which
the calculations must be pre-aggregation. However, in many cubes there are other hierarchical
dimensions in addition to the e.List (products and channels in the example), and the order of these
can be switched without affecting the calculations.
Low priority calculations are pre-aggregation and are always stored on the data blocks regardless
of dimension order. High priority calculations are post-aggregation and are never stored on the
data blocks.


Supported BiFs
When creating dimensions that are used in cubes in the Contributor application, the following BiFs
(built in functions) are available.
● @Cumul

● @Days

● @DaysOutstanding

● @Decum

● @Delay

● @DepnAnnual

● @Deytd

● @Differ

● @Feed

● @Feedparam

● @Forecast

● @Funds

● @Grow

● @IRR

● @Lag

● @Last

● @Linavg

● @Mix

● @NPV

● @Repeat

● @Time

● @Timesum

● @TMin

● @TMax

● @TRound

● @Ytd


Switch-over Dates and Generic Time Scales


A switchover date is used by certain BiFs to define the dividing point between past and future.
Generic time scales have no date associated with each period and all the periods are the same length.
For all BiFs, switch-over dates cannot be used in conjunction with generic time scales.
For @NPV and @IRR, use of generic time scales is not supported.

@Last Differences
@Last looks back along the series of data in the input row and returns the most recent non-zero
value.
In Analyst, any positive number greater than 1E-13 is non-zero. Negative numbers must be greater
than -1E-12.
In Contributor, any positive number greater than 1E-15 is non-zero. Negative numbers must be
greater than -1E-14.

@Time Restrictions
The implementation of @Time in Contributor is identical to the implementation in Analyst except
for the following restrictions:
When using Method 1, the calculation gives different results depending on whether the dimension on which the calculation is defined comes before or after the e.List in the D-Cube's dimension sequence. In other words, its results depend on the time at which it is executed.
@Time is not supported in Contributor in the circumstances listed below. In all these cases the
function returns zero.
● Method 2 (date last saved) is not supported. If you use @Time(2) in a D-List, a warning appears
while you create or synchronize the application, and the result is always 0.

● Methods 3, 4, 5, 8, 10, 12, and 16 return a result of 0 with generic timescales.

● Methods 9 and 15 return a result of 0 with a generic timescale, or if the switchover date is not
set.

For information about using these built in functions, see the Cognos 8 Planning - Analyst User
Guide.

Date Formats
The following date formats are supported. Using any others prevents a Contributor application
from being created and prevents synchronization:
● DD/MM/YY

● DD.MM.YY

● MM/DD/YY

● MM.DD.YY

● DD-Mon-YY


● DD Mon YY

● Day DD-Mon-YY

● YY-MM-DD

● YYYY-MM-DD

● YYMMDD

● YYYYMMDD

● DD-MM-YYYY

● MM-DD-YYYY

● DDMMYYYY

● MMDDYYYY

● DD/MM/YYYY

● MM/DD/YYYY

● DD.MM.YYYY

● MM.DD.YYYY

Format Priorities in Analyst and Contributor


In Analyst, there is a rule of precedence between the various formats, based on a priority among
the data types as follows:
● dimension (first)

● text/date/time/number (equal second)

A dimension format applied on any dimension overrides a text/date/time/number format on any other dimension. If different formats with the same priority are applied on different dimensions,
the first of these formats is used, taken in dimension order. Analyst treats formats applied to the
D-Cube consistently with those applied to dimensions: a format applied to the cube is included in
sequence after all the other dimensions. Thus, a dimension format applied to the cube overrides
text/date/time/number formats on the dimensions, but is overridden by another dimension format
on a dimension.
In Contributor, the format is tied to the calculation priority and dimension order. This means that
a format on a calculated item applies to all cells to which that calculation applies. However, this
means that there are cases where Contributor and Analyst resolve the formats differently when
formats are applied to detail items on two dimensions.
Lower ordered dimensions take priority in Contributor, which means that the second dimension
has formatting priority over the first, and the third has priority over the second. High priority
calculations have priority over any format on a detail item. A high priority calculation format on
the third dimension overrides a high priority calculation format on the second dimension, and so
on.


Scaling Differences Between Analyst and Contributor


Where an item in a D-List has a scaling format applied, the behavior is different in Analyst and
Contributor.
● In Analyst the numbers are entered, ignoring any scaling applied in the formatting. For example,
an item is scaled to 1000s. If you type in 22k, it shows as 22, because the underlying value is
22000. If you type in 22, it shows as 0, because the value is 0.022k, assuming that less than
two decimal places are showing.

● In Contributor, the numbers are entered as shown. If the cell shows 1000 for an underlying
value of 10, and you type in 1200, the new value shows as 1200 with the underlying value now
being 12.

Creating Applications with Very Large Cell Counts


When creating or synchronizing an application, the system must check for forward-referenced
weighted averages. It opens as many items from each cube as necessary to see whether
forward-referenced weighted averages exist. In some cases, this can cause the system to run out of
memory, such as when very large cubes are used in the Analyst model. Access tables are applied to
cubes to reduce their size when used in Contributor.
Rules exist to determine which items are opened. All items are selected from any dimension that
has any calculations other than simple sums, has weighted averages or other calculation options,
or has formats or time scales. If the dimension is all-detail, only the first item is selected. If the
dimension has some calculated items, the first level subtotal with the fewest children is selected,
with its children.
For most large cubes, the entire cube is not opened. Problems can occur when every dimension that
is not a simple hierarchy consists of a number of detail items, and a single total. In such a case, the
entire cube would be opened.

Example
You can make small changes to such cubes so that the application can be created. For example,
this dimension would be opened in full:
A
B
C
:
:
Z
Total (A to Z) = A+B+C+...+Z
You can include additional calculated items to the hierarchy so that these items are opened instead
of the total of all detail items.
For example, you can add this extra dummy total.


Total A = +A
Then, when the check for forward-referenced weighted averages is performed, only items A and Total A are used, reducing the memory requirements by a factor of more than 10. Such a dummy total
can be excluded from the Contributor application by setting it to no-data in an access table.
It is also good practice in such cases to keep the Analyst cube as small as possible by including only
a single-item e.List.

Break-Back Differences Between Analyst and Contributor


Break-back in Contributor is slower than in Analyst. However, forward calculations in Contributor are the same speed as in Analyst, or slightly faster.

Analyst<>Contributor Links
You can transfer data between Analyst and Contributor using the Analyst D-Link function. All the
standard features of a D-Link are available, such as the use of A-tables, D-Cube allocations, local
allocation tables, match descriptions, and matching on codes.
These types of links are available:
● Analyst to Contributor

● Contributor to Analyst

● Contributor to Contributor

For small amounts of data, an Analyst<>Contributor link can be a quick and effective method of transferring data. However, for large amounts of data, it is more effective to use Administration links (p. 145).
Analyst<>Contributor links work in the same way as a standard Analyst D-Link. They treat
Contributor as a single large cube, which means that with large models, you can quickly run into
memory problems. We recommend that Analyst<>Contributor links be used only for ad-hoc transfers
of small amounts of data of no more than 5 to 10 e.List items.
You can avoid memory problems for links that target an entire e.List in Contributor by using the
@SliceUpdate macro. This macro processes the link in slices of the e.List, making it a much more
scalable solution.
Most D-Links that have Contributor as a source or target behave the same as standard Analyst
D-Links. The few exceptions are as follows:
● Only cubes that contain the e.List are available as a source or target for Analyst<>Contributor
links.

● Lookup D-Links are not allowed when Contributor is the target.


This is because Lookup D-Links depend on the data in the target cube. In a Web environment,
this data can be changing all the time.

● You cannot target calculated items in Contributor.


This includes totals on the e.List dimension as well as any total in other D-Lists.

● Match descriptions in Analyst D-Links to or from Contributor treat the pipe symbol as a blank.
The pipe symbol is used in Analyst as a line-feed for column headers. It is stripped out when
you create a Contributor application from an Analyst model.

● You cannot target an assumption cube in Contributor, or use it as a source.

● You cannot target No Data cells as defined by Access tables in Contributor.


However, as the Administrator, you are not subject to the restrictions imposed on Contributor
users entering data using the Web client. Hidden and read-only cells are not applicable. You
can write to these cells just as you can using normal import routines.

● Analyst<>Contributor links cannot be run inversely.

Otherwise, most D-Link types are permitted. You can use Match Descriptions, local allocation
tables, A-tables, and D-Cube allocations. You can cut subcolumns, so that you can match on codes.
You can run accumulation links both ways, but lookup links run from Contributor to Analyst only.
If you use a saved allocation table and rename D-List items in the Contributor application when
using Contributor as a source or target in a D-Link, the allocation table must be manually updated
for the link to work.
Analyst users who do not have the Contributor Administration Console installed are not able to run Analyst<>Contributor D-Links.
When you install the client tools onto a workstation, they are installed only for the user doing the installation.
To run a Contributor<>Analyst link, users must have Analyst and the Contributor Administration
Console installed. They must also have rights to Analyst and the appropriate Contributor
applications.
In addition, organizations may prevent access to the database or the Web server using the IP address,
limiting who can run these D-Links.
D-Links from ASCII and ODBC directly into Contributor are not allowed. You must use Contributor
Import to do this.

Set Up a Link Between Analyst and Contributor and Between Contributor Applications
You set up a link between Analyst and Contributor or between Contributor applications in the
same way as you would a standard D-Link, choosing Contributor as the source or target of a
D-Link. Only cubes that contain the e.List are available as a source or target for these links.
All the data is prepared in Analyst.

Steps
1. In the Analyst D-Link editor, choose Contributor Data as the source or target.

2. If more than one datastore server is available, choose one.


3. Click a Contributor application.

Tip: You may need to click the Refresh button to the right of the Name list to view the list of
available Contributor applications.

4. Click Test Connection, and click OK.

5. Click a Contributor cube.

6. Pair dimensions against the target (or source) cube as you would for a standard D-Link.

Analyst>Contributor D-Links
These links can target either the production or the development version of Contributor. If a link targets the development version, the transferred data appears on the Contributor screens only after the Go to Production process is completed.

Important: If targeting the production application, the link changes the data, even if the user has
submitted data.
When you run an Analyst>Contributor link that targets a development application, the data is read
out of Analyst when you run the D-Link. When you run the Go to Production process in the
Contributor Administration Console, or through Automation, the prepared data is written directly
to the import queue in the data store for the Contributor application as a prepared data block, e.List item by e.List item.
There may be a delay between the Go to Production process and the data being reconciled in the
Web client. If, in the meantime, a planner edits one of the cells targeted by the link, that cell is
overwritten during reconciliation. This behavior is very similar to the reconciliation that takes place
when you import data into Contributor from text files or using DTS.
When you run an Analyst to Contributor link that targets the production application, the data is
read out of Analyst when you run the D-Link. An automatic activate process is run that applies the
data to a cube. If running the link using macros, you must run the @DLinkActivateQueue macro.

Contributor>Analyst Links
When you run a Contributor>Analyst link, the following occurs:
● A snapshot is taken of the production version of the Contributor Application.
To ensure a consistent read if you are using the @SliceUpdate macro, take the Contributor
application offline, or use the @DLinkExecuteList macro.

● A publish job is created and immediately set to ready.


Note that the job is not run, because the Contributor job executor is not used. Analyst transfers
the data.

● A Contributor session is loaded and the entire data block is loaded for each e.List item.
If the link is set up for more than one e.List item, it is equivalent to loading a multi-e.List item
view which is very memory intensive.


● The data is written directly to the Analyst cube data file (H2D file).

Contributor>Contributor links
These links go from the production version of a Contributor source to either the development or
production version of the Contributor target.
They are typically used between separate applications. If the applications are small,
Contributor>Contributor links can be fast. However, if you transfer data between larger applications
this way, you may run into memory and scalability problems. You can avoid
these issues by using the @SliceUpdate macro. It can be more effective to use administration links
in the Contributor Administration Console, which copy data between Contributor cubes and
applications. This process is scalable and can move large volumes of data into either the development
or production version of the Contributor application.

Copying Analyst<>Contributor Links


There are three methods for making copies of links: Save As, Library Objects, and Library Copy Wizard. These three methods of copying Analyst<>Contributor links affect whether the link refers
to the original application or a new application.

Save As Method
This method results in a copy which refers to the original Contributor application(s) and/or Analyst
D-Cubes.

Steps
1. In Analyst, open the link.

2. From the File menu, click Save As.

3. Choose a Library in which to copy the link.

4. Enter a name for the link copy.

5. Click OK.

Library Method
This method lets you select the link with or without other objects and choose either to copy or
move the link. This results in a link which refers to the original Contributor application(s) although
the source or target Analyst D-Cube (if it is not a Contributor > Contributor link) could be changed
by this method if certain reference options are chosen when copying.

Steps
1. In Analyst, from the File menu, click Library, Objects.

2. Select the link with or without other objects and move it down.

3. Click the Copy selected objects button.


4. In the Copy dialog box, enter a new name for the link, select a target library in which to copy
the link, and select how to remap references.

5. Click OK.

Library Copy Wizard Method


This method lets you make a copy of an Analyst<>Contributor link which refers to a new application
based on a copied library.

Things to Watch Out For


● Using the Library Copy wizard to create duplicate objects within the same library does not work for making copies of links. If you copy template D-Cubes containing the e.List and include the e.List itself in the selection of objects, the e.List is also copied. As a result, when you synchronize Contributor, the new cube that you created is an assumption cube because it does not contain the original e.List.

● A link can only be pointed to an application where it will refer to template cubes which were
copied at the same time. You cannot copy a link into a library which already contains suitable
template cubes and then refer the link to an application based on that library.

● If you make a copy of a link using this method and copy the link and its associated objects at
the same time, then you will not be able to refer the link back to the original application. You
will have to make a new application based on the copied library and refer the link to this new
application.

● If you copy macros which refer to Contributor applications using the Library Copy wizard,
then the macros will continue to refer to the original application. You must open the copied
macros and manually edit them to refer to any new applications based on copied libraries.

Steps
1. Use the Library Copy wizard to copy the link and any related Analyst template cubes at the
same time.

2. Create a Contributor application based on the copied Analyst template D-Cubes.

3. Point the link to your new application in one of two ways:
● Open the link and then select your new Contributor application when prompted.

● From the File menu, click Library, Objects. Double-click the link to move it down, and then right-click the link and select Change Contributor Source on D-Links.

Links and Memory Usage


The following factors affect memory usage when transferring data using links:
● the density of data

● the available RAM


● the use of multi-e.List item views with access tables, the size of which is not decreased as much as that of single e.List item views.

● the maximum workspace setting (MAXWS) in Analyst

This is the amount of space reserved for Analyst. As a general rule, this should not be more than
half the available RAM. If you set this option too high, the Analyst process can use so much memory
that it does not leave enough for the Contributor process.

Update a Link from a Computer That Cannot Access the Original Datastore
If a Contributor cube is used as a source or target, and the link is opened from a computer that
cannot access the original datastore, you are prompted to reselect the connection and application
to point to the data store and application name that holds the cube the link was built on. All
matching is then preserved. Save the link so it will run in the future.
Multiple data sources can be used. If two applications are built from the same Analyst library, the
GUIDs match when pointing the link to the original data store.
To run a link from a workstation that does not have access to the original datastore you must
manually open the link and reselect the connection. You can also update the connection for several
links at once.

Steps
1. From the File menu, click Library, Objects, select one or more links that you want to update, and move them to the bottom pane.

2. In the bottom pane, right-click and click Change Contributor Source on D-Links.

3. Enter the connection details for the new data store.

4. Select the appropriate substitution option.


This updates all the selected links with the new connection details.

Multiple D-Links Using the @DLinkExecuteList Macro


@DLinkExecuteList is a macro designed to run a series of D-Links in order.
The @DLinkExecuteList macro behaves similarly to a series of @DLinkExecute steps, with a subtle
difference when D-Links have Contributor as a source. When the macro runs the first D-Link that
has Contributor as a data source, it logs the time and reads the database. All subsequent D-Links
that have the same Contributor source use the same data. This ensures consistency across D-Links
coming from the same Contributor data source. If a subsequent D-Link in the macro or submacro
has a different Contributor data source, the old source is closed and the new one is opened.

Run D-Links While Making Model Changes


If you want to make changes to the Contributor model and import data into Contributor using
Analyst>Contributor D-Links, synchronize first and then run the D-Link. For instance, if you have


inserted a new product as part of a model change, you cannot import data into the new product
until the Contributor model is synchronized. You must then run Go to Production to activate the
model and data changes.

Example
You can use Contributor>Contributor links to preserve data during cube dimensional restructuring,
like when adding a dimension.

Steps
1. Take the Contributor application offline.

2. Run a link from the production version of the Contributor application to the import queue of
the development application.

3. Run Go to Production.

Effects of Fill and Substitute Mode on Untargeted Cells


In Contributor<>Analyst D-Links, Fill and Substitute modes behave the same way as in standard D-Cube to D-Cube D-Links in Analyst.
Fill and Substitute modes generally apply only to lookup and accumulation D-Links. In Substitute
mode, the data in the untargeted area of the D-Link remains unchanged. In Fill mode, the untargeted
cells are set to zero when the D-Link is run.
However, this applies only to lookup and accumulation D-Links. In standard D-Links, if an item
is not included in the right side of an allocation table, the original target data is unchanged, regardless
of whether you use Fill or Substitute mode. Similarly, on normal real dimension pairings that use
match descriptions, unmatched items are unchanged when the D-Link is run.
In Contributor, for a lookup or accumulation link that targets both detail and calculated items at
the same time, the zeros to fill and the data to set are merged into a single update of the target cube.
In Analyst, for lookup and accumulation D-Links, any cell within the target area of the cube is set
to zero if no data exists in the source cube for that cell. The fill is applied first to zero the data, and
then the data is written as if in substitute mode.
The different Fill methods in Analyst and Contributor cause different break-back behavior to occur. If the Analyst behavior is required in Contributor, you can use one link to zero the data, followed by an accumulation link running in Substitute mode.

Example Table One


The following table shows the action applied to untargeted cells for different types of D-Links when
a D-Cube or Contributor is the cube source.

D-Link type           Fill    Substitute
Allocation Tables     Keep    Keep
Lookup                Zero    Keep
Accumulation          Zero    Keep
Match descriptions    Keep    Keep

Example Table Two


The following table shows the action applied to untargeted cells for different types of D-Links when
a file map (ODBC or ASCII) is the cube source.

D-Link type           Fill              Substitute
Allocation Tables     Keep              Keep
Lookup                Not applicable
Accumulation          Not applicable
Match descriptions    Zero              Keep

Effect of Access Tables in Contributor


If Contributor is the source, cells marked as No Data are treated as zero when running a D-Link
into an Analyst or Contributor target D-Cube.
If Contributor is the target, you cannot target No Data cells as defined by Access Tables in
Contributor. However, as the Administrator, you are not subject to the restrictions imposed on
Contributor users entering data using the Web client. You can write to these cells just as you can
using normal import routines.

Appendix A: DB2 UDB Supplementary Information

This section provides an introduction to Cognos 8 Planning - Contributor for administrators with responsibility for DB2 Universal Database (UDB) databases within a large enterprise, specifically IBM DB2® Universal Database (UDB) version 8.1 for UNIX/Windows/Linux.
It assumes a familiarity with the tools provided by the database and a knowledge of security and
backup practices within the enterprise.

The Contributor Datastore


The implementation of a Contributor datastore is database specific: on DB2 UDB each application
exists as a separate schema.
The following table shows the database terminology used throughout this section and the DB2 UDB equivalents.

Cognos 8 Planning terminology   Description                                          DB2 UDB equivalent

Datastore server                The location where one or more datastore             Database
                                applications are stored.

Datastore application           The container that holds the datastore objects       Schema
                                for the Contributor application, such as tables.

Publish datastore               A datastore container to which a Contributor         Schema
                                administrator publishes data.

Requirements for the DB2 UDB Database Environment


Consider the following requirements when creating a DB2 UDB database environment for Cognos
8 Planning data.
● For DB2 UDB, the target database for any operation must have been created beforehand by a database administrator (DBA). We recommend that you create a new DB2 database to host Cognos applications (see the example after this list). You may also choose to create a separate database to host Contributor publish datastores.


Installed component                   Required client
Contributor Server                    DB2 Administration client
Contributor Administration Client     None

● Connections are made to the Cognos 8 Planning DB2 UDB database, and all SQL statements are fully qualified as SCHEMA.TABLENAME.
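The following is a minimal sketch of the kind of command a DBA might use to create such a database; the database name PLANDB, the code set, and the territory are placeholders, not values required by Cognos 8 Planning, and should follow your own site standards.

-- Hypothetical example: create a DB2 database to host Contributor datastores
CREATE DATABASE PLANDB USING CODESET UTF-8 TERRITORY US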

For more information on installation requirements and procedures, see the Cognos Global Customer
Services Web site.

Background Information For DB2 UDB DBAs


Contributor has been designed to respect local enterprise database best practice. With the correct permissions, Contributor works without your intervention, creating and deleting tables. You can also set up Contributor so that you can review, amend, and execute DDL scripts against the enterprise database. This is done using the Generate Scripts (GEN_SCRIPTS) option, which can be turned on or off.

Security and Privileges


Specific operations in Contributor expect to issue data definition statements against the DB2 UDB
database. The user ID that is used to connect to the database to perform these operations must have
the appropriate privileges in DB2 UDB.
The privileges required for Contributor accounts are determined by setting the Generate Scripts
(GEN_SCRIPTS) option. Enterprise Planning Series components create schemas and tables and
populate them with data. Therefore, users specified for connections must have been granted privileges to create schemas and tables.
If you allow it, the Contributor application can do the following:
● create and drop schemas

● create and drop tables

● create and drop indexes

● create and drop views

Alternatively, the application generates DDL scripts to do these things. You can then review the
scripts and execute them yourself. Additionally, Contributor needs to be able to bulk load data
using the DB2 UDB import utility, as well as carry out a non-logged delete of all data in a table.
When determining whether to generate DDL scripts, you may need to consider whether Contributor
should execute DDL against your enterprise database without review. You should also consider
whether local policy allows the Contributor security context sufficient privileges to be able to create
and remove Contributor datastores.


You may want to generate DDL scripts to comply with your own storage standards or to customize
storage clauses to take advantage of sophisticated enterprise storage. You may also want to amend
or add sizing clauses to the Contributor default DDL.

Naming Conventions
Static object names are the same across all applications and do not change during the life cycle of
the Contributor application. Examples of static objects include the applicationstate table, which
contains the Contributor application definitions, and the history table, which logs events and data changes.
Dynamic objects, primarily the import tables and publish tables and views, are named after objects
within the Analyst model. Object names correspond to Contributor model object names with a
subsystem prefix. Examples of dynamic objects include im_cubename, which contains the import
staging tables.
During application creation, Contributor forces dynamic object names and Contributor application
datastore names to conform to the following conventions:
● only lowercase letters a to z and numeric characters are allowed

● no punctuation is allowed except for the underscore

The application datastore name defaults to the name of the Analyst library that is used to create
the application. Dynamic object names are based on the Contributor object name to which they correspond; for example, the publish data table for a cube is named et_cubename.

Metadata
Every Contributor datastore contains the metadata subsystem. The content of the metadata tables
is critical to the functioning of the Contributor application. The metadata provides a mapping from
internal Contributor model identifiers to the external database objects.
DDL scripts may be amended to conform to local storage conventions. You must not amend database
object names within the DDL script or allow the information contained within the metadata tables
to become out of sync with the underlying database objects.

Backup
Contributor does not back up data stored in the DB2 UDB database. You must back up the Contributor datastore using tools supplied by other vendors, such as the DB2 backup utility (see the example after this list). We do not anticipate problems restoring from backups that use these tools, provided that
● the backup is taken of the whole datastore application (and the CM datastore)

● no attempt is made to restore individual tables from backups taken at different times
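As a minimal sketch, such a backup can be taken with the standard DB2 backup utility; the database name and target path below are placeholders, and whether an online backup is possible depends on your logging configuration.

-- Hypothetical example: back up the database that hosts the Contributor datastores
BACKUP DATABASE PLANDB TO /db2/backups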

Standards
All SQL is standard ANSI SQL and is executed via ADO / OLEDB.


The design of the datastore objects removes the need for complex table joins (the only place JOINs are used is within the Reporting Views), and the few SORTs are typically on small result sets.
Data for transmission over HTTP (to and from the users entering the numbers into the model) is
compressed and stored as XML documents.

Preventing Lock Escalation


By default, Contributor expects that DB2 uses row-level locking for concurrency control. Table locking or any lock escalation may prevent Contributor from functioning normally. In particular, operations such as Validate Users and Reconcile do not work when table locks are applied.
Validate Users checks to see if users in the Web client have the rights to access the Contributor
data.
Reconcile ensures that the structure of the Contributor data is up to date in the Web client.

Steps
1. Set locks to default to row-level locking and try to avoid upgrading the locks to table-level
locking.

2. To prevent lock escalation, ensure that the LOCKLIST and MAXLOCKS settings are not too small (see the example that follows).
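A minimal sketch of the corresponding DB2 commands follows; the database name, schema, table, and values are illustrative placeholders and must be sized for your own workload.

-- Hypothetical example: give the lock list room to grow so that locks are not escalated
UPDATE DB CFG FOR PLANDB USING LOCKLIST 8192 MAXLOCKS 60
-- Row-level locking is the DB2 default; it can also be stated explicitly for a table
ALTER TABLE planapp.nodestate LOCKSIZE ROW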

Large Object Types


Large object types (LOBs), that is, binary large objects (BLOBs) and character large objects (CLOBs), are used to store the XML documents that comprise compressed data. Examples of this are the storage of the model definition as well as user data for transmission over HTTP. In addition, users are able to submit free-form comments or annotations, which are formatted into XML documents and stored as large objects.
You may have a policy on large object storage. The Contributor application allows you to specify
custom tablespaces for data, indexes, and large objects.
Alternatively, you can use the default tablespace, USERSPACE1.
We recommend that you store large objects within system-managed tablespaces. Projecting storage
and growth is essential to a successful long-term implementation. While most of the tables used for
Contributor datastores will grow and shrink, such as jobitem, other tables should be monitored
for growth, primarily nodestate.
Update of LOBs is via the OLEDB driver for DB2 UDB, IBMDADB2.

Note: Currently, DB2 UDB does not use the buffer pool to manage LOBs. The tablespaces chosen for tables containing LOBs should be placed in file containers that are buffered by the operating system.
For more information, you may want to refer to the IBM DB2 documentation on performance
considerations for LOBs.
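As a sketch of the system-managed tablespace recommendation above, such a tablespace might be created as follows; the tablespace name and container path are placeholders that you would then select when specifying custom tablespaces for the application.

-- Hypothetical example: a system-managed (SMS) tablespace whose file containers
-- are buffered by the operating system, suitable for tables that hold LOB data
CREATE TABLESPACE plan_lobs MANAGED BY SYSTEM USING ('/db2/planning/lobdata')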


Job Architecture
Contributor operates a consistent and proven code stream across multiple database providers; that
is, a large proportion of the code (excepting database administration and DDL functions) is common
across different databases. The code is distributed within a classic n-tier architecture.
Data processing is carried out by job servers via the job architecture
A job may contain multiple job items which represent atomic items of work. Jobs are queued for
execution and picked up automatically.
A single-processor machine normally executes a single job item at a time. Job items are executed
by job servers.
Members of the job server cluster identify items of work by polling the job subsystem within the
Contributor datastore at regular intervals. Each job server continues to execute job items until no
more work exists. An individual job server may be asked to monitor one or more Contributor
applications. A job server may therefore be polling one or more Contributor datastores which
contain job subsystems within the Contributor environment.
The job architecture enables database administrators to limit the number of DML operations carried
out against the enterprise database by adding and removing job servers from currently executing
jobs.

Concurrency
UDB configuration parameters related to applications are dependent on the expected concurrency
on the database.
For Cognos 8 Planning, database concurrency is a function of the number of job servers and the number of threads per server. The maximum number of concurrent connections can be determined by adding up all the active job tasks for all applications, plus the epjobexec job itself, plus active connections for the Administration Console, plus any run-time server-side components.
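As a purely hypothetical illustration of this sum: two job servers each running four active job tasks (8), plus the epjobexec job (1), plus two Administration Console connections (2), plus one run-time server-side component (1) gives approximately 8 + 1 + 2 + 1 = 12 concurrent connections to allow for.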

Capacity Planning
Capacity planning and system sizing is dependent on model size and the number of Contributor
applications. Data volumes may grow during Publish. For more information, see "Reporting Data:
Understanding the Publish Job Process" (p. 360).

Importing and Reporting Data


Data is imported and published using two separate jobs: Import and Publish.

Importing Data: Understanding the Prepare Import Job Process


Existing planning data may be imported into Cognos 8 Planning - Contributor import staging tables (prefixed with im_). This functionality is supported for tab-separated files via the Administration Console and uses the DB2 import utility (a sample command is shown at the end of this section).


Alternatively, you may choose a tool such as Cognos Data Manager to populate the tables directly.
Import Data Process
❑ Whichever method you choose, the import data is processed and compressed by the Prepare
Import job and data is made available to web client users by the Reconcile job.

❑ The Prepare Import job retrieves data from the datastore, processes it, and reinserts it into the
application datastore in XML format.
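As a minimal sketch only, a tab-separated file could be loaded into an import staging table with the DB2 import utility as follows; the file name, schema name, and cube name are placeholders for your own application objects.

-- Hypothetical example: load a tab-separated file into an im_ staging table
IMPORT FROM revenue_plan.txt OF DEL MODIFIED BY COLDEL0x09 INSERT INTO planapp.im_revenueplan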

Reporting Data: Understanding the Publish Job Process


When publishing data, Contributor
● publishes to a separate datastore, where data expands from a compressed large object XML
format to a relational tabulated format
In this scenario, publishing data may require increased database processing resources. Tuning
logging, tablespace, and database parameters can help minimize the overall impact on your
database performance.

● using import load replace, truncates potentially large tables, or issues a large number of simple SQL insert statements, at the start of the job, followed by a bulk load of data per node, per cube, plus annotations

Logging needs to be monitored in this case to avoid a drop in performance.


The process of publishing data results in more storage being consumed because of how objects are represented. Although a conventional relational table can contain LOB columns, the Publish process uses LOBs to hold encoded data, which expands when it is transformed into simpler representations, so storage requirements are increased further.
The administrator can limit the data to be published and limit the volume of data by adding a
publish data dimension.

Data Loading
Publish is broken up into units of work and processed via the job cluster. Data is uploaded using
the DB2 import utility.
Contributor supports options to accumulate all the data into large text files before uploading to
the target tables in the publish datastore. This is an interrupted publish. It does not reduce the size
of the publish data but it may fit more easily into enterprise procedures.

Job Failure
If an attempt at loading data fails because of inadequate disk space, the Publish job is cancelled. After you have allocated more tablespace, the Contributor Administration Console user should run the job again from the beginning.
If Cognos 8 Planning fails to create a table during Publish, then the next time Publish is run, the application attempts to create the table again.

Appendix B: Troubleshooting the Generate Framework Manager Model Extension

Use this troubleshooting information to help solve problems that you may encounter when generating Framework Manager models.

Unable To Connect to the Database While Using Oracle


To correct an error message that Generate Framework Manager Model cannot find the correct
ODBC Driver (installed when you install Oracle), specify which Oracle ODBC driver to use in
Configuration Manager.

Note: Make sure you specify the Oracle driver and not the Microsoft ODBC Driver for Oracle.

Unable to Create Framework Manager Model


You may get an error message stating that you are unable to create a Framework Manager model
using the Framework Manager Script Player. A log file is created at installation_Location\
DOCUME~1\cognos01\LOCALS~1\temp\6\BMTReport.log.
This log file has two sections. The first section is the output generated using the Framework Manager
Script Player. The second part contains the actions that were executed.
Search on the word "skip" to see the errors in the log file.
In the Framework Manager Script Player section, look to see if the log file contains the following
error (the database values may differ):
Action: FindOrCreateDataSource failed, skipping…
Reason: QE-DEF-0285 Logon failure.
QE-DEF-0321 The userid or password is either missing or invalid.
QE-DEF-0068 Unable to connect to at least one database during a multi-database attach to 1 databases(s) in: test_sales_market_table
UDA-SQL-0031 Unable to access the "test_sales_marketi_table" database.
UDA-SQL-0129 Invalid login information was detected by the underlying database.
ORA-01017: invalid username/password; logon denied

This error has the following potential causes:


● The user running the Generate Framework Manager Model does not have access to the Signon
created for the data source connection.


● You are using Oracle or DB2 in a multi-computer environment and the configuration to access
the datastore is not configured in the same way on all computers.

Unable to Retrieve Session’s Namespace


You may receive error #CM-REQ-4159 stating that the session’s namespace cannot be retrieved.
This error may occur

● at the very end of the Generate Framework Manager Model process when the Finish button is
pressed and the system is trying to generate the Framework Manager model

● when testing the Gateway URL from Generate Framework Manager Model using a distributed
environment when the Cognos 8 BI Server is on the same computer as Cognos 8 Planning

To resolve this issue, delete the directory data source, and publish to a new container.

Steps
1. Stop and restart the services.

2. In Cognos Connection, in the upper-right corner, click Launch, Cognos Administration.

3. On the Configuration tab, click Data Source Connection. Delete any directory data sources.

4. In Analyst, republish the data to a new publish container.

Unable to Change Model Design Language


You cannot change the design language of the model that the Generate Framework Manager Model
Wizard creates. It is always English.

Appendix C: Limitations and Troubleshooting when Importing Cognos Packages

Use the following limitations and troubleshooting information to help solve problems you may
encounter when importing a Cognos Package into Cognos Planning.

Limitations for Importing Cognos Packages


The following are the known limitations for importing Cognos packages into Cognos Planning.

Aggregation of Semi-Additive Measures in SAP BW


The aggregation of semi-additive measures in SAP BW (aggregation exceptions for key figures in
SAP BW terms) is not supported in this release. An example of a semi-additive measure is anything
that can be classed as a movement, such as stock or headcount numbers. These measures can be
aggregated across some dimensions but not all.
These measures are fully supported by the Cognos SAP BW OLAP provider when used on its own.
Except for some aggregations that are not supported in Framework Manager, semi-additive
measures in other OLAP providers and relational sources are supported.

Tip:
● Extract the measures via the OLAP interface in a separate Administration link.

● Cognos Planning supports a wide range of aggregation types, for example, weighted averages.
You can load the leaf-level values from SAP BW into Cognos Planning for aggregation. This
requires that Cognos Planning is at the same level of aggregation as SAP BW which might
require a change to Cognos Planning or SAP BW.

● If you have Data Manager installed, and have a good working knowledge of it, you can bypass
the Administration link and achieve the desired result for most aggregation types by moving
the data directly into the Cognos Planning import tables.

Aggregation Support
SAP BW aggregation types that are not supported by Framework Manager, but that are supported
in Cognos 8 queries by pushing the aggregation to the SAP BW system, are not supported by the
new Cognos Planning access method for SAP BW.

Tip: You can load the leaf-level values and do the aggregation in Cognos Planning where more
complex aggregations can be achieved, but there are some aggregations that cannot be replicated.
This also requires that Cognos Planning is at the same level of aggregation as SAP BW which might
require a change to Cognos Planning or SAP BW. You can alternatively use the OLAP interface.


Administration Links Rows Rejected


It can be difficult to know which rows have been rejected or inserted by the Data Movement Service.

Tip: Switch the flags for the writeContributorXML and writeDmsSpecOnSuccess tags from False to True in the \Cognos\c8\bin\dataimportserviceconfig.xml file. A generated file named dmresult_<adminlink name>_<timestamp>.xml can then be used to see the records that are inserted, updated, or rejected.

Administration Links with Cognos Package as Source


Administration links that have a Cognos package as the source can move data only into Development,
not Production.

Tip: Use Macros to run Administration links and add a Go to Production step to move the data
into the Production application automatically.

Administration Links and Marked Data


When two or more numeric Query Items are marked as data and mapped to the same target, the data is not aggregated; instead, two rows are delivered to the import table.
The import table itself takes the last entry as the loaded entry, so if there are two deliveries into the
import table for the same target cell, then the last entry is used. This is normal behavior for the
import table.

Tip: Remodel the data in the source or in Framework Manager to avoid this scenario.

Non-Numerics Marked as Data Mapping


Non-numeric Query Items, such as text or dates, that are marked as data in planning links (both Administration links and D-Links) can only be mapped 1-to-1 in the manual mapping part of the link. A link cannot contain non-numeric Query Items marked as data if it has any 1-to-many mappings of marked data Query Items, even if only numeric Query Items are actually mapped 1-to-many; such links can contain only numerics.

Tip: Create separate links for the numerics and non-numerics. If non-numerics need to be mapped
as 1 to many, then adapt the Cognos Planning model to run the value in once and perform the 1
to many mapping using D-Links. You can create multiple Administration links if the model cannot
be changed.

Framework Manager Model and SAP BW Usage Limitation


When using the SAP BW feature in Cognos Planning, you can use only Single-Provider InfoCube
objects, not Bex Queries or InfoQueries.

Tip: There is currently no workaround for this except to use the SAP BW OLAP interface.

Framework Manager Expressions or Concatenations


Framework Manager expressions such as concatenations or IF THEN ELSE statements do not work across the OLAP and relational elements of the Framework Manager model, so a statement cannot include references to both the relational and the OLAP parts of the Framework Manager model.

Using BW OLAP Queries


If you use BW OLAP queries for IF THEN ELSE statements, then wrap string values in quotes and
do not use a mixture of string and numeric values in the same expression.

Multi-Provider SAP BW Cubes Not Supported


Multi-provider SAP BW cubes are not supported in the new SAP BW data access method for
Planning.

Tips:
● Use the InfoCubes that underpin the multi-provider

● Create an InfoCube specifically for Cognos Planning

● Use the Cognos OLAP interface to SAP BW

Administration Link and Data Movement


Because of a limitation in the Data Movement Service, an individual Administration link element
(one query) can only run against one processor, so additional processors do not make an individual
link element perform better.

Tip: To improve performance, you can create separate link elements within a single link so that, when the link executes, the link elements are executed in parallel. Alternatively, you can create separate links and run them in parallel.

Model Properties Not Automatically Updated


Model properties are not automatically updated when new objects are imported into the Framework
Manager model.
For example, if the model builder imports only some of the dimensions from a cube and then creates
a detailed fact query, and then imports additional dimensions, the new dimensions will not have
an important property set that is required by Cognos Planning. The detailed fact query subject does
not have the correct properties set for the dimensions added after the fact query was created.

Tip: The workaround is to delete the detailed fact query subject and recreate it.

Publishing Multiple InfoCubes in One Package


It is possible to have more than one InfoCube in a Framework Manager model and to run the Detailed Fact Query Subject feature against each of them. It is then possible to publish the entire model as a Package; however, Cognos Planning can use the Package only if the Cognos Planning user uses metadata from one InfoCube per link.

Tip: Publish each InfoCube within a model in its own Package.


Cognos Package Security


Only Cognos Packages with an assigned database signon can be used with Cognos Planning, because no logon dialog box is displayed during import.

SAP BW Hierarchies
Members in SAP BW hierarchies must have unique business keys, as assigned to the Business Key
role in Framework Manager, across all levels.
When working with SAP BW data, the Dimension Key field for any dimension should be hidden
in the Model (not the Package) - both for the OLAP and Detailed Fact Query Subject access before
the Package is published. It is not intended for direct use from within Cognos Planning.

Query Prompts
Query Prompts defined in Framework Manager are not supported in any Cognos Planning links.

Tip: Change the model in Framework Manager.

SAP Variables Not Supported


SAP variables that generate Prompts in Cognos 8 Business Intelligence are not supported by Cognos
Planning.

Tip: Do not use the SAP variables when the package will be consumed by Cognos Planning.

Troubleshooting Modeled Data Import


Use this troubleshooting information to help solve problems you may encounter when importing
data from a Cognos Package.
You can troubleshoot modeled data import functionality in the following ways:
● Viewing generated files that contain information about errors

● Using error messages to troubleshoot

● Techniques to troubleshoot problems with an import

Viewing Generated Files


There are several files available to help troubleshoot the Modeled Data Import functionality. All
of the files are written to your temp directory, usually: C:\Windows\Temp.
Some of the files are generated automatically, but for others you must activate the generation of
the files.

Files Generated Automatically


● dmrejected_<adminlink name>_<timestamp>.xml. Produced when rows from the data source
are rejected in the link processing. Not produced when rows are rejected in the Prepare Import
processing.


● <adminlink name>_Result.xml. On link error it is not produced. On link success it is always produced.

● <adminlink name>_ImportFile.xml. Deleted by Contributor/Analyst if link is successful.

● <adminlink name>.cmd. Deleted by Contributor/Analyst if link is successful.

Files Manually Activated


● contribXml_<timestamp>.xml. On link error: optionally, by dataimportserviceconfig.xml; on link success: optionally, by dataimportserviceconfig.xml.

● dmspec_<adminlink name>_<timestamp>.xml. On link error: always; on link success: optionally, by dataimportserviceconfig.xml.

● dmresult_<adminlink name>_<timestamp>.xml. On link error: always; on link success: optionally, by dataimportserviceconfig.xml.

Steps to activate the files


1. Open the dataimportserviceconfig.xml file in the bin directory. This file contains parameters
for the Modeled Data Import component.

2. Switch the flag from False to True for the following tags:
● writeContributorXML

● writeDmsSpecOnSuccess

Setting the value to true will cause the file to be written when the import succeeds.
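For reference, a minimal sketch of how these entries might look in dataimportserviceconfig.xml is shown below. The element layout is an assumption made for illustration (only the tag names writeContributorXML and writeDmsSpecOnSuccess come from the steps above), so match the structure to the tags in your installed file rather than copying these lines verbatim.

<!-- Illustrative sketch only; the element structure is assumed, not taken from the installed file. -->
<writeContributorXML>True</writeContributorXML>
<writeDmsSpecOnSuccess>True</writeDmsSpecOnSuccess>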

Generated Files
The following files are generated automatically to help you troubleshoot the import functionality.

dmrejected_<adminlink name>_<timestamp>.xml

This file is a raw output of rows that were rejected in processing the link within Data Manager.
These rows come directly from the datasource that the link reads. Rows are rejected when data
from the query items do not match expected target allocations in the target cube. For most data
sources, the rejected rows will contain data from the query items the link references, making
troubleshooting easier because, for example, the value in the rejected file would contain the
descriptions. However, for SAP Administration links where the Detailed Key Figures performance
enhancement is being used, the rejected rows will contain key values, not descriptions.
Example: dmrejected_TestCaseSap4_Wed Jan 10 12_29_32 CST 2007.xml

<adminlink name>.cmd
Used with <adminlink name>_ImportFile.xml, this file can be used to rerun the import outside of
Contributor. This can be useful if it is unclear whether the Modeled Data Import is actually being
executed. Problems might be uncovered by running the import outside of Contributor. Double-clicking the file will execute it. By adding a pause command after the Java command inside the file,
the command file window will stay open until a key is pressed.
Example: testAnalystFileWrite1.cmd
This file is created by Contributor and Analyst, and is deleted after the Modeled Data Import
completes successfully. If the Modeled Data Import data fails, the file remains. The file contains
the commands and parameters to run the Modeled Data Import. It also contains a valid passport
that is only good until it expires. If the valid passport is copied along with <adminlink name>.xml
while the import is occurring, a copy of the file can be used later. If the passport in a cmd file has
expired, the expired passport can be replaced with a valid one. A valid passport can be taken from
a recently created cmd file.

<adminlink name>_Result.xml

This file is used by Contributor to determine the success or failure of the import.
Example: TestCaseSap2_Result.xml
If the link executed successfully, this file contains a subset of the results contained in the
dmresult_<adminlink name>_<timestamp>.xml file. Also, if the dmrejected file (described above)
is created, this file will contain a message that lets you know where the dmrejected file can be found.
Contributor reads this file to get the resulting status of the import. This file name doesn't contain
a timestamp. If the link failed, this file will contain a portion of the exception message that can be
found in the Planning error log.

<adminlink name>_ImportFile.xml

Used with <adminlink name>.cmd, this file can be used to rerun the import outside of Contributor.
This file is created by Contributor and Analyst, and is deleted after the Modeled Data Import
completes successfully. If the Modeled Data Import data fails, the file remains.
This file contains the commands and parameters to run the Modeled Data Import, information
about the matched dimensions, unmatched dimensions, data dimensions, and import table connection
info and column names.

Manually Activated Files


The following files must be manually generated to help you troubleshoot the import functionality.

contribXml_<timestamp>.xml

This file can be used to check if Contributor or Analyst is producing a valid file for Modeled Data
Import. You can use this file to step through the Modeled Data Import code, if the model and cube
can be reproduced, or access to model and cube are provided.
Example: contribXml__Wed Jan 10 12_29_13 CST 2007.xml
This file contains the adminlink xml that the Contributor Application or Analyst model has sent
to the Modeled Data Import. It also contains information about the matched dimensions, unmatched
dimensions, data dimensions, and import table connection info and column names.


dmspec_<adminlink name>_<timestamp>.xml

This file can be validated against Data Manager's Data Movement Service schema to check it for
well-formedness and proper content. It can also be used to create a Data Manager package. Packages
can be imported into the Data Manager user interface to be inspected and executed. Problems with
the spec can be discovered when creating the package and executing the package within the Data
Manager user interface.
Example: dmspec_TestCaseSap4_Wed Jan 10 12_29_32 CST 2007.xml. This file contains the Data
Manager's Data Movement Service spec file. These are the commands sent to Data Manager to
import the data from a source to the target.

dmresult_<adminlink name>_<timestamp>.xml

This result information, before it's written out, is used to create the <adminlink name>_Result.xml
file. The <adminlink name>_Result.xml file is a subset of the information in this file.
Example: dmresult_TestCaseSap4_Wed Jan 10 12_29_32 CST 2007.xml
This file contains the result of executing the spec in the Data Movement Service. If the import was successful, the dmresult file will contain 'T' for the componentSuccess element, the number of rows read from the datasource, the number of rows rejected, and the number of rows inserted into the import table or written to the Analyst output file.
If unsuccessful, the dmresult file will contain either 'F' for the componentSuccess element and useful error message information, or no information at all.

Using Error Messages to Troubleshoot


Occasionally, import links fail. Likely causes of these failures are problems in interpreting the model metadata when determining what should be imported, use of fields that aren't allowed in links, or certain limits in XML parsing or data retrieval and filtering being exceeded.
When a failure occurs, error information is written to the PlanningErrorLog.csv file. Error messages
specific to the processing of the link itself will have "Admin Links Import" in the Component
column (regardless of whether Analyst or Contributor ran the link), "Data Import Service" in the
Source column, and the error message in the Error Description column.
The following error messages may occur when importing data from a Cognos Package. View the
description and fix options for each error message to help you troubleshoot how to make the link
run.

Added Query Items with Concatenation in the Expression


Error Message: DS-DBMS-E402: COGQF driver reported the following:~~COGQF failed
to execute query - check logon / credential path~~DS-DBMS-E402: COGQF driver
reported the following:~~GEN-ERR-0015 Initially, in data source type(s)
&amp;apos;BW&amp;apos;, function &amp;apos;ces_concatenate&amp;apos; is not
supported in &amp;apos;OlapQueryProvider&amp;apos;. After decomposition, in
data source type(s) &amp;apos;BW&amp;apos;,


Description: Query Items added to a Framework Manager model under a Query Subject that use the concatenation functions ('||' or '+') in the Query Item's expression are not supported in this release. The query engine used to retrieve data when processing the link is not able to handle these query items. It does not matter whether the link is SAP or not.

Fix: Added Query Items with concatenation in the expression can be used within Analyst to build
D-Lists that can be included in a cube. However, these same query items cannot be used as a source
query item within a link in Analyst or Contributor. To get the appropriate mapping to occur when
choosing the source query items for the link, pick one of the query items used in the concatenation
expression. Then, when mapping to the target dimension that includes the concatenated values,
map the single source query item to the target dimension and use a sub-string on the target dimension
to achieve the appropriate mapping.

Expected ConformanceRef Not Found


Error Message: Error encountered in Modeled Data Import of the Data Import Service.
~~Caused by: java.lang.RuntimeException: Processing of this link has been aborted
because the link contains a mixture of query items with and without the metadata
conformanceRef attribute.The conforamanceRef attribute for query item
[SottBwp1NikeBw].[Orders].[New Query Subject].[TestQueryItem] can not be
determined.This link will have to be run using a package that doesn't contain
the Detailed Fact Query Subject (aka Detailed_Key_Figures).~~~at

Description: ConformanceRef is a hidden Framework Manager attribute used to link the OLAP
query items to the relational query items and exists when the Detailed Fact Query Subject is created
for a SAP model. When processing a link with a SAP model that has the Detailed Fact Query Subject
created and a query item is discovered in the link that cannot be linked to the Detailed Fact Query
Subject, an exception occurs. Examples of this are a query item in a dimension or the Key Figures that has been added to the model since the Detailed Fact Query Subject was created, or a query item added under a query subject folder.
Query items like this have an expression that pulls values from one or more dimension or Key
Figures values. These can never be linked to the Detailed Fact Query Subject. If the first query item
of the link can't be referenced to the Detailed Fact Query Subject, then the Detailed Fact Query
Subject won't be used for the entire link, and the link should run successfully. But, if a query item
that can't be linked to the Detailed Fact Query Subject is processed after one or more query items
that can be linked, then the link will fail.

Fix: Deleting and regenerating the Detailed Fact Query Subject and republishing the package will
fix query items added to the Key Figures or a dimension.
When dealing with query items added to a query subject, deleting and not regenerating the Detailed
Fact Query Subject, then republishing the package will allow the link to run. However, the added
query item can't contain an expression using concatenation.

Entity Expansion Limit exceeded


Error Message: com.cognos.ep.dataimportservice.modeleddataimport.ModeledDataImport.
main(Unknown Source)~~Caused by: org.apache.axis.AxisFault: ; nested exception
is: ~~org.xml.sax.SAXParseException: Parser has reached the entity expansion
limit "64,000" set by the Application.~~~at org.apache.axis.AxisFault.makeFault
(AxisFault.java:129)

Note: The above error message occurred when the expansion limit was set to 64,000. The error
message would refer to 200,000 if a client were to encounter the error with the Cognos default
setting.

Description: The Framework Manager import uses Java when processing the link information from Analyst or Contributor. Java XML parsers have a built-in default limit on how large an XML document can be; exceeding this limit causes an exception to be thrown and the link to fail. The Framework Manager import code provides a configurable override for this limit: the Java default of 64,000 has been increased to 200,000 by the override value. However, it might be possible to build a link that exceeds this increased limit.

Fix: Open the dataimportserviceconfig.xml file located in the bin folder of the Cognos installation directory. Find the parameter named "EntityExpansionLimit" and increase the value; a suggested increase is 300,000. Increasing the expansion limit may mean that more internal memory is needed, causing an out-of-memory problem if the maximum heap size isn't also increased. Find the JavaMaximumHeapSize parameter and increase that as well. Doubling the value to 256M should be safe for 300,000, but memory limitations on the machine may still cause out-of-memory issues.
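A minimal sketch of the two changed parameters is shown below, assuming simple value elements in dataimportserviceconfig.xml; the exact element layout in the installed file may differ, so edit the existing entries rather than pasting these lines.

<!-- Illustrative sketch only; the element layout is an assumption. The values follow the suggestions above. -->
<EntityExpansionLimit>300000</EntityExpansionLimit>
<JavaMaximumHeapSize>256M</JavaMaximumHeapSize>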

Too Many Filter Values


Error Message: ~DS-DBMS-E400: COGQF driver reported the following on connection
&amp;apos;3&amp;apos;:~~Unhandled Exception~~databuild -- failed

Description: There is a limit on the number of filter values in the Cognos query engine, and exceeding that limit by building a query with a very large number of filter values causes the link to fail. This can happen in Analyst when building a D-Link with one or more matched dimensions mapping to large cube dimensions. It happens in Analyst or Contributor when the link contains one or more unmatched source dimensions that are filtered with a large number of values. It happens in Contributor when the link contains one or more matched dimensions manually mapped with a large number of values. Look at the link itself to determine whether the quantity of filters might be causing the problem. The point where a problem occurs is somewhere around 300 total filter values. If this error message is encountered, and the link deals with a large number of filter values, the fix described below is the best way to get the link to run.

Fix: Open the qfs_config.xml file in the configuration folder under the Cognos 8 installation directory. In the provider element named "OlapQueryProvider", add the following:
<!--Allow use of the optimization for IN operator--><parameter name="ApplyExtensiveComparisonOptimization" value="true"/>

Save the file and restart the Cognos 8 service.

Note: Turning this parameter on affects all queries, not just queries in Framework Manager links.
While performance may suffer when this is on, it can be turned off or removed from the configuration
file after the link has executed.
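For orientation, the parameter sits inside the existing OlapQueryProvider entry in qfs_config.xml, along the lines of the following sketch. Only the parameter line comes from the fix above; the surrounding provider element shown here is an assumption and will look different in your file.

<provider name="OlapQueryProvider">
   <!-- existing provider settings omitted -->
   <!-- Allow use of the optimization for IN operator -->
   <parameter name="ApplyExtensiveComparisonOptimization" value="true"/>
</provider>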


Comparing _businesskey With Random Values

Error Message: OP-ERR-0070 'Customer' can only be compared to actual values of
the query item. Filters referring to arbitrary string constants should be applied
to attribute query items.

Description: In the SAP models, each level in a hierarchy contains a query item that has a role of
_businessKey. This query item is not intended for use in links, and therefore should not be used.
This query item is a special field that contains specific key values. If the query item is compared to
values that are not in the field's domain of values, an exception is thrown.

Fix: Since these query items are not intended for use in links, they should be hidden from view in
the model (not the package) - both for the OLAP and Detailed Fact Query Subject access before
the package is published.

Techniques to Troubleshoot Problems with an Import


Perform the following techniques to troubleshoot problems with the modeled data import
functionality.

Using Data Manager User Interface to Troubleshoot an Import Problem


Perform the following steps to create a Data Manager package and import the package into the
Data Manager user interface.

Steps
1. Create a .bat file to convert the dmspec_<adminlink name>_<timestamp>.xml file into a package file.
● The command in the .bat file is:
"Cognos Installation Directory\bin\CatAdapterTest" -x "D:\DMSpec\generatePkg\TestCaseSap4.xml" -p "D:\DMSpec\generatePkg\TestCaseSap4.pkg"
where "D:\DMSpec\generatePkg\TestCaseSap4.xml" is the dmspec file to process and "D:\DMSpec\generatePkg\TestCaseSap4.pkg" is the resulting package file. This example shows the files copied or renamed to a name that is easier to type. The path to the xml and pkg files must match the computer's directory structure.

● Following the above command with a pause command on the second line will leave the
command window open to view the package creation. This is useful if the package creation
fails so you can see messages.

2. Run the .bat file. A .pkg file is created if the command is successful. If unsuccessful, an error message explaining why the package couldn't be created is displayed.

3. Open the Data Manager user interface, then open an existing catalog or create a new catalog.

4. Import the package file to create a build. From the File menu, click Import Package and navigate
to the package file that you just created. It is not necessary to back up the catalog.


From the package file, a new build appears under the Builds and Jobstreams node. You can click the new build to see a graphical representation of it.

5. Fix the Connection. The build will not run until the Framework Manager package has been
associated with the build's source connection and the target connection is correct.
● Click the new build and note the number of the source connection that appears on the very
left. Also, note the name of the target connection, on the far right of the build. This is
usually CON1, or something similar.

● Expand the Library/Connections node and find the corresponding connections.

● Right-click the source connection and click Properties. Click the Connection Details tab.

● Connection types will be selected on the left. Click Published Framework Manager Package.
On the right, the Package box will be empty.

● Click the … button. A Cognos 8 logon window appears, followed by a list of published Framework Manager packages. Select the appropriate package and click OK.

● From the Connection Properties window, click Test Connection to verify that the connection
is now good. Click OK.

● Right-click the target connection and click Properties, and then click Connection Details.

● Verify that the Connection Details are correct. Click Test Connection. Click OK to save
any changes.

● Click Save.

6. Highlight the build and click Execute. Even if the build is highlighted, it won't execute unless
it was the last thing clicked.
● A command window will open showing the status and results of the build execution.

7. To determine problems with the v5 query, right-click the datasource icon in the build and click
Properties.

Select one row, click the Query tab, and click Run. Problems with the V5 query are displayed as it tries to run.

Rerunning an Import Outside of Contributor


If the import fails, the cmd file and <adminlink name>_ImportFile.xml will remain in the directory. If the import succeeds, the two files will be deleted. If you want to rerun a successful import, you have to copy the two files after the import starts, but before it ends. Remember, the cmd file contains a passport. Once the passport expires or the Cognos 8 server is restarted, the passport is useless.


Note: Check the status of an INTER_APP_LINKS job in the Administration Console. If the cmd file has been deleted, its contents can be copied from the Failure Information dialog box.

Steps
1. Edit the cmd file and add a line at the end of the file that contains pause. This will keep the
command window open after the import finishes.
Another option is to redirect the cmd file output to a text file. Add >> c:\windows\temp\linkoutput.txt to the end of the first line. This redirects the output to the linkoutput.txt file, which can be viewed after the cmd file execution completes.

2. Save the changes.

3. Locate and double-click the cmd file to view the output from the import. The dmspec, dmresult, and <adminlink name>_Result files are also created, just as when the link runs within Contributor.

Keeping the Analyst Export File Created by Data Manager


After a run loads data into an Analyst cube, the temp file that is created by Data Manager is typically deleted. If you wish to keep that file for debugging purposes, add a registry key called DropExportFile in the Analyst settings registry folder and set the value to 0 (zero).

Job Timeouts
An Administration link that executes for more than 30 minutes may appear to have timed out, showing up as Cancelled in the Monitor Links section of the Administration Console.
Even though the execution of the link may appear to have failed, you can look in Task Manager for dmrunspec; there is one instance for each link element in the link. If the Administration link is marked as failed or cancelled, check for dmrunspec instances.
To increase the timeout, edit epJobExecutorResources.xml, located at <install_location>\cognos\c8\bin, and increase the value for Wait this long to see if RUNNING Job Items complete. The default setting is 1800 (30 minutes). The file is installed as read-only, so we recommend that you back up the file and reset the read-only flag to writable. After changing this setting, the Planning service needs to be stopped and restarted on the machine that is executing the link.
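As a rough illustration only, the timeout entry in epJobExecutorResources.xml might resemble the following sketch. The element and attribute names here are hypothetical; the only details taken from the text above are the descriptive setting name and the 1800-second (30-minute) default, so locate the matching entry in your file and increase its value, for example to 3600 for a one-hour wait.

<!-- Hypothetical layout; find the entry described as "Wait this long to see if RUNNING Job Items complete". -->
<!-- The default is 1800 seconds (30 minutes); this sketch raises it to 3600 seconds (60 minutes). -->
<resource description="Wait this long to see if RUNNING Job Items complete" value="3600"/>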

Appendix D: Customizing Cognos 8 Planning - Contributor Help

This section provides additional information about creating help and instructions for planners and reviewers.

Creating Cube Help


You can create help for each cube. There are two types of help:
● Simple cube help: This is one line (the limit is 200 characters) of plain text only. This appears
at the top of the grid, below the tabs. For more information, see "Creating General Messages
and Cube Instructions" (p. 76).

● Detailed cube help. This appears as a separate web page when the user clicks the Help button
at the top of the grid. This is described in the following sections.

Detailed Cube Help


The administrator writes the help text either in plain text, or using HTML formatting.
You can customize the format of the detailed cube help using HTML text formatting (p. 375). If
you choose to use no formatting, the help will display in a standard format.
After the Contributor application has been made live and the user has opened the application and loaded the grid, the user accesses the cube help by clicking the Help button at the top of the grid.
If you do not write any help for the cube, the default Contributor browser help is displayed.

Using HTML Formatting


Use basic HTML tags to apply text formats to text in Instructions. In addition to formatting text,
tags can be used to include hypertext links and images.
Some basic rules for using HTML text tags:
● Text tags are used in pairs with the text they alter between them. For example, <b>my text
here</b>.

● A start tag consists of a left and right angle bracket, with a tag name in between. For example,
<b>.

● An end tag consists of a left and right angle bracket, a tag name, and a forward slash. For
example, </b>.


Sample HTML text


<h1>Sample Help Text</h1>
<h2>2nd heading level</h2>
<h3>3rd heading level</h3>
<p>This is a paragraph.</p>
<p>This is a paragraph with a hyperlink to the <A HREF=HTTP://www.cognos.com target=_blank
>Cognos web site</a></p>
<p>This is a paragraph with a sample e-mail link to <A HREF=mailto:support@company.com>E-mail
technical support</A>.</p>
<ol>
<li>This is the first numbered list item.</li>
<li>This is the second numbered list item.</li>
<li>This is the third numbered list item.</li>
</ol>
<p>This is another paragraph.</p>
<ul>
<li>This is a bulleted list item.</li>
<li>This is another bulleted list item.</li>
</ul>

Help text: <h1>Sample Help Text</h1>
Description: <h1> indicates the start of text that is displayed in heading 1 style. Sample Help Text is the text that is displayed in heading 1 style. </h1> indicates the end of heading 1.

Help text: <p>This is a paragraph.</p>
Description: <p> indicates the start of a paragraph. </p> indicates the end of a paragraph.

Help text: <A HREF=HTTP://www.cognos.com target=_blank >Cognos web site</a>
Description: This is a hypertext link. For more information, see "Creating Hypertext Links" (p. 377).

Help text: <A HREF=mailto:support@company.com>E-mail technical support</A>
Description: This is an email link. For more information, see "E-mail Link Example" (p. 378).

Help text: <ol> <li>This is the first numbered list item.</li> <li>This is the second numbered list item.</li> <li>This is the third numbered list item.</li> </ol>
Description: This is a numbered list. The tag <ol> indicates the start of an ordered list and </ol> indicates the end of the ordered list. <li> indicates the start of a list item and </li> indicates the end of a list item. You can have as many list items as needed between the <ol> and </ol> tags.

Help text: <ul> <li>This is a bulleted list item.</li> <li>This is another bulleted list item.</li> </ul>
Description: This is a bulleted list. The tag <ul> indicates the start of an unordered list and </ul> indicates the end of the unordered list. <li> indicates the start of a list item and </li> indicates the end of a list item.

Using Images, Hypertext Links, and E-Mail Links in Contributor Applications


In Contributor Help Text, you can enter instructions that appear to users in the Contributor
application.

Adding images to instructions


You can add images in .jpg or .gif format to instructions, for example, a company logo.

Steps
1. Create a folder for images in the same directory that you have used for the web site.

2. Reference the graphic in the following way:


<IMG SRC=http://servername/websitename/helpimages/image.gif>

3. You must use the full path to reference the image; otherwise, it will not display to all users.

Creating Hypertext Links


You can use hypertext links to allow users to jump from a Contributor application to other web
pages. You can put links in Instructions and Cube Instructions.

Example
To link to a file named File.html located in the subdirectory Path found on the server www.cognos.
com, you enter the following:
<A HREF=HTTP://www.cognos.com/path/file.html target=_blank >text or image</a>


E-mail Link Example


You can use e-mail links in Planning Instructions and Cube Instructions to allow users to e-mail
someone directly from the Contributor application. When they click the e-mail link, your default
e-mail tool is launched.

Example
To add a link to your technical support contact, you could use:
<A HREF=mailto:support@mycompany.com> E-mail technical support</A>
This will appear in a manner similar to this in the Web browser:
E-mail technical support

Appendix E: Error Handling

This section covers the following areas:


● How Cognos 8 Planning - Contributor tracks problems, and information on the files you may
be asked to provide to Customer Support.

● How to use the epLogFetcher utility to locate and retrieve different error logging files.

● Timing and logging registry settings. Timing and logging provides timing for processes. This is written to a file named PlanningTraceLog.csv, which is in the same location as the PlanningErrorLog.csv file and is useful for troubleshooting.

Error Logs and History Tracking


This section describes the different ways in which Contributor tracks problems, and describes the
files you may be asked to provide in order to resolve them.
You may be asked to supply a number of different files to Customer Support, depending on the
nature of the problem.

Type: Application XML
Description: Contains details about the Contributor application. Can represent the Development or Production model.
Location: User defined
File name: applicationname.xml

Type: History tracking
Description: Tracks actions performed by users.
Location: Database table named history
File name: Not applicable

Type: Administration Console History tracking
Description: Tracks actions performed via the Administration Console/Job system.
Location: Database table named P_ADMINHISTORY
File name: Not applicable

Type: JCE error logs
Description: Errors with the calculation engine.
Location: Job server, administration machine, or client machine, in the local Temp folder (%TEMP%)
File name: jce*.tmp

Type: General Error logs
Description: Errors in the Administration Console.
Location: Local Temp folder (%TEMP%) on the administration machine, web server, or client
File name: PlanningErrorLog.csv

Type: Logging and timing
Description: Provides timing for processes, and verbose logging.
Location: Local Temp folder (%TEMP%) on the administration machine, web server, or client
File name: PlanningTraceLog.csv

These files are described in more detail in the following sections.

Application XML issues


Details about the Contributor application are held in XML format. If there are problems with a
Contributor application, you may be asked by Technical Support to save the XML in its current state and send the XML file. You can save the state of both the development application
and the current production application.
See "Save Application XML for Support" (p. 77) for more information.

Timeout Errors
If you are experiencing timeout errors on long-running server calls, change the default remote service call timeout value (default 480 minutes) to allow for longer calls.

Steps
1. On the System Settings page, click the System Settings tab.

2. Change the default call timeout to allow for longer calls.

For information about the Maximum e.List items to display as hierarchy options, see "Import e.
List and Rights" (p. 92).

History Tracking
The history tracking feature in Application Options (p. 72) tracks the actions performed by users.
When you have Actions timestamps and errors, or Full debug information with data selected,
information is recorded in the database in a table named history.
You can use history tracking if you have problems with, for example:


● Workflow - in which case you set history tracking to Actions timestamps and errors.

● Aggregation (if it appears that you have incorrect aggregation), you should set it to Full debug
information with data.

The actionid contained in the history table is made up of two codes: a result code and an action
code. The first 2 digits are the result code and the rest make up the action code.
The following table shows the result codes and their meanings.

Result Code (Hexadecimal) Result

00 Success

01 Not Owner

02 Being Edited

03 Data Changed

04 Annotation Changed

05 Locked

06 Not Locked

07 Not All Children Locked

08 Annotation Backup Failed

09 Data Backup Failed

0A Grantor Locked

0B Already Reconciled

0C Not Initialized

The following table shows the Actionid from the history table and the action that it refers to. Note
that sometimes actionids may be combined. For example, if a user has made a change and submitted,
you might get the Actionid 0x04A0.

Actionid (Hexadecimal) Action

0x0000 None

0x0001 Get Data


0x0002 Get Annotations

0x0004 Get Import Block

0x0008 Annotate

0x0010 Edit

0x0020 Save

0x0040 Start

0x0080 Submit

0x0100 Reject

0x0200 Reconcile

0x0400 Release

0x0800 Take Offline

0x1000 Bring Online

0x2000 Check Data Up To Date

0x4000 Update

0x8000 Edit If Owner

See "Change Application Options" (p. 72) for further information.

Calculation Engine (JCE) error logs


JCE error logs can be found either on the Web server, the Administration Console machine,
administration server, the job server, or the client. They only appear on the client when there have
been problems using the Web browser.
You may get a server side JCE Error log in the following circumstances:
● If the Administration Console crashes

● Problems when importing the e.List

● Problems during import


● Problems during the Go to Production process

● Problems during synchronize

● Problems during publish

This is not an exhaustive list, and you may get an error message telling you that a log was created.
To search for a JCE error log, search for JCE*.tmp.
These log files are stored in hidden folders.

General Error Logging


Most areas of the application log their errors to a file named PlanningErrorLog.csv. In Windows
NT4, this is stored in the Temp directory. In Windows 2000, it is normally found in:
Documents and Settings\user name\Local Settings\Temp\
These logs can be created on the client, the Administration Console machine, administration server,
web server, or the job server. Most components and applications log to this file including Cognos
8 Planning - Analyst.

How errors are logged in the Administration Console


Errors that occur in the Administration Console are logged to help bug tracking.
Due to the distributed nature of the execution of Contributor, it is also necessary to distribute the
error handling/logging. For example, it does not make sense to log all the errors on the users’ web
client for errors that happen on the web server.
The logs created by components are put on the machine on which they execute.
The log file created is in tab-separated format, but it has a .csv extension so that it automatically loads into Excel when it is double-clicked (if Excel is installed). Excel 97 gives a warning that the format is not recognized, but opens the file without any problems. Opening the file in Notepad makes it difficult to read because the columns do not align, due to the varying length of the log entries. If the log cannot be written, for example, because the file is locked by another process or is read-only, the log entry is written to the NT event log (if it exists) and the application fails with a message saying that logs cannot be written. If nothing is written to the log for a long time, the application can continue to execute even if logging would cause problems in the future; this is normal "on demand" resource usage, as recommended by Microsoft.
During execution you may get an error that passes through multiple components as it works its
way up the call stack. In this case there will be entries in the log for each procedure that is affected
with a line number. For errors to be traced correctly, we need the log for each machine and user
context affected by the error. This can be difficult if you do not know where all the code is executing,
so it is advisable to send technical support all the logs you can find. We can usually tell if there is
a log entry missing from the identity information that is associated with each error.
An error log contains the following fields:


Field Name Description

GUID Each distinct error has a unique GUID (a unique identifier) by which it is
identified. By looking at entries in the log with matching GUIDs it is possible
to group log entries by particular errors. It is possible to cross reference errors
between different log files if the error stack spans server side and client side
components.

Stack This is used in conjunction with the GUID to determine the source of the error,
and the call stack that the error was passed through before being reported. A
value of 1 indicates the source of the error, and the highest value is the point at
which it was reported to the user. Again these sequences can span log files.

Date Time The date and time at which the error occurred. The time is taken from the
machine where the component represented in the current log entry is running.

Component The name of the component represented in the current log entry.

Module The code module in which the error occurred.

File The source file name.

Version Information The version of the component represented in the current log entry.

Procedure The procedure within the file where the error occurred.

Line Number The line number within the procedure where the error occurred. This enables a
developer to trace exactly which call caused the error, and in conjunction with
the error number and error message, it gives a high degree of detail about the
problem.

Source The origin of the error. May or may not be within Cognos components.

Error Number The identifier for the error condition.

Error Description A description of the error which has occurred.

User Domain/User Name The User Domain/User Name under which the component represented in the current log entry was executing.

Machine Domain/Machine Name The Machine Domain/Machine Name on which the component represented in the current log entry was executing.


Previous User Domain/Previous User Name The domain that the user was logged into and the previous user name.

Previous Machine Domain\Previous Machine Name The machine from which the call was made to the current component. This is the indicator to go and look for error logs on this machine, where it may be possible to find corresponding entries (matched on GUID), lower down the call stack. It may also provide clues to other errors that occurred prior to the issue being investigated.

Process ID The current process ID.

Thread ID The current thread ID.

It is imperative that these logs are provided to development when reporting problems.

Using the LogFetcher Utility


The LogFetcher utility should only be used on the advice of Customer Support.
The LogFetcher utility retrieves planning log files from multiple machines. It retrieves the following
logs:
● PlanningErrorLog --for general errors. This is typically the first log you would look at if you
have a problem.

● PlanningTimer--for timer files. These files will be present if timing has been enabled (see below).

● AnalystLog--Analyst errors.

● IISLog--Web connectivity or download problems.

● JLog--J server errors. It contains errors relating to data, links, and calculations.

Steps
1. Run epLogFetcher.exe from installation_location\Cognos\c8\bin\

2. Right-click in the top panel and click Add.

3. Enter the search criteria for the log files:


Machine to Search (or IP Address) Enter the machine name or IP address of the machine with the log files. To search the local machine, enter localhost.

Select Protocol Select HTTP if you are looking for components on the Administration server. Select COM if your Administration Console is on a separate machine from the Administration server and you are searching locally.

File to retrieve Select one of the following:


PlanningErrorLog
PlanningTimer
AnalystLog
IISLog
JLog

Working Folder Enter or browse for a folder to retrieve the files to on your local
machine.

4. Click Add. This adds the search criteria to the top panel. Repeat steps 2 to 4 until you have
added all the log files you need.

5. To start the search, select the lines containing the search criteria and click View File(s).

Tip: You can do this one line at a time, or you can select multiple lines by holding down CTRL
and clicking. The results of the search are displayed in the lower panel.

6. In the lower panel, select the files you want to bring into the working folder and click Get File(s).

If you retrieve two or more files with the same name into the same working directory, a number is appended to the file name.
This tool only finds IIS logs if they are in the default path. It is not capable of retrieving logs from
remote client machines.

Appendix F: Illegal Characters

The following ASCII characters are not allowed in e.List item and user names, e.List item and user captions, user logons, and user email addresses.
They are also not allowed in dimension names or in namespace names.
Note that these are non-printing characters below ASCII code 32.

Decimal Char Description

0 NUL Null

1 SOH Start of heading

2 STX Start of text

3 ETX End of text

4 EOT End of transmission

5 ENQ Enquiry

6 ACK Acknowledge

7 BEL Bell

8 BS Backspace

9 TAB Horizontal tab

10 LF NL line feed, new line

11 VT Vertical tab

12 FF NP form feed, new page

13 CR Carriage return

14 SO Shift out

15 SI Shift in

16 DLE Data link escape


17 DC1 Device control 1

18 DC2 Device control 2

19 DC3 Device control 3

20 DC4 Device control 4

21 NAK Negative acknowledge

22 SYN Synchronous idle

23 ETB End of transmission block

24 CAN Cancel

25 EM End of medium

26 SUB Substitute

27 ESC Escape

28 FS File Separator

29 GS Group Separator

30 RS Record Separator

31 US Unit Separator

Appendix G: Default Options

The following sections describe the default options for a Contributor application.

Grid Options
In the Grid Options, you can set the following:

Option name Default

Set breakback option All cubes on

Saved data Black

Typed data not entered Green

Data entered but not saved Blue

Allow multi e.List item views Off

Allow slice and dice On

Recalculate after every cell change Off

Application Options
In Application Options you can set the following:

Option name Default

History tracking Action time stamps and errors

Cut-down models No cut-down models

Allow reviewer edit Off

Allow bouncing On

Prompt to send email when user takes ownership Off


Use client-side cache On

Prevent off-line working Off

Prompt to send email on reject On

Prompt to send email on save Off

Prompt to send email on submit Off

Web client status refresh rate 5 minutes

Record Audit Annotations Off

Annotations Import Threshold 0 (all rows imported in a single transaction are recorded as a single entry)

Annotations Paste Threshold 0 (all rows pasted in a single transaction are recorded as a single entry)

Display Audit Annotations in Web Client Off

XML location and filename


This defaults to the local temporary directory and the name of the Analyst model used to create
the application.

Admin Options
You can configure the import and publish actions using the following options:

Option name Default

Datastore Version Number (DB_VERSION)

Import Block Size (IMPORT_BLOCK_SIZE) -1 (All)

Import Location (IMPORT_LOCATION) Blank

Import Options (IMPORT_OPTIONS) Blank


Publish Options (PUBLISH_OPTIONS) Blank

Generate Scripts (GEN_SCRIPTS) No (for DBA). Note that if the DBAuthority key is set to USER in the datastore, GEN_SCRIPTS will be set to true.

Table Only Publish Post-GTP (POST_GTP_TABLE_PUBLISH) No

Act as System Link Source (LINK_SOURCE) No

Display warning message on Zero Data No

Base Language EN

Scripts Creation Path Blank

Note that Admin Options are not visible to users when the DBAuthority key is not set to DBA in
the registry.

Go to Production Options
You can set the following options prior to creating the production application:

Option name Default

Prevent Client-side reconciliation Off

Copy development e.List item publish setting to production application On

Planning Package

Name Name of the Package

Screen tip: Blank

Description: Blank

Overwrite the package access rights at the next Go To Production On


Go to Production Wizard Options


You can set the following Go to Production Wizard options:

Option name Default

Back-up Datastore On

Display invalid owners and editors Off

Create Planning Package On

Workflow States: Leave On

Workflow States: Reset Off

Publish Options-View Layout


You can set the following options to set the View-Layout publish options and configure the publish
datastore connection:

Option name Default

Publish Datastore No Container Set

Do Not Populate Zero/Null/Empty Data On

Publish only cells with writable access Off

Use plain number formats On

Remove all data before publishing new data On

Include user annotations On

Include audit annotations Off

Publish Options-Table Only Layout


You can set the following options to set the Table-Only Layout publish options and configure the
publish datastore connection:


Option name Default

Publish Datastore name of publish datastore

Create columns with data types based on the ‘dimension for publish’ On

Only create the following columns Off

Include Rollups On

Include zero or blank values Off

Prefix column names with data type On

Include User Annotations On

Include Audit Annotations Off

Include Attached Documents Off

e.List
When importing the e.List with just the compulsory columns in the file, you get the following
defaults:

Option name Default

Publish No

View Depth All

Review Depth All

Rights
When importing rights with just the compulsory columns in the file, you get a default of Submit.

Access Tables
If no access levels are set, the following defaults apply:
● All cubes apart from assumptions cubes have a global access level of Write.


● Assumption cubes (cubes used to bring data into an application) have a global access level of
Read.

The following rules apply for an imported access table.

Option name Default

Name of e.List Applies to the whole e.List if omitted.

AccessLevel No Data.

The base access level for rule based access tables is Write.

Delete Commentary
You can set the following options:

Option name Default

Delete user annotations Off

Delete audit annotations Off

Delete attached documents Off

Delete annotations before Off

Delete any annotations containing text Off

Appendix H: Data Entry Input Limits

The data entry limits for Cognos 8 Planning - Analyst and Cognos 8 Planning - Contributor are
affected by a number of different factors. Limitations may be imposed by a number of different
factors such as operating system, datastore provider, and computer hardware.

Note: The limits described here are guidelines, and are not hard and fast rules.

Limits For Text Formatted Cells


The data entry limits for text formatted cells for Analyst, Contributor Browser, and the Analyst
for Excel are 32K (32,767 characters). Note however that in some cases, further limits are imposed
by the datastore provider.
For Contributor for Excel, the maximum number of characters is 911. Special characters, such as returns and tabs, are not supported. If you need to copy and paste multiple paragraphs of text into
a cell from another document, ensure that you remove the returns after each paragraph before you
copy and paste text into a cell. Otherwise Contributor for Excel will truncate the incoming text
after the first paragraph.
For annotations, the maximum number of characters is 3844. For attached documents, the maximum number of characters in the comments section is 50.

View Publish
● SQL Server = unlimited

● UDB = unlimited
Note that the publish views cast down to a varchar: SQL = 8000, UDB = 1500 (that is, you only see 8000 characters in the SQL view).

● Oracle = 4000 characters

Table-only Publish
Table-only publish limits vary by format.
Text fields (epReportingText)
● SQL Server = 8000

● UDB = unlimited

● Oracle = 4000


Limits for Numerical Cells


The following limits are for Analyst and Contributor numerical cells.

Numerical Cells in Analyst


There are no limits to the number of characters in numeric cells that you can enter in Analyst; however, when the numbers get too large to fit, they are displayed in scientific format, for example, 2345628363E205.

Numerical Cells in Contributor


You can enter up to 60 characters in a numerical formatted cell in the Contributor Web client.

396 Contributor
Glossary

access tables
In Contributor, controls access to cells in cubes, whole cubes, and assumption cubes.

accumulation D-links
D-links that consolidate data from a source D-cube to a D-cube based on text data.

administration job
An administration task that runs on job servers and is monitored by the Contributor Administration
Console. These tasks are commonly referred to as jobs. Some examples of jobs are reconcile, publish,
cut-down models, links.

administration link
A link that enables an administrator to move data between Contributor applications. An
administration link can contain multiple applications and cubes as the sources and targets of the
link. A link can contain multiple elements which target either the development or the production
application. Administration links run using the job architecture and so are scalable.

administration machine
In Cognos Planning, the computer that is used to operate Contributor Administration.

administration server
In Cognos Planning, the server that contains the planning components package (COM+ package)
and where control of the online application is maintained. You connect to this machine when you
first run Contributor Administration.

application
In Cognos Planning, a Contributor application. Contributor applications are used for the collection and review of data from hundreds or thousands of users. One application can be used by
many users in different locations at the same time.

Application server
See Job Server.

assumption cube
In Cognos Planning, a cube that contains data that is moved into the Contributor application when
the application is created or synchronized. It does not contain the e.List. Therefore, data applies to
all e.List items, and is not writable. The data it contains is often named "assumption data."


A-table
In Analyst, an allocation table that shows how two lists correspond. It is useful for transferring
data when no character matches are possible between lists of items.

BiF
Built-in Function. In Cognos Planning, a BiF is a special calculation formula that was set up
specifically for planning. Examples include depreciation, discounted cash flow, forecasting using
different drivers, and stock purchase prediction based on future sales.

bounce
In Cognos Planning, a term used to refer to the removal of the user who is currently editing an e.List
item in the Contributor Web client. A planner or reviewer may "bounce" the owner.

commentary
In Cognos Planning, commentary represents any additional information attached to Contributor
cells, tabs, or e.List items, including both user annotations and attached files. You can use
administration links, system links and local links to copy commentary.

contribution
In Cognos Planning, data that is entered for an e.List item in the Contributor application.

Contributor Administration
A tool that enables administrators to publish an Analyst business model to the Web, manage
access settings and model distribution, and configure the user's view of the model.

cube
A physical data source containing a multidimensional representation of data. A cube contains
information organized into dimensions and optimized to provide faster retrieval and navigation in
reports. In Cognos Planning, a cube (see also D-Cube) corresponds to a tab on the Contributor client
user interface.

current owner
In Contributor, the person who is editing or who last opened an e.List item for editing.

cut-down models
In Cognos Planning, customized copies of the master model definition that have been cut down to
include only the specific elements required for a particular e.List item.

datastore
In Cognos Planning, the location where one or more Contributor applications are stored. A datastore
contains the information needed to connect to a database supporting the Contributor applications.


D-cube
In Cognos Planning, a multi-page spreadsheet made up of two or more dimensions. A D-cube must
contain at least two dimensions. In Contributor, a D-cube is referred to as a cube.

dimension
In Cognos Planning, the rows, columns, and pages of a cube are created from dimensions. Dimensions
are lists of related items such as Profit and Loss items, months, products, customers, and cost centers.
Dimensions also contain all the calculations. One dimension can be used by many cubes.
In Cognos 8 BI a dimension is a broad grouping of descriptive data about a major aspect of a
business, such as products, dates, or markets. Each dimension includes different levels of members
in one or more hierarchies and an optional set of calculated members.

D-link
In Analyst, a link that copies information in and out of cubes, and sometimes to and from text or
ASCII files.

D-list
An alternative term for dimension.

D-list format
Lets you enter text from another D-List in a row or a column. The format may be used in
database-type functions to consolidate data in a similar manner to query-style reports.

drill down
In Cognos Planning, drill down is a technique used to analyze D-Cube data that was imported by
a D-Link. You can drill down on any single cell in a D-Cube. If the cell contains data transferred
by a D-Link, drill down opens a view of the source data. If the data was imported from another
D-Cube, drill down opens the appropriate selection from the source D-Cube. If the data was imported
from an external source (a mapped ASCII file or an ODBC database), drill down extracts the relevant
data from the source file and displays it in a special drill-down results dialog box.
In Cognos 8 BI, drill down refers to the act of navigating from one level of data to a more detailed
level. The levels are set by the structure of the data. See also drill up.

e.List
The basis for the structure of a Contributor application. An e.List is a hierarchical dimension which
typically reflects the structure of the organization (for example, cost centers and profit centers).

editor
In Cognos Planning, a planner or reviewer who is editing a contribution.


extensions
In Cognos Planning, components that extend the functionality of Contributor Administration and the
Web client. There are two types of extensions: Admin Extensions and Client Extensions. Admin
Extensions run in the Administration Console. Client Extensions are activated from the tool options
on the Contributor grid.

file map
In Analyst, a file map tells the program how to split an ASCII or text file into columns of data. A
file map puts in the divisions, or breaks, between one column of numbers and another. It defines
the start point and width of each column of data within an ASCII file, and denotes whether the
column is a numeric, text, or date field. If there is only one column, a file map is superfluous. File
maps are always necessary when using an ASCII file as the source for a D-Link.

Get Data
In Cognos Planning, a command in the Web client that loads the screen that displays local links
and system links.

go to production
In Cognos Planning, a process in the Contributor Administration Console that takes the development
application and creates the live production application.

grid
In Cognos Planning, a tabular form for viewing and entering data.

GUID
Globally Unique Identifier. A unique internal reference for items in a model. For example, when you
add a dimension item, this item is assigned a GUID.

hold
In Cognos Planning, a function that protects a cell against breakback.

import block
In Cognos Planning, a package of data from Analyst or an external system that is validated and
prepared for import into a Contributor application. The import block is imported into the
Contributor application datastore via a reconcile job.

import link
A function used in Analyst to update the items in a dimension on a regular basis from a source file
or database.


job server
In Cognos Planning, a machine that runs the administration jobs. There may be multiple job servers.
A job server is sometimes referred to as an application server.

library
In Cognos Planning, the storage location of the model. The library includes a group of connected
Analyst objects: macros, reports, D-Links, selections, D-Cubes, maps, A-Tables, D-Lists, and formats.
A library is similar to a Windows directory.

local links
In Cognos Planning, a link defined and run by a user in the Web client.

lock
In Cognos Planning, a function that prevents data from being entered into cells, whether by typing
or via a D-Link.

lookup D-links
In Cognos Planning, D-Links that look up data from a source D-Cube based on text data. They use
a database D-Cube as a target.

macros
In Cognos Planning, a single object defined by an administrator to automate a series of
Administration tasks in Contributor. Each task is known as a step. In Analyst, a set of commands
that have been recorded and grouped together as a single command, which is used to automatically
complete a list of instructions in one step.

match descriptions
In Cognos Planning, used to automatically match source and target dimension items with the same
name. In addition, match descriptions can be used to perform an allocation by date.

maximum workspace
(MAXWS) The amount of memory reserved for Analyst. May be changed to allow larger models
to run more effectively.

model
A physical or business representation of the structure of the data from one or more data sources.
A model describes data objects, structure, and grouping, as well as relationships and security.
In Cognos 8 BI, a design model is created and maintained in Framework Manager. The design
model or a subset of the design model must be published to the Cognos 8 server as a package for
users to create and run reports.
In Cognos Planning, a model is a group of D-Cubes, D-Lists, D-Links, and other objects stored in
a library. A model may reside in one or more libraries, with a maximum of two for Contributor.


namespace
For authentication and access control, a configured instance of an authentication provider. Allows
access to user and group information.
In XML, a collection of names, identified by a URI reference, which are used in XML documents
as element types and attribute names.
In Framework Manager, namespaces uniquely identify query items, query subjects, and so on. You
import different databases into separate namespaces to avoid duplicate names.

offline grid
In Cognos Planning, the application that is used to access a section of an offline Contributor
application. The purpose is to enable users to enter or view data while there is no network
connection.

owner
In Contributor, a user who is assigned to an e.List item through the Rights screen and is permitted
to edit or review it. These rights may be directly assigned, or may be inherited.

planner
In Cognos Planning, a person who enters data in the Contributor application in the Web client.

production application
In Cognos Planning, the version of the Contributor application seen by the Web-client user. The
version of the Contributor application that is seen in the Contributor Administration Console is
the development application.

protect
In Cognos Planning, a function that is used to prevent data from being typed into a cell. However,
data can still be transferred into a protected cell via a D-Link.

publish
In Cognos 8 BI, refers to the creation of a package that makes metadata available to the Cognos 8
server. Information in the package is used to create reports and other content.
In Cognos Planning, refers to a function that is used to copy the data from Contributor or Analyst
to a datastore, typically so it can be used for reporting purposes.

publish container
In Cognos Planning, a datastore container created specifically to publish data to.


reconciliation
In Cognos Planning, a process that ensures that the copy of the Contributor application that the
user accesses on the Web is up to date (for example, all data is imported). Reconciliation takes place
after Go to Production has run and a new production application is created.

reviewer
In Cognos Planning, a person who reviews the submissions of planners or other reviewers.

rights
In Contributor, assigning rights enables administrators to determine what users can do in a
Contributor application. Rights determine whether a user can view, edit, review, and submit data.

saved selections
In Contributor, dynamic groups of items from a dimension or e.List. When used in conjunction
with access tables, saved selections provide a high level of control over access to cells.
In Extensions, sets of data configured during an export or refresh. A user can choose a saved selection
and update just the data without reconfiguring the report or export criteria.
In Analyst, sets of data used to save a specific D-Cube orientation, including a selection of rows,
columns, and pages for later use. The selected items, sort order, and slice of the D-Cube are all
saved in a named selection.

synchronize
In Contributor, a function used to update all cubes, links, and so on in an application when the
underlying objects in Analyst change. Changes include renaming dimensions and adding, deleting, or
renaming dimension items.

system links
In Contributor, a link that is defined by the Contributor administrator and run by a user in the
Web client. This is part of the Get Data functionality in the Web client.

table-only layout
In Cognos Planning, a publish schema that consists of tables only, without views, and is particularly
suitable for the Generate Framework Manager Model extension.

view layout
In Cognos Planning, a publish schema that consists of a layout of views over text values.

Index

act as a system link source, 77


Symbols adding
.cpf, 305 applications, 58
e.List items, 98
A add-ins
Microsoft Excel, 84
access
Admin extensions, 303, 329
Contributor, 85
Generate Framework Manager Model, 308
access control, 27
running, 303
access levels, 119
Administration Console
definition, 116
actions that run jobs, 49
hidden, 116
performance issues, 126
loss of access to e.List items, 251
administration jobs
no data, 116
definition, 397
no data settings and block sizes, 118
administration links, 145
planner data entries, 134
Contributor, 20
reading, 116
CPU usage, 156
updating no data settings, 118
creating, 148
writing, 116
definition, 397
access permissions
exporting, 154
users, 30
importing, 154
access rights
macros, 21
go to production options, 79
model changes, 156
granting, 37
moving commentary, 148
access tables, 20, 21, 114, 115
rights, 37
changing, 121, 133
running, 153
cut-down models, 126, 136
setting source batch size, 157
default, 393
setting target batch size, 157
definition, 397
troubleshooting memory issues with detailed fact
editing, 121
query subject, 168
exporting, 125
troubleshooting tuning settings, 158
formatting, 123
tuning, 154
importing, 122
upgrading, 330
importing data, 134
using existing, 157
large, 125
administration machines
memory usage, 126
definition, 397
performance issues, 126
administration servers
rules, 116
definition, 397
viewing imported, 124
administrators, 24
accumulation D-links
multiple, 20
definition, 397
admin options, 77


allow automatic cab downloads and installations, 69 A-table


allow bouncing, 72 definition, 397
allow bouncing example, 75 attach documents, 290
allow multi-e.List item views, 70 attached documents, 290
allow reviewer edit, 72 attaching, 291
allow slice and dice, 70 configuration, 69
Analyst configuring the properties, 290
Generate Framework Manager Model wizard, 308 maximum number, 290
Analyst - Contributor links, 347 publishing, 292
upgrading, 329 attaching a document, 291
annotations, 287 audit annotations, 287
auditing, 287 recording, 72
behavior, 288 authentication
deleting, 289 third-party providers, 28
display audit annotations in Web client, 72 authentication providers, 28, 33
linking to Web pages, files, and email, 288 automation, 21
restrictions, 287
annotations import threshold, 72 B
annotations paste threshold, 72 backups, 357
anonymous access, 33 base language, 77
application containers base models, 308
rights, 37 best practices, 13
application details BiFs
viewing, 77 definition, 398
application folders, 66 supported in Contributor, 343
monitoring, 57 binary large objects, See large objects (LOBs, BLOBs,
application options, 72 CLOBs)
applications BLOBs, See large objects (LOBs, BLOBs, CLOBs)
adding, 58 block sizes and no data access settings, 118
creating, 22, 58, 63 BMTReport.log, 361
definition, 397 bounce
information, 68 definition, 398
linking, 86 bouncing
monitored, 56 allowing, 72
synchronizing, 177 example, 75
upgrading, 58 breakback
application tabs setting, 70
translating, 185 business cases, 317
application XML, 77 business logic
Application XML defining, 227
issues, 380 business rules
assign access rights, 37 defining, 233
assigning rights, 22 enforcing, 227
assumption cube planning for, 228
definition, 397
assumption cubes, 115, 119, 122


C commentaries
cab downloads deleting, 289
allowing, 69 commentary
caching breakback considerations, 293
Contributor data for Cognos 8, 308 copy, 292
Calculation Engine (JCE) cumulative, 292
error logs, 382 definition, 398
capabilities, 32 deleting, 289
capacity planning, 359 moving with administration links, 148
cascaded models, 144 moving with system links, 148
cascade rights, 37 viewing and editing, 291
changing components tabs
applications and translations, 183 translation, 185
e.List, 133 concurrency, 359
character large objects, See large objects (LOBs, BLOBs, condition
CLOBs) specifying for event, 222
client-executed links, 142 configure application, 68
client extensions, 302 configuring attached document properties, 290
configuring, 303 configuring the Web client rights, 37
extension groups, 302 contribution e.List items, 105
client-side cache, 72 contributions, 24, 83
client-side reconciliation, 52 definition, 398
CLOBs, See large objects (LOBs, BLOBs, CLOBs) Contributor add-ins, 84
CM-REQ-4159, 362 Microsoft Excel, 313
code pages, 190 Contributor Administration
Cognos 8, 304 definition, 398
caching Contributor unpublished data, 308 contributor-only cubes, 75
Cognos 8 Business Intelligence studios copy
connecting to data sources, 308 import, 173
Cognos Connection copy commentary, 292
create a data source connection, 161 copy development e.List item publish setting to
Cognos namespace, 27 production application, 79
Cognos Performance Applications, 315 copying
Cognos Series 7 namespace, 28, 335 Analyst Contributor links, 350
color copyright material
selecting for changed values, 70 printing, 15
column headings create a data source connection, 161
EListItemCaption, 96 creating
ELIstItemIsPublished, 97 application, 63
EListItemName, 96 applications, 37, 58
EListItemOrder, 96 applications using a script, 37
EListItemParentName, 96 connections from Cognos 8 BI products, 308
EListItemReviewDepth, 97 cube help, 375
EListItemViewDepth, 96 datasource connections, 305
detailed fact query subject, 166
Framework Manager projects, 164, 305


planning packages, 240 databases


Planning tables, 45 backing up, 357
PowerCubes, 311 object names, 277
production applications, 23 privileges, 356
publish containers, 37 data blocks, 241
scripts, 37 dataCacheExpirationThreshold parameter, 308
source files, 172 data dimensions for publish, 260
system links, 159 data entry
Web sites, 23 validating, 227
credentials, 42 data entry limits, 395
cube dimension order numerical cells, 396
setting, 70 data loads, 360
cube instructions, 76 data source connections, 305
cube order creating, 308
setting, 69 datastores
cubes, 19, 115 definition, 398
access tables, 119 rights, 37
changing, 246 datastore servers, 46
creating help, 375 information, 47
definition, 398 data validation, 227
detailed help, 375 and e.List items, 237
importing data, 172 defining business rules, 233
no access tables, 122 defining fail actions, 235
types, 115 impact of aggregations, 229
cumulative commentary, 292 setting up, 228
current owners, 91 setting up D-Cubes in Analyst, 229
definition, 398 DB2 import utility, 359
cut-down models, 72, 134 D-Cubes
access tables, 136 definition, 398
definition, 398 setting up pre- and post-aggregation ordering, 229
examples, 138 DDL scripts, 356
Go to Production process, 252 delete
impact from access tables, 126 job server cluster, 54
languages, 242 delete annotations
limitations, 135 rights, 37
options, 135 Delete Commentary, 394
processes, 135 deleting
restrictions to cutting down dimensions, 137 annotations, 289
translations, 136 annotations for e.List items, 289
commentaries, 289
D commentary, 289
data Contributor applications, 58
loss from changes, 177 e.List items, 100
moving using links, 144 import queues, 175
database object names, 263 jobs, 53
namespaces, 29


server definitions, 58 reconciliation, 252


undefined items, 94 reordering, 99
deployment status, 171 e.Lists, 19, 89, 94, 102
designing e.Lists, 22 aggregation and data validation, 229
detailed fact query subject, 166 changes, 133
memory usage, 168 default options, 393
developing plans, 22 definition, 399
development applications, 243 designing, 22
rights, 37 importing file examples, 95
development environment, 168 importing from Performance Applications, 315
dimensions reconciliation, 101
changing, 248 editor lagging, 251
definition, 399 editors
D-Links, 341 definition, 399
editing selections, 112 EListItemCaption column heading, 96
saving selections, 111 ELIstItemIsPublished column heading, 97
dimensions for publish EListItemName column heading, 96
selecting, 79 EListItemOrder column heading, 96
dimensions for publishing EListItemParentName column heading, 96
rules for non-defined, 305 EListItemReviewDepth column heading, 97
disable job processing, 54 EListItemViewDepth column heading, 96
display audit annotations in Web client, 72 e-Lists
display warning message on zero data, 77 associating validation rules, 237
D-Links, 20 email
definition, 399 sending on save, 72
designing Contributor model in Analyst, 339 sending on submit, 72
dimensions, 341 e-mail, 61
D-List aggregations links, 378
impact on data validation, 229 email character separator, 69
D-List format E-mail function, 25
definition, 399 environments, 168
D-Lists error messages
definition, 399 importing the e.List and rights, 93
importing using IQDs, 315 out of memory when exporting during
drill down deployment, 171
definition, 399 errors
dynamic objects, 357 handling, 379
logging, 383
E estimating model and data block size, 138
e.List item properties eTrust SiteMinder namespace, 28
previewing, 295 event
e.List items condition, 222
adding, 98 event condition
configuring display number, 92 specifying, 222
deleting, 100 Everyone group, 35
multiple owners, 91


examples fonts, 190


e.List files, 95 force to zero option, 134
importing data source files, 172 formatting imported access tables, 123
rights file, 107 Framework Manager, 304
Export for Excel extension, 315 creating and publishing a Framework Manager
exporting package, 165
access tables, 125 creating a project and import metadata, 164
Analyst library, 168 creating projects, 305
application links, 168 model troubleshooting, 361
e.Lists, 97 model updating, 311
macros, 168
model, 168 G
rights, 97 Generate Framework Manager Model extension
export tables, 266 accessing published data from Cognos 8, 308
expression generate scripts, 77
specifying for event condition, 222 generating
extending functionality, 19 Transformer models, 311
extensions Get Data
definition, 399 definition, 400
group, 302 global administration
external namespaces rights, 37
Cognos Series 7, 28 Global Customer Services Web site, 15
eTrust SiteMinder, 28 go to production
LDAP, 28 definition, 400
Microsoft Active Directory, 28 go to production options, 79
NTLM, 28 access rights, 79
SAP, 28 planning package setting, 79
Go to Production process, 239
F buttons, 25
failure of reconciliation, 52 e.List items to be reconciled, 252
file formats, 95 finishing, 252
file maps importing data details, 250
definition, 400 importing process, 175
filesys.ini, 45 invalid owners and editors, 250
file types model changes, 246
IQDs, 315 options, 244
fill and substitute mode, 353 rights, 37
Filters running, 244
importing from SAP BW, 166 show changes screen, 245
financial planning, 302 grid options, 70
finding, 94 default, 389
e.Lists, 94 grids
rights, 94 definition, 400
finding information, 14 group extensions, 302
finish screen, 252 groups, 29
folders, 66 validate, 109


GUID translated files, 188


definition, 400 import links
definition, 400
H import location, 77
help, 25 import options, 77
adding, 76 import queues, 175
getting, 15 incremental, 275
translating, 189 incremental publish, 275
hidden items, 116 information
history tracking, 72 finding, 14
hold inherited rights, 104
definition, 400 integration, 301, 304
HTML formatting, 375 Cognos Contributor and Cognos Controller, 315
hypertext links, 377 Invalid owners and editors tab, 250
IQD files
I importing D-Lists and e.Lists, 315
illegal characters, 387 items tables for table-only layout, 263
images, 375
import blocks J
definition, 400 jobs
import block size, 77 architecture, 359
Import data details tab, 250 canceling, 52
import files deleting, 53
prepared data blocks, 175 managing, 47, 50
preparing, 175 pausing, 52
testing, 175 preparing import jobs, 359
Import from IQD wizard publishing, 51, 360
Performance Applications, 315 running, 23
importing run order, 48
access tables, 122 securing, 49
Analyst library, 169 securing scheduled, 42
application links, 169 types, 47
copy process, 173 job server clusters
data, 37, 141 adding, 54
data into cubes, 172 rights, 37
e.Lists, 37, 92, 98 job servers, 54
example, 172 adding, 54
loading, 174 adding objects, 56
macros, 21, 169 definition, 400
models, 169 rights, 37
multiple times, 147
preparing, 174 L
rights, 92 lagging
rights file formats, 106 editor, 251
SAP BW data, 166 large access tables, 125


large objects (LOBs, BLOBs, CLOBs), 358 Cognos Connection, 41


LDAP namespace, 28 creating, 192
libraries definition, 401
Analyst guidelines, 337 deleting commentary, 214
definition, 401 development, 200
limitations, 363 executing a command line, 218
limitations when importing Cognos packages, 363 executing an Admin extension, 215
limit document size, 69 importing access tables, 21, 204
limits for data entries, 395 importing e.List and rights, 21
link access importing e.Lists and rights, 206
rights, 37 managing job servers, 197
linking production, 208
to existing applications, 37 publishing, 21, 208
linking to applications, 86 rights, 40
linking to a publish container rights needed to transfer, 40
rights, 37 running, 220
link modes, 147 securing scheduled, 42
links synchronizing, 21
administration, 145, 148, 153 troubleshooting, 225
Analyst - Contributor, 347 upgrading, 330
annotations, 288 upload development model, 207
changing, 248 maintaining the application
client run, 142 rights, 37
local, 143 manage extensions
memory usage, 351 rights, 37
model designs, 144 managing
order, 146 jobs, 47, 50
system, 142, 159, 160 sessions, 59
using to move data, 144 match descriptions
loading data, 174, 360 definition, 401
LOBs, See large objects (LOBs, BLOBs, CLOBs) matrix management, 145
local links, 143 maximum number of attached documents, 290
definition, 401 maximum workspace
lock definition, 401
definition, 401 MAXLOCKS setting, 357
lock escalation, 358 measures dimension, 305
LOCKLIST setting, 357 metadata, 357
LogFetcher utility, 385 organizing data using Framework Manager, 305
lookup d-links Microsoft Active Directory, 28
definition, 401 Microsoft Excel, 84
design considerations, 313
M migrating applications, 168
macros, 21 model
administrator links, 216 designing Analyst model for Contributor, 337
authentication, 41 model and data block sizes, 138
automating tasks, 191 model changes screen, 246


model designs offline grids


using links, 144 definition, 402
models offline working
advanced changes, 180 preventing, 72
changes that impact publish tables, 258 Oracle, 46
creating in Framework Manager, 305 orientation, 70
definition, 241, 401 out of memory error when exporting during
details, 66 deployment, 171
using Generate Framework Manager Model owners, 91
functionality, 308 definition, 402
modify datastore connection details ownership
rights, 37 e.List items, 108
modifying rights manually, 107
monitored applications, 56 P
Monitoring Console, 59, 171 parameters
moving data using links, 144 dataCacheExpirationThreshold, 308
multi-administration roles, 20 LOCKLIST, 357
multi e.List item views, 70 MAXLOCKS, 357
multiple access tables, 132 percentages, 314
multiple administrators, 20 performance
multiple owners CPU usage, 156
e.List items, 91 model changes, 156
tuning settings, 158
N variables, 155
namespace permissions, 101, 356
validate users, 109 planner, 24
namespaces, 27 rights, 105
definition, 401 planner-only cubes, 75
deleting, 29 planners
multiple, 28 definition, 402
restoring, 29 Planning Administration Domain
upgrade, 335 upgrading, 330
See Also authentication providers Planning Contributor Users, 31
naming conventions, 357 Planning Data Service, 304
navigation, 69 planning package, 240, 244
No Data access level, 116 Framework Manager, 304
NTLM namespace, 28 Go to Production, 304
numerical cells Planning Rights Administrator, 31
limits, 396 Planning tables
creating, 45
O plans
objects, 357 developing, 22
offline post production tasks, 253
store, 88 precalculated summaries, 257
prepared data blocks, 175
preparing data, 174


preproduction process, 252 select e.List items, 257


prevent client-side reconciliation, 79 table-only layout, 262
preventing publish containers
client-side reconciliation, 53 definition, 402
prevent offline working, 72 rights, 37
preview data publish data
rights, 37 rights, 37
previewing publishing
e.List item properties, 295 definition, 402
e.List items, 100 macros, 21
production workflow, 295 publishing attached documents, 292
properties, 295 publish options, 77
previewing the production workflow, 295
printing copyright material, 15 R
Print to Excel extension, 315 read access, 116
privileges, 356 recalculate after every cell change, 70
production application
definition, 402 changes to access tables, 133
production definition, 402
tasks, 242 e.Lists, 101
production applications, 241 prevent client-side, 79
rights, 37 record audit annotations, 72
production environment, 168 reject depth, 102
production workflow related documentation, 13
previewing, 295 removing applications, 58
projects rights, 37
creating in Framework Manager, 305 reordering
prompt to send email on reject, 72 e.List items, 99
prompt to send email on save, 72 reporting directly from publish tables, 257
prompt to send email on submit, 72 reporting on live data, 304
prompt to send email when user takes ownership, 72 resetting development to production, 25
properties restoring
previewing, 295 namespaces, 29
protect review depth, 102
definition, 402 review e.List items, 104
providers reviewer edit
security, 28 allowing, 72
publish, 255, 275 reviewers, 24
access rights, 256 access levels, 134
data dimensions, 260 definition, 403
data types, 267 rights, 104
export tables, 266 reviews, 83
hierarchy tables, 264 rights, 103
items tables, 263 assigning, 22
layouts, 255 default, 393
scripts, 256 definition, 403


e.List items, 101 select color for changed values, 70


file formats, 106 send email on reject
inherited, 104 prompt, 72
modifying, 107, 108 server call timeout, 380
reordering, 108 servers for datastores, 46
submitting, 104 server-side reconciliation, 52
summary, 103, 108 sessions
user, 103 managing, 59
roles, 29 set an application on or offline
validate, 109 rights, 37
row-level locking, 358 Set Offline function, 25
rule sets Set Online function, 25
associating to e.Lists, 237 setting data cache expiration, 308
defining fail actions, 235 simplifying models, 314
running jobs, 23 size of models, 66
run order of jobs, 48 size of models and data blocks, 138
slice and dice
S allowing, 70
sample HTML text, 376 source files
SAP BW creating, 172
importing data, 166 SQL, 357
limitations, 363 SQL Server, 46
SAP namespace, 28 static objects, 357
saved selections, 20, 111 stop job processing, 54
definition, 403 submitting rights, 104
editing, 112 synchronize
Save function, 25 definition, 403
saving application XML for support, 77 synchronizing
scenario dimensions, 257 advanced model changes, 180
scheduler credentials, 42 avoiding data loss, 178
jobs, 49 data loss, 177
script.sql, 67 examples, 179
scripts creation path, 77 Generate Scripts option, 178
searching macros, 21
e.Lists, 94 rights, 37
rights, 94 system administrator, 27
translating, 189 system link, 77
securing jobs, 49 system links, 142
security, 356 creating, 159
access control, 27 definition, 403
providers, 28 running, 160
third-party authentication, 28 using to move commentary, 148
upgrade, 335 system locale, 190
validate users, 109 system settings, 42
Web client settings, 85
security overview, 32


T upgrading
table-level locking, 358 Admin extensions, 329
table-only layouts administration links and macros, 330
definition, 403 Analyst - Contributor links, 329
table-only publish layout, 262 applications, 58, 329
table-only publish post GTP, 77 Contributor web site, 336
take ownership planning administration domain, 330
send email, 72 rights, 37
test environment, 168 Web sites, 336
text formatted cells what is not upgraded in Contributor, 329
data entry limits, 395 wizards, 332
timeout, 380 use client-side cache, 72
translating user annotations, 287
help, 189 behavior, 288
searches, 189 restrictions, 287
strings, 185 user models, 308
translation users, 19, 29, 98
application tabs, 185 annotations, 287
assigning to users, 183 classes and permissions, 30
changes, 183 loss of access to e.List items, 251
cycles, 183 validate, 109
exporting files, 188
importing and exporting files, 187 V
importing files, 188 validate
rights, 37 users, 109
trees, 83 validation methods, 227
troubleshooting version dimensions, 257
Generate Framework Manager Model view application details, 77
extension, 361 view depth, 98, 102
importing Cognos packages, 363 viewing
importing from Cognos package, 366 imported access tables, 124
macros, 225 view layout
modeled data import, 366 definition, 403
unable to change model design language, 362 view rights, 108
unable to connect to Oracle database, 361
unable to create framework manager model, 361 W
unable to retrieve session namespace, 362 warning messages
importing the e.List and rights, 93
U Web clients
underlying values, 314 settings, 85
unowned items, 91 web client settings, 69
unpublished Contributor data, 308 web client status refresh rate, 72
unregistering Web sites, 83
namespaces, 29 creating, 23
upgrading, 336


wizards X
Import from IQD wizard, 315 XML
workflow state definition, 297 default locations and filenames, 390
write access, 116
