
IBM StoredIQ Platform

Version 7.5.0.1

Data Server Administration Guide 

SC27-5692-00


Note: Before using this information and the product it supports, read the information in "Notices" on page 161.

This edition applies to Version 7.5.0.1 of IBM StoredIQ Platform (product number 5725-M86) and to all subsequent releases and modifications until otherwise indicated in new editions.

© Copyright IBM Corporation 2001, 2013.

US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents

How to send your comments
Contacting IBM
Contacting IBM StoredIQ customer support
Understanding IBM StoredIQ Platform distributed architecture
   What is IBM StoredIQ Administrator?
   What is IBM StoredIQ Data Workbench?
   Supporting IBM StoredIQ Data Workbench
   What is IBM StoredIQ eDiscovery?
   What is IBM StoredIQ Data Script?
   What is IBM StoredIQ Policy Manager?
Understanding the user interface
   Navigating within IBM StoredIQ Platform
   Web interface icons and buttons
Performing IBM StoredIQ Platform administration
   Checking IBM StoredIQ Platform's status
   Restarting and rebooting IBM StoredIQ Platform
   Logging in and out of the system
   Managing users
Configuring IBM StoredIQ Platform
   Configuring system configuration options
   Configuring application configuration options
Creating volumes and data sources
   Configuring server platforms
   Creating volumes
   Creating retention volumes
   Creating discovery export volumes
   Creating system volumes
   Exporting and importing volume data
   Deleting volumes
Using Folders
Using audits and logs
   Understanding harvest audits
   Understanding import audits
   Understanding event logs
   Understanding policy audits
   Understanding the search audit feature
Harvesting data
   Understanding Harvests
   Performing a lightweight harvest
Configuring jobs
Working with jobs
Desktop collection processes
Appendix A. Supported server platforms and protocols
Appendix B. Supported file types
Appendix C. Event log messages
Notices
Index

How to send your comments

Your feedback helps IBM® to provide quality information. Please share any comments that you have about this information or other documentation that IBM Software Development ships with its products. You can use any of the following methods to provide comments:
v Add comments by using the Comments pane at the bottom of every page in the information center.
v Send your comments by clicking the Feedback link at the bottom of any topic in the information center.
v Send your comments by e-mail to comments@us.ibm.com. Include the name of the product, the version number of the product, and the name and publication number of the information (if applicable). If you are commenting on specific text, please include the location of the text (for example, a title, a table number, or a page number).
v Send your comments by using the online readers' comment form at http://www.ibm.com/software/data/rcf/.


Contacting IBM

To contact IBM customer service in the United States or Canada, call 1-800-IBM-SERV (1-800-426-7378). To learn about available service options, call one of the following numbers:
v In the United States: 1-888-426-4343
v In Canada: 1-800-465-9600
For more information about how to contact IBM, see the Contact IBM web site at http://www.ibm.com/contact/us/.


Contacting IBM StoredIQ customer support

For IBM StoredIQ® technical support or to learn about available service options, contact IBM StoredIQ customer support at this phone number:
v 1-866-227-2068
To e-mail IBM StoredIQ customer support, use this email address:
v storediqsupport@us.ibm.com
For more information about how to contact IBM, see the Contact IBM web site at http://www.ibm.com/contact/us/.


Understanding IBM StoredIQ Platform distributed architecture

Understanding IBM StoredIQ Platform distributed architecture, its functions, and how the different components work together is key to understanding how the system works.

v IBM StoredIQ Platform Data Server—IBM StoredIQ Platform Data Server is an invaluable tool in helping you to truly understand your company's data landscape. Using IBM StoredIQ Platform Data Server, all unstructured data in a company's enterprise network is indexed. Indexing of unstructured data allows you to gain information about it. By indexing this data, you gain information about unstructured data such as file size, file data types, and file owners.
v IBM StoredIQ Data Workbench—IBM StoredIQ Data Workbench is the tool that allows you to visualize this indexed data, helping you to identify potential red-flag issues, to know how much data you have on different types of servers, and to alert people about potentially interesting or useful data. IBM StoredIQ Data Workbench helps you to ensure that your company's data is an asset, not a liability.
v IBM StoredIQ eDiscovery—IBM StoredIQ eDiscovery is integral to the eDiscovery process, helping you to discover, identify, refine, preserve, and collect data that is relevant to legal matters. By using IBM StoredIQ eDiscovery, you can perform eDiscovery work more efficiently while ensuring that you've captured the proper data.
v IBM StoredIQ Data Script—IBM StoredIQ Data Script enables automated execution within IBM StoredIQ Platform, allowing you to script, automate, and monitor processes that would normally be a manual process run within IBM StoredIQ Data Workbench. IBM StoredIQ Data Script focuses on repeatable, understood, and approved processes for the purposes of culling and refining data in an approved manner.
v IBM StoredIQ Administrator—IBM StoredIQ Administrator monitors and manages the IBM StoredIQ Platform distributed infrastructure at a customer site. IBM StoredIQ Administrator sits between the IBM StoredIQ Platform interface and the applications and facilitates the transfer and communication of information. The IBM StoredIQ Platform Administrator understands and manages IBM StoredIQ Platform concepts such as volumes, indexes, harvests, and configurations. At the same time, the Administrator is also concerned with application concerns such as infoset lifecycle and creation, and action and target set management. To this end, the Administrator is divided into two sections, platform and application, so Administrators have a clear place to go in order to accomplish a task.
v IBM StoredIQ Policy Manager—IBM StoredIQ Policy Manager allows you to act on your data in an automatic fashion at scale, executing policies that affect data objects without requiring review.

What is IBM StoredIQ Administrator?

IBM StoredIQ Administrator helps you to manage global assets common to the distributed infrastructure behind IBM StoredIQ Platform applications.

IBM StoredIQ Administrator provides at-a-glance understanding of the different issues that can crop up in the IBM StoredIQ Platform environment. These views are unique to the IBM StoredIQ Administrator application as they provide an overview of how the system is running, allow access to various pieces of information that are being shared across applications, or allow for the management of resources in a centralized manner.

The Administrator is the person responsible for managing the IBM StoredIQ Platform. This individual has a strong understanding of data sources, data servers, volumes, indexes, infosets, jobs, and actions.

This list provides an overview as to how IBM StoredIQ Administrator works:
v Viewing data servers and volumes—Using IBM StoredIQ Administrator, the Administrator can identify what data servers are deployed, their location, what data is being managed, and the status of each data server in the system. Volume management is a central component of IBM StoredIQ Platform. IBM StoredIQ Administrator also allows the Administrator to see what volumes are currently under management, which data server is responsible for that volume, the state of the volume after indexing, and the amount and size of information that is contained by each volume. Administrators can also add volumes to data servers through this interface.
v Creating system infosets—System infosets that use only specific indexed volumes can be created and managed within IBM StoredIQ Administrator. Although infosets are a core component of IBM StoredIQ Data Workbench, system infosets are created as a shortcut for users in IBM StoredIQ Administrator.
v Managing users—The user management area allows Administrators to create users and manage users' access to the various IBM StoredIQ applications.
v Configuring and managing actions—An action is any process that is taken upon the data that is represented by the indexes. Actions are run by data servers on indexed data objects. Any errors or warnings that are generated as a result of an action are recorded as exceptions in IBM StoredIQ Data Workbench. Note: Actions can be created within IBM StoredIQ Administrator and then made available to other IBM StoredIQ applications such as IBM StoredIQ Data Workbench.
v Managing target sets—Provides an interface that allows the user to set the wanted targets for specific actions that require a destination volume for their actions.
v Managing concepts—This feature allows you to relate business concepts to indexed data.

Related concepts:
Viewing data servers and adding volumes
Creating system infosets
Managing users
Configuring and managing actions
Managing target sets
Managing concepts

What is IBM StoredIQ Data Workbench?
This section provides a high-level explanation of what IBM StoredIQ Data
Workbench is and its potential uses.

You have a big data problem: SharePoint sites, wikis, email, files, blogs, discussion
threads, and attachments. Your company's information is its most valuable—and
potentially most dangerous—asset.

Big data is a pervasive problem, not a one-time occurrence. Most companies
willingly admit that big data is problematic, but quite often, they don't even know
what problems they have. This is one instance where out of sight definitely does
not mean out of mind. Big data is all about the unknown, but the unknown cannot
be off limits. Plausible deniability is a risky bet. IBM StoredIQ Data Workbench can
help you learn about your data, to help you make educated decisions with your
most valuable asset, and can help you turn your company's most dangerous risk
into its most valuable asset.

IBM StoredIQ Data Workbench is a data visualization and management tool that
helps you to actively manage your company's data. IBM StoredIQ Data Workbench
helps you to determine how much data you have, where it resides, who owns it,
when it was last utilized, and so on. Then, once you have a clear understanding of
your company's data landscape, IBM StoredIQ Data Workbench helps you take
control of and make informed decisions about your data and act on that
knowledge by copying, copying to retention, or performing a discovery export.
That once-risky data is now a legitimate company asset.

Here are just some of the ways you could use IBM StoredIQ Data Workbench.
v Let's say that you need to find all company email sent from or received by
Eileen Sideways (esideways@thecompany.com). You can use IBM StoredIQ Data
Workbench to find all email and then copy that data to a predefined repository.
You could also use IBM StoredIQ Data Workbench to find all of the
esideways@thecompany.com email that occurred between specific dates and then
make that email available for review.
v As an administrator, you'd like to rid your networks and storage of unused data.
You can use IBM StoredIQ Data Workbench to find all files that have not been
modified in more than five years.
v You would like to find all image files created in 2007. Not only can IBM
StoredIQ Data Workbench find all image files created in 2007, it can also tell you
how much space they occupy on your network.
v A user needs to understand how data regarding Windows is being retained.
Using IBM StoredIQ Data Workbench, you can provide that user with a visual
overview of the number of objects retained and a breakdown of files per data
source. Additionally, you can apply overlays to show the user if those files
contain forbidden information such as credit-card numbers or Social Security
numbers.
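The queries above are run from within IBM StoredIQ Data Workbench itself. Purely as an illustration of the underlying idea, not of any product API, the "unused data" and "image files" examples amount to file-metadata filters. A minimal stand-alone sketch follows; the extension list and the five-year cutoff are illustrative assumptions, not product settings:

```python
import os
import time

IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif", ".bmp", ".tiff"}  # illustrative
FIVE_YEARS_SECONDS = 5 * 365 * 24 * 60 * 60

def scan_volume(root):
    """Walk a directory tree and collect (path, size, mtime) for each file."""
    results = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                stat = os.stat(path)
            except OSError:
                continue  # unreadable file; skip, much as a harvester records an exception
            results.append((path, stat.st_size, stat.st_mtime))
    return results

def unmodified_for_five_years(files, now=None):
    """Files whose last-modified time is more than five years in the past."""
    now = time.time() if now is None else now
    return [f for f in files if now - f[2] > FIVE_YEARS_SECONDS]

def image_space_usage(files):
    """Total bytes occupied by image files, judged here by extension only."""
    return sum(size for path, size, _ in files
               if os.path.splitext(path)[1].lower() in IMAGE_EXTENSIONS)
```

A real harvest captures far more, such as owners and content types detected from file contents rather than extensions; this sketch only shows the shape of a metadata query.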
Related concepts:
“Understanding IBM StoredIQ Platform distributed architecture” on page 1


Supporting IBM StoredIQ Data Workbench
IBM StoredIQ Platform is an invaluable tool in helping you to truly understand
your company's data landscape. Using IBM StoredIQ Platform data server, all
unstructured data in a company's enterprise network is indexed. Indexing of
unstructured data allows you to gain information about it. By indexing this data,
you gain information about unstructured data such as file size, file data types, file
owners, and so on.

IBM StoredIQ Data Workbench is the tool that allows you to visualize this indexed
data, helping you to identify potential red-flag issues, to know how much data you
have on different servers, and to alert people about potentially interesting or useful
data. IBM StoredIQ Data Workbench helps you to ensure that your company's data
is an asset, not a liability.

What is IBM StoredIQ eDiscovery?
This section provides a high-level explanation of what IBM StoredIQ eDiscovery is,
by whom it will be used, prerequisites, and its potential uses.

IBM StoredIQ eDiscovery is an end-user application that helps legal users during
the initial phases of the eDiscovery process. By helping you to capture needed
electronic data, you can use IBM StoredIQ eDiscovery to communicate easily with
other users regarding the status and review process of work being done. IBM
StoredIQ eDiscovery does not drive the eDiscovery process, but instead helps legal
users to control and communicate those processes more effectively.

IBM StoredIQ eDiscovery helps to address the left side of the Electronic Discovery
Reference Model.

Here are just some of the ways you could use IBM StoredIQ eDiscovery.
v Let's say that you need to find all electronic information regarding an upcoming
personal-injury lawsuit. You can use IBM StoredIQ eDiscovery to create a matter
for the suit, and then create boxes to contain email and reports regarding the
case.
v As a legal user, you'd like to see the status of all currently active matters. You
can use the Matter Dashboard to visualize the different matters' states.


v As a data expert, you know that you need to respond to users in a timely
fashion. Using IBM StoredIQ eDiscovery, people can email you directly from the
application regarding a matter's questions.

Proactive eDiscovery consists of identifying and cataloging data or types of data
that may eventually be responsive to legal matters, collecting and retaining the
data, then producing that data in a way that matches accepted/required legal
practices.

Within IBM StoredIQ eDiscovery, the user is a non-technical end user who needs to
obtain certain pieces of data in order to complete a legal process. The IBM
StoredIQ eDiscovery user has the knowledge of the information or parameters for
what kinds of data they want, but does not always have knowledge of how to
obtain that data.

Before using IBM StoredIQ eDiscovery, ensure that prerequisites are met:
v The IBM StoredIQ Platform must be deployed, configured, and ready for use.
v IBM StoredIQ eDiscovery is dependent upon other IBM StoredIQ applications
such as IBM StoredIQ Data Workbench and IBM StoredIQ Administrator.
Related concepts:
“Understanding IBM StoredIQ Platform distributed architecture” on page 1

What is IBM StoredIQ Data Script?
IBM StoredIQ Data Script enables automated execution, allowing you to script,
automate, and monitor processes that would normally be a manual process run
within IBM StoredIQ Data Workbench. IBM StoredIQ Data Script focuses on
repeatable, understood, and approved processes for the purposes of culling and
refining data in an approved manner.

Through the IBM StoredIQ Data Script interface, you can monitor each of the steps
as they are executed and then view any defined outputs, such as reports, infosets,
or exports that are generated as a result. By running IBM StoredIQ Data
Workbench workflows, the user can reapply processes that have been reviewed
and approved to apply to a wide variety of different data problems.
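IBM StoredIQ Data Script's own interface and workflow format are product-specific and are not described here. As a generic sketch of the idea only, a repeatable, pre-approved sequence of steps whose outputs are recorded as it runs might be modeled like this; all step names and values below are hypothetical:

```python
from datetime import datetime, timezone

class Workflow:
    """A fixed, pre-approved sequence of named steps, run in order.

    Each step is a callable; its return value is recorded as that step's
    output, mirroring how a scripted run produces reports, infosets, or
    exports that can be reviewed afterward.
    """

    def __init__(self, name, steps):
        self.name = name
        self.steps = list(steps)  # [(step_name, callable), ...]
        self.log = []             # (step_name, start_time, status) per step

    def run(self):
        outputs = {}
        for step_name, action in self.steps:
            started = datetime.now(timezone.utc)
            result = action()
            self.log.append((step_name, started.isoformat(), "completed"))
            outputs[step_name] = result
        return outputs

# Hypothetical steps standing in for harvest, filter, and report actions:
wf = Workflow("stale-data-review", [
    ("harvest", lambda: 1250),          # e.g. number of objects indexed
    ("filter",  lambda: 310),           # e.g. objects matching criteria
    ("report",  lambda: "review.pdf"),  # e.g. generated output artifact
])
```

The point of the design is that the step list is fixed up front and only its execution is monitored, which is what makes a run repeatable and reviewable.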
Related concepts:
“Understanding IBM StoredIQ Platform distributed architecture” on page 1

What is IBM StoredIQ Policy Manager?
IBM StoredIQ Policy Manager enables policy execution at scale.

IBM StoredIQ Policy Manager allows users to run mature policies and processes at
scale across a wider range of data. What makes IBM StoredIQ Policy Manager so
powerful is that it lets users define and execute systemwide policies, focusing on
the execution of the process rather than understanding or reviewing affected data
objects. Additionally, IBM StoredIQ Policy Manager's reports let you record what
actions were performed, when they were performed, and what data was affected
by the policy's execution. IBM StoredIQ Policy Manager is an extremely powerful
tool in managing your data effectively.



Understanding the user interface

This section provides an introduction to the IBM StoredIQ Platform Web interface. It outlines the features within each tab and provides references to sections where you can find additional information on each topic.

Navigating within IBM StoredIQ Platform

Primary tabs and subtabs found within the user interface allow you to access data-server functionality. IBM StoredIQ Platform users perform most tasks using the Web interface. The menu bar at the top of the interface contains three primary tabs that are described in this table.
v Administration—Allows Administrators to perform various configurations on these subtabs: Dashboard, Data Sources, and Configuration.
v Folders—Create folders and jobs. Run jobs.
v Audit—Examine a comprehensive history of all harvests, executed policies, imports, and event logs.
Related concepts:
"Web interface icons and buttons" on page 9

Administration

This topic provides descriptions of the Administration tab as well as its subtabs: Dashboard, Data Sources, and Configuration.

Dashboard

The Dashboard subtab provides an overview of the system's current, ongoing, and previous processes as well as its current status. This table describes Administrator-level features and descriptions.
v Page refresh—Choose from 30-, 60-, or 90-second intervals to refresh the page.
v Today's job schedule—View a list of jobs scheduled for that day with links to the job's summary.

v Harvest statistics—Review the performance over the last hour for all harvests.
v Jobs in progress—View details of each job step as it is running, including estimated time to completion, average speed, total system and contained data objects encountered, harvest exceptions, and binary processing information.
v Event log—Review the last 500 events or download the entire event log for the current date or previous dates.
v Appliance status—Provides a status view of the appliance. Reboot or restart the appliance through the about appliance link. View cache details for volumes and discovery exports.
v System summary—View a summary of system details, including system data objects, contained data objects, volumes, and the dates of the last completed harvest.

Data Sources

The Data sources subtab is where Administrators define servers and volumes. These can be places that are indexed or copied to. A variety of server types and volumes can be configured for use in managing data. Administrators can add Enterprise Vault sites, Celerra, Centera pools, Dell DX Storage Clusters, NetApp, and FileNet servers through the Specify servers area. Volumes are configured and imported in the Specify volumes section.

Configuration

The Administrator configures system and application settings for IBM StoredIQ Platform through the Configuration subtab.

System settings:
v DA Gateway settings—Configure the DA Gateway host or IP address.
v Network settings—Configure the private and public network interfaces.
v Mail server settings—Configure what mail server to use and how often to send email.
v SNMP settings—Configure Simple Network Management Protocol (SNMP) servers and communities.
v Manage users—Add, remove, and edit users.
v Lotus Notes user administration—Add a Lotus Notes user.
v System time and date—Set the system time and date on the appliance.

Application settings:
v Harvester settings—Set basic parameters and limits, data object extensions and directories to skip, and reasons to run binary processing.
v Full-text settings—Set full-text search limits for length of word and numbers and edit stop words. See "Configuring full-text index settings" on page 28.
v Data object types—Set the object types that appear in the disk use by data object type report.
v Audit settings—Configure how long and how many audits will be kept.
v Hash settings—Configure whether to compute a hash when harvesting and which kind of hash.
v Desktop settings—Configure the desktop collection service.
Related concepts:
"Creating volumes and data sources" on page 33
"Navigating within IBM StoredIQ Platform" on page 7

Folders

The Folders tab is where users create and manage application objects.
Related concepts:
"Navigating within IBM StoredIQ Platform" on page 7
"Using Folders" on page 85

Audit

The audit feature allows you to review all actions taken using the data server. Administrators can review harvests and examine the results of actions.
Related concepts:
"Using audits and logs" on page 89
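The full-text settings mentioned here control which words make it into the index. As a generic illustration of how stop words and word-length limits prune index terms, and not of the product's actual implementation, consider this sketch; the stop-word list and length limits are illustrative, not the product defaults:

```python
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to"}  # illustrative list
MIN_WORD_LENGTH = 2   # illustrative limits, not product defaults
MAX_WORD_LENGTH = 50

def index_terms(text):
    """Lowercase, split, and drop stop words and out-of-range tokens,
    the way a full-text harvester prunes terms before indexing."""
    terms = []
    for token in text.lower().split():
        token = token.strip(".,;:!?\"'()")
        if not token or token in STOP_WORDS:
            continue
        if not (MIN_WORD_LENGTH <= len(token) <= MAX_WORD_LENGTH):
            continue
        terms.append(token)
    return terms
```

Raising the minimum length or enlarging the stop-word list shrinks the index at the cost of making some short or common terms unsearchable, which is the trade-off these settings expose.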

Web interface icons and buttons

This section describes the icons and buttons used throughout the IBM StoredIQ Platform Web interface.

IBM StoredIQ Platform icons and buttons

These icons and buttons are seen within the IBM StoredIQ Platform dashboard.
v User—The User account icon accesses your user account, provides information regarding version and system times, and logs you out of the system. For more information, see Logging In and Out of the System.
v Inbox—The inbox link allows you to access PDF audit reports.
v Help—Clicking the Help icon loads IBM StoredIQ technical documentation in a separate browser window. By default, the technical documentation is loaded as HTML help.

Folders icons and buttons

The table describes the different icons and buttons seen within the Folders tab.
v New button—The New button enables you to add jobs and folders.
v Action button—The Action button allows you to take action on Workspace objects, including the ability to move and delete jobs and folders.
v Folder—Folders are a container object that can be accessed and used by Administrators. For more information, see Using Folders.
v Job—Jobs are either a step or a series of steps that complete tasks such as harvesting. For more information, see Types of IBM StoredIQ Platform Jobs.

v Folder Up—By clicking this icon, you move to the parent folder in the structure. By default, you view the contents of the Workspace folder. For more information, see Using Folders.

Audit icons and buttons

There are no specialized icons or buttons used on the Audit tab.

Related concepts:
"Understanding the user interface" on page 7


Performing IBM StoredIQ Platform administration

This section provides procedural information regarding IBM StoredIQ Platform administration.

Checking IBM StoredIQ Platform's status

This topic provides procedural information regarding how to check IBM StoredIQ Platform's status. To check the status of IBM StoredIQ Platform:
1. Go to Administration > Dashboard > Appliance status.
2. Click About appliance to open the Appliance details page. The Appliance details page shows the following information:
v Node
v Harvester processes
v Status
v Software version
v View details link
3. Click the View details link for the controller.

This table defines appliance details data and describes the data provided for the node.

Option Description
View appliance details
Shows software version and details of harvester processes running on the controller for the appliance.
Application services
Shows a list of all services and current status, including:
v Service—the name of each service on the appliance component
v PID—the process ID associated with each service
v Status—the current status of each service. Status messages include Running, Stopped, Initializing, Error, and Unknown.
v Current memory (MB)—the memory being used by each service
v Total memory (MB)—total memory being used by each service and all child services
v CPU percentage—the percentage of CPU usage for each service. This value is zero when a service is idle.

Option Description
System services
Shows a list of basic system information
details and memory usage statistics. System
information includes:
v System time—current time on the
appliance component
v GMT Offset—the amount of variance
between the system time and GMT.
v Time up—the period of time the appliance
component has been running since the
last reboot, in days, hours, minutes and
seconds
v System processes—the total number of
processes running on the node
v Number of CPUs—the number of CPUs in
use on the component
v Load Average (1 Minute)—the average
load for system processes during a
one-minute interval
v Load Average (5 Minutes)—the average
load for system processes during a
five-minute interval
v Load Average (10 Minutes)—the average
load for system processes during a
ten-minute interval

Memory details include:
v Total—total physical memory on the
appliance component
v In use—how much physical memory is in
use
v Free—how much physical memory is free
v Cached—amount of memory allocated to
disk cache
v Buffered—the amount of physical memory
used for file buffers
v Swap total—the total amount of swap
space available (in use plus free)
v Swap in use—the total
amount of swap space being used
v Swap free—the total amount of swap
space free
v Database Connections:
– Configured
– Active
– Idle
v Network interfaces:
– Up or down status for each interface


Option Description
Storage
Storage information for a controller includes:
v Volume
v Total Space
v Used space
v Percentage
Controller and compute node status
Indicator lights show component status:
v Green—Running
v Yellow—The node is functional but is in
the process of rebuilding; performance
may be degraded during this time. Note:
The rebuild will progress faster if the
system is not in use.
v Red—not running

Expand the node to obtain details of the
appliance component by clicking the image.

Related concepts:
“Performing IBM StoredIQ Platform administration” on page 13

Restarting and rebooting IBM StoredIQ Platform
This topic provides procedural information regarding how to restart and reboot the
IBM StoredIQ Platform.

Note: The Web application is temporarily unavailable if you restart or reboot.

To restart services or reboot IBM StoredIQ Platform:
1. Go to Administration > Dashboard > Appliance status. On the Appliance
status page, you have two options:
v Click the Controller link.
v Click About Appliance.
2. The Restart services and Reboot buttons appear at the bottom of the window.
These buttons are available on the View details page and on each of the tabs.
Click either of these options:
v Restart services to restart all system services running on the node.
v Reboot to reboot the IBM StoredIQ Platform components.
The Web application is temporarily unavailable if you restart or reboot.
Related concepts:
“Performing IBM StoredIQ Platform administration” on page 13


Performing system configurations
This section provides procedural information regarding system configurations.

The Configuration subtab (Administration > Configuration) is divided into
System and Application sections.

System Section
In the System section, an Administrator can:
v Configure the DA gateway. For more information, see Configuring DA Gateway
settings.
v View and modify network settings, including hostname, IP address, and NIS
domain membership. See Configuring network settings.
v View and modify date and time settings for the IBM StoredIQ Platform. See
Setting the system time and date.
v View and modify settings to enable the generation of email notification
messages. See Configuring mail settings.
v Configure SNMP servers and communities. See Configuring SNMP settings.
v Manage notifications for system and application events. See “Configuring
notifications from IBM StoredIQ Platform” on page 21.
v Manage users. See Managing Users.
v Upload Lotus Notes user ids so that encrypted NSF files can be imported into
IBM StoredIQ Platform. See Importing Encrypted NSF Files from Lotus Notes.
v Set backup configurations. See “Setting system backup configurations” on
page 22.

Application Section
In the Application section, an Administrator can:
v Specify directory patterns to exclude during harvests. See Configuring
Harvester Settings.
v Specify options for full-text indexing. See Configuring Full-text Index
Settings.
v View, add, and edit known data object types. See Specifying Data Object
Types.
v View and edit settings for policy audit expiration and removal. See
Configuring Audit Settings.
v Specify options for computing hash settings when harvesting. See Configuring
Hash Settings.
v Specify options to configure the desktop collection service. See “Configuring
desktop settings” on page 31.
Related reference:
Appendix B, “Supported server platforms and protocols,” on page 129

Configuring IBM StoredIQ Platform
This topic lists what an Administrator can configure from the Application and
Network areas when configuring IBM StoredIQ Platform.

Configuring system configuration options
This topic lists all System configuration options.
Related concepts:
“Configuring application configuration options” on page 25
“Configuring IBM StoredIQ Platform” on page 17
“Performing system configurations” on page 17

Configuring DA Gateway settings
This topic provides procedural information regarding how to configure the DA
Gateway.

To configure DA Gateway settings:
1. Go to Administration > Configuration > System > DA Gateway Settings.
2. In the Host text box, enter the gateway host. For example, enter
mgmt.example.com or 192.168.10.10.
3. Click OK. Services must then be restarted.
Related concepts:
“Configuring system configuration options”
Related tasks:
“Restarting and rebooting IBM StoredIQ Platform” on page 15

Configuring network settings
This topic provides procedural information regarding how to configure various
network settings.

To configure network settings:
1. Go to Administration > Configuration > System > Network settings.
2. Click Controller Settings. Set or modify the following options.

Option Description
Primary Network Interface
v IP type—Set to static or dynamic. If set
to dynamic, the IP address, Netmask, and
Default Gateway fields are disabled.
v IP address—Enter the IP address if
specifying the address manually.
v Netmask—Enter the network mask of the
IP address.
v Default gateway—Enter the IP address of
the default gateway.
v Hostname—Enter the fully qualified
domain name assigned to the appliance.
v Ethernet speed—Select the Ethernet
speed.
v Available ports—Indicate the available
ports.
v Separate network for file/email
servers—Specify the additional subnet for
accessing file/email servers. Select this
check box if you are using the Web
application from one subnet and
harvesting from another subnet.

A system restart is required for any primary network interface changes to
take effect. See Restarting and Rebooting the Appliance. Changes to the
server's IP address take effect immediately. Because the server has a new IP
address, you must reflect this new address in the browser address line before
continuing.
3. Click Server name resolution. Set the following for the data server:
v Windows Share (CIFS) file server name resolution—These settings take
effect upon saving.

Option Description
Windows Share File Server Name
Resolution
v LMHOSTS—Enter the IP hostname
format.
v WINS Server—Enter the name of the
WINS server.

v DNS settings—DNS settings take effect after they have been saved.

Option Description
DNS Settings
v DNS search order—Enter the DNS search
order for multiple DNS servers.
v Nameserver 1—Set the IP address of the
primary DNS server for name resolution
(required).
v Nameserver 2—Set the IP address of the
secondary DNS server for name
resolution (optional).
v Nameserver 3—Set the IP address of the
tertiary DNS server for name resolution
(optional).

v Active Directory—These settings take effect upon saving.

Option Description
Active Directory
Active Directory server—Enter the name
of the Active Directory server.

v NIS (for NFS)—These settings take effect upon saving.

Option Description
NIS (for NFS)
v Use NIS—Select this box to enable NIS
to perform UID/GID to friendly name
resolution in an NFS environment.
v Broadcast for server on local
network—Select this box if the NIS
domain server is located on the local
network and can be discovered by
broadcasting. This option does not work
if the NIS domain server is on another
subnet.
v Specify NIS server—If not using
broadcast, specify the IP address of the
NIS domain server here.
v NIS Domain—Specify the NIS domain.

v Doc broker settings (for Documentum)

Option Description
Doc Broker Settings (for Documentum)
v Enter Host name for doc broker.
v Documentum global registry:
– Registry name
– User
– Password

4. Click OK.
Related concepts:
“Configuring system configuration options” on page 18

Configuring mail settings
This topic provides procedural information regarding how to configure mail
settings.

To configure mail settings:
1. Go to Administration > Configuration > System > Mail Server settings.
2. In Mail server, enter the name of the SMTP mail server.
3. In From address, enter a valid sender address. Some mail servers reject
email if the sender is invalid. A sender address also simplifies the
process of filtering email notifications based on the sender's email.
4. Click OK to save changes.
Related concepts:
“Configuring system configuration options” on page 18

Configuring SNMP settings
You can configure the system to make Object Identifier (OID) values available
to Simple Network Management Protocol (SNMP) client applications, as well as
to receive status information or messages about system events in a designated
trap. For information about environmental circumstances monitored by IBM
StoredIQ Platform, see the table below.

To configure SNMP settings:
1. Go to Administration > Configuration > System > SNMP settings.
2. To make OID values available to SNMP client applications, in the Appliance
Public MIB area:
v Select the Enabled check box to make the MIB available (that is, to open
port 161 on the controller).
v In the Community field, enter the community string that the SNMP clients
will use to connect to the SNMP server.
v To view the MIB, click Download Appliance MIB. This document provides the
MIB definition, which can be provided to an SNMP client application.
3. To capture messages containing status information, in the Trap destination
area:
v In the Host field, enter the common name or IP address for the host.
v In the Port field, enter the port number. Port number 162 is the default.
v In the Community field, enter the SNMP community name.
4. To modify the frequency of notifications, complete these fields in the
Environmental trap delivery area:
v Send environmental traps only every __ minutes.
Related concepts: “Configuring system configuration options” on page 18 Configuring mail settings This topic provides procedural information regarding how to configure mail settings. Go to Administration > Configuration > System > Mail Server settings. Click OK. enter the common name or IP address for the host.

v Send environmental traps again after __ minutes.
5. Click OK. Environmental traps monitored by IBM StoredIQ Platform are
described in the table below.

Option Description
siqConsoleLogLineTrap This is a straight conversion of a console log
line into a trap. It uses these parameters:
messageSource, messageID, severity,
messageText.
siqRaidControllerTrap Sent when the RAID controller status is
anything but normal. Refer to the MIB for
status code information. It uses this
parameter: nodeNum.
siqRaidDiskTrap Sent when any attached raid disk's status is
anything but OK. It uses this parameter:
nodeNum.
siqBbuTrap Battery Backup Unit (BBU) error on the
RAID controller detected. It uses this
parameter: nodeNum.
siqCacheBitTrap Caching indicator for RAID array is off. It
uses this parameter: nodeNum.
siqNetworkTrap Network interface is not UP when it should
be. It uses this parameter: nodeNum.
siqDbConnTrap Delivered when the active Postgres
connection percentage exceeds an acceptable
threshold. It uses this parameter: nodeNum.
siqFreeMemTrap Delivered when available memory falls too
low. It uses this parameter: nodeNum.
siqSwapUseTrap Sent when swap use exceeds an acceptable
threshold. Often indicates memory leakage.
It uses this parameter: nodeNum.
siqCpuTrap Sent when CPU load averages are too high.
It uses this parameter: nodeNum.
siqTzMismatchTrap Sent when a node’s time zone offset does
not match that of the controller. It uses this
parameter: nodeNum.
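Once a trap destination is configured, it can help to confirm that trap
datagrams actually reach the host entered in the Trap destination area. The
sketch below is a generic client-side check run on the trap host, not part of
the appliance: it listens on the trap port (162 is the default from the
settings above) and reports whether anything arrived. It does not decode SNMP.

```python
import socket

def wait_for_trap(port=162, timeout=60):
    """Wait for a single UDP datagram on the SNMP trap port.

    Returns the datagram size in bytes, or None if nothing arrived before
    the timeout. Binding port 162 normally requires elevated privileges.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.bind(("0.0.0.0", port))
        sock.settimeout(timeout)
        try:
            data, addr = sock.recvfrom(4096)
            return len(data)
        except socket.timeout:
            return None
    finally:
        sock.close()
```

Run it on the trap host while triggering a test event; a None result points
to a firewall or addressing problem rather than a malformed trap.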

Related concepts:
“Configuring system configuration options” on page 18

Configuring notifications from IBM StoredIQ Platform
You can configure the system to notify you using email or SNMP when certain
events occur.

For a list of events that can be configured, see Event Logs.

To configure notifications from IBM StoredIQ Platform:
1. Go to Administration > Configuration > System > Manage notifications.
2. Click Create a notification.
3. In the Event number: field, search for events by clicking Browse or by typing
the event number or sample message into the field.
4. Select the event level by clicking the ERROR, WARN, or INFO link.

Performing system configurations 21

5. Scroll through the list, and select each event by clicking on it. The selected
events appear in the create notification window. To delete an event, click the
delete icon to the right of the event.
6. In the Destination: field, select the method of notification: SNMP, or Email
address, or both. If you choose email address, enter one or more addresses in
the Email address field. If you choose SNMP, the messages will be sent to the
trap host identified in the SNMP settings window, with a trap type of
siqConsoleLogLineTrap.
7. Click OK.
8. To delete an item from the Manage notifications window, select the check box
next to the event, and then click Delete. You can also request a notification for
a specific event from the dashboard’s event log. Click the Subscribe link next
to any error message and a prepopulated edit notification screen containing the
event is provided.
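As noted under mail settings, a consistent From address simplifies filtering
notification email. A hedged illustration of such a filter on the receiving
side (the helper name and address handling are hypothetical, not part of the
product):

```python
from email import message_from_string

def is_appliance_notification(raw_message, sender):
    """Return True if the message's From header matches the configured
    sender address (compared case-insensitively; `sender` is whatever was
    entered in the From address field of the mail settings)."""
    msg = message_from_string(raw_message)
    return msg.get("From", "").strip().lower() == sender.strip().lower()
```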
Related concepts:
“Configuring system configuration options” on page 18

Setting the system time and date
This topic provides procedural information regarding how to set the system's time
and date.

Note: A system restart is required for any changes made to the system time and
date. See Restarting and Rebooting the Appliance.

To set the system time and date:
1. Go to Administration > Configuration > System > System time and date.
2. Enter the current date and time.
3. Select the appropriate time zone for your location.
4. Enable Use NTP to set system time to use an NTP server to automatically set
the system date and time for the data server. If NTP is used to set the system
time, then the time and date fields are set automatically. However, you still
need to specify the time zone.
5. Enter the name or IP address of the NTP server.
6. Click OK to save changes.
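Before saving, the NTP server entered in step 5 can be sanity-checked from any
workstation with a minimal SNTP query. This is an illustrative client-side
probe, not something the appliance provides; the 48-byte request and the
timestamp offsets follow the standard NTP packet format.

```python
import socket
import struct

NTP_DELTA = 2208988800  # seconds between the NTP epoch (1900) and Unix epoch (1970)

def parse_transmit_time(packet):
    """Extract the transmit timestamp (bytes 40-43, whole seconds since
    1900) from an NTP response and convert it to Unix time."""
    seconds, = struct.unpack("!I", packet[40:44])
    return seconds - NTP_DELTA

def sntp_time(server, timeout=5):
    """Send a mode-3 (client) SNTP request and return the server's clock
    as Unix time. Raises socket.timeout if the server does not answer."""
    request = b"\x1b" + 47 * b"\x00"  # LI=0, VN=3, Mode=3
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(request, (server, 123))
        response, _ = sock.recvfrom(48)
    finally:
        sock.close()
    return parse_transmit_time(response)
```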
Related concepts:
“Configuring system configuration options” on page 18

Setting system backup configurations
To prepare for disaster recovery, you can back up the configuration of a data server
to a gateway. This will back up volume definitions, discovery export records, and
data-server settings. This will not back up infosets, data maps, or indexes.

The gateway must be manually configured to support this backup. The gateway
is configured in the util menu; this can be done by contacting your service
representative.

To initiate the run-configuration back up:
1. Go to Administration > Configuration > System > Backup configuration.
2. Click Start backup. To better understand this button and its functionality,
please contact your service representative.
Related concepts:

22 Administration Guide

“Configuring system configuration options” on page 18
Related reference:
“Contacting IBM StoredIQ customer support” on page ix

Managing users
When managing users, you can log in and out of the system and perform various
account administration tasks.
Related concepts:
“Configuring system configuration options” on page 18

Logging in and out of the system
The system comes with a default administrative account named admin, the default
password for which is admin. For security purposes, IBM StoredIQ Platform
recommends that this password be changed as soon as possible.

To log into or log out of the Web application:
1. Open a browser and in the address line, enter the URL for your system.
2. On the Login page, enter admin for the email address and admin for the
password the first time you log in.
3. Click Log In to enter the system.

Note: Database Compactor—If someone tries to log in while the appliance is
performing database maintenance, the Administrator can override the
maintenance procedure and use the system. For more information, see Types of
IBM StoredIQ Platform Jobs.

4. To log out of the application, click the User account icon in the upper
right-hand corner, and then click the Log out link.
Related concepts:
“Managing users”

Changing the administrative account
To change the admin account:
1. On the Administration > Configuration tab, click Manage users.
2. From the list, select The Administrator account, and then select Change the
“admin” password.
3. Enter a new password, and then click OK to save the change.
Related concepts:
Administering Accounts
“Managing users”

Creating users
To create a user:
1. Go to Administration > Configuration > Manage users, and then click
Create new user.
2. In the First name field, enter the user's first name.
3. In the Last name field, enter the user's last name.
4. In the Email address field, enter the user's email address.
5. For Authentication, select Active Directory or Local.


6. In the Active Directory principal field, enter the name of the Active
Directory principal.
7. Select the Administrator button to select the Role.
8. Select the View data objects in the viewer check box if you would like to
see data objects in the viewer.
9. To receive Notification of reports by email, select either Yes or No.
10. Click OK to create the user.
Related concepts:
Administering Accounts
“Managing users” on page 23

Editing users
To edit a user:
1. Go to Administration > Configuration > Manage users, and then click the
name of the user that you would like to edit.
2. Click Edit User and edit the fields as needed.
3. Click OK to save your changes.
Related concepts:
“Managing users” on page 23

Locking a user's account
To lock a user account:
1. On the Administration > Configuration page, click Manage users.
2. In the list, select the user name of the account you wish to lock.
3. Click Lock user account to disable the account. A padlock icon appears next
to the account name in the list, indicating that the account is now locked.
The user cannot log in to the account while it is locked.
Related concepts:
Administering Accounts
“Managing users” on page 23

Unlocking a user's account
To unlock a user account:
1. On the Administration > Configuration page, click Manage users.
2. In the list, select the user name of the account you wish to unlock.
3. Click Unlock account to re-enable the account.
Related concepts:
Administering Accounts
“Managing users” on page 23

Deleting a user's account
To delete a user account:
1. Go to Administration > Configuration > Manage users, and then click the
name of the user that you would like to delete.
2. Click Delete.
Related concepts:
Administering Accounts

“Managing users” on page 23

Importing encrypted NSF files from Lotus Notes
IBM StoredIQ Platform can decrypt, import, and process encrypted NSF files
from IBM Lotus® Domino® v7. This topic provides procedural information
regarding how to import encrypted NSF files from IBM Lotus Notes®.

The feature works by comparing a list of user.id/key pairs that have been
imported into the system with the key values that lock each encrypted
container or email. When the correct match is found, the file is unlocked
using the key. Once the emails or containers have been unlocked, IBM StoredIQ
Platform analyzes and processes them in the usual fashion. These use cases
are supported:
v Multiple unencrypted emails contained within a journaling database that has
been encrypted with a single user.id key
v Multiple unencrypted emails contained in an encrypted NSF file
v Multiple encrypted emails contained within an unencrypted NSF file
v Multiple encrypted emails, using the same or different user.id keys,
contained in an encrypted NSF file
v Encrypted emails from within a journaling database

To import encrypted NSF files from IBM Lotus Notes:
1. On the primary data server, go to Administration > Configuration > Lotus
Notes user administration.
2. Upload the user ids:
a. Click Upload a Lotus user ID file.
b. In the dialog that appears, click Browse, and navigate to a user file.
c. In the Lotus Notes password: field, type the password that unlocks the
selected file.
d. (Optional) In the Description: field, enter a description for the file.
e. Click OK. Repeat until the keys for all encrypted items have been
uploaded.

Note: After uploading user ids, you must restart services. Note that once
the list has been compiled, you can add new entries to it later.
3. To delete an item from the list, from the Registered Lotus users screen,
select the check box next to a user, and then click Delete. In the
confirmation dialog that appears, click OK.
Related concepts:
“Configuring system configuration options” on page 18
“Configuring IBM StoredIQ Platform” on page 17
“Performing system configurations” on page 17
Related concepts:
“Configuring system configuration options” on page 18

Configuring application configuration options
This topic lists all Application configuration options.

Configuring harvester settings
This topic provides procedural information regarding how to configure
harvester settings.

To configure harvester settings:
1. Go to Administration > Configuration > Application > Harvester settings.
2. To configure Basic settings:
a. Harvester Processes—Specify the number of harvester processes according
to your solution.
b. Harvest miscellaneous email items—Select to harvest contacts, calendar
appointments, notes, and tasks from the Exchange server.
c. Harvest non-standard Exchange message classes—Select to harvest message
classes that do not represent standard Exchange email and miscellaneous
items.
d. Include extended characters in object names—Select to allow extended
characters to be included in data object names during a harvest.
e. Determine whether data objects have NSRL digital signature—Select to
check data objects for NSRL digital signatures.
f. Enable parallel grazing—Select to harvest volumes that have already been
harvested and are going to be reharvested. Parallelized grazing enables
harvests to begin where they left off when interrupted and to start at
the beginning if the harvest completes normally.
g. Enable OCR image processing—Select this option to control at a global
level whether or not Optical Character Recognition (OCR) processing is
attempted on image files, if full-text/content processing is enabled for
the volume. If you select this option, you must restart services. See
Restarting and Rebooting the Appliance.
3. Specify Skip Content processing. Data object extensions to be
skipped—Specify those file types you want the harvest to ignore by adding
data object extensions to be skipped.
4. To configure Locations to ignore, enter each directory that should be
skipped. Note that IBM StoredIQ Platform accepts only one entry per line
and that regular expressions can be used.
5. To configure Limits:
a. Maximum data object size—Specify the maximum data object size to be
processed during a harvest. During a harvest, files exceeding the
maximum data object size will not be read, regardless of what the hash
settings are for the volume (full/partial/off). As a result, if
full-text/content processing is enabled for the volume, the hash is a
hash of the file-path and size of the object. These objects still appear
in the volume cluster along with all file-system metadata. Since they
were not read, they are audited as skipped: Configured max. object size.
b. Max entity values per entity—For any given entity type (date, city,
address and the like), the system records, per data object, the number
of values set in this field. For example, if the maximum value is 1,000,
and the harvester collects 1,000 instances of the same date (8/15/2009)
in a Word document, the system stops counting dates. This setting
applies to all user-defined expressions (keyword, regular expression,
scoped, and proximity) and all standard attributes. The values do not
need to be unique.
c. Max entity values per data object—Across all entity types, the total
(cumulative) number of values that will be collected from a data object
is limited by the value set in this field. A 0 in this field means
unlimited. This setting applies to all user-defined expressions
(keyword, regular expression, scoped, and proximity) and all standard
attributes.
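Step 5a above notes that for objects exceeding the maximum size, the hash
degrades to a hash of the file path and size, since the content is never
read. A sketch of that idea follows; the SHA-1 choice and the concatenation
scheme are illustrative assumptions, as the guide does not specify the
appliance's exact algorithm.

```python
import hashlib

def path_size_hash(file_path, size_bytes):
    """Derive a stable identifier from path and size alone, for objects
    whose content was skipped. Identical inputs always produce the same
    digest; a change to either input changes it."""
    material = "{}|{}".format(file_path, size_bytes).encode("utf-8")
    return hashlib.sha1(material).hexdigest()
```

Note that, unlike a content hash, two distinct files at different paths never
collide into the same identity even when their bytes are identical.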

You can select options for when to perform this extended processing and how to scan content. when the format of the file is unknown to the system parameters. How Optical Character Recognition Processing Works OCR processing enables text extraction from graphic image files stored directly on or inside archives stored on volumes where the Include content tagging and full-text index option is selected. The system searches UTF-16 and UTF-32 by default. Changes to harvester settings do not take effect until the appliance is rebooted or the application services are restarted. The system runs further processes against content that failed in a harvesting. Text encoding—Set options for what data to scan and extract when performing binary processing. when the data object type is not supported by the harvester scan. Click OK. enter one per line without a period. enabling OCR processing will route the following files types through an optical character recognition engine OCR to extract recognizable text. the system begins text processing when four consecutive characters of a particular select text encoding are found. if you enter 4. A 0 in this field means unlimited. To add extensions. 7. This setting applies to all user-defined expressions (key–word. b. regular expression. Related concepts: “Configuring application configuration options” on page 25 Optical character recognition processing This topic provides conceptual information regarding optical character recognition (OCR) processing. For example. Configure Binary Processing. reducing the amount of false positives. Binary processing can enact when extracting text from a file fails. d. Binary processing does not search image file types such as . and proximity) and all standard attributes. scoped. consecutive characters to begin processing for text extraction.JPG for text extraction. Data object extensions—Set binary processing to process all data files or only files of entered extensions.GIF and . 
Performing system configurations 27 . Failure reasons to begin binary processing—Select the check boxes of the options that define when to perform extended processing. Minimums—Set the minimum required number of located. v Windows or OS/2 bitmap (BMP) v Tag image bitmap file (TIFF) (TIFF) v Bitmap (CompuServe) (GIF) v Portable Network Graphics (PNG) The text extracted from image files is processed through theIBM StoredIQ Platform pipeline in the same manner as text extracted from other supported file types. e. a. during a harvest. or when the data object format does not contain actual text. This setting helps find and extract helpful data from the binary processing. After content typing inside theIBM StoredIQ Platform processing pipeline. c. This extended processing can accept extended characters and UTF-16 and UTF-32 encoded characters as text. Run binary processing when text processing fails—Select this option to run binary processing. 6.

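The Minimums setting described under binary processing only treats bytes as
text once a run of consecutive decodable, printable characters reaches the
configured length, which is what keeps binary processing from flooding the
index with false positives. A rough sketch of that scanning idea
(illustrative only; the appliance's actual scanner is not documented here):

```python
def printable_runs(data, min_run=4, encoding="utf-16-le"):
    """Return runs of at least `min_run` consecutive printable characters
    found when decoding `data` with the given encoding. Undecodable bytes
    become U+FFFD and break the current run."""
    try:
        text = data.decode(encoding, errors="replace")
    except LookupError:
        return []
    runs, current = [], []
    for ch in text:
        if ch.isprintable() and ch != "\ufffd":
            current.append(ch)
        else:
            if len(current) >= min_run:
                runs.append("".join(current))
            current = []
    if len(current) >= min_run:
        runs.append("".join(current))
    return runs
```

Raising `min_run` discards short accidental runs at the cost of missing short
legitimate strings, which mirrors the trade-off the setting describes.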
999. Policies that have a specific feature to write out extracted text to a separate file for supported file types will do so for image files while OCR processing is enabled. if you choose to full-text index the number 999 when it appears in data objects with the file extensions . 99. 2. if you choose to full-text index words limited to 50 characters. Related concepts: “Configuring application configuration options” on page 25 Related tasks: “Configuring harvester settings” on page 26 Configuring full-text index settings Prior to configuring or searching the full-text index. To configure Limits: a. then a full-text filter returns only those instances of the number 999 that exist in data objects with the file extensions . – Number indexing in data objects limited by file extensions—For example. For example.XLS or . Do not limit the length of the words that are indexed—Select this option to have no limits on the length of words that are indexed. only the number 999 and the word stock are indexed. To configure Numbers: v Do not include numbers in the full-text index—Select this option to have no numbers indexed. The numbers 9 and 99 are not indexed. v Do not include numbers in the full-text index—Select this option to have no numbers indexed. and the word stock. Define these limits as follows: 28 Administration Guide . consider the following: v Full-text filters containing words may not return all instances of those words—You can limit full-text indexing for words based on their length. To configure full-text index settings: 1. Enter the maximum number of characters at which to index words. Words with more characters than the specified amount will not be indexed. 3. b. v Include numbers in full-text index but limit them by—Select this option to have only certain numbers indexed. an example of which is discovery export policies.DOC. Go to Administration > Configuration > Application > Full-text settings. 
v Limit the length of words indexed to ___ characters—Select this option to limit the length of words that are indexed. For example, if 50 is specified, then no words greater than 50 characters are indexed.
4. Configure how numbers are indexed:
– Number length—Only include numbers that are longer than ____ characters. Enter the number of characters a number must contain in order to be indexed. The Number length feature enables you to index longer numbers and ignore shorter numbers, such as one- and two-character numbers that rarely mean anything. By not indexing shorter numbers, you can focus your filter on meaningful numbers, such as account numbers, Social Security numbers, credit card numbers, license plate numbers, telephone numbers, and more.
– Extensions—Index numbers based on the file extensions of the data objects in which they appear. Select Limit numbers for all extensions to limit numbers in all file extensions to the character limits set above in Number length. Alternatively, select Limit numbers for these extensions to apply the Number length limits only to data objects having certain file extensions. Enter the file extensions, one per line, that should have limited number indexing. Any data object with a file extension that is not listed will have all numbers indexed.
Note: Full-text filters containing numbers may not return all instances of those numbers. This can occur when number searches have been configured as follows:
– The length of numbers to full-text index has been defined—If you configure the full-text filter to index numbers with three digits or more, a filter for the numbers 9 or 99 returns no results.
– Extensions—If number indexing is limited to certain extensions, the number 999 may exist in other data objects harvested, but those data objects do not have the file extensions .XLS and .DOC, so the filter does not find them.
5. Configure Stop words. Stop words are common words found in data objects that are not indexed in the full-text index. To index stop words like the or but would compromise full-text indexing speed and the relevancy of the results. By default, the following words are omitted from the full-text index: a, an, and, are, as, at, be, but, by, for, if, in, into, is, it, no, not, of, on, or, such, that, the, their, then, there, these, they, this, to, was, will, with.
v To add a stop word, enter one word per line, without punctuation.
v If you want the system to find a word currently listed, delete the word from the list to allow the full-text index to find it. You must reindex before the word can be found.
6. Configure Punctuation or special characters to index by entering punctuation characters to be included in the index. By default, only some punctuation, which includes hyphens and apostrophes, is indexed as a letter in a word; most punctuation is turned into a space. To make certain that a specific punctuation character is indexed, add it to this list. Punctuation characters should be entered without spaces separating them.
7. To configure Include word stems in index, select whether or not to stem words that have been indexed:
v Stem words that are indexed (improved searching)—By stemming indexed words, filters will be more precise, although slower. For example, employ is the stem word of words such as employed, employee, employs, and employment. If you utilize stemming and search for the term employed, IBM StoredIQ Platform will denote any found instances of employment, employ, and so on when viewing the data object. Without stemming, a filter for trade would need to be written as trade or trades or trading or traded to get the same effect, and even then a user may miss an interesting variant.
v Do not stem words that are indexed (faster indexing)—By not stemming indexed words, data sources will be indexed faster.
Note: If stemming is enabled, the use of double quotation marks will return stemmed terms in results. To find exact words with no stemmed terms, use single quotation marks.
8. To configure Optimize wildcard suffix searches, select whether or not to optimize searches by suffix, such as *ology, *tion, *ive, *ious, or *less:
v Optimize searches by suffix (faster searching)—By optimizing searches by suffix, suffix searches will be conducted more quickly. Non-suffix searches and standard wildcard searches (such as bird*) are not affected.
v Do not optimize searches by suffix (faster indexing)—By not optimizing searches by suffix, data sources will be indexed faster.
How Fast Is Optical Character Recognition Processing
The OCR processing rate of image files is approximately 7–10 KB/sec per IBM StoredIQ Platform harvester process.
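As a rough guide to the OCR rate quoted above, the time needed to process a backlog of images can be estimated with simple arithmetic. This sketch is not part of the product; the rate and process count shown are assumptions you would replace with your own measurements.

```python
# Back-of-the-envelope estimate only, based on the ~7-10 KB/sec per
# harvester process figure above. Actual throughput varies with image
# content and hardware.
def ocr_hours(total_kb, rate_kb_per_sec, processes):
    """Hours to OCR total_kb of image data across parallel processes."""
    seconds = total_kb / (rate_kb_per_sec * processes)
    return seconds / 3600.0

# 2 GB of scanned images at 8 KB/sec across 4 harvester processes:
print(round(ocr_hours(2 * 1024 * 1024, 8, 4), 1))  # 18.2 (hours)
```

Estimates like this are useful when deciding whether to enable OCR for a large harvest or to schedule it across several harvest windows.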

Note: Stemming will not be performed for search terms with wildcards ("?", "*") in them. This is true regardless of the term being placed within single quotation marks.
Related concepts:
“Configuring application configuration options” on page 25

Configuring audit settings
This topic provides procedural information regarding how to configure audit settings.
To configure audit settings:
1. Go to Administration > Configuration > Application > Audit settings.
2. Specify the number of days to keep the policy audits before automatically deleting them.
3. Specify the maximum number of policy audits to keep before automatically deleting them.
4. Specify the file limit for drill down in policy audits.
5. Click OK to save changes.
Related concepts:
“Configuring application configuration options” on page 25

Specifying data object types
On the Data object types page, you can add new data object types as well as view and edit known data object types. These data objects appear in the Disk usage (by data object type) report. There are currently over 400 data object types available.
To specify data object types:
1. Go to Administration > Configuration > Application > Data object types.
2. In the add data object type section, enter one or more extensions to associate with the data object type. These entries must be separated by spaces. For example, enter doc txt xls.
3. Enter the name of the data object type to be used with the extension or extensions. For example, enter Microsoft Word.
4. Click Add to add the extension(s) to the list.
Related concepts:
“Configuring application configuration options” on page 25

Configuring hash settings
The Hash settings page allows you to configure whether to compute a hash when harvesting and provides different types of hashes.
To configure hash settings:
1. Go to Administration > Configuration > Application > Hash settings.
2. Select Compute data object hash.
Note: When hashing emails, click Choose email fields and select which email attributes are to be used. Note that the email hash selections operate independently from the data object hash settings; that is, a data object can have a binary hash or an email hash, but not both.
3. When hashing data objects, except for emails, specify one of the following options:
v Entire data object content (required for data object typing)
v Partial data object content
4. Click OK.
Related concepts:
“Configuring application configuration options” on page 25

Configuring desktop settings
When configuring desktop settings, you are enabling or disabling encryption within IBM StoredIQ Platform.
To configure desktop settings:
1. Go to Administration > Configuration > Application > Desktop settings.
2. In the Desktop Services area, select either the Enabled or Disabled button to enable or disable desktop services.
3. To encrypt traffic, select the Encrypt all traffic to/from desktops check box.
4. Click Apply.
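The hash options described above can be illustrated with a short sketch. This is not product code: the hash algorithm (SHA-1 here), the field-joining scheme, and the partial-content size are assumptions chosen only to show how a content hash and a field-based email hash differ.

```python
# Illustrative sketch of the two hash styles described above: a binary hash
# over the object's entire or partial content, or, for emails, a hash over
# the selected email fields. A given object gets one or the other, not both.
import hashlib

def data_object_hash(content: bytes, entire: bool = True, partial_size: int = 1024) -> str:
    """Hash the whole object, or only its leading bytes for a partial hash."""
    data = content if entire else content[:partial_size]
    return hashlib.sha1(data).hexdigest()

def email_hash(fields: dict, selected: list) -> str:
    """Hash only the attributes chosen under "Choose email fields"."""
    joined = "\x00".join(str(fields.get(name, "")) for name in sorted(selected))
    return hashlib.sha1(joined.encode("utf-8")).hexdigest()

msg = {"subject": "Q3 results", "from": "a@example.com", "sent": "2013-01-02"}
print(email_hash(msg, ["subject", "from"]))
```

Because only the selected fields feed the email hash, two messages with identical subject and sender hash the same even if other attributes differ, which is the point of choosing the fields deliberately.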

Downloading the IBM Desktop Data Collector installer from the application
This topic provides procedural information regarding how to download the IBM Desktop Data Collector installer from the application.
The IBM Desktop Data Collector (desktop client or client) enables desktops as a volume type or data source, allowing them to be used just as other types of added data sources. The client is provided as a standard MSI file and is installed according to the typical method (such as Microsoft Systems Management Service (SMS)) used within your organization. The client can collect PSTs and .ZIP files as well as other data objects and is capable of removing itself once its work is completed. Connectivity and the correct IP address are required.
Once the desktop client has been installed on a desktop and then connected to and registered with the data server, that desktop is available as a data source within the list of primary volumes.
To download the IBM Desktop Data Collector installer from the application:
1. Go to Administration > Configuration > Application > Desktop settings.
2. In the Download the Desktop Agent installer area, click Download the desktop client installer.
3. Once the download is complete, click Save File.
Related tasks:
“Configuring desktop settings”

Upgrading the IBM Desktop Data Collector agent
This topic provides procedural information regarding how to upgrade the IBM Desktop Data Collector agent.
To upgrade clients for registered workstations:
1. Go to Administration > Configuration > Application > Desktop settings.
2. In the Upgrades area, select from either Automatic upgrade options or Available versions options.
v For Automatic upgrade options:
– Upgrades disabled—All upgrades are disabled, meaning that none will be applied.
– Upgrade all workstations—All workstations will be upgraded.
v For Available versions options:
– Select Manually publish new version, and then select that version.
– Select Automatically publish the latest version.
3. Click Apply.
Related tasks:
“Configuring desktop settings” on page 31

Using the Encrypted File System recovery agent
During IBM Desktop Data Collector collection, if IBM StoredIQ Platform finds an Encrypted File System–encrypted file, the IBM Desktop Data Collector will install a recovery agent certificate, allowing the client to open the encrypted file and harvest from it.
To add an encrypted file-system user:
1. Go to Administration > Configuration > Application > Desktop settings.
2. In the Encrypted file system recovery agent users area, click Add encrypted file system user. The Upload Recovery Agent Certificate dialog box appears.
3. In the Select a .PFX file to upload: text box, click Browse to navigate to the desired .PFX file.
4. In the .PFX password: text box, enter the .PFX password. This password protects the file itself; the system takes the .PFX password in the .PFX file, allowing it to open the encrypted file.
5. In the Username: text box, enter the username for the user, a SAM compatible/NT4 Domain name–style user name. For example, enter MYCOMPANY\esideways. This is the credential of the user to whom this recovery agent belongs.
6. In the Password: text box, enter the password for the user.
7. Optionally, enter a description in the Description: text box.
8. Click OK. The file is uploaded, and the added user is visible within the User name column.
Note that once users have been added, they can also be edited or deleted using the Edit or Delete options.
Related tasks:
“Configuring desktop settings” on page 31

Creating volumes and data sources
A volume represents a data source or destination that is available on the network to the IBM StoredIQ Platform appliance. A volume can be a disk partition or group of partitions that is available to network users as a single designated drive or mount point. IBM StoredIQ Platform volumes perform the same function as partitions on a hard drive: when you format the hard drive on your PC into drive partitions A, B, and C, you are creating three partitions that function like three separate physical drives. Volumes behave the same way that hard drive disk partitions behave, and you can set up three separate volumes originating from the same server or across many servers. Note that only Administrators can define, configure, and add or remove volumes to IBM StoredIQ Platform.
Related concepts:
“Creating volumes and data sources”
Related reference:
Supported server platforms by volume type

Volume indexing
This topic describes volume indexing and the different depths at which volumes can be indexed. When defining volumes, you also determine the depth at which you want the volume to be indexed. There are three levels of analysis:
v System metadata index—This level of analysis runs with each data collection cycle and provides only system metadata for system data objects in its results. It is useful as a simple inventory of what data objects are present in the volumes you have defined and for monitoring resource constraints (such as file size) or prohibited file types (such as .MP3s).
v System metadata plus containers—In a simple system metadata index, container data objects (.ZIP files, PSTs, emails with attachments, and the like) are not included. This level of analysis provides container-level metadata in addition to the system metadata for system data objects.
v Full-text and content tagging—This option provides the full native language analysis that yields the more sophisticated entity tags. Naturally, completing a full-text index requires more system resources than a metadata index. Users must carefully design their volume structure and harvests so that the maximum benefit of IBM StoredIQ Platform sophisticated analytics is used where necessary, but not on resources that do not require them. Parameters and limitations on full-text indexing are set when the system is configured.
Related concepts:
“Creating volumes and data sources”

Configuring server platforms
Prior to configuring volumes on IBM StoredIQ Platform, you must configure the server platforms you will use for the different volume types. Each server type has requisite permissions and settings, which are described here.
Related concepts:
“Creating volumes and data sources”

Windows Share (CIFS)
This topic describes the Windows Share (CIFS) server platform. To harvest and run policies on volumes on Windows Share (CIFS) servers, the user must be in the backup operator group on the Windows Share server exposing the shares on IBM StoredIQ Platform.
Related concepts:
“Configuring server platforms” on page 33

NFS
This topic describes the NFS server platform. To harvest and run policies on NFS servers, you must enable root access on the NFS server that is connected to IBM StoredIQ Platform.
Related concepts:
“Configuring server platforms” on page 33

Exchange servers
This topic describes Microsoft Exchange servers and various connections and permissions.
v Windows Authentication—For all supported versions, enable Integrated Windows Authentication on each Exchange server.
v Secure connection—If you want to connect to Exchange volumes over HTTPS, you must add port number 443 after the server name, for example, qa03exch2000.qaw2k.local:443. If you enter the volume information without the 443 suffix, the default connection will be over HTTP. In some cases, this secure connection can result in some performance degradation due to SSL overhead.
v Permissions for Exchange 2003—The following permissions must be set on the Exchange server to the mailbox store or the mailboxes from which you will harvest:
– Read
– Execute
– Read permissions
– List contents
– Read properties
– List object
– Receive as
v Permissions for Exchange 2007 and 2010—The Full Access permissions must be set on the Exchange server to the mailbox store or the mailboxes from which you will harvest.
v Deleted items—To harvest items that have been deleted from the Exchange server, enable Exchange's transport dumpster settings. For more information, refer to Microsoft Exchange Server 2010 Administrator's Pocket Consultant. Configuration information is also available online at www.microsoft.com.
v Recoverable items in Exchange 2010—To harvest the recoverable items folders in Exchange 2010, you must be logged in with an Administrator role.
Related concepts:
“Configuring server platforms” on page 33
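The HTTPS rule above (append port 443 to the server name, otherwise the connection defaults to HTTP) can be expressed as a tiny helper. This is an illustrative sketch, not product code; the function name is hypothetical.

```python
# Illustrative only: models the rule above for building the Exchange server
# string -- append :443 for an HTTPS connection, otherwise the volume
# defaults to plain HTTP.
def exchange_server_string(server: str, use_ssl: bool) -> str:
    if use_ssl and not server.endswith(":443"):
        return server + ":443"
    return server

print(exchange_server_string("qa03exch2000.qaw2k.local", True))
# qa03exch2000.qaw2k.local:443
```

Normalizing the suffix once avoids the silent fallback to HTTP that occurs when the port is omitted from the volume definition.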

Enabling integrated Windows authentication on Exchange servers
This topic provides procedural information regarding how to enable integrated Windows authentication on Exchange servers.
To enable integrated Windows authentication on Exchange servers:
1. From Microsoft Windows, log in to the Exchange Server.
2. Go to Administrative Tools > Internet Information Services (IIS) Manager.
3. Go to Internet Information Services > Name of Exchange Server > Web Sites > Default Web Site.
4. Right-click Default Web Site, select Properties, and then click the Directory Security tab.
5. In the Authentication and access control pane, click Edit. The Authentication Methods window appears.
6. In the Authentication access pane, select the Integrated Windows authentication check box.
7. Click OK.
8. Restart IIS Services.
Related concepts:
“Exchange servers” on page 34

Improving performance for IIS 6.0 and Exchange 2003
This topic provides procedural information regarding how to improve performance between IIS 6.0 and Exchange 2003.
To improve performance for IIS 6.0 and Exchange 2003:
1. From Microsoft Windows, log on to the Exchange Server.
2. Go to Administrative Tools > Internet Information Services (IIS) Manager.
3. Select Internet Information Services > <Name of Exchange Server> > Web Sites > Application Pools.
4. Right-click Application Pools and select Properties.
5. On the Performance tab, locate the Web Garden section.
6. If the number of worker processes is different from the default value of 1, then change the number of worker processes to 1.
7. Click OK.
8. Restart IIS Services.
Related concepts:
“Exchange servers” on page 34

SharePoint
This topic describes SharePoint servers and various connections and privileges.
Secure Connection
If you wish to connect to SharePoint volumes over HTTPS, you may use the Use SSL check box, or you may add the port number 443 after the server name when setting up the volume on IBM StoredIQ Platform. For example, qa01.company.com:443. If you enter the volume information without the 443 suffix, the default connection is over HTTP. In some cases, this secure connection can result in some performance degradation due to Secure Socket Layer (SSL) overhead.

Privileges required by user account
This topic provides conceptual information regarding what SharePoint privileges are required by user account along with IBM StoredIQ Platform recommendations. IBM StoredIQ Platform is typically used with SharePoint for one of these instances: to harvest and treat SharePoint as a source for policy actions, or to use it as a destination for policy actions. Consider these points:
v Attributes are not set/reset on a SharePoint harvest or if you copy from SharePoint.
v Attributes are set only if you copy to SharePoint.
Privileges
To run policies on SharePoint servers, you must use user credentials with Contribute privileges on the site. If you plan to use SharePoint as a destination for policies, meaning that you can write content into SharePoint using IBM StoredIQ Platform, you must use credentials with Full Control privileges. If you plan to only read from the SharePoint (harvest and source copies from), then you must use user credentials with Read privileges on the site and on all of the lists and data objects that you expect to process.
Recommended Privileges
We recommend using a Site Collection Administrator to ensure all data is harvested from a given site or site collection. We also recommend using a Site Collection Administrator to harvest subsites of a site collection.
Additional Privileges for Social Data
If you wish to index all the social data for a user profile in SharePoint 2010, then the user credentials must own privileges to Manage Social Data as well.
Related concepts:
“SharePoint” on page 35

Alternate-access mappings
SharePoint 2007 and 2010 require the configuration of alternate-access mappings to map IBM StoredIQ Platform requests to the correct Web sites. Alternate-access mappings map URLs presented by IBM StoredIQ Platform to internal URLs received by Windows SharePoint Services. An alternate-access mapping is required between the server name and optional port defined in the SharePoint volume definition and the internal URL of the Web application containing the Web site being accessed. If SSL will be used to access the site, ensure that the alternate-access mapping URL uses https:// as the protocol. Refer to Microsoft SharePoint 2007 or 2010 documentation to configure alternate-access mappings based on the public URL configured by the local SharePoint administrator and the mapping used by the IBM StoredIQ Platform SharePoint volume definitions.
Related concepts:
“Configuring server platforms” on page 33

When configuring SharePoint volumes, consider the following:
v The URL must be valid with respect to the Alternate Access Mappings configured in SharePoint. In the volume definition, you are entering the URL for a SharePoint site collection/site that is leveraged by IBM StoredIQ Platform.
v If the host name in the URL does not convey the fully qualified domain that should be used to authenticate the configured user, an Active Directory server must be specified. The specified Active Directory must be a fully qualified domain name and will be used for authentication.
Note: When configuring SharePoint volumes using non-qualified names, an Active Directory server must be specified so that the fully qualified domain can be used for authentication.
Example
You are accessing a SharePoint volume with the fully qualified domain name, http://itweb.storediqexample.com, from the Intranet zone. An alternate-access mapping for the public URL http://itweb.storediqexample.com for the Intranet zone must be configured for the SharePoint 2007 or 2010 Web application hosting the site to be accessed by the volume definition. If you are accessing the same volume using SSL, the mapping added should be for the URL https://itweb.storediqexample.com instead.
Related concepts:
“SharePoint” on page 35

Enterprise Vault
In order to configure an Enterprise Vault volume, you must first configure an Enterprise Vault site. This topic provides procedural information regarding how to configure an Enterprise Vault site.
To configure an Enterprise Vault site:
1. Go to Administration > Data sources > Specify servers.
2. Click Enterprise Vault sites, and then click Add new Enterprise Vault site.
3. In the Site name field, enter a unique logical site name. This name will appear in the screens used to configure Enterprise Vault volumes.
4. In the Enterprise Vault site alias field, enter the fully qualified domain name (FQDN) of the Enterprise Vault Server. Each server can only be added one time.
5. In the User name field, enter the login name. If the user is a domain user, then enter the login name as domain\user. It is recommended that the Enterprise Vault Service Account or a user with equivalent privileges be used.
6. In the Password field, enter the user's password to authenticate with Active Directory.
7. Click OK to save the site.
Related concepts:
“Configuring server platforms” on page 33

Documentum
This topic describes requirements for Documentum servers. To run harvests and copy from Documentum servers, you must use the Contributor role.
Related concepts:
“Configuring server platforms” on page 33
Related tasks:

“Configuring primary volumes using Enterprise Vault” on page 55

Discovery Accelerator
This topic describes configurations that must be made to the Discovery Accelerator server. Prior to configuring Discovery Accelerator primary volumes, the following configurations must be made on the Discovery Accelerator service by logging into the Discovery Accelerator client interface as a Vault User or Discovery Administrator.
1. Discovery Accelerator Web-Services Interface—In order for IBM StoredIQ Platform to interface with Enterprise Vault using the Discovery Accelerator Web services, ensure that the API Enabled setting is enabled. Click the Configuration tab and expand the API options on the Settings page. In the API settings group, ensure that the API Enabled setting is enabled.
2. Configure a Temporary Storage Area. Ensure that it has sufficient free space and that any authenticated users that will define volumes against Discovery Accelerator have Full Control permissions on this storage area. Consider configuring the Temporary Storage Area Cleanup Interval, as needed, depending on the size of the cluster that will be deployed—typically, the default value of 30 minutes should be sufficient. If greater than four nodes will be deployed in the cluster, this interval must be reduced accordingly for more frequent cleanups to free up storage space.
3. Improve performance for IIS 6.0—If the Discovery Accelerator server runs over IIS 6.0, an existing bug in IIS causes severe performance degradation when used along with Kerberos authentication. We recommend that the hotfix described in Microsoft Knowledge Base article 917557 (http://support.microsoft.com/kb/917557) be applied to the server in this case.
Prior to configuring Discovery Accelerator primary volumes, you must configure Discovery Accelerator customer information and Enterprise Vault sites (first customer, then site) so that certain configuration items can appear in the volume configuration lists. Before performing these tasks, you must log in to the Discovery Accelerator server and run the ImportExport.exe tool (located in the install folder) to obtain the appropriate Customer IDs and customer database names. It is recommended that the credentials used for referencing the Enterprise Vault Site are those of the Vault User or any other administrator.
Related concepts:
“Configuring server platforms” on page 33

Discovery Accelerator permissions
This topic describes Discovery Accelerator permissions and credentials. IBM StoredIQ Platform will validate that the credentials are strong enough for it to:
v Login remotely to the specified server
v Perform DCOM operations over RPC remotely
To harvest a Discovery Accelerator volume successfully, a user must have the following privileges:
v A role defined in the Discovery Accelerator Web Application.
v Review messages permission for the case used in the volume definition.
v Folder review permissions on a case, if a folder (sometimes also called a Research Folder) in the case is going to be harvested.

v Permission to set all of the review marks that will be selected for the volume definition.
Related concepts:
“Discovery Accelerator” on page 38

Configuring Enterprise Vault
This topic provides procedural information regarding how to configure Discovery Accelerator customers and Enterprise Vault sites.
To configure Discovery Accelerator customer information:
1. Go to Administration > Data sources > Specify servers.
2. Click Discovery Accelerator customers, and then click Add new Discovery Accelerator customer.
3. In the Customer name field, enter a unique display name of the DA customer. This name will appear in the screens used to configure Enterprise Vault volumes.
4. In the Discovery Accelerator server field, enter the DNS name of the physical server running Discovery Accelerator. Each unique combination of Discovery Accelerator server name and customer ID can be used only one time.
5. In the Discovery Accelerator customer ID field, enter the customer ID value obtained from Discovery Accelerator.
6. In the Customer virtual directory field, enter the IIS Virtual Directory where the Discovery Accelerator Web service is located.
7. In the Discovery Accelerator installation folder field, enter the path where Enterprise Vault Business Accelerator was installed on the Discovery Accelerator server. IBM StoredIQ Platform requires this path so that Discovery Accelerator can be accessed during configuration.
8. Click OK to save the customer.
To configure an Enterprise Vault site:
1. Go to Administration > Data sources > Specify servers.
2. Click Enterprise Vault sites, and then click Add new Enterprise Vault site.
3. In the Site name field, enter a unique logical site name. This name will appear in the screens used to configure Enterprise Vault volumes.
4. In the Enterprise Vault site alias field, enter the FQDN of the Enterprise Vault Server. Each server can only be added one time.
5. In the User name field, enter the login name. If the user is a domain user, then enter the login name as domain\user. It is recommended that the Enterprise Vault Service Account or a user with equivalent privileges be used.
6. In the Password field, enter the user's password to authenticate with Active Directory.
7. Click OK to save the site.
Related concepts:
“Discovery Accelerator” on page 38

Configuring security settings for Enterprise Vault servers
This section lists the standard security settings that must be configured on the Windows Servers hosting Enterprise Vault to allow it to interact with IBM StoredIQ Platform. Because any number of security applications, including network security providers, could be in use on a given target server, it is impossible to address them all.
v Enable Remote DCOM—Required on all Enterprise Vault Servers and Discovery Accelerator. See Enabling Remote DCOM.
v Allow DCOM traffic through Windows Firewall—Required on all Enterprise Vault servers and Discovery Accelerator. See Allowing DCOM Traffic through the Windows Firewall.
Related concepts:
“Discovery Accelerator” on page 38

NewsGator
This topic provides procedural information regarding required NewsGator privileges and how they should be configured.
Privileges Required by User Account
The user account used to harvest or copy from a NewsGator volume must have the Legal Audit permission on the NewsGator Social Platform Services running on the SharePoint farm.
To configure this permission:
1. Log in as an administrator to your SharePoint Central Administration Site.
2. Under Application Management, select Manage Service Applications.
3. In the Manage Service Applications screen, select the NewsGator Social Platform Services row.
4. From the button ribbon that activates near the top of the page, select Administrators.
5. Add the user account that will be used for the NewsGator harvest to the list of Administrators, ensuring that the account has the Legal Audit permission.
Related concepts:
“Configuring server platforms” on page 33

Configuring retention servers
IBM StoredIQ supports various types of retention servers, which must be configured prior to adding retention volumes due to unique requirements. The retention feature requires the Advanced retention feature to be enabled.
Related concepts:
“Creating volumes and data sources” on page 33

Creating Centera pools
When a Centera pool is created, an empty volume set is automatically created and associated with the Centera pool. The volume set cannot be edited or deleted from the manage volume sets page.
v Permissions—To support all IBM StoredIQ Platform features, the following effective access profile rights to a Centera pool must be enabled:
– Read
– (D)elete
– (Q)uery

– (E)xist
– (W)rite
– Retention
– (H)old
v Centera Pools—If you have an integrated Centera server, you need to create a Centera pool before you can add a Centera volume. Unlike other volumes, Centera servers are not placed into volume sets but into Centera pools. This feature enables the harvesting of unknown Centera volumes so business policies can be applied to data objects on already retained storage servers.
To create a Centera pool:
1. Go to Administration > Data sources > Specify volumes.
2. In Centera pools, click Create new pool to open the Centera pool editor.
3. Enter a unique name for the Centera pool in the IBM StoredIQ Platform pool profile name text box.
4. Choose how to define the pool: select either Structured input or Single text field and enter a connection string. If you chose Structured input, click Add new connection in the Connections section, and then enter the IP address for an access point on the server. Multiple access points can be specified, allowing for failover in the event of a problematic access node.
5. Choose the access method for the pool:
v Use .pea file—This is a pool-entry authorization file. For information about creating this type of file, refer to the Centera Monitor and Administration Guide.
v Specify Access—Enter a profile name and Secret for the pool.
6. Click OK to save the pool. The pool is now available in the Add volumes dialog when adding a Centera volume.
Related concepts:
Centera

Configuring the Dell DX Object Storage Platform
A Dell DX Storage Cluster must be defined before a Dell DX Storage volume can be added. Once you have defined the cluster, the cluster will be available in the list of available options when adding a Dell DX Storage volume.
To create a Dell DX Storage Cluster:
1. Go to Administration > Data sources > Specify Servers > Dell DX Storage Clusters.
2. In the Dell DX Storage Cluster list, select Add new Dell DX Storage Cluster.
3. Enter the Dell DX Storage Cluster name. This should be the same name used when configuring the DX CSN.
4. Enter the IP address for an access point on the server. This should be one or more IP addresses or DNS names that map to DX Storage Nodes or a DX CSN in the cluster. Individual entries should be separated by a comma.
5. Enter a port number that will be used to communicate with the storage cluster. The default value is 80.
6. Optionally, specify alternate addresses to be used to communicate with the storage cluster.
7. Optionally, specify the IP address or DNS name for a DR Site DX CSN that can be used for failover.

4. Enter the Netmask value for the Dell DX Storage Cluster private network. Typically, this value will be 255.255.255.0.
5. Enter a Starting IP address for the IBM StoredIQ Platform cluster. Each node in the IBM StoredIQ Platform cluster will be assigned an IP address in the Dell DX Storage Cluster private network. The Starting IP address should be an unused, valid IP address within the Dell DX Storage Cluster private network. For example, if 172.17.17.100 is entered for a five-node IBM StoredIQ Platform cluster, then 172.17.17.100, 172.17.17.101, 172.17.17.102, 172.17.17.103, and 172.17.17.104 will be used by IBM StoredIQ Platform.
6. Note the value of the Switch port, which should be used on the IBM StoredIQ Platform switch to link with the Dell DX Storage Cluster.
Related concepts: "Configuring retention servers" on page 40

Configuring Enterprise Vault (retention volumes)
This section provides both conceptual and procedural information regarding Enterprise Vault. Prior to creating retention volumes using Enterprise Vault, you must configure Discovery Accelerator customers and Enterprise Vault sites (customers first, then sites) so that certain configuration items can appear in the retention volume configuration lists. Before performing these tasks, you must log in to Discovery Accelerator and run the ImportExport.exe tool to obtain the appropriate Customer IDs and customer database names.
Note: If you use a system other than IBM StoredIQ Platform to ingest data into Enterprise Vault but still want to use IBM StoredIQ Platform for exporting out of Discovery Accelerator, you must define an Enterprise Vault site within IBM StoredIQ Platform and then use that site to define a Discovery Accelerator volume.
Note: Remote DCOM is required on all Enterprise Vault servers and Discovery Accelerator. DCOM configuration is a prerequisite.

Enabling remote DCOM
This topic provides procedural information regarding how to enable remote DCOM for Enterprise Vault servers. You can configure DCOM settings using the DCOM Config utility (DCOMCnfg.exe), found in Administrative Tools in Control Panel as Component Services. This utility exposes the settings that enable certain users to connect to the computer remotely through DCOM. Members of the Administrators group are allowed to connect remotely to the computer by default. If the Enterprise Vault Service Account (or the user whose credentials are used to define the Enterprise Vault site within IBM StoredIQ Platform) does not have permissions to connect remotely for DCOM, then perform this procedure on the target server.
To enable remote DCOM:
1. Run dcomcnfg as a user with Administrator privileges.
2. In the Component Services dialog box, expand Component Services, expand Computers, and then right-click My Computer and click Default Properties.
3. If not already enabled, select the Enable Distributed COM on this Computer check box, and then click OK.
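Returning to the Dell DX addressing scheme described earlier: the per-node assignment is plain IPv4 arithmetic, one address per cluster node counting up from the Starting IP address. The following sketch is only an illustration of that math (it is not part of the product), using Python's standard ipaddress module:

```python
import ipaddress

def cluster_addresses(starting_ip, node_count):
    """Return the private-network addresses a data-server cluster will claim,
    one per node, counting up from the starting IP address."""
    first = ipaddress.IPv4Address(starting_ip)
    return [str(first + offset) for offset in range(node_count)]

# Five nodes starting at 172.17.17.100 claim .100 through .104:
print(cluster_addresses("172.17.17.100", 5))
# → ['172.17.17.100', '172.17.17.101', '172.17.17.102', '172.17.17.103', '172.17.17.104']
```

When choosing the Starting IP address, verify that all node_count successive addresses are unused and fall inside the private network defined by the Netmask value.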

4. Click the COM Security tab, and then under Launch and Activation Permissions, click Edit Limits.
5. In the Launch Permission dialog box, follow these steps if the user name does not appear in the Groups or user names list:
v In the Launch Permission dialog box, click Add.
v In the Select Users, Computers, or Groups dialog box, add the user name, and then click OK.
v In the Launch Permission dialog box, select your user and, in the Allow column under Permissions for User, select Remote Launch and select Remote Activation, and then click OK.

Allowing DCOM traffic through the Windows firewall
This topic provides procedural information regarding how to allow DCOM traffic through the Windows firewall. Note: This is required on all Enterprise Vault servers and Discovery Accelerator.
To allow DCOM traffic over the network on the target server, the DCOM TCP port (135) must be open on the firewall. This command opens the port if it is closed:
netsh firewall add portopening protocol=tcp port=135 name=DCOM_TCP135
The port can also be opened using the Firewall User Interface:
1. In the Control Panel, double-click Windows Firewall, and then click the Exceptions tab.
2. Click Change Settings.
3. In the Exceptions window, select the check box for DCOM to enable DCOM traffic through the firewall. If there is no such check box:
v Click Add Port.
v In the dialog box, enter Name as DCOM and Port number as 135.
v Ensure that the TCP radio button is selected, and click OK.
Related concepts: "Configuring retention servers" on page 40

Configuring IBM Information Archive retention servers
Before creating retention volumes, you need to configure settings on your IBM Information Archive retention server. You create a policy domain, policy set, and node on the IBM Information Archive retention server. Use the Tivoli Storage Manager administrative client, dsmadmc, to enter commands; you can use the administrative client in either interactive or batch mode. Consult the Tivoli Storage Manager Administrator's Reference for more details about using the administrative client.
To configure an IBM Information Archive retention server:
1. Create a policy domain. For example:
define domain Example_PD desc='Example Domain' archret=1
2. Create a policy set for the policy domain. For example:
define policyset Example_PD Example_PS

3. Create one or more management classes. Files sent to retention must have a corresponding management class to manage the retention period settings. These classes will be referred to as Retention Classes in the application's user interface. For example:
v define mgmtclass Example_PD Example_PS Example_MG_CR desc='Example Domain chronological retention management class'
v define mgmtclass Example_PD Example_PS Example_MG_0DAY desc='Example Domain zero day retention'
v define mgmtclass Example_PD Example_PS Example_MG_10DAY desc='Example Domain 10 day retention'
4. Create an archive copy group for each management class defined above. Each management class will have only one copy group assigned to it, and the copy group must include retinit=creation. For example:
v define copygroup Example_PD Example_PS Example_MG_CR STANDARD type=archive destination=archivepool retver=1 retinit=creation
v define copygroup Example_PD Example_PS Example_MG_0DAY STANDARD type=archive destination=archivepool retver=0 retinit=creation
v define copygroup Example_PD Example_PS Example_MG_10DAY STANDARD type=archive destination=archivepool retver=10 retinit=creation
5. Assign one of the management classes created above as the default management class for the policy domain. For example:
assign defmgmtclass Example_PD Example_PS Example_MG_CR
6. Complete the definition of the policy domain by validating and activating the policy set. For example:
validate policyset Example_PD Example_PS
activate policyset Example_PD Example_PS
7. Create a node. This acts as a connection between IBM StoredIQ Platform and the server and is used when defining IBM StoredIQ Platform volumes. For example:
register node Example Domain password domain=Example_PD
Note: You can also list nodes using the query node command.
Related concepts: "Configuring retention servers" on page 40

NetApp SnapLock
This topic provides conceptual information regarding NetApp SnapLock retention server configuration requirements. When preparing to add SnapLock retention volumes, you must have licensed SnapLock, created at least one SnapLock volume and shared it using either Windows Share or NFS (or both), and initialized the SnapLock compliance clock. Consult the NetApp administrator documentation for specific instructions.
Note: NetApp SnapLock retention servers do not lock out user deletion and modification of files that do not have an active retention period. Always use IBM StoredIQ Platform to manage files on retention.
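The dsmadmc commands in the Information Archive procedure above lend themselves to batch replay. The sketch below is an assumption about workflow rather than product code: it renders the guide's example policy-domain setup (the names Example_PD, Example_PS, and the Example_MG_* classes come from the procedure) into one macro text that the administrative client can run:

```python
def render_policy_macro(domain, policyset, classes, default_class):
    """Render the policy-domain setup as a dsmadmc macro.

    classes maps management-class name -> archive retention in days (retver).
    """
    lines = [
        "define domain {} desc='Example Domain' archret=1".format(domain),
        "define policyset {} {}".format(domain, policyset),
    ]
    for name, days in classes.items():
        lines.append("define mgmtclass {} {} {}".format(domain, policyset, name))
        # Each management class gets exactly one archive copy group, and the
        # copy group must include retinit=creation.
        lines.append(
            "define copygroup {} {} {} STANDARD type=archive "
            "destination=archivepool retver={} retinit=creation".format(
                domain, policyset, name, days))
    lines.append("assign defmgmtclass {} {} {}".format(domain, policyset, default_class))
    lines.append("validate policyset {} {}".format(domain, policyset))
    lines.append("activate policyset {} {}".format(domain, policyset))
    return "\n".join(lines)

macro = render_policy_macro(
    "Example_PD", "Example_PS",
    {"Example_MG_CR": 1, "Example_MG_0DAY": 0, "Example_MG_10DAY": 10},
    default_class="Example_MG_CR")
print(macro)
```

Save the output to a file (for example, ia_policy.mac, a hypothetical name) and replay it through the administrative client's MACRO facility, then register the node as shown in step 7.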

When configuring the SnapLock server, set the retention period settings. IBM StoredIQ Platform recommends setting the minimum setting to zero while not entering a maximum setting. Access the SnapLock server and enter the following commands:
vol options <vol-name> SnapLock_minimum_period 0d
vol options <vol-name> SnapLock_default_period min
Related concepts: "Configuring retention servers" on page 40

Hitachi HCAP configuration requirements
This topic provides conceptual information regarding Hitachi HCAP retention server configuration requirements. The IBM StoredIQ Platform application accesses the Hitachi HCAP server using HTTP. As a result, the HTTP gateway must be enabled on the server. Depending on the current allow/deny lists for the HTTP gateway, you may need to add the IBM StoredIQ Platform data server's IP addresses to the Allow IP addresses list.
Related concepts: "Configuring retention servers" on page 40

Configuring FileNet
By providing the configuration values for a FileNet domain, you are supplying the values needed to bootstrap into a domain.
To configure the FileNet domain:
1. Go to Administration > Data sources > Specify Servers > FileNet domain configurations.
2. Click Add new FileNet domain configuration, and the FileNet domain configuration editor page appears.
3. In the FileNet domain configuration editor page, configure these fields:
v In the Configuration name text box, enter the configuration name for this retention server.
v In the Server name text box, enter the server name.
v In the Connection list, select the desired connection type.
v In the Port text box, enter the port number.
v In the Path text box, enter the path for this retention server.
v In the Stanza text box, enter the stanza information for this retention server.
4. Click OK to save your changes.
Related concepts: "Configuring retention servers" on page 40

Configuring Chatter messages
Within Chatter, the default administrator profile does not have the Manage Chatter Messages permission; however, the proper permissions are required in order to harvest private messages. Consequently, there are certain administrative permissions that a user must have when that user account is used in the Connect as text box of the Chatter volume definition.

When setting up a Chatter user account that should be used to harvest and run actions against Chatter, in general, we recommend that you use an account with the built-in System Administrator profile. However, these administrative permissions must be assigned to the account you use:
v API Enabled
v Manage Chatter Messages (required if you wish to harvest Chatter private messages)
v Manage Users
v Moderate Chatter
v View All Data
Note: For Chatter administrators using the Auth token option, read how to set up a sandbox account.
Related concepts: "Creating volumes and data sources" on page 33
Related reference: Appendix B, "Supported server platforms and protocols," on page 129

Creating volumes
This section describes the types of volumes in IBM StoredIQ Platform as well as how to create them as data sources within the platform.

Creating primary volumes
This topic provides procedural information regarding how to create primary volumes as data sources.
To add a primary volume:
1. Go to Administration > Data sources > Specify volumes > Volumes.
2. On the Primary volume list page, click Add primary volumes.
3. Enter the information described in the table below. Note: Case-sensitivity rules for each server type apply. Red asterisks within the user interface denote required fields.
4. Click OK to save the volume.
5. Select one of the following:
v Add another volume on the same server
v Add another volume on a different server
v Finished adding volumes
For best-practice information regarding editing volume definitions and system restarts, see Editing Volume Definitions.

The following fields are available in the Add volume dialog box when configuring primary volumes.
v Server type: In the Server type list, select the server type. Note the following:
- For Documentum, you must specify the doc broker.
- For Domino, you must first upload at least one user.id. See Adding Domino as a Primary Volume.
- For Discovery Accelerator, you must first create a Discovery Accelerator site and an Enterprise Vault site. See Discovery Accelerator.
- For Desktop, the desktop agent must be installed on that desktop and then pointed to the data server. Desktop will not appear in the Server type list, but will instead appear as an available server or volume.
v Platform (Windows Share (CIFS)): In the Platform list, select the platform type: Standard, NetApp, or Celerra.
v Version (Exchange): In the Version list, select the appropriate version.
v Site (Enterprise Vault): In the Site list, select the appropriate site.
v Vault store (Enterprise Vault): In the Vault store list, select the desired vault store for the volume.

v Server (Windows Share (CIFS), NFS v2 and v3, Exchange, SharePoint, Discovery Accelerator, Domino, NewsGator): In the Server text box, enter the fully qualified name of the server where the volume is available for mounting. Note the following:
- For Exchange primary volumes, this is the fully qualified domain name where the OWA resides. Multiple Client Access servers on Exchange 2007 are supported; note that the server must be load-balanced at the IP or DNS level.
- When adding SharePoint volumes that contain spaces in the URL, see Special Note: Adding SharePoint Volumes.
v Mailbox server (Exchange): When configuring multiple Client Access servers, enter the name of one or more mailbox servers, separated by a comma. This is the fully qualified domain name where the mailboxes to be harvested reside.
v Active Directory server (Exchange): In the Active Directory server text box, enter the name of the Active Directory server. This must be a fully qualified Active Directory server.
v Protocol (Exchange, SharePoint, Discovery Accelerator, NewsGator): To use SSL, select the Protocol check box. The API client uses HTTP over SSL to communicate with the Discovery Accelerator or Exchange access servers.
v Doc base (Documentum): In the Doc base text box, enter the name of the Documentum repository. A Documentum repository contains cabinets, and cabinets contain folders and/or documents.
v FileNet config (FileNet): Use the FileNet config list to select the FileNet server you would like to use for this configuration. For more information, see FileNet.
v Connect as (Windows Share (CIFS), NFS v2 and v3, Exchange, SharePoint, Documentum, Domino, Celerra, FileNet, Discovery Accelerator, NewsGator, Chatter): In the Connect as text box, enter the logon ID used to connect and mount the defined volume. For Domino, select the user name for the primary user ID; the user ID must be configured on the System Configuration screen under the Lotus Notes user administration link.
v Password: In the Password text box, enter the password used to connect and mount the defined volume. For Domino, enter the password for the primary user ID.
v Volume: In the Volume text box, enter the name or names of the volume to be mounted; this is a friendly name for the volume. When adding SharePoint volumes that contain spaces in the URL, see Special Note: Adding SharePoint Volumes.
v Object store (FileNet): In the Object store list, select the desired object store. The object store must exist prior to the creation of a FileNet primary volume.
v Auth token (Chatter): In the Auth token text box, enter the token used to authenticate the Chatter volume. The auth token must match the user name used in the Connect as field. Auth tokens can be generated online on Salesforce. See Configuring Chatter Volumes.
enter For Domino. v For Exchange. v Exchange v SharePoint v Documentum v Discovery Accelerator v Domino v Celerra Windows Share v FileNet v NewsGator v Chatter Auth token In the Auth token text box. mount the defined volume. select the user name (CIFS) the logon ID used to connect and for the primary user ID. the creation of a FileNet primary volume. and v NFS v2 and v3 mounted. select the The object store must exist prior to FileNet desired object store. Domino. Applicable Volume Field Name Required Action Special Notes Type v Windows Share Connect as In the Connect as text box. v Domino v FileNet v Discovery Accelerator v NewsGator v Chatter v Windows Share Password In the Password text box. Object store In the Object store list. The user id mount the defined volume. Chatter volume. enter the password for (CIFS) password used to connect and the primary user ID. enter a friendly name v NetApp for the volume. v Exchange v When adding SharePoint volumes that contains spaces in v Enterprise Vault the URL. see Special Note: v Domino Adding SharePoint Volumes. NewsGator. v Celerra v FileNet v NewsGator v Chatter Creating volumes and data sources 49 . v Windows Share Volume In the Volume text box. must be configured on the System v Exchange Configuration screen under the v SharePoint Lotus Notes user administration v Documentum link. Auth tokens can be generated online on Salesforce. Enterprise Vault (CIFS) name or names of the volume to be (primary volume). enter The auth token must match the user Chatter the token used to authenticate the name used in the Connect as field. See Configuring Chatter Volumes. enter the Documentum. enter the For Domino.

Personal archives To enable the collection of personal This option pertains only to Exchange 2010 archives. v Documentum Harvest This is the Documentum harvest This option obtains the list of all option: known Domino users and their v Domino v To enable harvesting. It then harvests those Harvest all document versions. should begin. v Chatter v When adding SharePoint volumes that contains spaces in the URL. this field should v Exchange be left blank if you are harvesting all mailboxes. If you are v Celerra harvesting a single mailbox. select the Harvest mail journals option. select the Harvest all applications option. select NSFs. Folders In Folder. v NetApp v For Exchange. enter the name of the Discovery Accelerator once Accelerator Discovery Accelerator. v To harvest mail journals. Applicable Volume Field Name Required Action Special Notes Type Discovery In the Discovery Accelerator case This text box is populated from Discovery Accelerator case text box. personal archive check box. Virtual root In Virtual root. select the Harvest Exchange 2010 with SP1 applied. Archive In the Archive list. select either of the Exchange Mailboxes or Public folders options. a volume further down the (CIFS) enter the name of the initial directory tree rather than v NFS v2 and v3 directory from which the harvest selecting an entire volume. 50 Administration Guide . change the default For Exchange. v To harvest all applications. see Special Note: Adding SharePoint Volumes. select the Enterprise Vault archive to which this Enterprise Vault volume pertains. designation. enter v SharePoint the email address for that v Documentum mailbox. mailboxes unless it was pointed to a single mailbox using the initial These are the Domino harvest directory. select the Harvest mailboxes option. options: v To harvest mailboxes. connection is established. v This feature allows you to select v Windows Share Initial directory In the Initial directory text box. 
this option should be Exchange name to match the Exchange server changed to match the server designation.

v Index options: Select either or both of the Index options check boxes: Include system metadata for data objects within containers, and Include content tagging and full-text index. These options are selected by default.
v Remove journal envelope (Exchange): When selected (the default state), the journal envelope is removed.
v Versions (SharePoint): Select Include all versions. IBM StoredIQ Platform supports indexing versions from SharePoint. For more information, see Special Note: Adding SharePoint Volumes.
v Validation: To validate volume accessibility, select Validation. When selected, IBM StoredIQ Platform tests to see if the volume can be accessed.
v Subsites (SharePoint): Select Recurse into subsites.

v Include directories: In Include directories, specify a regular expression for included directories for each harvest (if it was specified). These directories are defined as sets of "first node" directories, relative to the specified (or implied) starting directory, that are considered part of the logical volume. For Discovery Accelerator, this regular expression helps restrict the volume to one or more Research Folders in the case.
v Start directory: In Start directory, designate a start directory for the harvest. The start directory involves volume partitioning in order to break up a large volume. The start directory must be underneath the initial directory, if an initial directory is defined.
v End directory: In End directory, determine the end directory for the harvest. The end directory is also part of volume partitioning and is the last directory harvested. For example, in the case of directories E–H, E would be the Start directory and H would be the End directory.
v Access times: In Access times, select one of these options:
- Reset access times but do not synchronize them. (This is the default setting.)
- Do not reset or synchronize access times.
- Reset and synchronize access times on incremental harvests.
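The Start/End directory example above (directories E through H) can equally be thought of as a pattern over first-node directory names. The product's exact regular-expression dialect for Include directories is not shown here, so the following is only an illustration in Python's re of how such a pattern selects that range of directories:

```python
import re

# Hypothetical include-directories pattern: first-node directories whose
# names begin with the letters E through H.
include = re.compile(r"^[E-H]")

first_node_dirs = ["Archive", "Engineering", "Finance", "HR", "Marketing", "Europe"]
selected = [d for d in first_node_dirs if include.match(d)]
print(selected)  # → ['Engineering', 'Finance', 'HR', 'Europe']
```

Note that a leading-letter pattern matches every name starting with those letters, which is broader than an explicit E-to-H partition; the directory names above are invented for the example.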

v Constraints: In Constraints, select one of these options:
- Only use __ connection process(es): Specify a limit for the number of harvest connections to this volume. If the server is also being accessed for attribute and full-text searches, you may want to regulate the load on the server by limiting the harvester processes. The maximum number of harvest processes is automatically shown. This maximum number is set on the system configuration tab.
- Control the number of parallel data object reads: Designate the number of parallel data object reads.
- Scope harvests on these volumes by extension: Include or exclude data objects based on extension. This option pertains only to certain volume types, including Windows Share (CIFS), NFS v2 and v3, and Exchange.

Configuring Exchange 2007 Client Access Server support
The system supports the harvest of multiple Client Access Servers (CAS) when configuring Exchange 2007 primary volumes. This feature does not support redirection to other CAS/Exchange clusters or the autodiscovery protocol.
To include Client Access Servers in primary Exchange 2007 volumes:
1. Go to Administration > Data sources > Volumes > Primary > Add primary volumes.
2. In the Server type list, select Exchange.
3. In the Version list, select 2007.
4. In the Server text box, type the name of the Exchange server. This server must be load-balanced at the IP or DNS level.
5. In the Mailbox server text box, enter the name(s) of one or more mailbox servers, separated by a comma and a space.
6. Complete the remaining fields for the primary volume, and then click OK.

Adding Domino as a primary volume
This topic provides procedural information regarding how to add Domino as a primary volume.
To add Domino as a primary volume:
1. Add a Lotus Notes user by uploading its user ID file in Lotus Notes User Administration on the Administration > Configuration tab.
Related tasks: “Creating primary volumes” on page 46 Adding Domino as a primary volume This topic provides procedural information regarding how to add Domino as a primary volume. In the Server type list. 2. 3. select 2007. 6. This server must be load-balanced at the IP or DNS level. The maximum number v Enterprise Vault of harvest processes is automatically shown. In the Mailbox server: text box. In the Version list. enter the name(s) of one or more mailbox servers. type the name of the Exchange server. separated by a comma and a space.

v If you would like to harvest a user's mailbox, add the user ID file for that user.
v If you would like to harvest multiple mailboxes within one volume definition, add the administrator's ID file.
v If the mailboxes have encrypted emails or NSFs, then you will need each user's user ID file in order to decrypt a given user's data.
2. Point the volume to the Domino server; you define the volume by setting certain properties.
3. To harvest mailboxes, select the Harvest mailboxes option, which obtains the list of all known Domino users and their NSFs. It then harvests those mailboxes unless it was pointed to a single mailbox using the initial directory. If a single mailbox must be harvested, set the initial directory to be the path to the mailbox on the Domino server, such as mail\USERNAME.
4. To harvest all mail journals, select the Harvest mail journals option.
5. To harvest all mail applications, select the Harvest all applications option, which looks at all NSFs, including mail journals, on the Domino server.
Related tasks: "Creating primary volumes" on page 46

Special note: adding SharePoint volumes
This topic provides conceptual information that must be used when adding SharePoint volumes. IBM StoredIQ Platform supports using the entire sites portion of a SharePoint URL for the volume, for example, /sites/main_site/sub_site in the Volume field when adding a new SharePoint volume. However, if the SharePoint volume URL contains spaces, then you must also utilize the Server, Volume, and Initial directory fields in the Add volume dialog box, in addition to the required fields Server type, Server, Connect as, and Password.
the SharePoint volume with the URL http://shpt2010. if the SharePoint volume URL contains spaces. 5. For example. IBM StoredIQ Platform supports using the entire sites portion of a Sharepoint URL for the volume /sites/main_site/sub_site in the Volume field when adding a new SharePoint volume. you have the option of indexing different versions of data objects on that volume. 4. v If the mailboxes have encrypted emails or NSFs. 54 Administration Guide . including mail journals. then you must also utilize the Server. the API itself causes extra overhead in fetching data and metadata for older versions. To harvest all mail journals.

Click OK to save the volume. select Enterprise Vault. create Centera pools as described in Creating Centera Pools. 4. Whenever volume definitions are edited or modified. To configure an Enterprise Vault primary volume: 1. When defining the Enterprise Vault site. In the Server Type list. or. you should restart the system. you must configure Enterprise Vault sites so that certain configuration items can appear in the primary volume configuration lists. Retention volumes store data objects that have been placed under retention. as detailed in Adding a Retention Volume. and then select one of the following: v Add another volume on the same server v Add another volume on a different server v Finished adding volumes Related tasks: “Enterprise Vault” on page 37 “Creating primary volumes” on page 46 About Editing Volume Definitions This topic provides conceptual information regarding the editing of volume definitions. v To fetch attributes for the older versions of an object. Create retention volumes. This is the process for using retention volumes to store such data: 1. You may also want to define Retention Categories on the Enterprise Vault server. Configure your retention servers. DCOM configuration is a prerequisite. 3. If you are using Enterprise Vault. 2. Related concepts: “Creating volumes” on page 46 Creating volumes and data sources 55 . Related tasks: “Creating primary volumes” on page 46 “Restarting and rebooting IBM StoredIQ Platform” on page 15 Creating retention volumes This section provides both conceptual and procedural information regarding retention volumes. 3. Create management or retention classes. See Configuring Retention Servers. 2. Related tasks: “Creating primary volumes” on page 46 Configuring primary volumes using Enterprise Vault Prior to creating primary volumes using Enterprise Vault. if you are using Centera retention servers. an API call must be made for each attribute that needs to be indexed. 
Go to Administration > Data sources > Volumes > Primary > Add primary volumes. ensure you have defined Enterprise Vault Sites (see Discovery Accelerator). meaning that the object will be retained.

Adding a retention volume
This topic provides procedural information regarding how to add a retention volume.
To add a retention volume:
1. Go to Administration > Data sources > Volumes, and then click Retention.
2. Depending on the type of retention server you are adding, complete the fields as described below. Note: Case-sensitivity rules apply. Red asterisks within the user interface denote required fields.
3. Click OK to save the volume.

v Server type: In the Server type list, select the server type. The available types are Windows Share (CIFS), NFS v3, Centera, Hitachi, IBM Information Archive, Dell DX Storage, Enterprise Vault, NetApp SnapLock, and FileNet.
v Platform (Windows Share (CIFS)): In the Platform list, select the platform type, such as Standard.
v Site (Enterprise Vault): In the Site list, select the appropriate site.
v Vault store (Enterprise Vault): In the Vault store list, select the desired vault store for the volume.
v Pool (Centera): In the Pool list, select the StoredIQ pool profile name to provide access to a specific Centera pool.
v Dell DX Storage Cluster (Dell DX Storage): In the Dell DX Storage Cluster list, select the appropriate Dell DX Storage Cluster.

v Server (Windows Share (CIFS), NFS v3, Hitachi, IBM Information Archive, NetApp SnapLock, FileNet): In the Server text box, assign the server a name.
v Connect as (Windows Share (CIFS), IBM Information Archive, FileNet): In the Connect as text box, enter the login ID.
v Password (Windows Share (CIFS), IBM Information Archive, NetApp SnapLock, FileNet): In the Password text box, enter the password for the login ID.
v Node name (IBM Information Archive): In the Node name text box, enter the name of the node.
v Node password (IBM Information Archive): In the Node password text box, enter the password for the node.
v Node port (IBM Information Archive): In the Node port text box, enter the node's port number.
v Object store (FileNet): In the Object store list, select the desired object store. The object store must exist prior to the creation of a FileNet retention volume.
v FileNet config (FileNet): In the FileNet config text box, enter the name of the FileNet connection.

v Volume (Windows Share (CIFS), NFS v3, Centera, Hitachi, IBM Information Archive, Dell DX Storage, NetApp SnapLock): In the Volume text box, enter the name or names of the volume to be mounted.
v Index options: Select either or both of the Index options check boxes: Include system metadata for data objects within containers, and Include content tagging and full-text index. These options are selected by default.
v Description (Enterprise Vault): In the Description text box, enter a description.
v Matter archive name (Enterprise Vault): In the Matter archive name text box, enter the name of the matter archive.
v Default retention category (Enterprise Vault): In the Default retention category list, select the default retention category.
v Retention category override (Enterprise Vault): Select the Allow retention category to be overridden on policy option.
v Object creation (FileNet): Select the option to create an object as a StoredIQ document.

Constraints (Applicable volume types: Windows Share (CIFS), NFS v3, Centera, Hitachi, IBM Information Archive, NetApp SnapLock, FileNet): In Constraints, you can select only this option:
v Only use __ connection process(es)—Specify a limit for the number of harvest connections to this volume. If the server is also being accessed for attribute and full-text searches, you may want to regulate the load on the server by limiting the harvester processes. The maximum number of harvest processes is automatically shown. This maximum number is set on the system Configuration tab.
Note: For FileNet retention volumes, you can select either or both of these options:
v Only use __ connection process(es)—Specify a limit for the number of harvest connections to this volume.
v Control the number of parallel data object reads—Designate the number of parallel data object reads.
Related concepts:
"Creating retention volumes" on page 55

Configuring Enterprise Vault retention volumes
This topic provides procedural information regarding how to configure Enterprise Vault retention volumes.
To configure an Enterprise Vault retention volume:
1. Go to Administration > Data sources > Volumes > Retention.
2. Click Add retention volumes.
3. In the Server type list, select Enterprise Vault.
4. In the Site list, select the Enterprise Vault site you created. See Configuring Enterprise Vault.
5. Enter the information described in Creating Retention Volumes, based on your server type.
6. Click OK to save the volume.
7. Select one of the following:
v Add another volume on the same server
v Add another volume on a different server
v Finished adding volumes

Creating discovery export volumes
This topic provides procedural information regarding how to create a discovery export volume.
Discovery export volumes contain the data produced from a policy, which is kept so that it can be exported as a load file and uploaded into a legal review tool. Administrators can also configure discovery export volumes for managing harvest results from cycles of a discovery export policy.
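As noted above, discovery export data is kept so that it can be exported as a load file for a legal review tool. The product's actual load-file formats are not described in this section; the sketch below writes a minimal, hypothetical CSV-style load file, and the column names and layout are illustrative assumptions only.

```python
import csv
import io

def write_load_file(records, fileobj):
    """Write a minimal, hypothetical load file: one row per exported
    data object, plus a header row. Real review tools each define
    their own delimiters and required fields; this layout is invented
    for illustration."""
    writer = csv.writer(fileobj)
    writer.writerow(["DocID", "SourcePath", "Custodian", "SHA1"])
    for rec in records:
        writer.writerow([rec["id"], rec["path"], rec["custodian"], rec["sha1"]])

# Example usage with an in-memory buffer:
buf = io.StringIO()
write_load_file(
    [{"id": "DOC000001", "path": "/vol1/contracts/a.doc",
      "custodian": "jdoe", "sha1": "3f786850e387550fdab836ed7e6dc881de23001b"}],
    buf,
)
print(buf.getvalue().splitlines()[0])  # header row
```

A real export would also carry extracted text and native files alongside the index rows; the CSV here only stands in for the index portion.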

To create a discovery export volume:
1. Go to Administration > Data sources > Specify volumes, and then click Volumes.
2. Click Discovery export, and then click Add discovery export volumes.
3. Enter the information described in the table below, and then click OK to save the volume. Red asterisks within the user interface denote required fields.

Field Name (Applicable Volume Type): Required Action and Special Notes
v Type (Windows Share (CIFS); NFS v2, v3): Using the Type list, select the type of server.
v Server (Windows Share (CIFS); NFS v2, v3): In the Server text box, enter the name of the server where the volume is available for mounting.
v Connect as (Windows Share (CIFS)): In the Connect as text box, enter the logon ID used to connect and mount the defined volume.
v Password (Windows Share (CIFS)): In the Password text box, enter the password used to connect and mount the defined volume.
v Volume (Windows Share (CIFS); NFS v2, v3): In the Volume text box, enter the name of the volume to be mounted. Note: Case-sensitivity rules for each server type apply.
v Constraints (Windows Share (CIFS); NFS v2, v3): To utilize Constraints, select this option: Only use __ connection process(es)—Specify a limit for the number of harvest connections to this volume. If the server is also being accessed for attribute and full-text searches, you may want to regulate the load on the server by limiting the harvester processes. The maximum number of harvest processes is automatically shown. This maximum number is set on the system Configuration tab.

Related concepts:
"Creating retention volumes" on page 55

Creating system volumes
This topic provides procedural information regarding how to create a system volume.
System volumes support volume export and import. When you export a volume, data is stored on the system volume. When you import a volume, data is imported from the system volume.
To add a system volume:
1. Go to Administration > Data sources > Specify volumes > Volumes.
2. Select the System tab, and then click Add system volumes.
3. Enter the information described in the table below, and then click OK to save the volume. Red asterisks within the user interface denote required fields.

Field Name (Applicable Volume Type): Required Action and Special Notes
v Server type (Windows Share (CIFS); NFS v2, v3): Using the Type list, select the type of server.
v Server (Windows Share (CIFS); NFS v2, v3): In the Server text box, enter the name of the server where the volume is available for mounting.
v Connect as (Windows Share (CIFS)): In the Connect as text box, enter the logon ID used to connect and mount the defined volume.
v Password (Windows Share (CIFS)): In the Password text box, enter the password used to connect and mount the defined volume.
v Volume (Windows Share (CIFS); NFS v2, v3): In the Volume text box, enter the name of the volume to be mounted. Note: Case-sensitivity rules apply.
v Constraints (Windows Share (CIFS); NFS v2, v3): To utilize Constraints, select this option: Only use __ connection process(es)—Specify a limit for the number of harvest connections to this volume. If the server is also being accessed for attribute and full-text searches, you may want to regulate the load on the server by limiting the harvester processes. The maximum number of harvest processes is automatically shown. This maximum number is set on the system Configuration tab.

Exporting and importing volume data
The system's export and import volume capabilities allow metadata and full-text indexed data to be collected or exported from separate locations, such as data servers located in various offices of the enterprise. Once the data is available, it can be imported to a single location, such as a headquarters office data server, where selected files might be retained. Only primary and retention volume data can be exported or imported using the export/import feature; discovery export and system volumes cannot be imported or exported. The target location of an export or the source location of an import is always the IBM StoredIQ Platform system volume. The exported data has to be made available to the import data server before it can be imported. This may require you to physically move the exported data to the system volume of the import data server.

The export process creates two files: a binary file, which contains the exported data, and a metadata file. These files' names contain the following information:
v Data server name and IP address
v Volume and server names
v Time stamp
The exported data consists of data from the selected volume and any related information that describes that data, with the exception of volume-specific audits.

Export and import volume processes run as jobs in the background. These jobs are placed into their respective queues, and they are executed sequentially. When one job completes, the next one automatically starts. Because the export jobs and import jobs reside in separate queues, cancelling one type of job does not cancel jobs in the other queue. These jobs can be cancelled at any time while they are running. Cancelling one import or export job also cancels all the jobs that come after the one cancelled. The jobs are not restartable.

Licenses on the import appliances are enabled automatically if a feature of the imported volume requires it (such as Centera or Exchange licenses).

Related concepts:
"Creating volumes" on page 46
"Configuring server platforms" on page 33

Volume data export to a system volume
This topic provides procedural information regarding exporting volume data to a system volume.
To export a volume's data to the system volume:
1. Go to DSAdmin > Administration > Data sources > Volumes.
2. Select a volume of data to export from the list of volumes by clicking the export link in the far right-hand column.
3. Complete the Export volumes details dialog, described in this table.
4. Click OK. A dialog appears to let you know that the data is being exported.
5. To monitor the export progress, click the Dashboard link. To cancel the export process, click Stop this job under the Jobs in progress section of the dashboard. Alternately, click OK to return to the Volumes page.
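The export file names are described above as containing the data server name and IP address, the volume and server names, and a time stamp. The exact naming pattern is not documented in this section; the sketch below shows one plausible composition, with the field order and the .bin/.mdata extensions as assumptions.

```python
from datetime import datetime, timezone

def export_file_names(dataserver, ip, server, volume, when=None):
    """Compose names for the binary and metadata export files.
    The underscore-joined field order and the .bin/.mdata extensions
    are illustrative assumptions; only the ingredients (data server
    name and IP, server and volume names, time stamp) come from the
    documentation."""
    when = when or datetime.now(timezone.utc)
    stamp = when.strftime("%Y%m%d%H%M%S")
    base = f"{dataserver}_{ip}_{server}_{volume}_{stamp}"
    return base + ".bin", base + ".mdata"

binary, metadata = export_file_names(
    "dataserver1", "10.0.0.5", "fileserver", "vol1",
    when=datetime(2013, 1, 2, 3, 4, 5, tzinfo=timezone.utc),
)
print(binary)  # dataserver1_10.0.0.5_fileserver_vol1_20130102030405.bin
```

Embedding all of these fields makes each export pair self-describing, which matters because the files may be physically moved between data servers before import.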

The Export volumes details dialog contains these fields:
v Description (optional): Enter a description of the exported data.
v Export path (on system volume): Where to save the data on the system volume. The default path is /exports. You can edit the export path; the specified location will be automatically created if necessary.
v Export full-text index: Select this option to export the volume's full-text index. (Available only if the volume has a full-text index.)

Note: When a volume with a licensed feature is imported into a data server that does not utilize licensing, the license is imported along with the volume. To see the licensed features, users will need to log out and then log back in to the data server.

Volume data import to a system volume
This topic provides procedural information regarding importing volume data to a system volume.
An imported volume looks, acts, and is just like a volume originally defined and harvested on the data server, with a few exceptions:
v Logs and audit trails that capture the activity on the volume before the import are not available. However, the import itself is audited. See Import Audits.
v The data viewer works only if the appliance has the proper network access and rights to the source server and volume. You must have access and permission on export servers and volumes if the file you want to view has been migrated to a secondary server at the time of the export.
v The imported volume can be reharvested as long as the data server has the proper network access and rights to the original source server and volume.
Any action or relationship that is valid for a non-imported volume is valid for an imported volume.

To import data from the system volume:
1. Make sure that the exported data file is present in the system volume of the import appliance.
2. Go to Administration > Data sources > Volumes, and then click either the Primary or Retention tab.
3. Click the Import volume link at the top of either the primary or the retention volume lists. The Import volumes page appears, listing all of the volumes available for import:
v Server: Name of the server where the data resides.
v Volume: Name of the volume where the data resides.
By default, the data server searches for available volumes to import in the /imports directory of the system volume. If you have placed the exported data to another path, click Change path and enter the appropriate path.
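The import flow above pairs a binary data file with its matching metadata file in the /imports directory. A small sketch of that pairing step, with .bin and .mdata as hypothetical stand-ins for the product's actual file names:

```python
import os
import tempfile

def find_import_candidates(import_dir):
    """Return base names for which both a binary file and a metadata
    file are present, so the pair can be offered for import. The
    .bin/.mdata extensions are hypothetical; the product pairs a
    binary data file with a metadata file under names of its own."""
    names = os.listdir(import_dir)
    bins = {n[:-4] for n in names if n.endswith(".bin")}
    metas = {n[:-6] for n in names if n.endswith(".mdata")}
    return sorted(bins & metas)

# Example against a temporary directory: volB lacks its metadata file,
# so only volA is a complete, importable pair.
with tempfile.TemporaryDirectory() as d:
    for n in ("volA.bin", "volA.mdata", "volB.bin"):
        open(os.path.join(d, n), "w").close()
    candidates = find_import_candidates(d)
print(candidates)  # ['volA']
```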

4. From the list of volumes, select a volume to import by clicking the Import link in the last column on the right.
5. Select the Import full-text index check box to import the selected volume's full-text index. (The check box is active only if a full-text index is available.)
6. Select the Overwrite existing volume check box to replace the existing data of the volume with the imported data. The OK button is not enabled if the volume already exists unless the Overwrite existing volume option is selected.
7. Click OK. A dialog appears to let you know that the volume is being imported.
8. To view import progress, click the Dashboard link in the dialog, or click OK to return to the Manage volumes page.
9. To cancel the import process, click Stop this job under the Jobs in progress section of the dashboard. Note: The job cannot be restarted.
The Import volumes page now displays the following information about the imported volumes:
v Server and volume: Server name and volume name where the data physically resides
v Description: A description added when the volume was exported
v Volume type: Exchange, SharePoint, and so on
v Category: Primary or Retention
v Exported from: Server name and IP address of the server from which the data was exported
v Export date: The day and time the data was exported
v Total data objects: Total number of data objects exported for the exported volume
v Contains full-text index: Whether or not the full-text index option was chosen when the data was exported

Related concepts:
"Creating volumes" on page 46

Deleting volumes
Administrators can delete volumes from the list of available data sources, both from IBM StoredIQ Platform and IBM StoredIQ Administrator. Note the following regarding deleted volumes:
v Deleted volumes are removed from target sets.
v Deleted volumes are removed from all volume lists.
v Within created jobs, steps that reference deleted volumes are implicitly removed, meaning that a job could contain no steps. The job itself is not deleted.
v Applicable object counts and sizes within IBM StoredIQ Administrator will adjust automatically, provided that the data server is connected to the gateway.

v Object counts and sizes within user infosets will remain the same. Remember, those user infosets were created at a specific point in time when this data source was still available.
v No exceptions will be raised on previously executed actions. For example, if an infoset is copied that contained data objects from a volume that has been deleted, no exception is raised; instead, the data is no longer available.
v Users exploring a specific data source and any generated reports will no longer reference the deleted volume.
v If you mark a desktop volume for deletion, the status of that workstation is set to uninstall in the background. When the desktop client next checks in, it will see that change in status and uninstall itself. Until then, you will see the Under Management link; once the client has uninstalled itself, the volume will automatically be removed from the Primary volume list.
Note: If retention volumes such as Centera, FileNet, Hitachi, and so on contain data, they cannot be deleted, as IBM StoredIQ Platform is the source of record.
To delete a volume:
1. Go to Administration > Data sources > Specify volumes > Volumes.
2. Click the tab of the volume type you would like to delete: Primary, Retention, System, or Discovery export.
3. Click Delete, and in the confirmation dialog, click OK. The volume is deleted, removing it from the list of available volumes.

Policy limitations for volume types
This conceptual topic denotes which volume types have policy limitations. StoredIQ imposes some policy limitations on volume types, which are identified in this table.

Table 1. Policy limitations for volume types
Policy Type: Source/Target Limitations (Other Restrictions)
Copy from: Centera, Dell DX Storage, Documentum, Enterprise Vault (Discovery Accelerator), Exchange, Hitachi, IBM Information Archive, NewsGator,

SharePoint, NFS, NFS SnapLock, Windows Share, Windows Share SnapLock

Table 1. Policy limitations for volume types (continued)
Policy Type: Source/Target Limitations (Other Restrictions)
Copy to: Primary Volume:
v CIFS
v NFS
v SharePoint
v Documentum
Retention Volume:
v Centera
v NetApp SnapLock (CIFS/NFS)
v Celerra FLR (CIFS/NFS)
v Dell DX Object Store
v CIFS
v NFS
v IBM FileNet
v Documentum /w RPS
v Hitachi HCAP
v IBM Information Archive
v Symantec Enterprise Vault
Move from: Centera, Documentum, Hitachi, IBM Information Archive, NFS, NFS SnapLock, SharePoint, Windows Share, Windows Share SnapLock, Dell DX Storage
Move to: Centera, Documentum, Hitachi, IBM Information Archive, NFS, NFS SnapLock, Windows Share, Windows Share SnapLock, Dell DX Storage
Delete: Centera, Documentum, Hitachi, IBM Information Archive, NFS, NFS SnapLock, Windows Share, Windows Share SnapLock, Dell DX Storage
Discovery Export: from Centera, Dell DX Storage, Documentum, Enterprise Vault (Discovery Accelerator), Exchange, Hitachi, IBM Information Archive, NewsGator, NFS, NFS SnapLock, SharePoint, Windows Share, Windows Share SnapLock to Windows Share, NFS (Discovery Export volumes (category) are not harvested)

Table 1. Policy limitations for volume types (continued)
Policy Type: Source/Target Limitations (Other Restrictions)
Modify Security: Choose single volume (no volume sets)

Related concepts:
"Creating volumes" on page 46


Harvesting data
This section provides information regarding harvesting data.
A harvest must be run before you can start searching for data objects or textual content. Harvesting (indexing) is the process or task by which IBM StoredIQ Platform examines and classifies data in your network. An Administrator initiates a harvest by including a harvest step in a job. There are also several standard harvesting-related jobs that are provided in the IBM StoredIQ Platform system.

Understanding Harvests
This conceptual topic will help you to understand the harvest types available.
v A full harvest can be run on every volume or on individual volumes. Running an out-of-the-box Harvest every volume job indexes all data objects on all volumes.
v An incremental harvest only harvests the changes on the requested volumes.
These options are selected when you create a job for the harvest. Most harvesting parameters are selected from the Configuration subtab (see Configuring Application Settings). For example, you can specify the number of processes to use during a harvest, whether a harvest should continue where it left off if it has been interrupted, as well as many other parameters.

Incremental harvests
Harvesting volumes takes time and taxes your organization's resources. Once you have harvested a volume, you can speed up subsequent harvests by only harvesting for data objects that have been changed or are new. An incremental harvest indexes new, modified, and removed data objects on your volumes/file servers. You can maintain the accuracy of the metadata repository quickly and easily with incremental harvests. Because the harvests are incremental, it takes less time to update the metadata repository, with the additional advantage of putting a lighter load on your systems than the original harvests.

Harvesting with and without post-processing
You can separate harvesting activities into two steps: the initial harvest and harvest post-processing. The separation of tasks gives Administrators the flexibility to schedule the harvest or the post-process loading to run at times that do not impact system performance for system users, who may, for example, be running queries. These are examples of post-harvest activities:
v Loading all metadata for a volume.
v Computing all tags that are registered to a particular volume.
v Generating all reports for that volume.
v If configured, updating tags, and creating explorers in the harvest job.
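The incremental behavior described above can be illustrated with a small sketch. The product's internal change-detection mechanism is not documented here; comparing a saved snapshot of path to (size, modification time) pairs between runs is just one common way to classify objects as new, modified, or removed.

```python
import os

def snapshot(root):
    """Map each file path under root to its (size, mtime) pair."""
    state = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            p = os.path.join(dirpath, name)
            st = os.stat(p)
            state[p] = (st.st_size, st.st_mtime)
    return state

def incremental_changes(old, new):
    """Classify objects the way an incremental harvest would:
    added (new paths), modified (size or mtime differs), removed."""
    added = [p for p in new if p not in old]
    removed = [p for p in old if p not in new]
    modified = [p for p in new if p in old and new[p] != old[p]]
    return added, modified, removed

# a.txt changed, b.txt was deleted, c.txt is new:
old = {"/v/a.txt": (10, 1.0), "/v/b.txt": (20, 2.0)}
new = {"/v/a.txt": (12, 3.0), "/v/c.txt": (5, 4.0)}
print(incremental_changes(old, new))  # (['/v/c.txt'], ['/v/a.txt'], ['/v/b.txt'])
```

Only the changed subset is then re-indexed, which is why incremental runs put a lighter load on the source servers than full harvests.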

Related tasks:
"Configuring harvester settings" on page 26

Harvesting properties and libraries
This topic provides conceptual information regarding SharePoint volumes and their permissions for mounting.
Note: Administrative permissions are required in order to harvest personal information, libraries, and objects that have not been designated as being visible to Everyone for user profiles.
The SharePoint volume needs to be mounted using administrative permissions. If the harvest is performed without administrative permissions, then any of the user profile's properties that SharePoint users marked as visible to a category other than Everyone will not be visible in results. In order to harvest users' personal documents and information, volumes mounted without administrative permissions must use credentials that have full control on all SharePoint site collections hosted by the user profile service application. To override this restriction, see http://technet.microsoft.com/en-us/library/ee721057.aspx.
Note: Harvesting NewsGator Volumes—Since NewsGator objects are really just events in a stream, an incremental harvest of a NewsGator volume only fetches new events that have been added since the last harvest. To cover gaps due to exceptions or to pick up deleted events, a full harvest may be required.

Performing a lightweight harvest
This section describes lightweight harvest as well as providing possible system configuration suggestions.
IBM StoredIQ Platform allows you to perform many types of harvests, depending on your data needs. While in-depth harvests are common, there are also instances where you need an overview of the data and a systemwide picture of files' types and sizes. For example, at the beginning of a deployment, you may want to obtain a high-level view of a substantial amount of data so that you can make better, more informed decisions about how you want to handle harvesting or other policies going forward. This section provides possible system configurations that will allow the system to process the volumes' data in the quickest manner possible.

Lightweight harvest parameters
The sections here denote only configuration changes that could be made in order to perform a lightweight harvest.

Determining volume configuration settings
This topic denotes possible volume configuration changes that could be made in order to perform a lightweight harvest. Prior to modifying volume details, read Creating Primary Volumes.
Volume Details: When configuring data sources for a lightweight harvest, you do not need to include content tagging and full-text indices. By clearing this option, the system will index the files' metadata, not the entire content of those files.

You will obtain a large amount of information regarding file types, the number of files, the age of the files, file ownership, and so on.
To configure a volume for a lightweight harvest:
1. Go to Administration > Data sources > Specify volumes > Volumes.
2. On the Primary volume list page, click Add primary volumes or Edit to edit an existing volume.
3. Verify that all of the Index options check boxes are cleared (some are selected by default).
4. Use Show Advanced Details. In some cases, you may want to reduce the weight of a full-text harvest. In these instances, the advanced settings are used to control what is harvested within the volume:
v Include Directory—If there is a subtree of the volume that you wish to harvest rather than the whole volume, then you can enter the directory here. By only harvesting the directory structures in which you are interested, you can exercise some control over the harvest's weight.
v Start Directory/End Directory—This setting allows you to select a beginning and end range of directories that will be harvested. Enter the start and end directories as required.
v Constraints/Scope harvest by extension—These options allow you to limit the files harvested through connection processes, parallel data objects, or scoping harvests by extension. For example, the Scope harvest by extension setting allows you to limit the files that you harvest by using a set of extensions. If you want to harvest only Microsoft Office files, you can constrain the harvest to .DOC, .XLS, and .PPT files.
5. Click OK and then restart services.
The system can then execute and complete harvests very quickly.
Related tasks:
"Creating primary volumes" on page 46

Determining harvester configuration settings
This topic denotes possible harvester configuration changes that could be made in order to perform a lightweight harvest. Note: Prior to modifying the harvester settings, read Configuring Harvester Settings. This must be done from the Configuration tab.
1. Determine Skip Content Processing settings. Note: This setting is relevant only for full-text harvests. You may have a large number of files that are types for which you do not need the contents, an example being .EXE files. In these instances, you can add these file types to the list of files for which the content is not processed. There are two points to consider when skipping content processing:
v You do not spend time harvesting unnecessary objects, which can be beneficial from a time-saving perspective.
v At a later date, you have the option of viewing the content of the skipped files by reharvesting them. This will create additional work; however, you can search for the skipped files and determine which ones were missed due to the setting of this parameter.
2. Determine which Locations to ignore. There may be instances where large quantities of data are contained in subdirectories, and that data is not relevant to your harvest strategy. For example, you could have a directory with a tree of source code or a software archive that is not used as a companywide resource. In these cases, you can eliminate these directories from the harvests by adding the directory to the Locations to ignore. These locations are not specific to a volume, but can instead be used for common directories across volumes. This will eliminate the harvest of objects that you have already determined are not relevant to your project.
3. Determine Limits.
v Maximum data object size—This setting is only relevant for full-text harvests. In cases where there are a lot of very large files, you may want to eliminate processing those files by setting the Maximum data object size to a smaller number (the default value is 1,000,000). You will still collect the metadata on the very large files, so you can search for them and determine which files were missed due to the setting of this parameter.
4. Determine Binary Processing. Binary processing is additional processing that can be performed if the standard processing cannot index the contents of a file. For lightweight harvests, the Run binary processing when text processing fails check box should be cleared, as this setting is only relevant for full-text harvests.

Determining hash settings
This topic denotes possible hash setting configuration changes that could be made in order to perform a lightweight harvest. Note: Prior to modifying the hash settings, see Configuring Hash Settings.
Determine the hash settings. A file's hash is a unique, calculated number based on the content of the file. By selecting Partial data object content, you reduce the required processing to create the hash. Be forewarned, however, that two different data objects could create the same hash. This is a small but potential risk.

Determining full-text settings
This topic denotes possible full-text index configuration changes that could be made in order to perform a lightweight harvest. Note: Prior to modifying the full-text settings, see Configuring Full-text Index Settings. The full-text settings are only valid if you have full-text processing enabled for a given volume. Consider these options for controlling the impact of a full-text harvest on the system's performance.
1. Determine Limits. Limit the length of words to be harvested by selecting the Limit the length of words indexed to __ characters option. The default value is 50, but you can reduce this number in order to reduce the quantity of indexed words. This is only relevant for full-text harvests.
2. Determine Numbers. You can control what numbers are indexed by the system.
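The trade-off behind the Partial data object content hash setting can be sketched as follows. SHA-1 and the 4 KB head size below are illustrative assumptions, not the product's documented algorithm; the point is that hashing only a file's leading bytes is cheaper but lets two objects that differ later in the file collide.

```python
import hashlib

def content_hash(data: bytes, partial: bool = False, head: int = 4096) -> str:
    """Hash a data object's content. With partial=True only the first
    `head` bytes are hashed -- cheaper, but two objects that share the
    same leading bytes will collide, which is the risk noted above.
    SHA-1 and the 4 KB head size are illustrative choices only."""
    h = hashlib.sha1()
    h.update(data[:head] if partial else data)
    return h.hexdigest()

# Two objects identical in their first 8 KB but different at the tail:
a = b"x" * 8192 + b"tail-A"
b = b"x" * 8192 + b"tail-B"
print(content_hash(a) == content_hash(b))                              # False
print(content_hash(a, partial=True) == content_hash(b, partial=True))  # True
```

For a lightweight harvest the partial hash is usually an acceptable trade, since the goal is a fast, high-level inventory rather than exact deduplication.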

Configuring jobs
This section provides procedural information about configuring jobs within IBM StoredIQ Platform.
Jobs start tasks such as harvests. They can be run at the time of creation, or scheduled to run at a designated future time and at regular intervals. Jobs consist of either a single step or a series of steps. The actions available at each step depend on the type of job being created.

Types of IBM StoredIQ Platform jobs
This topic lists and describes the types of IBM StoredIQ Platform jobs that are available. There are several out-of-the-box jobs included in IBM StoredIQ Platform; these and their locations in the interface are described in this table.

StoredIQ Job Type: Description
v Centera deleted files synchronizer: This is an unscheduled, one-step job that synchronizes the deleted Centera files. This job is located in the Library/Jobs folder.
v Database Compactor: This is a scheduled job that helps to limit "bloat" (unnecessary storage usage) in the database. While this job runs, it must have exclusive, uninterrupted access to the database. Administrators can override this job by logging in and then proceeding to use the system. This job is located in the Library/Jobs folder.
v Enterprise Vault retention volume deleted files synchronizer: This is an unscheduled, one-step job that harvests the Enterprise Vault retention volumes, looking for files that require removal because the physical file has been deleted from the retention file system. This job is located in the Library/Jobs folder.
v FileNet retention volume deleted files synchronizer: This is an unscheduled, one-step job that harvests the FileNet retention volumes, looking for files that require removal because the physical file has been deleted from the retention file system. This job is located in the Library/Jobs folder.
v Harvest every volume: This is an unscheduled, one-step job that harvests all primary and retention volumes. This job is located in the Workspace/Templates/Jobs folder.
v Hitachi deleted files synchronizer: This is an unscheduled, one-step job that harvests the Hitachi volumes, looking for files that require removal because the physical file has been deleted from the file system. This job is located in the Library/Jobs folder.
v IBM Information Archive deleted files synchronizer: This is an unscheduled, one-step job that harvests the IBM Information Archive volumes, looking for files that require removal because the physical file has been deleted from the file system. This job is located in the Library/Jobs folder.
v System maintenance and cleanup: This is a multistep job located in the Library/Jobs folder. The system is configured to run a system maintenance and cleanup job once a day and includes the following: email users about reports, email Administrators about reports, delete old reports, delete old harvests, load indexes, and optimize full-text indexes.
v Update Age Explorers: This is a one-step job located in the Library/Jobs folder that recalculates these items: Owner Explorer data for Access Date, Owner Explorer data for Modified Date, and Created Date (API only) values.
v Windows Share (CIFS)/NFS retention volume deleted files synchronizer: This is an unscheduled, one-step job that harvests the Windows Share/NFS retention volumes, looking for files that require removal because the physical file has been deleted from the retention file system. This job is located in the Library/Jobs folder.

Related concepts:
"Configuring jobs" on page 73

Working with jobs
Defining jobs consists of two parts: naming the job and adding steps to the job. This section details how to create and work with various IBM StoredIQ Platform jobs. The following tasks define how to create custom jobs.

Creating a job
This topic provides procedural information regarding how to create a job.
To create a job:
1. From the Folders tab > Workspace folder, select New > Job.
2. Enter a unique job name.
3. In the Save in: list, select the appropriate folder, and then click OK. The job is created.
4. If you would like to view the job and add steps, click Yes.
5. On the View job page, click Add step and select a step type from the list.
6. For Run harvest jobs, on the Specify harvest and load options page, configure the following options:
v Harvest these volumes—Select a volume from the list.
v Harvest type—Specify the type of harvest:
– Run a full harvest, meaning that all data objects on this volume will be indexed.
– Run an incremental harvest (default), meaning that only files or data objects that have changed since the last harvest will be indexed.
v Harvest and load scheduling—You can separate harvest and load processes to limit resource use. Select:
– Load indexes when harvest completes.
– Load indexes with next nightly system services job. (This delays the index loading to run with the next system-services job after the harvest has completed. The system-services job is scheduled to run at midnight by default.)
v Run harvest only—Select this option if you plan to load harvested data into indexes at a later time.
v Load indexes only—Select this option to load previously harvested data into indexes.
v Harvest sampling—Select this option if you want to limit the harvest to a smaller sample. This option skips every second, third, tenth, or other number data object as entered in the text box.
v Harvest limits—Limit the harvest by time or total number of data objects. Enter the duration that the harvest should run in Run harvest for __, and then select Minutes or Hours. Enter the number of data objects to be harvested in Only harvest __ data objects.
7. Click OK.
Related concepts:
"Harvesting data" on page 69

Creating a job to discover retention volumes
Jobs can be created with various types of steps. This topic provides procedural information regarding how to create jobs to discover retention volumes. Note: Prior to creating a job to discover retention volumes, you must have added a retention volume.
To create a job to discover retention volumes:
1. From the Folders tab > Workspace folder, select New > Job.
2. Enter a unique job name.
3. In the Save in: list, select the appropriate folder.
4. Click OK, and the job is created.
5. If you would like to view the job and add steps, click Yes.
6. On the View job page, click Add step.
7. Select Discover Retention volumes.
8. In the Discover Retention volume list, select the retention volume to be used for this job.
9. Click OK.
Related concepts:
"Creating retention volumes" on page 55

Editing a job
This topic provides procedural information regarding editing an existing job.
To edit a job's steps:
1. From the Folders tab > Workspace folder, click the job you would like to edit. The Job details page opens.
2. Click Edit job details, and the Edit job details dialog box appears, allowing you to specify the time, date, and frequency for the job to run.
v In the Time: field, enter the time the job must start, or click Now to populate the time field with the current time. You may want to add some time if you have not specified all of the job steps.
v In the Schedule area, enter the date on which to run the job, or click Today to populate the date field with the current date.
v Using the options in the Frequency field, specify how often the job must run. If you select None, the job runs once, at the time and date provided.
3. To edit the job steps:
a. Add a step to the job by clicking Add step.
b. Edit an existing step by clicking Edit.
c. Remove an existing step by clicking Remove.
d. Change the order of existing steps by clicking the Move up or Move down icons.
4. Click OK.

Starting a job
This topic provides procedural information regarding starting a job.
To start a job, do either of the following:
v From the Folders tab > Workspace folder, click on the name of the job you would like to start. The Job details page opens; click Start job.
v From the Folders tab > Workspace folder, right-click the job and select Start job.
The started job will display Running in the Status column, and on the Job details page, a started job will be displayed as This job is running now.

Saving a job
This topic provides procedural information regarding saving a job.
To save a job as:
1. From the Folders tab > Workspace folder, click on the name of the job you would like to save. The Job details page opens.
2. Click Save as, and the Save job as dialog box appears.
3. In the Job name text box, verify that the job's name is correct. Each job must have a unique name if it is saved in the same folder as another job.
4. In the Save in: list, select the folder in which you would like to save the job.
5. Click OK to return to the Folders tab.
Click OK to close the Save job as dialog box and to return to the Folders tab. 2. 1. and in the Job details page. In the Save in list. v In the Date: field. 4. click Start job. 2. 3.

Running a predefined job

This topic provides procedural information regarding running a predefined job.

To run a predefined job:
1. Go to Folders > Library.
2. Click the Jobs folder to open the job list. To narrow the list, select Filter by.
3. Click the predefined job that you would like to edit.
4. To set the schedule, click Edit job details, complete the desired changes, and then click OK.
5. Alternately, click Start job (selected jobs only) in the bottom right-hand area of the pane to start the job immediately.

Deleting a job

This topic provides procedural information regarding deleting a job.

To delete a job:
1. From the Folders tab > Workspace folder, click the job name to open the job details.
2. Click Delete in the lower-left hand corner of the screen.
3. Click OK, and the job disappears from the list.

Monitoring processing

You can track the system's processing on your harvest/policy and discovery export tasks using the View cache details feature. The appliance gathers data in increments and caches the data as it gathers it. If a collection is interrupted, the appliance can resume collection at the point that it was interrupted, instead of starting over from the beginning of the task.

Note: Information for a job is only available while the job is running. Once a task is completed, its cache details are no longer shown.

To monitor processing:
1. From Administration > Dashboard, in the Appliance status pane, click View cache details. The View cache details page appears.
2. To see harvest/policy progress, click the Volume cache tab. To see discovery export job progress, click the Discovery export cache tab.

Table 2. Harvest/Volume Cache Details
• Name: The name of the volume being harvested.
• Start date: The time the job started.
• Type: Type of job being executed. Values: Harvest—full; Harvest—incremental; Copy.
• State: Status of the process. Values: Caching—The volume cache is currently being created/updated by a harvest or policy; Cached—Creation or update of volume cache is complete (harvest only); Loading—Volume cache contents are being successfully loaded into the volume cluster.
• Full-text: Indicates whether a full-text harvest is being performed. Values: Yes or No.
• View audit details: Link to the harvest/policy audit page.

Table 3. IBM StoredIQ Platform Discovery Export Cache Details
• Name: Name of the volume being processed.
• Start date: Starting date/time for the process.
• Type: The type of file being prepared for export. Value: Discovery export.
• State: Status of the discovery export job. Values: Aborted—Discovery export policy has been cancelled or deleted by the user; Caching—The volume cache is currently being created/updated by a harvest or policy; Cached—Creation or update of volume cache is complete (harvest only); Loading—Volume cache contents are being successfully loaded into the volume cluster.
• Full-text: Indicates whether a full-text harvest is being performed. Values: Yes or No.

Related concepts:
"Configuring jobs" on page 73

Determining if a harvest is stuck

The speed of a harvest is dependent on volume size and processing speed; however, harvests do occasionally become stuck and are unable to complete successfully. Use the procedures outlined here to troubleshoot the harvest process and determine if and when to contact technical support.

To troubleshoot a harvest:
1. Click Administration > Dashboard > Jobs in Progress to verify that your job continues to run.
2. In Jobs in Progress, note the Total data objects encountered number.
3. Wait 15 minutes, letting the harvest continue to run.
4. Note the new value of Total data objects encountered, and then compare it to the value denoted previously.
5. Go to Question 1: Is the Total data objects encountered counter increasing?

Question 1: Is the Total data objects encountered counter increasing?
• Yes—If the number of encountered data objects continues to increase, then the harvest is running correctly.
• No—If the number of encountered objects remains the same, then go to Question 2: Is the load average up?

Question 2: Is the load average up?
To view load averages, on Appliance status > About appliance > View details > System services, look at the load averages in the Basic system information area.
• Yes—If the load averages number is up, call technical support to report that the harvest is stuck on files.
• No—The job is not really running, meaning that the job must be restarted. Go to Question 3: Did the job complete on the second pass?

Question 3: Did the job complete on the second pass?
• Yes—If the job completed successfully after it was restarted, then the harvest is not stuck.
• No—The job did not complete successfully. Call technical support to report a job that does not complete.

Deleting a volume cache

This topic provides procedural information regarding the deletion of a volume cache.

To delete a volume cache:
1. From the volume cache or discovery export cache list, select the check box of the cache you would like to delete.
2. Click Delete. A confirmation dialog appears.
3. Click OK.
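The three-question flow above can be condensed into a small decision routine. This is purely an illustrative sketch of the manual checks described in this section, not an IBM StoredIQ Platform API: the product exposes the Total data objects encountered counter and the load averages only through its web interface, so the inputs here are values you would read off those pages yourself.

```python
def diagnose_harvest(objects_before, objects_after,
                     load_average_up=False, completed_after_restart=False):
    """Walk the Question 1-3 decision flow for a possibly stuck harvest."""
    # Question 1: is the Total data objects encountered counter increasing?
    if objects_after > objects_before:
        return "harvest is running correctly"
    # Question 2: is the load average up?
    if load_average_up:
        # The counter is flat even though the appliance is busy.
        return "report to technical support: harvest is stuck on files"
    # The job is not really running and must be restarted.
    # Question 3: did the job complete on the second pass?
    if completed_after_restart:
        return "harvest is not stuck"
    return "report to technical support: job does not complete"
```

For instance, a flat counter combined with a high load average maps to the "stuck on files" outcome, which this section says to report to technical support.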

Utilizing desktop collection

When configuring desktop settings, you are enabling or disabling encryption within IBM StoredIQ Platform. The IBM Desktop Data Collector (desktop client or client) enables desktops as a volume type or data source. Once the desktop client has been installed on a desktop and then connects and registers with the data server, that desktop is available as a data source within the list of primary volumes, allowing it to be used just as other types of added data sources. Additionally, while snippet support and the snippet step-up action are supported by the IBM Desktop Data Collector, it should be noted that a desktop cannot be the target or destination of an action.

Prior to using the IBM Desktop Data Collector, the Administrator may want to notify end users that desktop collection is going to be performed and make them aware of the following:
• The desktop must be connected over the network during data collection. If the connection is interrupted, the IBM Desktop Data Collector will resume its work from the point at which it stopped.
• Users might notice a slight change in performance speed, but they can continue working normally. Desktop collection will not interfere with work processes.
• Certain actions can be taken from the tray icon: right-click for About, Restart, Status, and Email Logs (which packages logs into a single file and launches the email client so that the user can mail them to the IBM StoredIQ Platform Administrator).

Desktop collection processes

The following sections describe how to configure and perform desktop collection. The IBM Desktop Data Collector is provided as a standard MSI file and is installed according to the typical method (such as Microsoft Systems Management Service (SMS)) used within your organization. The IBM Desktop Data Collector can collect PSTs and .ZIP files as well as other data objects and is capable of removing itself once its work is completed.

Note that all communications are outbound from the client; the appliance never pushes data or requests to the desktop. The IBM Desktop Data Collector pings the appliance about every 60 seconds, and the Last Known Contact time statistic is updated approximately every 30 minutes. Additionally, the IBM Desktop Data Collector checks for task assignments every five minutes. The Administrator can also temporarily disable the client service on all desktops registered to the data server from the Configuration tab.

Related concepts:
"Utilizing desktop collection"

IBM Desktop Data Collector client installation

The IBM Desktop Data Collector agent works with the following operating systems:
• Windows XP 32- and 64-bit, Service Pack 2 or later
• Windows Vista 32- and 64-bit
• Windows 7 32- and 64-bit
• Windows Server 2003, 2008

Installation requires administrative privileges on the desktop. You can download the installer application from the application in the Configuration tab.

IBM Desktop Data Collector installation methods

During installation, the hostname and IP address of the IBM StoredIQ Platform must be supplied. If the installation will be performed manually by end users, you must provide this information to them using email, a text file, or another method. The IBM Desktop Data Collector can be installed using the following methods:
• Mass Distribution Method (SMS)—The appliance id is part of the distribution configuration. This method supports passing installation arguments as MSI properties.
  Required:
  – SERVERACTIONNODEADDRESS—IP address or hostname for the Action node. This field must be entered accurately, or manual correction will be required in the desktop config file.
  Optional:
  – SERVERACTIONNODEPORT—Port number for the Agent on the Action node. Defaults to 21000, and should only be changed when the agent will connect on a different port that is subsequently mapped to 21000.
  – NOTRAYICON—Specifies whether the agent displays the IBM Desktop Data Collector tray icon while running. The default is 0; changing the setting to 1 forces the agent to run silently and not display a tray icon.
• NT Logon Script—A .BAT file or .VBS script invokes msiexec. Examples of msiexec options are given here:
  – /i—install; /x {7E9E08F1-571B-4888-AC08-CEA8A076F5F9}—uninstall the agent. The product code must be present.
  – /quiet—install/uninstall runs silently. When specifying this option, SERVERACTIONNODEADDRESS must be supplied as an argument.
• MSI
• Emailing links—Send a link within an email such as file:\\g:\group\install\Client-install. The link may be to any executable file format such as .BAT, .VBS, or .MSI. The user executing the link must have administrative privileges. When the install is not silent, the user is prompted for IP address or hostname.

.BAT/.VBS formats can be used to pass client arguments to an .MSI, as in these examples.

.VBS script:

Set WshShell = CreateObject("WScript.Shell")
WshShell.Run "%windir%\System32\msiexec.exe /i G:\group\install\desktopclient.msi NOTRAYICON=1 SERVERACTIONNODEADDRESS=clust017.test.local /q"
Set WshShell = Nothing

Batch file:

msiexec /i G:\group\install\desktopclient.msi NOTRAYICON=0 SERVERACTIONNODEADDRESS=clust017.test.local /q

Related concepts:
"Desktop collection processes" on page 81

Configuring IBM Desktop Data Collector collection

Desktop collection configuration includes the IBM Desktop Data Collector installer and Encrypted File System. For procedural information on downloading the IBM Desktop Data Collector installer, see "Downloading the IBM Desktop Data Collector installer from the application" on page 31.

To configure Encrypted File System desktop collection:
1. Complete the procedure outlined in Using the Encrypted File-System Recovery Agent, noting the following points:
   • If the computer is not part of a domain and it is running any version of Windows earlier than 7.0, then the user name should simply be the user name.
   • If the computer is not part of a domain and it is running Windows 7.0 or later, then the user name should be the name of the PC and the domain.
2. Restart the service and begin collection.

Related concepts:
"Utilizing desktop collection" on page 81

Installing the IBM Desktop Data Collector in stealth mode

The IBM Desktop Data Collector can also be installed in stealth mode, meaning that the IBM Desktop Data Collector icon does not appear in the system tray and the end user will not know that data is being collected. To enable stealth mode for the IBM Desktop Data Collector, launch the command-line interface and edit the msiexec parameters as follows:

msiexec /i \\server\volume\DesktopClient.msi NOTRAYICON=1 SERVERACTIONNODEADDRESS=address SERVERACTIONPORT=21000 /q

Related concepts:
"Desktop collection processes" on page 81

Using the "delete" policy with the IBM Desktop Data Collector: special notes

When you use the IBM Desktop Data Collector to delete files from a desktop, they are removed permanently. They are not transferred to the appliance or backed up to any other location. Consequently, you must carefully review the infoset of affected data objects prior to executing a delete action. Your organization may use custom applications or other files that you may not want to delete. In reviewing the returned list, do not allow the following to be deleted:
• Anything under this directory: c:\Windows
• Anything under this directory: Documents and Settings, with these extensions:
  – c:\Documents and Settings\<username>\UserData\ and extension *.xml
  – c:\Documents and Settings\<username>\Cookies\ and extension *.txt
  – c:\Documents and Settings\<username>\Start Menu\Programs\ and extension *.lnk
• Executable files
  – *.exe
  – *.dll
  – *.ocx
• Drivers
  – *.sys
  – *.inf
  – *.pnf
• Installers
  – *.msi
  – *.mst
• Important data files
  – *.dat
  – *.ini
  – *.old
  – *.cat
• These file names
  – desktop.ini
  – ntuser.dat
  – index.dat
  – ntuser.pol
  – ntuser.dat.log

Some important system files can be automatically excluded from harvest operations. Enabling the Harvester setting Determine whether data objects have NSRL digital signature excludes some well-known system files.

Related concepts:
"Utilizing desktop collection" on page 81
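All of the installation methods described earlier in this chapter ultimately pass the same MSI properties to msiexec. When generating logon scripts in bulk, it can help to assemble that command line programmatically. The following sketch is illustrative only: it uses the documented SERVERACTIONNODEADDRESS, SERVERACTIONNODEPORT, and NOTRAYICON properties, and the installer path and hostname come from the examples above, not from a real deployment.

```python
def build_install_command(installer_path, server_address,
                          port=21000, tray_icon=True, quiet=True):
    """Assemble a msiexec install command from the documented MSI properties."""
    parts = ["msiexec", "/i", installer_path,
             # NOTRAYICON=1 runs the agent silently with no tray icon.
             "NOTRAYICON=%d" % (0 if tray_icon else 1),
             "SERVERACTIONNODEADDRESS=%s" % server_address]
    if port != 21000:
        # 21000 is the documented default, so only emit the property
        # when a non-default port is requested.
        parts.append("SERVERACTIONNODEPORT=%d" % port)
    if quiet:
        parts.append("/q")
    return " ".join(parts)

cmd = build_install_command(r"G:\group\install\desktopclient.msi",
                            "clust017.test.local")
```

With the defaults, `cmd` reproduces the batch-file example shown earlier; passing `tray_icon=False` reproduces the silent `.VBS` example instead.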

Using Folders

This section provides both conceptual and procedural information regarding folders and their usage.

Understanding folder types

This topic contains conceptual information regarding both the Library and Workspace folders. The Folders tab displays two types of folders: Library and Workspace.

Library folder
The Library folder contains the Jobs folder. Note: This folder cannot be renamed, moved, or deleted, and you cannot add folders to the Library folder.

Workspace folder
The Workspace folder is a custom folder that reflects your use of the system. If you are using the system for IT purposes, you may want to create folders for each locale or function. By default, it contains a folder entitled Templates. Note that all custom folders must be placed in the Workspace or a Workspace subdirectory. Note: These folders can be renamed, moved, or deleted, and you also have the options of setting folder security.

Creating a folder

To create a folder:
1. From the Folders tab, select New > New Folder. The Create new folder dialog appears.
2. In the Name: field, give a name that represents the folder's purpose (legal matter, business unit, locale, or the like).
3. In the Description: field, type a description for the folder.
4. In the Create in: field, use the list to select a place for the folder.
5. Click OK. If you wish to open the folder, click OK in the dialog that appears.

Related concepts:
"Understanding folder types"

Deleting a folder

When deleting folders, note that only empty folders can be deleted.

To delete a folder:
1. From within the Folders tab, do either of the following:
   a. Select the check box next to the folder you want to delete, and in the Actions list, select Delete.
   b. Right-click on the folder name and select Delete.
2. In the confirmation box that appears, click OK.

Related concepts:
"Understanding folder types" on page 85

Moving a folder

To move a folder:
1. From within the Folders tab, do either of the following:
   a. Select the check box next to the folder you want to move, and in the Actions list, select Move.
   b. Right-click on the folder name and select Move.
2. In the Move items dialog, select the new location from the list.
3. Click OK.

Related concepts:
"Understanding folder types" on page 85

Renaming a folder

To rename a folder:
1. Right-click on the folder name and select Rename.
2. In the Rename folder dialog, change the Name and/or the Description.
3. Click OK.

Related concepts:
"Understanding folder types" on page 85

Copying items to different folders

To copy an item from one folder to another:
1. From within Folders > Workspace, right-click the item you want to copy.
2. Select Copy.
3. In the Copy dialog, assign a new name (if appropriate) in the Name field. Note that you cannot reuse the same name for an item within a single folder.
4. In the Description field, type a description.
5. In the Save in: field, choose the location for the copied item.
6. Click Save.

Related concepts:
"Understanding folder types" on page 85

Saving items into different folders

Jobs can be renamed and saved into other folders.

To save items into a different folder:
1. From the item's editor pane, click Save as.
2. In the Save [item] as pane, type a name in the [Item] name: field.
3. In the Description field, type a description.
4. In the Save in list, use the list to select a location for the item.
5. Click Save.

Related concepts:
"Understanding folder types" on page 85

Filtering items within the folder view

To filter items within the folder view:
1. From within Folders, click Filter by.
2. Select the component you want to display.

Related concepts:
"Understanding folder types" on page 85
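The folder rules in this section can be summarized in a few lines of code: custom folders live under Workspace, names must be unique within a single folder, and only empty folders can be deleted. The sketch below is an illustration of those documented constraints, not part of the product.

```python
class Folder:
    """Minimal model of the Folders tab rules described above."""

    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = {}

    def create(self, name):
        # You cannot reuse the same name for an item within a single folder.
        if name in self.children:
            raise ValueError("an item named %r already exists here" % name)
        child = Folder(name, parent=self)
        self.children[name] = child
        return child

    def delete(self, name):
        # Only empty folders can be deleted.
        folder = self.children[name]
        if folder.children:
            raise ValueError("only empty folders can be deleted")
        del self.children[name]

workspace = Folder("Workspace")
legal = workspace.create("Legal matters")
```

Attempting to delete "Legal matters" while it still contains subfolders raises an error, mirroring the confirmation the interface enforces.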

Using audits and logs

This section describes the audit and log categories in the IBM StoredIQ Platform system, including descriptions of the various audit types as well as how to view and download details.

Understanding harvest audits

Harvest audits provide a summary of the harvest, including status, date, duration, average harvest speed, and average data object size. They can be viewed in two ways: by volume name or by the date and time of the last harvest. Data objects can be skipped during a harvest for a variety of reasons, such as the object being unavailable or a selected user option that excludes the data object from the harvest. The Harvest details page lists all skipped data objects based on file-system metadata level and content level. All skipped harvest audit data and other files that have not been processed can be downloaded for analysis.

Harvest audits can be viewed in these ways:
• Harvest audit by volume
• Harvest audit by time
• Harvest audit overview, including summary options, results options, and detailed results
• Skipped data objects details

This section lists the different fields seen when viewing harvest audits.

Harvest Audit by Volume: Fields and Descriptions
• Server: The server name.
• Volume: The volume name.
• Harvest type: The type of harvest performed: Full Harvest, ACL only, or Incremental.
• Last harvested: The date and time of the last harvest.
• Total system data objects: The total number of system data objects encountered.
• Data objects fully processed: The number of data objects that were fully processed.
• Data objects previously processed: The number of data objects that were previously processed.
• Processing exceptions: The number of exceptions thrown during processing.
• Binary processed: The number of processed binaries.
• Harvest duration: The length of time of the harvest's duration.
• Status: The harvest's status: Complete or Incomplete.
• Average harvest speed: The average harvest speed, given in terms of data objects processed per second.
• Average data object size: The average size of encountered data objects.

Harvest Audit by Time: Fields and Descriptions
• Harvest start: The time and date at which the harvest was started.
• Harvest type: The type of harvest performed: Full Harvest, ACL only, or Incremental.
• Total system data objects: The total number of system data objects encountered.
• Data objects fully processed: The total number of data objects that were fully processed.
• Data objects previously processed: The total number of data objects that were previously processed.
• Processing exceptions: The total number of processing exceptions.
• Binary processed: The total number of processed binaries.
• Harvest duration: The length of time of the harvest's duration.
• Status: The harvest's status: Complete or Incomplete.
• Average harvest speed: The average harvest speed, given in terms of data objects processed per second.
• Average data object size: The average size of encountered data objects.

Harvest Overview Summary Options: Fields and Descriptions
• Harvest type: The type of harvest performed: Full Harvest, ACL only, or Incremental.
• Harvest status: The harvest's status: Complete or Incomplete.
• Harvest date: The date and time of the harvest.

Harvest Overview Results Options: Fields and Descriptions
• Total system data objects: The total number of system data objects that were found.
• Total contained data objects: The total number of contained data objects that were found.
• Total data objects: The total number of data objects that were found.

Harvest Overview Detailed Results: Fields and Descriptions
• Skipped - previously processed: The number of data objects that were skipped because they were previously processed.
• Fully processed: The number of fully processed data objects.
• Skipped - cannot access data object: The number of data objects that were skipped, as they could not be accessed.
• Skipped - user configuration: The number of data objects that were skipped because of their user configuration.
• Skipped directories: The number of data objects in skipped directories.
• Content skipped - user configuration: The number of data objects where the content was skipped due to user configuration.

Harvest Overview Detailed Results (continued): Fields and Descriptions
• Content type known, fully processed: The number of data objects for which the content type is known and processing has been completed.
• Content type known, partial processing complete: The number of data objects for which the content type is known, but partial processing is complete.
• Content type known, but error processing content: The number of data objects for which the content type is known, but an error was thrown while processing content.
• Content type known, but cannot extract content: The number of data objects for which the content type is known, but the content could not be extracted.
• Content type unknown, not processed: The number of data objects for which the content type is unknown and has not been processed.
• Binary text extracted, full processing complete: The number of data objects for which the binary text has been extracted and full processing has been completed.
• Binary text extracted, partial processing complete: The number of data objects for which the binary text has been extracted and partial processing has been completed.
• Error processing binary content: The number of data objects for which an error was thrown while processing binary content.
• Total: The total number of data objects.

Related concepts:
"Using audits and logs" on page 89

Viewing harvest audits

This topic provides procedural information regarding how to view harvest audits.

To view harvest audits:
1. Go to Audit > Harvests > View all harvests. The Harvest audit by volume page opens, which lists recent harvests and includes details about them.
2. In the Volume column, click the volume name link to see the harvest audit by time page for that particular volume. The harvest audit by time page lists all recent harvests for the chosen volume and includes details about each harvest.
3. In the Harvest start column, click the harvest start time link to see the harvest overview page for the volume. You can also access this page by clicking the Last harvested time link on the Harvest audit by volume page. The Harvest overview page provides:
   • Summary—Harvest type, status, date and time, duration, average harvest speed, and average data object size.
   • Results—Total system data objects, total contained data objects, and total data objects.
   • Detailed results—Skipped - previously processed, fully processed, skipped - cannot access data object, skipped - user configuration, skipped directories, content skipped - user configuration, error-gathering ACLs, error processing binary content, content type known (fully processed, partial processing complete, error processing content, cannot extract content), content type unknown (not processed), binary text extracted (full or partial processing complete), and total.
4. To view details on data objects, click the link next to the data objects under Detailed results.

Data objects can be skipped at the file system metadata level or at the content level. Data objects skipped at the content level are based on attributes associated with the data object or its contents. Skipped Data Objects Results Details provides details about skipped data objects. If data objects were not harvested (such as skipped, not fully processed, or had errors in processing), you may want to download the data object's harvest audit list details for further analysis.

Related concepts:
"Understanding harvest audits" on page 89

Downloading harvest list details

This topic provides procedural information regarding how to download harvest list details.

To download harvest list details:
1. From the Harvest details page, click the active link next to the data objects under Detailed results. A page named for the detailed result chosen (such as Skipped - user configuration or Binary text extracted, full processing complete) appears.
   • The skipped data object list includes object name, path, and reason skipped.
   • With the exceptions of skipped - previously processed, fully processed, and binary text extracted, full processing complete, all other results with more than zero results listed have clickable links that allow you to view and download results.
2. Click the Download list in CSV format link on the upper left side of the page. A dialog informs you that the results are being prepared for download.
3. Click OK. A new dialog appears, prompting you to open or save the .CSV file. Select to save the file from the prompt.
4. Enter a name and select a location to save the file.

Information in the downloaded CSV file includes:
• Object name
• System path
• Container path
• Message explaining why data object was skipped
• Server name
• Volume name

Related concepts:
"Understanding harvest audits" on page 89

Understanding import audits

Volume-import audits provide information about the volume import, including the number of data objects imported, the time and date of the volume import, whether or not the imported volume overwrote an existing volume, and the status. The volume name links to the Import details page. This section lists the different fields seen when viewing import audits.

Imports by Volumes Details: Fields and Descriptions
• Volume: The name of the imported volume.
• Exported from: The source server of the IBM StoredIQ Platform system exported from.
• Import date: The date and time on which the import occurred.
• Total data objects imported: The total number of imported data objects.
• Overwrite existing: If the import overwrote an existing volume, the status is Yes. If the import did not overwrite an existing volume, the status is No.
• Status: The status of the import: Complete or Incomplete.
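Because the skipped-objects download is a plain CSV file with the columns just listed, it can be summarized outside the product. The sketch below counts skipped objects per skip message using Python's standard csv module. The header names and the two sample rows are illustrative guesses at the layout, so check the actual header row of your downloaded file before relying on them.

```python
import csv
import io
from collections import Counter

def skip_reasons(csv_text):
    """Count skipped data objects per skip message in a harvest-audit export."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(row["Message"] for row in reader)

# A made-up two-row sample in the documented column layout.
sample = (
    "Object name,System path,Container path,Message,Server name,Volume name\n"
    "a.doc,/vol1/a.doc,,Object unavailable,srv1,vol1\n"
    "b.doc,/vol1/b.doc,,Object unavailable,srv1,vol1\n"
)
counts = skip_reasons(sample)
```

Sorting the resulting counts gives a quick view of which skip reasons dominate a harvest, which is useful before deciding whether a re-harvest or a configuration change is warranted.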

Viewing volume import audit details

This topic provides procedural information regarding how to view volume import audit details.

To view volume import audit details:
1. Go to Audit > Imports, and click View all imports. The Imports by volume page opens, which lists volume imports and import information.
2. Click a volume name link in the Volume column to view the audit details for that particular import.

Related concepts:
"Using audits and logs" on page 89

Understanding event logs

Every action taken by the system and its users is captured by the event logs, which document actions that succeed and fail. These actions include creating draft and published queries and tags, publishing queries, running policies, deleting objects, configuring settings, subscribing to an event, and any other action taken through the IBM StoredIQ Platform interface. A detailed list of log entries is provided in the event log messages.

You can view event logs for the current day or review saved logs from previous days, and up to 30 days' worth of logs can be viewed through the interface. If you select and clear a day of logs, those logs are removed from the system.

Related reference:
Appendix C, "Event log messages," on page 133

Working with event logs

This topic provides procedural information regarding how to work with event logs, including viewing logs, subscribing to an event, clearing the current event log, and downloading an event log.

Viewing event logs

To view the event log:
1. Perform either of the following:
   • Click Administration > Dashboard, and then locate the Event log section on the dashboard. The current day's log displays there by default.
   • Click the Audit tab and locate the Event logs section.
2. To view a previous day's log on the dashboard, use the View all event logs list to select the day for which you would like to view an event log.

Clearing the current event log
To clear the current event log: On the Administration > Dashboard, in the Event log area, click Clear for the current view.

Downloading an event log
To download an event log:
1. When viewing an event log, select the Download link for saving the data to a text file.
2. Select to save the file from the prompt.
3. Enter a name and select a location to save the file, and then click OK.

Subscribing to an event
To subscribe to an event:
1. Go to Audit > Event logs.
2. Select a different day from the view event log list. This menu displays the event log dates for the past 30 days, and each log is listed by date in YYYY-MM-DD format.
3. To the right of the event log to which you would like to subscribe, click Subscribe. The Edit notification page appears.
4. In Destination, select the method by which you would like to be notified of this event log. If you select Email address, be certain to use commas to separate multiple email addresses.
5. Click OK.
Note: You can also subscribe to an event on the Dashboard: in the Event log area, click Subscribe to the right of the event.

Understanding policy audits
This topic provides conceptual information regarding how to view and understand policy audits. Policy audits provide a detailed history of the policy, including type of action, date last executed, start and end dates with times, average speed, total data objects, and data object counts. They can be viewed by name, volume, time, and by discovery export. This section lists the different fields seen when viewing policy audits.

Policy Audit by Name: Fields and Descriptions
v Policy name: The name of the policy.
v Policy status: The policy's status.
v Number of times executed: The number of times that the policy was executed.
v Most recent date executed: The date on which the policy was last executed.
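The dates in the event log list follow the YYYY-MM-DD pattern noted above, with up to 30 days retained. As an illustration of filtering a set of downloaded log files by that date format, here is a short sketch; the file-name pattern is a hypothetical choice for the example, not something IBM StoredIQ Platform prescribes:

```python
from datetime import date, timedelta

def recent_logs(filenames, today, days=30):
    """Keep only files named like 'eventlog-YYYY-MM-DD.txt' whose date
    falls within the last `days` days (mirroring the 30-day window the
    interface exposes). The naming scheme is an assumption."""
    cutoff = today - timedelta(days=days)
    kept = []
    for name in filenames:
        try:
            stamp = name.removeprefix("eventlog-").removesuffix(".txt")
            y, m, d = (int(part) for part in stamp.split("-"))
            when = date(y, m, d)
        except ValueError:
            continue  # skip names that do not match the pattern
        if cutoff <= when <= today:
            kept.append(name)
    return sorted(kept)
```

A file dated more than 30 days before `today`, or one that does not match the pattern at all, is simply skipped.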

Policy Audit by Time: Fields and Descriptions
v Policy name: The name of the policy.
v Policy status: The policy's status: Complete or Incomplete.
v Start: The time at which the policy's execution was started.
v End: The time at which the policy's execution was completed.
v Success count: The number of processed messages that have been classified as a success.
v Failure count: The number of processed messages that have been classified as a failure.
v Warning count: The number of processed messages that have been classified as a warning.
v Other count: The number of processed messages that have been classified as other.
v Total data objects: The total number of data objects.
v Action type: The type of policy that took place.
v Avg. actions/second: The average number of actions per second.

Policy Audit by Volume: Fields and Descriptions
v Volume: The name of the volume on which the policy was executed.
v Most recent date a policy was executed: The most recent date on which a policy was executed on the volume.
v Number of policies executed: The number of policies that were executed.

Discovery Export Runs by Discovery Export: Fields and Descriptions
v Discovery export run: The name of the discovery export run.
v Number of executions: The number of times the run was executed.
v Success count: The number of processed messages that have been classified as a success.
v Failure count: The number of processed messages that have been classified as a failure.
v Most recent load file status: The status of the most recent load file.
v Most recent date executed: The date of the most recent discovery export.
Policy Audit by Discovery Export: Fields and Descriptions
v Discovery export name: The name of the discovery export.
v Number of runs: The number of runs.
v Most recent export status: The status of the most recent export.

Discovery Export Runs by Discovery Export (continued): Fields and Descriptions
v Warning count: The number of processed messages that have been classified as a warning.
v Other count: The number of processed messages that have been classified as other.
v Total data objects: The total number of data objects.
v Export status: The status of the export: Complete or Incomplete.
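The Avg. actions/second field reported in these audits is the total number of actions divided by the elapsed execution time. A minimal sketch of that arithmetic, with invented times and counts for illustration:

```python
def avg_actions_per_second(total_actions, start_epoch, end_epoch):
    """Average throughput for a policy run: actions divided by elapsed
    seconds. Returns 0.0 for a zero-length or malformed interval."""
    elapsed = end_epoch - start_epoch
    if elapsed <= 0:
        return 0.0
    return total_actions / elapsed

# A run that processed 1,800 data objects in 10 minutes averages 3.0/s.
```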

Viewing policy audit details
This topic provides procedural information regarding how to view policy audit details.
To view policy audit details:
1. Go to Audit > Policies, and then select a view:
   v Click Name. The Policy audit by name page provides the policy name and status, the number of times it has been executed, and the time and date of the most recent execution.
   v Click Volume to open the Policy audit by volume page. Click a volume link to go to the Policy audit by time page.
   v Click Time to see the Policy audit by time page for the policy.
   v Click Discovery export. On the Policy audit by discovery export page, click the discovery export name to open the Discovery export runs by production page. The page details further information according to the incremental runs of the policy.
2. Click a policy name to open the Policy executions by time page.
3. Click a policy name to open the Policy execution results page, which can be accessed by any of the above policy views.
   Note: To view the list of data objects, click the [#] data objects link. To create a report, click Create XML or Create PDF.
4. To view more policy execution details, click the policy name in the execution summary page.
As you review audit results through these pages, you can continue clicking through to review various levels of information, from the volume and policy execution level down to the data objects. As you continue browsing, IBM StoredIQ Platform provides more detailed information such as:
v Source and destination settings
Note: A warning in a policy audit trail is a success with the following conditions:
v If you copy an Exchange item such as re:, the re is copied, not the :. This will generate a warning.
v The copied file is renamed.
v The file system to which you are copying does not accept characters in the file name.
v Query (either IBM StoredIQ Platform or user-defined)
v Policy options: Details of the policy action. This section reflects the options selected when creating the policy. Most attributes that appear depend upon the type of policy run and the options available in the policy editor.
v View metadata link: The view metadata page describes security details for source and destination locations of the policy action.
Related concepts:
“Understanding policy audits” on page 94

Viewing a policy audit by name
To view a policy audit by name:
1. Go to Audit > Policies.
2. Click Name. The Policy audit by name page provides the policy name and status, the number of times it has been executed, and the time and date of the most recent execution.
3. Click a policy name to open the Policy executions by time page.
4. Click a policy name to open the Policy execution results page.
Related concepts:
“Understanding policy audits” on page 94

Viewing a policy audit by volume
To view a policy audit by volume:
1. Go to Audit > Policies.
2. Click Volume. The Policy audit by volume page provides the volume name, its most recent time and date of execution, and the number of policies that were executed.
3. Click a volume link to go to the Policy audit by time page.
4. Click a policy name to open the Policy execution results page.
Related concepts:
“Understanding policy audits” on page 94

Viewing a policy audit by time
To view a policy audit by time:
1. Go to Audit > Policies.
2. Click Time. The Policy audit by time page provides the policy name, its status, the execution's start and end time, its success count, failure count, warning count, other count, the total number of data objects, the action type, and the average number of actions per second.
3. On the Policy audit by time page, click the policy name to open the Policy execution results page.
Related concepts:
“Understanding policy audits” on page 94

Viewing a policy audit by discovery export
To view a policy audit by discovery export:
1. Go to Audit > Policies.
2. Click Discovery export.
3. On the Policy audit by discovery export page, click the discovery export name to open the Discovery export runs by production page. The page details further information according to the incremental runs of the policy.

click the policy name to open the Policy execution results page. 4. 3. other count. the number of times it has been executed. the action type. its most recent time and date of execution. Related concepts: “Understanding policy audits” on page 94 Viewing a policy audit by discovery export To view a policy audit by discovery export: 1. Go to Audit > Policies. v Query (eitherIBM StoredIQ Platform or user-defined) v View metadata link—The view metadata page describes security details for source and destination locations of the policy action. Click a policy name to open the Policy execution by time page. click the discovery export name to open the Discovery export runs by production page. Using audits and logs 97 . 2. and the average number of actions per second. the total number of data objects. 2. The page details further information according to the incremental runs of the policy. Related concepts: “Understanding policy audits” on page 94 Viewing a policy audit by volume To view a policy audit by volume: 1. The Policy audit by page provides policy name. On the Policy audit by time page. Click a policy name to open the Policy executions by time page. Most attributes that appear depend upon the type of policy run and the options available in the policy editor. 3. Related concepts: “Understanding policy audits” on page 94 Viewing a policy audit by name To view a policy audit by name: 1. 2. Click Volume. and the time and date of the most recent execution. The Policy audit by time provides policy name. v Policy options—Details of the policy action—This section reflects the options selected when creating the policy. Related concepts: “Understanding policy audits” on page 94 Viewing a policy audit by time To view a policy audit by time: 1. Click Discovery export. Go to Audit > Policies. warning count. failure count. the execution’s start and end time. its status. Click Name. its success count. 3. and the number of policies that were executed. 
Click the policy name to open the Policy execution results page.
Related concepts:
“Understanding policy audits” on page 94

Understanding the search audit feature
With the search audit feature, you can search audit trails by entering either Policy Details, Execution Details, or Data Object Details.

Policy details
Policy audits can be searched using any of these policy details.

Policy Audit Details: Fields and Descriptions
v Specify search criteria: In this area, specify the Policy name, Policy state, or Query name, define their values, and then add them to the list to search across all audits.
v Audit search criteria: In the Find audits that match list, select either Any of the following or All of the following.

Execution details
Policy audits can be searched using any of these execution details.

Policy Audit Execution Details: Fields and Descriptions
v Specify search criteria: In this area, specify the Action type, Action status, Action start date, Action end date, Success count, Failure count, Warning count, Other count, Total count, Source volume, or Destination volume, define their values, and then add them to the list to search across all audits.
v Audit search criteria: In the Find audits that match list, select either Any of the following or All of the following.

Data object details
Policy audits can be searched using any of these data-object details.

Policy Audit Data Object Details: Fields and Descriptions
v Specify search criteria: In this area, specify the Source object name, Source system path, Destination system path, Source volume, Destination volume, or Action result, define their values, and then add them to the list to search across all audits.
v Audit search criteria: In the Find audits that match list, select either Any of the following or All of the following.
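The Any of the following / All of the following choice described above is the familiar any-versus-all combination of search predicates. A small sketch of that matching logic over audit records; the record field names here are hypothetical, not a documented schema:

```python
def matches(record, criteria, mode="all"):
    """Return True when the record satisfies the criteria.
    mode='any' mirrors 'Any of the following'; mode='all' mirrors
    'All of the following'. criteria maps field names to required
    values; record is a plain dict (hypothetical field names)."""
    checks = (record.get(field) == wanted for field, wanted in criteria.items())
    return any(checks) if mode == "any" else all(checks)
```

With "any", one satisfied criterion is enough for the audit record to match; with "all", every criterion must hold.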
Related concepts:
“Understanding policy audits” on page 94

Failures. click CSV. To download the material in . In the Browse by options. produced in a previous run This applies to intermediate (discovery export only) and files archives produced during a discovery export policy. select Create PDF to generate a PDF or Create XML to generate an XML file of the results. click Data objects to see items that were responsive to the policy. Saving results from an audit You can save the results of policy executions into PDF and XML files. On the Policy execution results page. 3. shortcut (Windows Share) (Discovery export policy) NFS) Using audits and logs 99 . Go to Audit > Policies. 4. click Time. In the Results pane. 6. Policy audit warning messages Data objects can receive a warning during a policy action if they fail to do any of the following: Policy Audit Warnings Set directory attributes Reset time stamps Set attributes Set time stamps Set security descriptor Set access modes (Windows Set owner information Set group information (Windows Share) Share) (NFS) Set security permissions Create a link after a Find template to create a Extract text for the object migrate (Windows Share. and Other (discovery export policies only). Click the policy name. To save results from an audit trail: 1. Related concepts: “Understanding policy audits” on page 94 Policy audit messages A policy audit shows the number of data objects that have been processed during the policy execution. Policy audit success messages Data objects can receive a success message for these reasons: Policy Audit Success Messages Success Data object is a duplicate of Data object skipped but Data object is a duplicate [object name] will be loaded in load file. Processed data objects are divided into these categories: Success. The exporting of information appears as a running job on the dashboard until completed. 5. Warnings.CSV. Access the report through the inbox on the navigation page. 2. The information can be saved as PDF and XML files.

Note: A data object whose name was modified to respect file system rules on the destination volume will also appear in the warning category.

Policy audit failure messages
Data objects can receive one of the following failures during a policy action:
v Failed to create target directory structure
v Source does not exist
v Failed to find a new name for the incoming object
v Target is a directory
v File copy failed
v Could not create target
v Error copying data to target
v Could not copy due to network errors
v Could not delete source after move
v Target disk is full
v Source equals target on a copy or move
v Insufficient permissions in general to perform an action
v All modify actions failed
v File timed out waiting in the pipeline
v File under retention; cannot be deleted (retention server)
v Data object is a constituent of a container that already encountered failure (Discovery export policy)

Other policy audit messages
Data objects are categorized in the other category during a discovery export policy when:
v A data object is a member that makes its container responsive.
v A data object is a non-responsive member of a container.
Related concepts:
“Understanding policy audits” on page 94

Appendix A. Supported file types
This section provides a comprehensive list of the file types that can be harvested and processed by IBM StoredIQ Platform, organized by name and by category. You can also view SharePoint attributes.

Supported file types by name
This topic lists all supported file types by name.

Format | Extension | Category | Version
Adobe Acrobat | PDF | graphic | 2.1, 3.0–7.0, Japanese
Adobe FrameMaker Graphics | FMV | graphic | vector/raster through 5.5
Adobe FrameMaker Interchange Format | MIF | word processing | 3.0–6.0
Adobe Illustrator | | graphic | through 7.0, 9.0
Adobe Photoshop | PSD | graphic | 4.0
Ami Draw | SDW | graphic | all
ANSI | TXT | text and markup | 7- and 8-bit
ASCII | TXT | text and markup | 7- and 8-bit
AutoCAD | DWG | CAD | 2.5–2.6, 9.0–14.0, 2002, 2004, 2005
AutoShade Rendering | RND | graphic | 2.0
Binary Group 3 Fax | | graphic | all
Bitmap | BMP, DIB, RLE, ICO, CUR, WARP | graphic | all

x–9.0 DEC WPS PLUS WPL word processing through 4.0 Windows X3 DataEase database 4.0 dBXL database 1.X dBase Database database through 5.0 4.5 102 Administration Guide .x–8.x header) Corel Presentations SHW presentation through 12.3 DEC WPS PLUS DX word processing through 4.0 Corel Clipart CMX graphic 5–6 Corel Draw CDR graphic 3.x Corel Draw (CDR with Tiff graphic 2.0 X3 Corel WordPerfect WPD word processing through 12.0 DOS command executable COM system Dynamic link library files DLL system EBCDIC text and markup all ENABLE word processing 3.Format Extension Category Version CALS Raster GP4 graphic Type I. II Comma Separated Values CSV spreadsheet Computer Graphics CGM graphic ANSI Metafile CALS NIST 3.1 DisplayWrite (2 and 3) IP word processing all DisplayWrite (4 and 5) word processing through 2.0 4.

Supported file types 103 .0 First Choice word processing through 3.5 ENABLE Spreadsheet SSF spreadsheet 3.0 FoxBase database 2.0 4.x 3.0 Framework word processing 3.x Harvard Graphics graphic all (Windows) Appendix A.0 4.0 4.0 Framework spreadsheet 3.5 Encapsulated Post–Script EPS graphic TIFF header (raster) Executable files EXE system First Choice database through 3.1 Framework database 3.Format Extension Category Version ENABLE database 3.0 First Choice spreadsheet through 3.0 GEM Bit Image IMG graphic all Graphics Interchange GIF graphic all Format Graphics Environment GEM VDI graphic bitmap and vector Manager Gzip GZ archive all Haansoft Hangul HWP word processing 1997 2002 Harvard Graphics (DOS) graphic 2.0 4.

0 2004 JustSystems Write word processing through 3.0 IBM Picture Interchange PIF graphic 1.0–13.0 Lotus 1-2-3 for SmartSuite spreadsheet 1997–Millennium 9.1 Spec Java class files CLASS system JPEG (not in TIFF for–mat) JFIF graphic all JPEG JPEG graphic all JustSystems Ichitaro JTD word processing 5.0 6.6 Lotus AMI Pro SAM word processing through 3.1 Legato Email Extender EMX email Lotus 1-2-3 WK4 spreadsheet through 5.0 Lotus 1-2-3 Charts 123 spreadsheet through 5.0 8.0 Legacy word processing through 1.0 Lotus 1-2-3 (OS/2) spreadsheet through 2.01 Initial Graphics Exchange IGES graphic 5.0 IBM FFT text and markup all IBM Graphics Data Format GDF graphic 1.0 Format IBM Revisable Form Text text and markup all IBM Writing Assistant word processing 1.1 104 Administration Guide .0 Kodak Flash Pix FPX graphic all Kodak Photo CD PCD graphic 1.Format Extension Category Version Hewlett-Packard Graphics HPGL graphic 2 Language HTML HTM text and markup through 3.

0 Lotus Notes NSF email Lotus Pic PIC graphic all Lotus Snapshot graphic all Lotus Symphony spreadsheet 1.Format Extension Category Version Lotus Freelance Graphics PRZ presentation through Millennium Lotus Freelance Graphics PRE presentation through 2.0 Lotus Word Pro LWP word processing 1996–9. These files can be harvested.0 1.0 (OS/2) Lotus Manuscript word processing 2.0 Micrografx Designer DRW graphic through 3. 6.0 MPEG-1 Audio layer 3 MP3 multimedia ID3 metadata only.0 Micrografx Draw DRW graphic through 4. MS Access MDB database through 2.1 Micrografx Designer DSF graphic Win95.6 LZA Self Extracting archive all Compress LZH Compress archive all Macintosh PICT1/2 PICT1/PICT1 graphic bitmap only MacPaint PNTG graphic n/a MacWrite II word processing 1. but there is no data in them that can be used in tags.1 2.0–1997 Appendix A.0 MS Binder archive 7. Supported file types 105 .1 Macromedia Flash SWF presentation text only MASS-11 word processing through 8.

0 1998 2001 MS Word (PC) DOC word processing through 6.0–2007 MS PowerPoint XML PPTX presentation MS Project MPP database 1998–2003 MS Windows XML DOCX word processing MS Word (Macintosh) DOC word processing 3.0 1998 2001 2004 MS Excel XML XLSX spreadsheet MS MultiPlan spreadsheet 4.0–4.0–2004 MS PowerPoint (Windows) PPT presentation 3.0 MS Excel (Macintosh) XLS spreadsheet 3.0–4.Format Extension Category Version MS Excel XLS spreadsheet 2.2–2007 MS Excel Charts spreadsheet 2.0 MS Word (Windows) DOC word processing through 2007 MS WordPad word processing all MS Works S30/S40 spreadsheet through 2.0 106 Administration Guide .0 MS Outlook Express EML email 1997–2003 MS Outlook Form Template OFT email 1997–2003 MS Outlook Message MSG email all MS Outlook Offline Folder OST email 1997–2003 MS Outlook Personal PST email 1997–2007 Folder MS PowerPoint (Macintosh) PPT presentation 4.x–7.0 MS Works WPS word processing through 4.

0 (Windows) MS Write word processing through 3.0 Novell Perfect Works word processing 2.0 (Draw) Novell WordPerfect word processing through 6.0 (Macintosh) MS Works Database (PC) database through 2.0 OpenOffice Writer SXW/ODT word processing 1.5 MultiMate 4.0 word processing through 4.0 OpenOffice Calc SXC/ODS spreadsheet 1.0 Appendix A.0 OS/2 PMMetafile Graphics MET graphic 3.1 Novell WordPerfect word processing 1.0 (Macintosh) Office Writer word processing 4.1 2.0 OpenOffice Draw graphic 1.0 Navy DIF word processing all Nota Bene word processing 3.0 MS Works Database database through 4.Format Extension Category Version MS Works (Macintosh) word processing through 2.0–6.0 MS Works Database database through 2. Supported file types 107 .02–3.0 OpenOffice Impress SXI/SXP/ODP presentation 1.1 2.0 Novell Perfect Works graphic 2.1 2.0 Novell Perfect Works spreadsheet 2.1 2.0 Mosaic Twin spreadsheet 2.

0 Progressive JPEG graphic n/a Q &A (database) database through 2.0 X3 R:BASE 5000 database through 3.0 Quattro Pro (Windows) spreadsheet through 12.Format Extension Category Version Paint Shop Pro 6 PSP graphic 5.0 PC-File+Letter word processing through 3.0 Paradox Database (PC) database through 4.0 Q & A (DOS) word processing 2.0 Paradox (Windows) database through 1.0 PFS: Write word processing A.0 Q & A (Windows) word processing 2. DCX graphic all PFS: Professional Plan spreadsheet 1.0–6.0 PC PaintBrush PCX.0 Quattro Pro (DOS) spreadsheet through 5.0 Q & A Write word processing 3. B.0 108 Administration Guide .1 Professional Write Plus word processing 1.0 R:BASE System V database 1.0 PC-File Letter word processing through 5.1 R:BASE (Personal) database 1.0 Portable Pixmap Utilities PPM graphic n/a PostScript File PS graphic level II Professional Write word processing through 2. C Portable Bitmap Utilities PBM graphic all Portable Greymap PGM graphic n/a Portable Network Graphics PNG graphic 1.

x 7.x 7.2 6.0 StarOffice Impress SXI/SXP/ODP presentation 5.02 Smart Ware II word processing 1.x 8.x 7. Supported file types 109 .x 7.02 Sprint word processing 1.2 Appendix A.0 StarOffice Calc SXC/ODS spreadsheet 5.0 Rich Text Format RTF text and markup all SAMNA Word IV word processing Smart Ware II database 1.02 Smart Ware II spreadsheet 1.2 6.0 Sun Raster Image RS graphic n/a Supercalc Spreadsheet spreadsheet 4.0 Text Mail (MIME) various email Total Word word processing 1.x 8.2 6.x 8.Format Extension Category Version RAR RAR archive Reflex Database database 2.0 StarOffice Draw graphic 5.0 StarOffice Writer SXW/ODT word processing 5.2 6.x 8.

0 VP Planner 3D spreadsheet 1.6 WBMP graphic n/a Windows Enhanced EMF graphic n/a Metafile Windows Metafile WMF graphic n/a Winzip ZIP archive WML text and markup 5.2 WordMARC word word processing through composer processor WordPerfect Graphics WPG.0 WordStar 2000 word processing through 3.0 WANG PC word processing through 2. 10 WordStar word processing through 7. WPG2 graphic through 2.1 Visio (preview) graphic 4 Visio 2003 graphic 5 2000 2002 Volkswriter word processing through 1.0.Format Extension Category Version Truevision Image TIFF graphic through 6 Truevision Targa TGA graphic 2 Unicode Text TXT text and markup all Unix TAR (tape archive) TAR archive n/a Unix Compressed Z archive n/a UUEncoding UUE archive n/a vCard word processing 2.0 X Bitmap XBM graphic x10 X Dump XWD graphic x10 110 Administration Guide . 7.

X Pixmap | XPM | graphic | x10
XML (generic) | XML | text and markup |
XyWrite | XY4 | word processing | through III Plus
Yahoo! IM Archive | | archive |
ZIP | ZIP | archive | PKWARE–2.04g
Related reference:
Appendix A, “Supported file types,” on page 101

Supported file types by category
This topic lists supported file types by category.

Archive
Gzip | GZ | all
LZA Self Extracting Compress | | all
LZH Compress | | all
MS Binder | | 7.0–1997
RAR | RAR |
Unix TAR (tape archive) | TAR | n/a
Unix Compressed | Z | n/a
UUEncoding | UUE | n/a
Winzip | ZIP |
Yahoo! IM Archive | | n/a
ZIP | ZIP | PKWARE–2.04g

CAD
AutoCAD | DWG | 2.5–2.6, 9.0–14.0, 2002, 2004, 2005

3 ENABLE 3.0 MS Access MDB through 2.0 (Windows) Paradox Database (PC) through 4.0 dBXL 1.0 Paradox Database through 1.Category Format Extension Version Database DataEase 4.x dBase Database through 5.0 (Windows) Q & A (database) through 2.0 4.0 Reflex Database 2.0 MS Works Database through 4.0 Smart Ware II 1.02 Email Legato Email Extender EMX Lotus Notes NSF MS Outlook Express EML 1997–2003 112 Administration Guide .0 MS Project MPP 1998–2003 MS Works Database through 2.0 (Macintosh) MS Works Database (PC) through 2.5 First Choice through 3.0 R:BASE 5000 through 3.0 R:BASE System V 1.1 R:BASE (personal) 1.0 FoxBase 2.1 Framework 3.0 4.

RLE.Category Format Extension Version MS Outlook Form Template OFT 1997–2003 MS Outlook Message MSG all MS Outlook Offline Folder OST 1997–2003 MS Outlook Personal PST 1997–2007 Folder Text Mail (MIME) various Graphic Adobe Acrobat PDF 2. all WARP CALS Raster GP4 Type I. ICO.x–9.x header) Encapsulated Post Script EPS TIFF header (raster) Appendix A.0 Adobe Photoshop PSD 4. DIB. II Computer Graphics CGM ANSI Metafile CALS NIST 3.0–7.1 3.0 9.0 Graphics Adobe Illustrator through 7.0 Corel Clipart CMX 5–6 Corel Draw CDR 3.0 Japanese Adobe FrameMaker FMV vector/raster–5.0 Binary Group 3 Fax all Bitmap BMP. CUR.x–8.0 Ami Draw SDW all AutoShade Rendering RND 2. Supported file types 113 .x Corel Draw (CDR with Tiff 2.

1 Spec JPEG (not in TIFF format) JFIF all JPEG JPEG all Kodak Flash PIX FPX all Kodak Photo CD PCD 1.0 IBM Picture Interchange PIF 1.0 (Draw) 114 Administration Guide .x 3.0 Micrografx Draw DRW through 4.0 Format Initial Graphics Exchange IGES 5.0 Novell Perfect Works 2.1 Micrografx Designer DSF Win95 6.0 Lotus Pic PIC all Lotus Snapshot all Macintosh PICT1/2 PICT1/PICT2 bitmap only MacPaint PNTG n/a Micrografx Designer DRW through 3.x Harvard Graphics all (Windows) Hewlett-Packard Graphics HPGL 2 Language IBM Graphics Data Format GDF 1.Category Format Extension Version GEM Bit Image IMG all Graphics Interchange GIF all Format Graphics Environment GEM VDI bitmap and vector Manager Harvard Graphics (DOS) 2.

0 PC PaintBrush PCX.0 OS/2 PM Metafile Graphics MET 3. DCX all Portable Bitmap Utilities PBM all Portable Greymap PGM n/a Portable Network Graphics PNG 1.x 8.Category Format Extension Version OpenOffice Draw 1.0–6.0 Portable Pixmap Utilities PPM n/a PostScript PS level II Progressive JPEG n/a StarOffice Draw 5.x 7.2 6.0 Paint Shop Pro 6 PSP 5.1 2. Supported file types 115 .0 Sun Raster Image RS n/a Truevision Image TIFF through 6 Truevision Targa TGA 2 Visio (preview) 4 Visio 2003 5 2000 2002 WBMP n/a Windows Enhanced EMF n/a Metafile Windows Metafile WMF n/a Appendix A.

X Bitmap | XBM | x10
X Dump | XWD | x10
X Pixmap | XPM | x10

Multimedia
MPEG-1 Audio Layer 3 | MP3 | ID3 metadata only. These files can be harvested, but there is no data in them that can be used in tags.

Presentation
Corel Presentations | SHW | through 12.0, X3
Lotus Freelance Graphics | PRZ | through Millennium
Lotus Freelance Graphics (OS/2) | PRE | through 2.0
Macromedia Flash | SWF | text only
MS PowerPoint (Macintosh) | PPT | 4.0–2004
MS PowerPoint (Windows) | PPT | 3.0–2007
MS PowerPoint XML | PPTX |
OpenOffice Impress | SXI/SXP/ODP | 1.1, 2.0
StarOffice Impress | SXI/SXP/ODP | 5.2, 6.x, 7.x, 8.0

Spreadsheet
Comma Separated Values | CSV |
ENABLE Spreadsheet | SSF | 3.0, 4.0, 4.5
First Choice | | through 3.0
Framework | | 3.0

0 MS Excel (Macintosh) XLS 3.0 Lotus 1-2-3 Charts 123 through 5.0 OpenOffice Calc SXC/ODS 1.0 MS Excel XLS 2.0 1998 2001 2004 MS Excel XML XLSX MS MultiPlan 4.02 122 Administration Guide .0 Quattro Pro (Windows) through 12.Category Format Extension Version Lotus 1-2-3 WK4 through 5.2–2007 MS Excel Charts 2.0 MS Works S30/S40 through 2.1 2.0 X3 Smart Ware II 1.0–4.6 Lotus Symphony 1.5 Novell Perfect Works 2.1 2.0 Lotus 1-2-3 for SmartSuite 1997–9.x–7.0 1.0 Quattro Pro (DOS) through 5.0 Mosaic Twin 2.0 PFS: Professional Plan 1.0 Lotus 1-2-3 (OS/2) through 2.

and 8-bit EBCDIC text all HTML HTM through 3.x 7.0 Supercalc Spreadsheet 4.class DOS command executables .0–6.0 System Executable files .0 IBM FFT all IBM Revisable Form Text all Rich Text Format RTF all Unicode Text TXT all WML 5.Category Format Extension Version StarOffice Calc SXC/ODS 5.1 Appendix A.2 6.0 Windows X3 DEC WPS PLUS DX through 4.DLL Java class files .COM Text and markup ANSI TXT 7.EXE Dynamic link library files . Supported file types 123 .x 8.and 8-bit ASCII TXT 7.0 DEC WPS PLUS WPL through 4.2 XML (generic) XML Word processing Adobe FrameMaker MIF 3.0 VP Planner 3D 1.0 Interchange Format Corel WordPerfect WPD through 12.

5 First Choice through 3.0–4.0 Framework 3.0 4.0 4.0 ENABLE 3.0 Legacy through 1.0 1998 2001 MS Word (PC) DOC through 6.0 8.0 MS Windows XML DOCX MS Word (Macintosh) DOC 3.0–13.1 MASS-11 through 8.01 JustSystems Ichitaro JTD 5.1 Lotus Manuscript 2.0 MS Word (Windows) DOC through 2007 124 Administration Guide .0 Haansoft Hangul HWP 1997 2002 IBM Writing Assistant 1.0 Lotus Word Pro LWP 1996–9.6 MacWrite II 1.0 2004 JustSystems Write through 3.1 Lotus AMI Pro SAM through 3.Category Format Extension Version Display Write (2 and 3) IP all Display Write (4 and 5) through 2.0 6.

MS WordPad all versions

MS Works WPS through 4.0

MS Works (Macintosh) through 2.0

MS Write through 3.0

MultiMate through 4.0

Navy DIF all versions

Nota Bene 3.0

Novell Perfect Works 2.0

Novell WordPerfect through 6.1

Novell WordPerfect (Macintosh) 1.02–3.0

Office Writer 4.0–6.0

OpenOffice Writer SXW/ODT 1.1, 2.0

PC-File Letter through 5.0

PC-File+ Letter through 3.0

PFS: Write A, B, C

Professional Write through 2.1

Professional Write Plus 1.0

Q & A (DOS) 2.0

Q & A (Windows) 2.0

Q & A Write 3.0

SAMNA Word IV

Smart Ware II 1.02

Sprint 1.0

StarOffice Writer SXW/ODT 5.2, 6.x, 7.x, 8.0

Total Word 1.2

Related reference:
Appendix A, “Supported file types,” on page 101

SharePoint attributes
This section describes the various SharePoint data object types and their properties
currently supported by IBM StoredIQ Platform.

Supported SharePoint object types

These types of SharePoint objects are supported:

v Blog posts and comments
v Discussion board
v Calendar
v Tasks
v Project tasks
v Contacts
v Wiki pages
v Issue tracker
v Announcements
v Survey
v Links
v Document libraries
v Picture libraries
v Records center

Notes regarding SharePoint object types

Calendar

Recurring calendar events are indexed as a single object in IBM StoredIQ Platform.
Each recurring calendar event has multiple Event Date and End Date attribute
values, one pair per recurrence. For instance, an event that is defined for
American Independence Day and set to recur yearly is indexed with Event Dates
2010-07-04, 2011-07-04, 2012-07-04, and so on.
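
Conceptually, the single indexed object carries parallel lists of Event Date and
End Date values, one pair per recurrence. The sketch below illustrates that shape
only; the function and field names are hypothetical, not the actual IBM StoredIQ
index schema.

```python
# Hypothetical sketch of one indexed object for a yearly recurring event.
# Field names are illustrative; they are not the real StoredIQ schema.
def expand_recurrences(title, first_date, years):
    """Build one index record with an (Event Date, End Date) pair per recurrence."""
    year, month_day = int(first_date[:4]), first_date[4:]
    dates = [f"{year + i}{month_day}" for i in range(years)]
    return {"title": title, "event_dates": dates, "end_dates": list(dates)}

record = expand_recurrences("American Independence Day", "2010-07-04", 3)
print(record["event_dates"])  # ['2010-07-04', '2011-07-04', '2012-07-04']
```

The point is that all recurrences live in one record, so a query matching any
single Event Date matches the one calendar object.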

Survey

Only individual responses to a survey are indexed as system-level objects. Each
response is a given user's feedback to all questions in the survey, and each
question in the survey that was answered for a given response is indexed as an
attribute of the response in the IBM StoredIQ Platform index. The name of the
attribute is the string forming the question while the value is the reply entered.

Surveys have no full-text indexable body, and they are always indexed with
size=0.
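
A response can therefore be pictured as a system-level object whose attributes
map each answered question to the reply entered, indexed with size 0. The sketch
below is purely illustrative; the field names are assumptions, not the actual
IBM StoredIQ index schema.

```python
# Illustrative model of one survey response as an indexed object.
# Attribute name = question text, attribute value = reply entered.
def index_survey_response(answers):
    """answers: dict mapping question text to the user's reply."""
    return {
        "object_type": "survey_response",
        "size": 0,                    # surveys have no full-text indexable body
        "attributes": dict(answers),  # one attribute per answered question
    }

resp = index_survey_response({"How satisfied are you?": "Very satisfied"})
print(resp["size"])  # 0
```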

Hash computation

The hash of a full-text indexed object is generally computed using the full-text
indexable body of the object. However, in the case of SharePoint list item objects
(excluding documents and pictures), the full-text indexable body might be empty
or too simplistic, meaning that two otherwise completely different objects could
easily produce duplicate hashes. For this reason, other attributes are
included in the hash computation algorithm.

These attributes are included while computing the hash for the SharePoint data
objects, excluding documents and pictures.
Table 4. Hash-computation attributes
Attributes Types
Generic attributes v Title (SharePoint)
v Content Type (SharePoint)
v Description (SharePoint)
Blog post attributes v Post category (SharePoint)
Wiki page attributes v Wiki page comment
Calendar event attributes v Event category (SharePoint)
v Event date (SharePoint)
v Event end date (SharePoint)
v Event location (SharePoint)
Task or project task attributes v Task start date (SharePoint)
v Task due date (SharePoint)
v Task assigned to (SharePoint)
Contact attributes v Contact full name (SharePoint)
v Contact email (SharePoint)
v Contact job title (SharePoint)
v Contact work address (SharePoint)
v Contact work phone (SharePoint)
v Contact home phone (SharePoint)
v Contact mobile phone (SharePoint)
Link attributes v Link URL (SharePoint)
Survey attributes All survey questions and answers in the response are
included in the hash.
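
The effect of folding attributes into the hash can be sketched as follows.
Everything in this sketch, from the choice of SHA-256 to the attribute ordering
and encoding, is an assumption for illustration; the product's actual hash
algorithm is not documented in this section.

```python
import hashlib

# Illustrative only: mix selected attribute values into the digest along with
# the (possibly empty) full-text body, so two list items with identical bodies
# but different metadata no longer produce the same hash.
def hash_list_item(body, attributes):
    digest = hashlib.sha256()
    digest.update(body.encode("utf-8"))
    for name in sorted(attributes):       # fixed order keeps the hash stable
        digest.update(name.encode("utf-8"))
        digest.update(str(attributes[name]).encode("utf-8"))
    return digest.hexdigest()

a = hash_list_item("", {"Title": "Team meeting", "Event location": "Room 1"})
b = hash_list_item("", {"Title": "Team meeting", "Event location": "Room 2"})
assert a != b  # empty bodies alone would have collided
```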

Related reference:
Appendix A, “Supported file types,” on page 101


Appendix B. Supported server platforms and protocols

This section lists the supported server platforms by volume type and the
protocols for supported systems.

Table 5. Supported server platforms by volume type

Primary volumes
v Windows Share
v NFS v2 and v3
v Exchange (2003, SP2-2007, 2010)*
v SharePoint (2003/2007/2010)*
v Domino for Email*
v Symantec Discovery Accelerator*
v NewsGator*
v Jive 5.0*
v Chatter, including private messages
v IBM FileNet*
v NFS (NetApp)*
v Windows Share (NetApp)*
v NFS (Celerra)*
v Windows Share (Celerra)*

Secondary volumes
v Windows Share
v NFS v2 and v3
v Centera*

Retention volumes
v Windows Share
v Enterprise Vault*
v Centera*
v Dell DX Object Storage Platform
v Hitachi HCAP
v IBM Information Archive*
v NFS (NetApp SnapLock)*
v Windows Share (NetApp SnapLock)*
v NFS (Celerra FLR)*
v Windows Share (Celerra FLR)*

Discovery export volumes
v Windows Share
v NFS v2 and v3

System volumes
v Windows Share
v NFS v2 and v3

* Available through license only.

Table 6. Protocols for supported systems

File servers
v Windows servers: Windows Share V1.0. Systems: Windows NT 4.0, Windows 2000, Windows 2003, Windows 2008. Windows NT 4.0 is supported but not tested.
v UNIX servers: NFS V2/V3. Systems: all versions.
v EMC Celerra: Windows Share V1.0 and NFS V2/V3. Systems: DART 5.6.x or later. Note: When the back-up operator is set, Celerra does not reset access times.
v NetApp Filer: Windows Share V1.0 and NFS V2/V3. Systems: ONTAP 7.x. Note: Tested against 7.x servers; NetApp 6.x support is assumed until proven otherwise.
v Macintosh: Windows Share V1.0. Supported, not tested.
v Other: Windows Share V1.0 and NFS V2/V3. Supported, not tested.

Desktops
v Windows (Windows XP, Vista): Windows Share V1.0.

Email servers
v Exchange 2003: WebDAV.
v Exchange 2007 and 2010: Exchange Web Services (SOAP XML-RPC).
v Notes: Domino NRPC over TCP/IP. Systems: Lotus Domino/Notes 5.x, 6.x, 7.x, and 8.x.

Email archives
v Symantec Enterprise Vault/Discovery Accelerator: Discovery Accelerator Web Services (SOAP XML-RPC). Systems: 6.5 or later.
v EmailXtender: StoredIQ understands the EmailXtender format, but does not communicate with EmailXtender.

SharePoint
v SharePoint 2003, 2007, and 2010: SharePoint Web Services (SOAP XML-RPC).

Retention servers
v Centera: TCP/IP. Systems: CentraStar 3.x and 4.x.
v Enterprise Vault: DCOM/RPC. Systems: 8.0 SP3 and later.
v Hitachi HCAP (Hitachi Content Archiver): HTTP/HTTPS.
v IBM Information Archive: TCP/IP. Systems: TSM Client/Server 5.5.
v Tivoli Storage Manager: Archive. Systems: TSM Server 5.5.
v NetApp SnapLock: Windows Share V1.0 and NFS V2/V3. Systems: ONTAP 7.x.
v EMC Celerra w/FLR: Windows Share V1.0 and NFS V2/V3. Systems: DART 5.6.x.
v Dell DX Object Storage: HTTP (SCSP subset).

Document management
v Documentum: TCP/IP. Systems: 5.x and 6.x.

Enterprise social solutions
v NewsGator: HTTP REST. Systems: 2.1.1229 or later.

Appendix C. Event log messages

This section contains a list of the messages that may appear in the Event Log of
the IBM StoredIQ Platform Console. Messages are sorted by type (ERROR, INFO,
WARN) and event number.

ERROR event log messages

This topic contains a complete listing of all ERROR event-log messages, reasons
for occurrence, sample messages, and any required customer action.

ERROR 1001: Harvester was unable to open a socket for listening to child processes. Sample message: "Harvester could not allocate listen port after <number> attempts. (1001)" Customer action: Log into UTIL and restart the application server.
ERROR 9083: Unexpected error while exporting a volume. Sample message: "Exporting volume 'dataserver:/mnt/demo-A' (1357) has failed (9083)" Customer action: Contact Customer Support.
ERROR 9086: Unexpected error while importing a volume. Sample message: "Importing volume 'dataserver:/mnt/demo-A' (1357) failed (9086)" Customer action: Contact Customer Support.
ERROR 15001: No volumes were able to be harvested in a given job; for instance, all mounts failed due to a network issue. Sample message: "No volumes harvested. (15001)" Customer action: Make sure IBM StoredIQ Platform still has appropriate permissions to a volume. Verify there is network connectivity between the data server and your volume. Contact Customer Support.
ERROR 15002: Could not mount the volume. Sample message: "Error mounting volume. Check <share> <start-dir> on server <server-name>. Reported <reason>. (15002)" Customer action: Make sure the data server still has appropriate permissions and network settings. Verify there is network connectivity between the data server and your volume.
ERROR 15021: Error saving the harvest record. This message occurs due to a database error. Sample message: "Failed to save HarvestRecord for qa1:auto-A (15021)" Customer action: Contact Customer Support.
ERROR 17001: Unhandled fatal exception in Centera Discovery. Sample message: "Centera Harvester fatal failure: <exception description> (17001)" Customer action: Contact Customer Support.
ERROR 17012: Error while trying to create a volume during Centera Discovery. This message occurs due to a database error. Sample message: "Unable to create Centera volume Company_jpool_2009_FEB_1 in pool jpool. Error: <database error description> (17012)" Customer action: Contact Customer Support.
ERROR 17501: Generic retention discovery failed in a catastrophic manner. Sample message: "Generic retention discovery fatal failure: <reason> (17501)" Customer action: Contact Customer Support.
ERROR 17503: Generic retention discovery could not create or load the volume set. (Generic retention discovery creates volume sets associated with primary volumes.) This failure likely occurred due to database errors. Sample message: "Error creating/loading volumeset for <server>:<share> (17503)" Customer action: Contact Customer Support.
ERROR 17505: Unable to query the object count for a discovered volume. Sample message: "Unable to determine object count for <server>:<share> (17505)" Customer action: Contact Customer Support.
ERROR 17506: Generic retention discovery could not create a discovered volume. Sample message: "Error creating volume <server>:<share> (17506)" Customer action: Contact Customer Support.
ERROR 18001: SMB connection failed. Sample message: "Windows Share Protocol Exception when connecting to the server <server-name>: <reason>. (18001)" Customer action: Make sure IBM StoredIQ Platform still has appropriate permissions to the volume. Verify there is network connectivity between the data server and your volume.
ERROR 18002: SMB volume mount failed. Sample message: "Windows Share Protocol Exception when connecting to the share <share-name> on <server-name>: <reason>. (18002)" Customer action: Check the share name. Verify the names of the server and volume to make sure they are correct. If this message persists, contact Customer Support.
ERROR 18003: There is no volume manager. Sample message: "Windows Share Protocol Exception while initializing the data object manager: <reason>. (18003)" Customer action: Contact Customer Support.
ERROR 18006: Grazer volume crawl threw an unknown exception. Sample message: "Grazer._run: Unknown error during walk. (18006)" Customer action: Verify that the user who mounted the specified volume has permissions equivalent to your current backup solution. If this message continues, contact Customer Support.
ERROR 18018: The start directory has escape characters, and the data server is configured to skip them. Sample message: "Cannot graze the volume. The root directory Núñez has escape characters. (18018)" Customer action: Consider turning off escape-character checking.
ERROR 18021: An unexpected error from the server prevented the harvest from reaching the end of the activity stream on the NewsGator data source being harvested. The next incremental harvest will attempt to pick up from where the current harvest was interrupted. Sample message: "Unable to fetch trailing activity stream from NewsGator volume. Will retry in next harvest. (18021)" Customer action: Check to ensure that the NewsGator server has sufficient resources (disk space, memory, and so on). It is very likely that this error is transient. If the error persists across multiple harvests, contact Customer Support.
ERROR 19001: An exception occurred during interrogator initialization. Sample message: "Interrogator.__init__: Exception (<volumeid>, <epoch>): <reason>. (19001)" Customer action: Contact Customer Support.
ERROR 19002: An unknown exception occurred during interrogator initialization. Sample message: "Interrogator.__init__: Unknown exception (<volumeid>, <epoch>). (19002)" Customer action: Contact Customer Support.
ERROR 19003: An exception occurred during interrogator processing. Sample message: "Interrogator.process: Exception (<volumeid>, <epoch>): <reason>. (19003)" Customer action: Contact Customer Support.
ERROR 19004: An unknown exception occurred during interrogator processing. Sample message: "Interrogator.process: Unknown exception. (19004)" Customer action: Contact Customer Support.
ERROR 19005: An exception occurred during viewer initialization. Sample message: "Viewer.__init__: Exception: <reason>. (19005)" Customer action: Contact Customer Support.
ERROR 19006: An unknown exception occurred during viewer initialization. Sample message: "Viewer.__init__: Unknown exception. (19006)" Customer action: Contact Customer Support.
ERROR 33003: Could not mount the volume. Sample message: "Unable to mount the volume: <error reason>. (33003)" Customer action: Verify that the user name and password used for mounting the volume are accurate. Check the user data object for appropriate permissions to the volume. Make sure the volume is accessible via one of the built-in protocols (NFS, Windows Share, or Exchange). Verify that the network is properly configured for the data server to reach the volume. Verify that the data server has appropriate DNS settings to resolve the server name.
ERROR 33004: Volume could not be unmounted. Sample message: "Unmounting volume failed from mount point: <mount point>. (33004)" Customer action: Reboot the data server. If the problem persists, contact Customer Support.
ERROR 33005: The data server was unable to create a local mount point for the volume. Sample message: "Unable to create mount_point using primitive.threadSafeMakedirs(). (33005)" Customer action: Reboot the data server. If the problem persists, contact Customer Support.
ERROR 33010: Failed to make an SMB connection to the Windows Share server. Sample message: "Mounting Windows Share volume failed with the error: <system error message>. (33010)" Customer action: Verify that the user name and password used for mounting the volume are accurate. Check the user data object for appropriate permissions to the volume. Make sure the volume is accessible via one of the built-in protocols (Windows Share). Verify that the network is properly configured for the data server to reach the volume. Verify that the data server has appropriate DNS settings to resolve the server name.
ERROR 33011: Internal error; problem accessing the local /proc/mounts. Sample message: "Unable to open /proc/mounts. Cannot test if volume was already mounted. (33011)" Customer action: Reboot the data server. If the problem persists, contact Customer Support.
ERROR 33012: Database problems when deleting a volume. Sample message: "An exception occurred while working with HARVESTS_TABLE in Volume._delete(). (33012)" Customer action: Contact Customer Support.
ERROR 33013: No volume set was found for the given volume set name. Sample message: "Unable to load volume set by its name. (33013)" Customer action: Contact Customer Support.
ERROR 33014: The system could not determine when this volume was last harvested. Sample message: "An error occurred while performing the last_harvest operation. (33014)" Customer action: Contact Customer Support.
ERROR 33018: An error occurred mounting the Exchange share. Sample message: "Mounting Exchange Server failed: <reason>. (33018)" Customer action: Verify that the user name and password used for mounting the share are accurate. Check for appropriate permissions to the share. Make sure the share is accessible. Verify that the network is properly configured for the data server to reach the share. Verify that the data server has appropriate DNS settings to resolve the server name.
ERROR 33019: Failed to connect and authenticate to the Hitachi Archivas Content Archive server. Sample message: "Mounting HCAP volume failed: Cannot connect to HCAP share. (33019)" Customer action: Ensure the connectivity, credentials, and permissions to the Hitachi volume and retry.
ERROR 33020: Failed to connect and authenticate to the IBM Information Archive retention volume. Sample message: "Mounting IBM Information Archive volume failed with the error: Server unreachable. (33020)" Customer action: Ensure the connectivity, credentials, and permissions to the IBM Information Archive volume and retry. If the error points to network issues with connectivity, address them and retry.
ERROR 33022: Failed to connect to the Discovery Accelerator using the information for the volume. Sample message: "Mounting Discovery Accelerator volume failed with the error: insufficient permissions to review CaseOne (33022)" Customer action: Verify the information used to add the volume and ensure all details have been entered correctly before retrying.
ERROR 33027: The attempt to connect and authenticate to the IBM FileNet server failed. Sample message: "Mounting IBM FileNet volume failed: <reason>. (33027)" Customer action: Ensure the connectivity, credentials, and permissions to the FileNet volume and retry.
ERROR 34002: Could not complete the copy action because the target disk was full. Sample message: "Copy Action aborted as the target disk has run out of space (34002)" Customer action: Verify there is space available on your policy destination and try again.
ERROR 34009: Could not complete the move action due to a full target disk. Sample message: "Move Action aborted as the target disk has run out of space. (34009)" Customer action: Verify there is space available on your policy destination.
ERROR 34015: The policy audit could not be deleted for some reason. Sample message: "Error Deleting Policy Audit: <error message> (34015)" Customer action: Contact Customer Support.
ERROR 34020: The copy to Centera action could not be executed because of insufficient permissions. Sample message: "Copy to Centera failed as we do not have read/write permissions on the access profile used. (34020)" Customer action: Check permissions on the access profile provided for the Centera pool on which the volume has been defined.
ERROR 34021: The move to Centera action could not be executed because of insufficient permissions. Sample message: "Move to Centera failed as we do not have read/write permissions on the access profile used. (34021)" Customer action: Check permissions on the access profile provided for the Centera pool on which the volume has been defined.
ERROR 34026: The HSM action cannot recall a file because the primary volume does not have enough free space. Sample message: "HSM Stub action aborted because the primary disk has run out of space. (34026)" Customer action: Verify there is space available on the volume that the file is being recalled to, and run the HSM action again.
ERROR 34030: The discovery export policy was aborted because it detected that the target disk is full. Sample message: "Production Run action aborted because the target disk has run out of space. (34030)" Customer action: Create sufficient space on the target disk and run the discovery export policy again.
ERROR 34034: The target volume for the policy could not be mounted. The policy will be aborted. Sample message: "Copy objects failed, unable to mount volume: QA1.COMPANY.COM:SHARE. (34034)" Customer action: Ensure the connectivity, login credentials, and permissions to the target volume for the policy, and retry.
ERROR 41004: The job terminated abnormally. Sample message: "<job-name> ended unexpectedly. (41004)" Customer action: Try to run the job again. If it fails again, contact Customer Support.
ERROR 41007: The job has failed. Sample message: "[Job name] has failed (41007)" Customer action: Look at previous messages to see why it failed, and refer to that message ID to pinpoint the error. Contact Customer Support.
ERROR 42001: The copy action could not run because of parameter errors. Sample message: "Copy data objects did not run. Errors occurred: <error-description>. (42001)" Customer action: Contact Customer Support.
ERROR 42002: The copy action was unable to create a target directory. Sample message: "Copy data objects failed, unable to create target dir: <target-directory-name>. (42002)" Customer action: Check permissions on the target. Make sure the permissions that are configured to mount the target volume have write access to the volume.
ERROR 42004: An unexpected error occurred. Sample message: "Copy data objects terminated abnormally. (42004)" Customer action: Contact Customer Support.
ERROR 42006: The move action could not run because of parameter errors. Sample message: "Move data objects did not run. Errors occurred: <error-description>. (42006)" Customer action: Contact Customer Support.
ERROR 42007: The move action was unable to create a target directory. Sample message: "Move data objects failed, unable to create target dir: <target-directory-name>. (42007)" Customer action: Check permissions on the target. Make sure the permissions that are configured to mount the target volume have write access to the volume.
ERROR 42009: An unexpected error occurred. Sample message: "Move data objects terminated abnormally. (42009)" Customer action: Contact Customer Support.
ERROR 42017: An unexpected error occurred. Sample message: "Delete data objects terminated abnormally. (42017)" Customer action: Contact Customer Support.
ERROR 42025: The policy action could not run because of parameter errors. Sample message: "Policy cannot execute. Attribute verification failed. (42025)" Customer action: Contact Customer Support.
ERROR 42027: An unexpected error occurred. Sample message: "Policy terminated abnormally. (42027)" Customer action: Contact Customer Support.
ERROR 42050: The data synchronizer could not run because of an unexpected error. Sample message: "Content Data Synchronizer synchronization of <server-name>:<volume-name> failed fatally. (42050)" Customer action: Contact Customer Support.
ERROR 42059: An illegal set of parameters was passed to the discovery export policy. Sample message: "Production Run on objects did not run. Errors occurred: The following parameters are missing: action_limit. (42059)" Customer action: Contact Customer Support.
ERROR 42060: The discovery export policy failed to create the target directory for the export. Sample message: "Production Run on objects (Copying native objects) failed, unable to create target dir: production/10. (42060)" Customer action: Verify that the discovery export volume has write permission and re-execute the policy.
ERROR 42062: The discovery export policy was terminated abnormally. Sample message: "Production Run on objects (Copying native objects) terminated abnormally. (42062)" Customer action: Contact Customer Support.
ERROR 42088: The full-text optimization process failed; however, the index is most likely still usable for queries. Sample message: "Full-text optimization failed on volume <volume-name> (42088)" Customer action: Contact Customer Support.
ERROR 45802: A full-text index is already being modified. Sample message: "Time allocated to gain exclusive access to in-memory index for volume=1357 has expired (45802)" Customer action: Contact Customer Support.
ERROR 45803: The index for the specified volume does not exist. This message may occur under normal conditions. Sample message: "Index '/deepfs/full-text/volume_index/volume_1357' not found (45803)" Customer action: No user intervention is required.
ERROR 45804: Programming error. A transaction was never initiated or was closed early. Sample message: "Transaction of client: node.client.com_FINDEX_QUEUE_1357_1172515222_3_2 is not the writer (45804)" Customer action: Contact Customer Support.
ERROR 45805: The query has not been started or has expired; the latter is normal. Sample message: "Query ID: 123 does not exist (45805)" Customer action: No user intervention is required.
ERROR 45806: The query expression is invalid or not supported. Sample message: "Failed to parse 'dog pre\3 forme bar' (45806)" Customer action: Revise your full-text query.
ERROR 45807: Programming error. A transaction has already been started for the client. Sample message: "Client: node.client.com_FINDEX_QUEUE_1357_1172515222_3_2 is already active (45807)" Customer action: Contact Customer Support.
ERROR 45808: A transaction has never been started or has expired. The system handles this condition internally. Sample message: "No transaction for client: node.client.com_FINDEX_QUEUE_1357_1172515222_3_2 (45808)" Customer action: No user intervention is required.
ERROR 45810: Programming error. Sample message: "Invalid volumeId. Expected: 1357 Received: 2468 (45810)" Customer action: Contact Customer Support.
ERROR 45812: A file I/O error occurred while accessing index data. Sample message: "Failed to write disk (45812)" Customer action: Try your query again. Contact Customer Support for additional assistance if necessary.
ERROR 45814: The query expression is too long. Sample message: "Query: 'a* b* c* d* e*' is too complex (45814)" Customer action: Refine your full-text query.
ERROR 45815: The file that is being indexed is too large, or the query expression is too complex. The engine has temporarily run out of memory. Sample message: "Java heap exhausted while indexing node with ID: '10f4179cd5ff22f2a6b79a1bc3aef247fd94ccff' (45815)" Customer action: Check the skipped-file list in the audit log for files that failed to load due to their sizes. Revise your query expression and retry.
ERROR 46023: The tar command failed while persisting full-text data to a Windows Share or NFS share. Sample message: "Failed to back up full-text data for server:share. Reason: <reason>. (46023)" Customer action: Check disk space and permissions.
ERROR 46024: Unhandled fatal exception while backing up full-text data. Sample message: "Exception <exception> while backing up full-text data for server:share (46024)" Customer action: Contact Customer Support.
ERROR 46025: Was not able to delete the partial .tgz file after a failed full-text backup. Sample message: "Failed to unlink incomplete backup image. Reason: <reason>. (46025)" Customer action: Check permissions.
ERROR 47002: Synchronization failed on a query. Sample message: "Synchronization failed for query '<query-name>' on volume '<server-and-volume>' (47002)" Customer action: Contact Customer Support.
ERROR 47101: An error occurred during the query of a full-text expression. Sample message: "Cannot process full-text expression (Failed to read from disk (45812)) (47101)" Customer action: Restart services and contact Customer Support.
ERROR 47203: No more database connections are available. Sample message: "Database connections exhausted (512/511) (47203)" Customer action: Contact Customer Support.
ERROR 47207: The user is running out of disk space. Sample message: "Disk usage exceeds threshold. (47207)" Customer action: Contact Customer Support. In most cases, disk space is almost full and additional storage is required; in rare cases, this message can indicate a program error leaking disk space.
ERROR 47212: Interrogator crashed while processing a file. The current file will be missing from the volume cluster. Sample message: "Harvester 1 does not exist. Action taken: restart. (47212)" Customer action: If the system crashes on the same file or type of files, contact Customer Support.
ERROR 47214: The SNMP notification sender is unable to resolve the trap host name. Sample message: "Unable to resolve host name nomachine.nowhere.com (47214)" Customer action: Check spelling and DNS setup.
ERROR 50011: The DDL/DML files required for database versioning were not found in the expected location on the data server. Sample message: "Database version control SQL file not found. (50011)" Customer action: Contact Customer Support.
ERROR 50018: Indicates that the pre-upgrade database restoration failed, which was attempted as a result of a database upgrade failure. Sample message: "Database restore is unsuccessful. (50018)" Customer action: Contact Customer Support.
ERROR 50020: Indicates that the current database requirements do not meet those specified for the upgrade, and the upgrade cannot proceed. Sample message: "Versions do not match! Expected current database version: <dbversion>. (50020)" Customer action: Contact Customer Support.
ERROR 50021: Indicates that the full database backup failed when attempting a data-object-level database backup. Sample message: "Database backup failed. (50021)" Customer action: Contact Customer Support.
ERROR 61003: The discovery export policy failed to mount the volume. Sample message: "Production policy failed to mount volume. Aborting. (61003)" Customer action: Contact Customer Support.
ERROR 61005: The discovery export load file generation failed. The load files may be produced, but post-processing actions like updating audit trails and generating report files may not have completed. Sample message: "Production load file generation failed. (61005)" Customer action: Contact Customer Support.
ERROR 61006: The discovery export load file generation was interrupted because the target disk is full. The load files may be produced, but post-processing may be incomplete. Sample message: "Production load file generation interrupted. Target disk full. (61006)" Customer action: Free up space on the target disk, void the discovery export run, and run the policy once more.
ERROR 68001: The gateway and data server must be on the same version in order to connect. Sample message: "Gateway connection failed due to unsupported data server version. (68001)" Customer action: Update your data server to the same build number as the gateway and restart services. If you encounter issues, contact Customer Support.
ERROR 68003: The data server has failed to connect to the gateway over an extended period of time. Sample message: "The data-server connection to the gateway cannot be established. (68003)" Customer action: Contact Customer Support.
ERROR 80002: The system failed to open a connection to the database. Sample message: "Failed to connect to the database (80002)" Customer action: The "maximum connections" configuration parameter of the database engine may need to be increased. Contact Customer Support.

Related reference:
Appendix C, "Event log messages," on page 133

INFO event log messages

This topic contains a complete listing of all INFO event-log messages, reasons
for occurrence, sample messages, and any required customer action.

INFO 9001: No conditions have been added to a query. Sample message: "Harvester: Query <query name> cannot be inferred because no condition for it has been defined (9001)" Customer action: Add conditions to the specified query.
INFO 9002: One or more conditions in a query were incorrect. Sample message: "Harvester: Query <query name> cannot be inferred because of regular expression or other condition error (9002)" Customer action: Verify that regular expressions are properly formed.
INFO 9003: Volume harvest has completed, and explorers are being calculated. Sample message: "Volume statistics computation started (9003)" Customer action: No user intervention is required.
INFO 9004: Explorer calculations have completed. Sample message: "Volume statistics computation completed (9004)" Customer action: No user intervention is required.
INFO 9005: Query membership calculations started. Sample message: "Query inference will be done in <number> steps (9005)" Customer action: No user intervention is required.
INFO 9006: Query membership calculations progress information. Sample message: "Query inference step <number> done (9006)" Customer action: No user intervention is required.
INFO 9007: Query membership calculations completed. Sample message: "Query inference completed (9007)" Customer action: No user intervention is required.
INFO 9012: Indicates the end of dumping the content of the volume cache. Sample message: "Dump of volume cache(s) completed (9012)" Customer action: No user intervention is required.
INFO 9013: Indicates the beginning of the load process. Sample message: "Postprocessing for volume 'Company Data Server:/mnt/demo-A' started (9013)" Customer action: No user intervention is required.
INFO 9067: Indicates load progress. Sample message: "System metadata and tagged values were successfully loaded for volume 'server:volume' (9067)" Customer action: No user intervention is required.
INFO 9069: Indicates load progress. Sample message: "Volume 'data server:/mnt/demo-A': System metadata, tagged values, and full-text index were successfully loaded (9069)" Customer action: No user intervention is required.
INFO 9084: The volume export finished. Sample message: "Exporting volume 'data server:/mnt/demo-A' (1357) completed (9084)" Customer action: No user intervention is required.
INFO 9087: The volume import finished. Sample message: "Importing volume 'data server:/mnt/demo-A' (1357) completed (9087)" Customer action: No user intervention is required.
INFO 9091: The load process has been aborted by the user. Sample message: "Load aborted due to user request (9091)" Customer action: No user intervention is required.
INFO 15008: The volume load step has been skipped, per user request. Sample message: "Post processing skipped for volume <server>:<volume>, per user request. (15008)" Customer action: No user intervention is required.
INFO 15009: The volume load step has been executed, but the harvest step has been skipped, per user request. Sample message: "Harvest skipped for volume <server>:<volume>, per user request. (15009)" Customer action: No user intervention is required.
INFO 15012: The policy running on the volume has completed, and the volume load can now proceed. Sample message: "Volume <volume> on server <server> is free. Proceeding with load. (15012)" Customer action: No user intervention is required.
INFO 15013: The configured time limit on a harvest was reached. Sample message: "Harvest time limit reached on dpfsvr:vol1. Ending harvest now. (15013)" Customer action: No user intervention is required.
INFO 15014: The configured object count limit on a harvest was reached. Sample message: "Object count limit reached for server:share. Ending harvest now. (15014)" Customer action: No user intervention is required.
INFO 15017: Check box selected for the nightly load job. Sample message: "Deferring post processing for volume server:vol (15017)" Customer action: No user intervention is required.
INFO 15018: The harvest size and/or time limit was reached. Sample message: "Harvest limit reached on <server>:<volume>. Synthetic deletes will not be computed. (15018)" Customer action: No user intervention is required.
INFO 15019: The user stopped the harvest process. Sample message: "Harvest stopped by user while processing volume <server>:<volume>. Rest of volumes will be skipped. (15019)" Customer action: No user intervention is required.
INFO 15020: The harvest vocabulary has changed. A full harvest instead of an incremental harvest is recommended. Sample message: "Vocabulary for dpfsvr:jhaide-A has changed. Full harvest should run instead of incremental. (15020)" Customer action: Run a full harvest instead of an incremental harvest.
INFO 15022: The user is trying to execute a permission-only (ACL-only) harvest on a volume that is not a Windows Share or SharePoint volume. Sample message: "Permission-only harvest: permission checks not supported for <server>:<share> (15022)" Customer action: No action is needed, as the volume is skipped.

<server>:<share> should have been discovered. discovery step. <server>:<share> INFO 17508 Generic retention No new items discovered. reached the preconfigured starting new vol–ume limit. count) reached for generic reached for retention discovery. harvest on a volume that has no associated user list. has not been assigned a user list. Ending this run. in volume set <autodis–covered volume set name>. 146 Administration Guide . new volume QAPOOL:_QAPOOL_2009_JAN_1 (17003) INFO 17004 A Centera Discovery Object limit reached for No user intervention is auto-created volume has _QAPOOL_2009_JAN_1. INFO 17002 Sent when Centera Centera External Iterator : No user intervention is Discovery sends the query Starting to populate using required. discovery step. starting a new one. A Cen–tera cluster Centera for over 5 may be overloaded. (17002) INFO 17003 Sent when Centera <servername>: Created No user intervention is Discovery autocreates a new volume required. (17007) INFO 17009 Configured time limit Centera Discovery : time No user intervention is reached for Centera limit for discovery required. volume. limit for discovery reached. (17010) INFO 17507 Limit (time or object Retention discovery limit Contact Customer Support. No user intervention is discovery found no new Post-processing skipped required unless the user is items for this master for volume certain that new items volume. Event Type Number Reason Sample Message Customer Action INFO 15023 The user is trying to Permission-only harvest: No action needed as the execute an ACL-only volume <server>:<share> volume is skipped. INFO 17509 Generic retention Created new discovered No user intervention is discovery created a new volume <server>:<share> required. (17009) INFO 17010 Configured object count Centera Discovery: No user intervention is limit reached for Centera configured item count required. minutes. reached. to the Centera server pool QAPOOL. items returned from down. 
(17004) INFO 17007 Pending data return from Centera Harvester: No Check if a Centera node is Centera. Still waiting. Ending this run. required.

(18004) required. <audit id> <policy name> required. <start time> (34014) INFO 34015 A policy audit was Deleted Policy Audit # No user intervention is deleted. Walker._processFile: No user intervention is Grazer Stopped. No user intervention is with or without success. Rerun jobs after reboot or services on the controller Stopping outstanding jobs. (34031) processed. INFO 41006 Rebooting or restarting Service shutdown. all jobs to stop. (34008) INFO 34014 A policy audit was Deleting Policy Audit # No user intervention is deleted. objects processed by delete required. (34004) INFO 34008 Marks current progress of <volume>: <count> data No user intervention is a move action. by matching the start (18016) directory regular expression. restart if you want the jobs or compute node causes (41006) to complete. (41001) No user intervention is manually or was required. <audit id> <policy name> required. INFO 18016 Displays the list of top Choosing top level No user intervention is level directories selected directories: <directories> required. (41003) required. Walker._processFile: No user intervention is Grazer Closed. INFO 41002 The user stopped a job <jobname> stopped at user No user intervention is that was running. INFO 41001 A job was started either <jobname> started. INFO 34001 Marks current progress of <volume>: <count> data No user intervention is a copy action. Event log messages 147 . action. (18005) required. objects processed by copy required. data objects processed by required. (34001) INFO 34004 Marks current progress of <volume>: <count> data No user intervention is a delete action. INFO 41003 A job completed normally <jobname> completed. every 10000 objects production action. scheduled. request (41002) required. Appendix C. INFO 18005 Grazer queue was closed. Event Type Number Reason Sample Message Customer Action INFO 18004 Job was stopped. objects processed by move required. action. 
<start time> (34015) INFO 34031 Progress update of the Winserver:topshare : 30000 No user intervention is discovery export policy. Displays at the beginning of a harvest. action.

(42033) required. (42010) INFO 42018 The action completed or Copy data objects No user intervention is was aborted. INFO 42005 The action completed or Copy complete: <number> No user intervention is was aborted. INFO 42048 Reports that the Content Data Synchronizer No user intervention is synchronizer is skipping a skipping required. Shows results data objects required. copied. of copy action. volume if synchronization <server-name>:<volume- is determined not to be name> as it does not need required. Shows results complete: <number> data required. with the GUI button.<number> collisions found. Shows results required. of report action. Shows results data objects required. moved. (42018) INFO 42020 Third-party application Centera Deleted Files No user intervention is deleted clips on discovered Synchronizer complete: required. with long-running jobs. Shows results (42032). No user intervention is was aborted. synchronization. required. of deleted action.<number> collisions found. INFO 42033 The synchronizer started Content Data Synchronizer No user intervention is automatically or manually started. INFO 42028 The action completed or Policy completed (42028). (42005) INFO 42010 The action completed or Move complete: <number> No user intervention is was aborted. of move action. complete. objects copied. of policy action. synchronization of a for volume volume <server-name>:<volume- name> 148 Administration Guide . Centera volumes. so that it doesn’t conflict activity. INFO 42032 The action completed or <report name> completed No user intervention is was aborted. 10000 objects passed. (42048) INFO 42049 Reports that the Content Data Synchronizer No user intervention is synchronizer has started starting synchronization required. (42024) required. 
Event Type Number Reason Sample Message Customer Action INFO 41008 Database compactor Database compactor was Set the database (vacuum) job can not run not run because other jobs compactor's job schedule while there is database are active (41008). 2 objects were missing from the Centera cluster.<number> collisions found. (42020) INFO 42024 The synchronizer Content Data Synchronizer No user intervention is completed normally.

25 duplicates found. Any parts of the overall process add their own log entries. INFO 46003 The backup process Backup Process Finished. is now starting. com–pleted: 2003 data objects copied. is now done (42065) waiting. data server San Jose Office (42074) INFO 46001 The backup process has Backup Process Started. (42067) required. required. data server successfully. INFO 42063 Report on completion of Production Run on objects No user intervention is discovery export policy (Copying native objects) required. volumes to be loaded before continuing. XML format. Any selected (46001) required. No user intervention is begun. volume. that was Proceeding with execution No user intervention is waiting for participant of <policy-name>. attempting all the necessary tasks. successfully. This may take a few minutes. No user intervention is successfully completed (46003) required. Appendix C. INFO 42074 A query or tag has been Successfully sent query No user intervention is replicated to a member 'Custodian: Joe' to member required. Event log messages 149 . required. run with the (42066) corresponding audit trail. One or more (46002) backup types did not occur. of resources. and will begin execution. Event Type Number Reason Sample Message Customer Action INFO 42053 The policy. INFO 42067 Discovery export policy is Production Run producing No user intervention is preparing the audit trail in Audit Trail XML. execution phase. INFO 42066 A new discovery export New run number 10 Note the new run number run has started. (42063) INFO 42065 A discovery export policy Proceeding with execution No user intervention is that was held up for want of 'Production case One'. INFO 46002 The backup process did Backup Process Failed: Check your backup not complete all its tasks <error-description>. started for production in order to tie the current Production Case 23221. backups in the system configuration screen will be run if necessary.

overall backup process. needed to run and succeeded. for the System needed to run but did not Configuration backup. required. as part of the backup failed. contact Customer Support. required. for the Harvested Volume needed to run but did not Data backup. as part of the backup failed. 150 Administration Guide . as part of the finished. (46004) volume. Look at the setup overall backup process. as part of the backup finished. If succeed. (46005) required. as part of the backup finished. INFO 46011 The System Configuration System Configuration No user intervention is backup. skipped. (46008) required. overall backup process. overall backup process. Event Type Number Reason Sample Message Customer Action INFO 46004 The Application Data Application Data backup Check your backup backup. needed to run and succeeded. as part of the not configured. overall backup process. contact Customer Support. backups continue to fail. INFO 46008 The Harvested Volume Harvested Volume Data No user intervention is Data backup. If backups succeed. as part of the failed. continue to fail. INFO 46006 The Application Data Application Data backup No user intervention is backup. Contact Customer Support. Look at the setup overall backup process. INFO 46012 The System Configuration System Configuration No user intervention is backup. overall backup process. skipped. (46012) was not configured. continue to fail. required. (46006) was not configured. (46009) was not configured INFO 46010 The System Configuration System Configuration Check your backup backup. (46007) volume. skipped. INFO 46007 The Harvested Volume Harvested Volume Data Check your backup Data backup. as part of the backup not configured. for the Application Data needed to run but did not backup. INFO 46009 The Harvested Volume Harvested Volume Data No user intervention is Data backup. needed to run and succeeded. (46011) required. Look at the setup overall backup process. (46010) volume. If backups succeed. overall backup process. 
INFO 46005 The Application Data Application Data backup No user intervention is backup. as part of the backup not configured.

(47213) required. needed to run but for the Audit Trail backup. required. finished. full-text data for required. Event log messages 151 . did not succeed. This by the administrator required. Query cities was created No user intervention is This includes any object by the administrator required. account (60002). including the deletion of volumes. (46020) required. Appendix C. skipped. type on the data server. (46021) INFO 46022 Full-text data was Successfully backed up No user intervention is successfully backed up. on the data server. including the updating of volumes. as Policy Audit Trail backup No user intervention is part of the overall backup finished. Event Type Number Reason Sample Message Customer Action INFO 46013 The Audit Trail backup. Query cities was deleted No user intervention is This includes any object by The administrator required. type on the data server. includes any object type account (60001). INFO 60001 The user updates an object Query cities was updated No user intervention is on the system. (46013) volume. configured. required. INFO 60003 The user deletes an object. process was not (46015) configured. account (60003). failed. process. INFO 60002 The user creates an object. as Policy Audit Trail backup Check your backup part of the overall backup failed. server:share (46022) INFO 47213 Interrogator was Harvester 1 is now No user intervention is successfully restarted. If back–ups continue to fail. INFO 46014 The Audit Trail backup. failed: <specific error> (46019) INFO 46020 Volume cluster backup Indexed Data backup No user intervention is finished. including the creation of volumes. running. INFO 46015 The Audit Trail backup. INFO 46019 Volume cluster backup Indexed Data backup Contact Customer Support. (46014) required. needed to run and succeeded. skipped. Contact Customer Support. as Policy Audit Trail backup No user intervention is part of the overall backup not configured. 
INFO 46021 Volume is not configured Indexed Data backup not No user intervention is for indexed data backups. Look at the setup process.

query. (1002) will be skipped. (61002) required.” on page 133 WARN event log messages This topic contains a complete listing of all WARN event-log messages. reasons for occurrence. The data interrogator died : <data Customer Support. A new process will be created to replace it. (61001) required. object it was processing object name>. and any required customer action. or administrator account tag. administrator account (60004). the Volume to be harvested. Event Type Number Reason Sample Message Customer Action INFO 60004 The user publishes a Query cities draft was No user intervention is full-text query set or a published by the required. INFO 65000 The log file has finished Log file download No user intervention is downloading complete (65000) required. If that started. Contact Customer been problems accessing Support. WARN 1003 Interrogator child process Interrogator terminated Try to re-add the volume did not properly get before accessing data being harvested. Query tagging for cities No user intervention is This includes a published class was started by the required. INFO 61002 Concordance discovery Load file(s) ready for No user intervention is export is ready to upload upload. INFO 60005 The user tags an object. the load file(s). sample messages. the load file(s). Event Type Number Reason Sample Message Customer Action WARN 1002 An Interrogator process Processing could not be Classify the document died because of an completed on object. query. a draft query. INFO 61001 Concordance discovery Preparing for upload of No user intervention is export is now preparing load file(s). “Event log messages. 152 Administration Guide . INFO 60006 A user restarted services Application services restart No user intervention is on the data server. for all data servers was required. Related reference: Appendix C. manually and Contact unknown error. There may have objects. requested by the administrator account (60006). (60005). (1003) fails.

Event log messages 153 . (15004) completion. check mail server correctly. component after a run failure on abort. Please vacuum the database. Contact Customer Support. WARN 8001 The database needs to be The Database is Run the Database vacuumed. data server is permitted to relay on the configured SMTP server. WARN 15005 There was a component [<component>] Run Contact Customer Support. loaded. A new process will be created to replace it. was terminated because it completed on object. Appendix C. but on server <server name> is the test for mount failed. run the Database maintenance task using the Console interface (8001) WARN 9068 Tagged values were System metadata and Contact Customer Support. but full-text index tagged values were loaded loading failed. (15003) WARN 15004 A component cleanup [<component>] Cleanup Contact Customer Support. was no longer responding. (15005) WARN 15006 Cleanup failed for [<component>] Cleanup Contact Customer Support. (1004) processing will be skipped. The mail server user <email address>. but loading the full-text index failed (9068) WARN 9070 Tagged values and full-text Loading system metadata. index loading failed. WARN 6001 A user email could not be Failed to send an email to Verify that your SMTP sent. server is configured settings are incorrect. (15006) failure. appeared to succeed. failure on stop or failure on stop. run failure. interrogator killed : <data The data object it was object name>. failure. Event Type Number Reason Sample Message Customer Action WARN 1004 Interrogator child process Processing was not Contact Customer Support. Make sure the IP configuration settings address configured for the (6001). Skipping. approaching an maintenance task to operational limit. tagged values and the full-text index failed for volume 'server:volume' (9070) WARN 15003 The volume mount Volume <volume name> Contact Customer Support. successfully for volume 'server:volume'. not mounted.

used by another job. Event Type Number Reason Sample Message Customer Action WARN 15007 A component timed out Component Try your action again. harvest reached. be executed. Windows Share retention volume. with load. WARN 17502 Generic retention Volume <server>:<share> No user intervention is discovery is already appears to have another required as the next step. discovery is run on any is not supported for volume other than a discovery. Skipping Volume limit. Volume v1 on server s1 (15016) WARN 17008 Query ran to discover Centera External Iter–ator : Contact Customer Support. v1 on server s1. (17008) WARN 17011 Running discovery on the Pool Jpool appears to have Make sure that two jobs same pool in parallel is another discovery running. Skipping. if any. [<component>] this error continues. unresponsive. if any are in queue. contact Customer Support. Skipping object limit. (15010) harvest has completed. WARN 15011 A volume cannot be Volume <volume-name> No user intervention is harvested if it is being on server <server-name> is required. 1 (15015) WARN 15016 Configured harvest object Object count limit for Reconfigure harvest data count limit reached. Skipping. (15011) WARN 15015 Configured harvest time Time limit for harvest Reconfigure harvest time limit reached. (17011). same time that discover the same pool. reached. harvest will continue when Waiting before proceeding the job has completed. (15007) WARN 15010 The same volume cannot Volume <volume-name> No user intervention is be harvested in parallel. You may wish to The harvest will be already being harvested. Skipping. on server <server-name> is required. WARN 17504 Sent when a retention Volume <server>:<share> Contact Customer Support. within the job will volume. verify that the volume skipped and the next one. If and needs to be stopped. started. are not running at the not allowed. Skipping. The being used by another job. running for this master discovery running. 154 Administration Guide . 
Centera items terminated Centera Query terminated unexpectedly. autostopping trig–gered. unexpectedly (<error descrip–tion>).

occurred reading the extensions. A subsequent harvests._walktree: OSError Make sure the appliance processing of data object . not present or not readable file: <filename>. Event log messages 155 .<reason>. WARN 18015 NFS initialization warning NIS Mapping not User name and group that NIS is not available.company. (18009) WARN 18010 The skipdirs file is either Unable to open skip–dirs Contact Customer Support. (18010) WARN 18011 An error occurred reading Grazer. Instead of performed instead. Appendix C. a full harvest will be executed. Contact Customer Support._run: couldn't read Contact Customer Support._walktree: Contact Customer Support. <path>. from the database. NewsGator data source full harvest will be contact Customer Support._run: couldn't read Contact Customer Support. (18011) WARN 18012 An unknown error Grazer. Verify there is network connectivity between the appliance and your volume. (18019) performing an incremental harvest. Event Type Number Reason Sample Message Customer Action WARN 18007 Directory listing or Walker. the known extensions list extensions . Volume has Folder Review Check Review permissions permission on all folders. available. on the folder. failed to load.com: volume. skip directories as configured.<path><rea–son> (18007) still has appropriate failed in Grazer. Please check that your NIS server is available and properly configured in the data server. Grazer Timed Out. CaseOne due to user–name used to add the insufficient permissions. WARN 18017 A folder in an Enterprise Skipping Folder (ID=3) in If all folders were expected Vault case being harvested volume to be harvested in the was skipped because of evdiscac–cel. WARN 18008 Unknown error occurred Walker. (18015) names may be inaccurate. permissions to a volume. (18012) known extensions list from the database. Cannot by root. verify that the insufficient permissions. processing an object. while processing data Unknown exception - object or listing directory. 
(18008) WARN 18009 Grazer timed out Walker._processFile: Contact Customer Support. (18017) WARN 18019 The checkpoint saved from Unable to load checkpoint If the message repeats in the last harvest of the for NewsGator volume.

NewsGator data source volume. disk full. These connections will drop off after they time out. and a few resources on the server might be tied up for a short while. volume. WARN 33017 System encountered an An error occurred while Contact Customer Support. Some connections may be left open on the IBM Information Archive server until they are timed out. Session teardown failed. No user intervention required. Event Type Number Reason Sample Message Customer Action WARN 18020 The checkpoint noted for Unable to save checkpoint If the message repeats in the current harvest of the for NewsGator harvest of subsequent harvests. skipping copy : available on your policy <source volume> to destination and try again. (33021) Information Archive volume failed. (34003) 156 Administration Guide . (33023) completely. (18020) contact Customer Support. (33028) FileNet volume failed. could not be saved. WARN 33016 System could not unmount Windows Share Protocol Server administrators may this volume. see connections left (33016) hanging for a predefined period of time. WARN 34003 Skipped a copy data object Copy action error :. WARN 33023 A connection to the HTTP Connection None Discovery Accelerator tear-down to Discovery could not be torn down Accelerator failed. The next incremental harvest of the data source will not be able to pick up from this checkpoint.Target Please verify there is space because disk full error. Some connections may be left open on the FileNet server until they have timed out. <target volume>. WARN 33028 The tear-down operation IBM FileNet tear-down None of the connection to a operation failed. error while trying to figure retrieving the query out what query use this instances pointing to a volume. (33017) WARN 33021 The teardown operation of IBM Information Archive None the connection to a IBM tear-down failed.

and based on the configured execution. The sample able to map to an existing provides the most connection to a secondary common one. (34036) WARN 42003 The job containing this Copy data objects stopped No user intervention is action is stopped by the at user request. server.txt. (34029) WARN 34032 The policy being executed No volumes in scope for Check policy query and has no volumes in scope policy. The policy will not be executed. please try running the policy again. please run another harvest before executing your policy. <source volume> to destination. skipping discovery discovery export policy of an object. (34032) re-execute policy. because Hashing is audit trail. share-1/saved/years. volume(s) in scope. WARN 34033 Celerra data mover error. After verifying <target volume>. Data mover returned Consult the Celerra There are a large number NO_MATCHING_CONNECTION Administrator Manual. before executing the policy. cannot be executed. WARN 34029 Discovery export policy Discovery export Run Create sufficient space on detects the target disk is action error: Target disk target disk and run full and skips production full. WARN 34036 The policy has no source The policy has no source Confirm that the query volume(s) in scope. query and scoping. (34033) WARN 34035 If the global hash setting Copy objects : Target hash If target hashes need to be for the system is set to not will not be computed computed for the policy compute data object hash. Appendix C. (42003) required. skipping copy : available on your policy error. user. Event Type Number Reason Sample Message Customer Action WARN 34010 Skipped a move data Move action error :. Wait used by the policy has one meaning that the policy for the query to update or more volumes in scope. turn on the no hash can be computed disabled for system.pdf to production/10/ docu–ments/1/0x0866e 5d6c898d9ffdbea720b0 90a6f46d3058605. Event log messages 157 . export: again. 
global hash setting before for the target objects (34035) executing the policy. the : The specified actual one is listed in the OFFLINE_PATH was not error message. of possible causes. Skipping policy scoping configuration.Target Please verify there is space object because disk full disk full. Upon harvest completion. during a policy action. (34010) space is available.

user. the user. have loaded. WARN 42016 The job containing this Delete data objects No user intervention is action is stopped by the stopped at user request. Execution should begin as export run is in progress. request. (42052) WARN 42061 Discovery export policy Discovery export run on No user intervention is was stopped by user. conflicting discovery One' is in progress. (42068) it creates. same time. Note that policy may not appropriate permissions unable to set permissions be able to set appropriate on the target directory. (42051) WARN 42052 Policies will wait to One or more volume(s) No user intervention is execute until after volumes needed by policy required. being in the query. if those <policy-name> are being volumes are participants to loaded. write permissions and re-execute. If this is not policy may not have acceptable. WARN 42035 When the job containing Set security for data No user intervention is this action is stopped by objects stopped at user required. verify that appro–priate permissions target volume has proper set. Skipping. (42061) WARN 42064 Discovery export policy A Discovery export run The discovery export execution has been related to policy policy execution is held up delayed because a 'Discovery export case for required resources. (42035) WARN 42051 Two instances of the same Policy <policy-name> is No user intervention is policy cannot run at the already running. (42016) WARN 42026 The job containing this Policy stopped at user No user intervention is action is stopped by the request. Waiting for the the policy by virtue of bulk load(s) to finish. Waiting for it to finish. required. WARN 42068 Policy failed to set Copy objects warning. user. Event Type Number Reason Sample Message Customer Action WARN 42008 The job containing this Move data objects stopped No user intervention is action is stopped by the at user request. on target directory: permissions on the objects Objects created from the share-1/saved. 
objects (Copying native required. 158 Administration Guide . (42026) required. required. user. (42008) required. objects) stopped at user request. soon as resource becomes (42064) available.

modified objects. Contact Customer Support. If there are other full-text index. will be reproduced on a new run. the discovery export itself the last harvest. against it. (61003) completes. normal level again required. and an indication of process restarts and connections not being cleared. Appendix C. WARN 61003 One of the load files Failed to mount Some of the load files will cannot be uploaded transaction cache dump be missing after the because the compute node '/deepfs/postgres/ discovery export could not be accessed to production_cache'. run low on database usage seems excessive connections.185 manager. Discovery or perform an incremental is defined to act on the export run X will skip harvest on the source original file/email archive.17. a genuine break-in attempt. the system works on those and retries this volume later. This could be failing) to SSH into the port 57982. the warning lets the user know that modified objects will still be skipped.18. WARN 46026 Volume is being harvested Volume volume:share is in Rerun backup when or policies are running use. This is (512/415) (47202) abnormal. harvest” option is selected to act on members of either use a discovery for a discovery export containers. These load files obtain. (47125) either a mistyped data server. (42069) volume(s). Event log messages 159 . (46026) backed up. as opposed to their members. it is only valid if on objects modified after original file/email archive. If problem persists across runs. WARN 47215 Someone internally or SSHD: Failed password for Contact your local IT externally is trying (and root from 172. (512/100) (47201) WARN 47202 The system is starting to Database connections Contact Customer Support. password by a legitimate user or in the worst case scenario. If this is not the case. and cannot act export acting only on the policy. WARN 47201 Database connections are Database connections at No user intervention is down to a healthy level. Will retry full-text indexes to be later. 
Event Type Number Reason Sample Message Customer Action WARN 42069 If the “Copy data objects Discovery export If the modified objects modified since last DAT_Export is configured need to be acted upon. Unable to back up volume is not in use.

“Event log messages. Note: If multiple dumps fail. If the error persists. Event Type Number Reason Sample Message Customer Action WARN 61004 Warns the user that one of Transaction Cache Dump Run the discovery export the transaction cache failed with error . encountered an error. Related reference: Appendix C. and you cannot find any the case of a discovery (61004) cluster/data server export run. the discovery export will contact Customer Support. In creation of load file. fail to produce one of the load files. this means that configuration issues. there will be one warning per failed dump. policy that saw the error dump processes Validation failed during again.” on page 133 160 Administration Guide .
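The sample messages in this appendix follow a consistent convention: each one ends with its numeric event code in parentheses, for example "Query inference completed (9007)." As an illustrative sketch only (the on-disk log format of IBM StoredIQ is not specified here, and the small "contact support" code set below is a hypothetical subset chosen from the tables above), a message string can be triaged by extracting that trailing code:

```python
import re

# Assumption: a message ends with its event code in parentheses,
# optionally followed by a period, as in this appendix's samples.
EVENT_CODE = re.compile(r"\((\d{4,5})\)\.?\s*$")

def event_code(message):
    """Return the trailing numeric event code of a message, or None."""
    match = EVENT_CODE.search(message.strip())
    return int(match.group(1)) if match else None

# Hypothetical triage set: a few codes whose documented customer
# action in the tables above is "Contact Customer Support".
SUPPORT_CODES = {61005, 68003, 80002, 15003, 17008, 18009}

def needs_support(message):
    """True if the message's code calls for a support contact."""
    return event_code(message) in SUPPORT_CODES

samples = [
    "Production load file generation failed. (61005)",
    "Query inference completed (9007).",
    "Log file download complete (65000)",
]
for sample in samples:
    print(event_code(sample), needs_support(sample))
# Prints: 61005 True / 9007 False / 65000 False
```

The regex anchors at the end of the string so codes quoted mid-sentence are ignored; messages without a trailing code simply return None.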

Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not grant you any license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.

For license inquiries regarding double-byte character set (DBCS) information, contact the IBM Intellectual Property Department in your country or send inquiries, in writing, to:

Intellectual Property Licensing
Legal and Intellectual Property Law
IBM Japan Ltd.
19-21, Nihonbashi-Hakozakicho, Chuo-ku
Tokyo 103-8510, Japan

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

© Copyright IBM Corp. 2001, 2013

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact:

IBM Corporation
J46A/G4
555 Bailey Avenue
San Jose, CA 95141-1003
U.S.A.

Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee.

The licensed program described in this document and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement or any equivalent agreement between us.

Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are provided "AS IS", without warranty of any kind. IBM shall not be liable for any damages arising out of your use of the sample programs.

Each copy or any portion of these sample programs or any derivative work must include a copyright notice as follows:

© (your company name) (year). Portions of this code are derived from IBM Corp. Sample Programs. © Copyright IBM Corp.

If you are viewing this information softcopy, the photographs and color illustrations may not appear.

Trademarks

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at http://www.ibm.com/legal/copytrade.shtml.

Adobe, the Adobe logo, PostScript, and the PostScript logo are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States, and/or other countries.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.

Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.

Microsoft and Windows are trademarks of Microsoft Corporation in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Other company, product, and service names may be trademarks or service marks of others.

Privacy policy considerations

IBM Software products, including software as a service solutions, ("Software Offerings") may use cookies or other technologies to collect product usage information, to help improve the end user experience, to tailor interactions with the end user or for other purposes. In many cases no personally identifiable information is collected by the Software Offerings. Some of our Software Offerings can help enable you to collect personally identifiable information. If this Software Offering uses cookies to collect personally identifiable information, specific information about this offering's use of cookies is set forth below.

This Software Offering does not use cookies or other technologies to collect personally identifiable information.

If the configurations deployed for this Software Offering provide you as customer the ability to collect personally identifiable information from end users via cookies and other technologies, you should seek your own legal advice about any laws applicable to such data collection, including any requirements for notice and consent.

For more information about the use of various technologies, including cookies, for these purposes, see IBM's Privacy Policy at http://www.ibm.com/privacy and IBM's Online Privacy Statement at http://www.ibm.com/privacy/details, the section entitled "Cookies, Web Beacons and Other Technologies" and the "IBM Software Products and Software-as-a-Service Privacy Statement" at http://www.ibm.com/software/info/product-privacy.

Product Number: 5725M84, 5725M85, 5725M86

SC27-5692-00