Tivoli Operations Planning and Control

IBM

Tracker Agents for AIX, UNIX**, VMS**, and OS/390 Open Edition
Version 2 Release 3

SH19-4484-02


Note: Before using this information and the product it supports, be sure to read the general information under “Notices” on page ix.

ISO 9001 Certification
This product was developed using an ISO 9001 certified quality system. Certification has been awarded by the Italian quality system certification group, CSQ (Certification No. CISQ/CSQ 9150.IBM7). CSQ is a member of the mutually recognized organization of European assessors, ITQS, which assesses and certifies quality systems in the field of information technology enterprises.

Third Edition (December 1999)
This is a major revision of, and obsoletes, SH19-4484-01. This edition applies to Version 2 Release 3 Modification Level 0 of Tivoli Operations Planning and Control, Program Number 5697-OPC, and to all subsequent releases and modifications until otherwise indicated in new editions or technical newsletters. See the “Summary of Tivoli OPC Version 2 Release 3 Enhancements” on page xv for the changes made to this manual. Technical changes or additions to the text to describe the Tivoli Job Scheduling Console Support are indicated by a vertical line to the left of the change. Make sure you are using the correct edition for the level of the product. Order publications through your IBM representative or the IBM branch office serving your locality. Publications are not stocked at the address below. IBM welcomes your comments. A form for readers' comments appears at the back of this publication. If the form has been removed, address your comments to: Tivoli OPC Information Development Rome Tivoli Laboratory IBM Italy S.p.A. Via Sciangai, 53 00144 Rome Italy Fax Number (+39) 06 5966 2077 Internet ID: ROMERCF at VNET.IBM.COM When you send information to IBM, you grant IBM a nonexclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you. © Copyright International Business Machines Corporation 1995, 1999. All rights reserved. Note to U.S. Government Users — Documentation related to restricted rights — Use, duplication or disclosure is subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corp.

Contents
Notices
  Trademarks

Preface
  Who Should Read This Book
  How This Book Is Organized
  Tivoli OPC Publications
    Tivoli OPC Online Books
    Online Message Facility
  Other Publications

Summary of Tivoli OPC Version 2 Release 3 Enhancements
  Job Scheduling Console
  Catalog Management — Data Availability
  OS/390 Workload Manager Support
  OS/390 Automatic Restart Manager Support
  Program Interface (PIF) Enhancements
  Enhancements for Non-OS/390 Tracker Agents
  Usability Enhancements
  New and Changed Installation Exits
  New and Changed Initialization Statements
  Version 2 Release 2 Summary
  Version 2 Release 1 Summary

Chapter 1. Overview
  Product Features
    Controllers
    Tracker Agents
  The Installation Process
  Using the Latest Install Information


Chapter 2. Things to Be Done with the Tivoli OPC Controller
  Loading the Tracker Agent Software to MVS
    Installing the Tracker Agent Software Using SMP/E
    Applying Tracker Agent Maintenance
  Loading Tracker Agent Enabler Software to MVS
    Installing the Enabler for the Tracker Agent Using SMP/E
    Applying Maintenance for Tracker Agent Enabler
  Tivoli OPC Controller Initialization Statements
    ROUTOPTS Example
    JTOPTS Example
    OPCOPTS Example

Chapter 3. Planning your Tracker Agent Installation
  Creating a User Group and User IDs
    Creating the User Group
    Creating the User ID
    Adding New Users to the opc Group
  Kernel Considerations
  Planning Your Directory Layouts
© Copyright IBM Corp. 1995, 1999


  Verifying Host and Service Names
  Verifying the TCP/IP Environment (AIX Only)
    Verifying that TCP/IP Is Operational
    Verifying the Connection to the Controller Machine
    Verifying the Connection from the Controller Machine
    Verifying the Network Routing
  Verifying the TCP/IP Environment (HP-UX Only)
    Verifying the Connection to the Controller Machine
    Verifying the Connection from the Controller Machine
  Verifying the TCP/IP Environment (Sun Solaris and SunOS Only)
    Verifying the Gateway
    Verifying that TCP/IP Is Operational
    Verifying the Connection to the Controller Machine
    Verifying the Connection from the Controller Machine
    Verifying the Network Routing
  Verifying the TCP/IP Environment (OS/390 Only)
    Verifying that TCP/IP Is Operational
    Verifying the Connection to the Controller Machine
    Verifying the Connection from the Controller Machine

Chapter 4. Installing and Customizing the Tracker Agent
  Download the Tracker Agent Files from the Controller System
    AIX Only
    HP-UX Only
    Sun Solaris Only
    SunOS Only
    SGI IRIX Only
    Digital UNIX Only
    Digital OpenVMS VAX/Alpha Only
    OS/390 Open Edition Only
  Creating Links between the Directories
  Customizing the Configuration Parameter File
    Home Directory
  Customizing the Directories
  Customizing File Permissions
  Restrictions and Dependencies on System Software
    NFS Restrictions
    Number of Processes per User
  Coordinating Clock Values

Chapter 5. Operation
  Running Scripts for Tivoli OPC
    Storing Scripts
    Writing Scripts
    Determining the Shell that Scripts Run Under
    Specifying a User ID
    Getting Output from Scripts
    Testing for Errors from Commands
    Specifying the Path
  Controlling the Tracker Agent
    Starting the Tracker Agent
    Checking Tracker Status
    Shutting Down the Tracker Agent
  Dealing with Temporary and Log Files


  Checking Disk Space
  Restarting after an Abnormal Termination


Chapter 6. Diagnosing Problems
  Exit Codes
  General Troubleshooting
    Dealing with a Hung Tracker Agent
    Checking Files in the Log and Temporary Directories
    Trace Files
    Checking the Configuration Parameter File
    Checking File Permissions
    Checking the Tracker User ID
    Checking the NFS File System
    Checking the NIS Master
    Checking the Name Server
    Checking Duplicate Port Definitions
    Defining local_ipaddr if Multiple Interfaces
    Fixing Problems with Symbolic Links
    Resetting the Tracker Agent
    Checking IPC Queues
  Tuning and Performance

Appendix A. Messages


Appendix B. Utilities and Samples
  Utility Programs and Scripts
    eqqstart
    eqqverify
    eqqstop
    eqqfm
    eqqdelete
    eqqview
    eqqinit
    eqqclean
    eqqperm
    eqqcv80p
    eqqshow
  Samples


Appendix C. Enabling the Pulse Functions
  Setting Up the Controller Machine and OS/390 OE System
  Setting Up an AIX System
  Setting Up an HP System
  Setting Up a SunOS System
    Recompiling the Kernel
  Setting Up a Sun Solaris System
  Setting Up a MIPS ABI System
  Setting Up a Digital OpenVMS System
  Setting Up a Digital UNIX System

Appendix D. Using LoadLeveler
  Sample LoadLeveler script
  Restrictions


Appendix E. EBCDIC and ASCII Codepage Tables


Appendix F. Machine and Program Requirements for AIX Systems
  Hardware Requirements
  Software Requirements

Appendix G. Machine and Program Requirements for HP-UX Systems
  Hardware Requirements
  Software Requirements

Appendix H. Machine and Program Requirements for Solaris Systems
  Hardware Requirements
  Software Requirements

Appendix I. Machine and Program Requirements for SunOS Systems
  Hardware Requirements
  Software Requirements


Appendix J. Machine and Program Requirements for Digital OpenVMS Systems
  Hardware Requirements
  Software Requirements

| Appendix K. Machine and Program Requirements for Silicon Graphics IRIX Systems
    Hardware Requirements
    Software Requirements

Appendix L. Machine and Program Requirements for Digital UNIX
  Hardware Requirements
  Software Requirements

Appendix M. Machine and Program Requirements for OS/390
  Hardware Requirements
  Software Requirements


Appendix N. Applying Tracker Maintenance on Non-AIX Machines

Glossary

Index


Figures
 1. An Extract from the /etc/hosts File
 2. Example Configuration Showing Port Numbers
 3. An Extract from the /etc/services File on UNIX Tracker 1
 4. An Extract from the /etc/services File on UNIX Tracker 2
 5. Ports in the UNIX Tracker 1 Configuration File
 6. Ports in the UNIX Tracker 2 Configuration File
 7. Checking the Tracker Files on Solaris
 8. Checking the Tracker Files on SunOS
 9. Checking the Tracker Files on SGI IRIX
10. Checking the Tracker Files on Digital UNIX
11. Keyword Syntax
12. Example of a Configuration File

Tables
 1. The INCLUDE Statement
 2. The INIT Statement
 3. Changes to Installation Exits
 4. Stages in the Installation Process
 5. Tracker Agent Libraries Loaded by SMP/E
 6. Enabler Libraries Loaded by SMP/E for the Tracker Agent
 7. Planning for Users
 8. Tracker Agent Directory Structure
 9. The Tracker Agent Installation Process
10. Symptoms and Required Actions for Common Problems
11. Values of Tracker Agent Flags
12. Codepage Compatibility


Notices
References in this publication to IBM products, programs, or services do not imply that IBM intends to make these available in all countries in which IBM operates. Any reference to an IBM product, program, or service is not intended to state or imply that only IBM's product, program, or service may be used. Subject to IBM's valid intellectual property or other legally protectable rights, any functionally equivalent product, program, or service may be used instead of the IBM product, program, or service. The evaluation and verification of operation in conjunction with other products, except those expressly designated by IBM, is the user's responsibility.

IBM may have patents or pending patent applications covering subject matter in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to:

  IBM Director of Licensing
  IBM Corporation
  North Castle Drive
  Armonk, NY 10504-1785
  U.S.A.

Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact:

  IBM Corporation
  P.O. Box 12195
  3039 Cornwallis
  Research Triangle Park, NC 27709-2195
  U.S.A.

Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee.

Trademarks
The following terms in this publication are trademarks of Tivoli Systems or IBM Corporation in the United States or other countries or both:
  AIX        AIX/6000
  AS/400     BookManager
  IBM        LoadLeveler
  MVS/ESA    OPC
  OS/2       OS/390
  OS/400     Scalable POWERparallel Systems
  Tivoli     TME
  TME 10

In Denmark, Tivoli is a trademark licensed from Kjøbenhavns Sommer - Tivoli A/S


Microsoft, Windows, Windows NT, and the Windows logo are trademarks or registered trademarks of Microsoft Corporation.

UNIX is a registered trademark in the United States and other countries licensed exclusively through X/Open Company Limited.

C-bus is a trademark of Corollary, Inc.

| Java and all Java-based trademarks or logos are trademarks of Sun Microsystems, Inc.

PC Direct is a trademark of Ziff Communications Company and is used by IBM Corporation under license.

ActionMedia, LANDesk, MMX, Pentium, and ProShare are trademarks or registered trademarks of Intel Corporation in the United States and other countries.

Other company, product, and service names, which may be denoted by a double asterisk (**), may be trademarks or service marks of others.

| HP-UX                        Hewlett-Packard Corp.
  IRIX                         Silicon Graphics, Inc.
  Network File System          Sun Microsystems Inc.
  Network Information System   Sun Microsystems Inc.
  NFS                          Sun Microsystems Inc.
  NIS                          Sun Microsystems Inc.
  ORACLE                       Oracle Corp.
  Solaris                      Sun Microsystems Inc.
  SPARC                        SPARC International Inc.
  Sun                          Sun Microsystems Inc.
  SunOS                        Sun Microsystems Inc.


Preface
This book covers installation tasks, component logic, operation, and problem determination for these features of Tivoli OPC:

  OPC Tracker Agent for AIX
  OPC Tracker Agent for Digital OpenVMS**
  OPC Tracker Agent for Digital UNIX**
  OPC Tracker Agent for HP-UX**
  OPC Tracker Agent for Silicon Graphics IRIX**
  OPC Tracker Agent for Sun Solaris**
  OPC Tracker Agent for SunOS**
  OPC Tracker Agent for OS/390 Open Edition

These features differ only slightly in their installation and operation, and are therefore described in the same book. Differences are clearly marked, like this:

  OPC Tracker Agent for Sun Solaris only
  Use the EQQTXSEN load module

  OPC Tracker Agent for SunOS only
  Use the EQQTXUEN load module

Installation is the task of making a program ready to do useful work. This task includes adding the materials on the IBM distribution tape to your system, and preparing and maintaining the program.

Who Should Read This Book
This book is intended for those who are responsible for Tracker Agent system administration. The role of the system administrator normally includes the following tasks:

  Installing the Tracker Agent
  Setting up the configuration files
  Maintaining the Tracker Agent
  Performing initial problem determination

To perform the tasks described in this book, the Tracker Agent system administrator must be an experienced user of UNIX commands and be familiar with the system management techniques used in the operating environment. Knowledge of networking will also be helpful.


How This Book Is Organized
Read Chapter 1, “Overview” to understand the relationship between the Tracker Agent and the controller products. Chapter 2, “Things to Be Done with the Tivoli OPC Controller” describes the installation steps on the controller systems. Chapter 3, “Planning your Tracker Agent Installation” and Chapter 4, “Installing and Customizing the Tracker Agent” describe the installation steps on the UNIX workstation. Chapter 5, “Operation” describes day-to-day tracker operation, and Chapter 6, “Diagnosing Problems” describes problem determination and data collection information for diagnosing suspected problems. The appendixes describe the messages, utilities, samples, setup for enabling the pulse functionality (the KEEPALIVE option), and prerequisites for the Tracker Agent.

Tivoli OPC Publications
This book is part of an extensive Tivoli OPC library. These books can help you use Tivoli OPC more effectively:
Task                                              Publication                                                  Order number
Evaluating Tivoli OPC                             General Fact Sheet                                           GH19-4370
Evaluating Tracker Agents                         Tracker Agent Features Fact Sheet                            GH19-4371
Planning Tivoli OPC                               Licensed Program Specifications                              GH19-4373
Understanding Tivoli OPC                          General Information                                          GH19-4372
Learning Tivoli OPC concepts and terminology      Getting Started with Tivoli OPC                              SH19-4481
| Using the Java GUI                              Tivoli Job Scheduling Console Guide for OPC Users            GC32-0402
| Using the Java GUI                              Tivoli Job Scheduling Console Release Notes                  GI10-9233
Interpreting messages and codes                   Messages and Codes                                           SH19-4480
Installing Tivoli OPC                             Installation Guide                                           SH19-4379
Customizing and tuning Tivoli OPC                 Customization and Tuning                                     SH19-4380
Planning and scheduling the workload              Planning and Scheduling the Workload                         SH19-4376
Controlling and monitoring the current plan       Controlling and Monitoring the Workload                      SH19-4377
Using Workload Monitor/2                          Workload Monitor/2 User's Guide                              SH19-4482
Writing application programs                      Programming Interfaces                                       SH19-4378
Quick reference                                   Quick Reference                                              GH19-4374
Diagnosing failures                               Diagnosis Guide and Reference                                LY19-6405
Controlling the AIX, UNIX**, VMS, OS/390          Tracker Agents for AIX, UNIX, VMS, OS/390 Open Edition       SH19-4484
  Open Edition workload                             Installation and Operation
Controlling the OS/2 and NT workload              Tracker Agents for OS/2 and Windows NT Installation          SH19-4483
                                                    and Operation


Task                                              Publication                                                  Order number
Controlling the OS/400 workload                   Tracker Agent for OS/400 Installation and Operation          SH19-4485

A Master Index, SH19-4375, is published for the Tivoli OPC library.

Maximizing Your OPC Throughput, SG24-2130, contains useful information for tuning the OPC installation.

Tivoli OPC Online Books
All the books in the Tivoli OPC library, except the licensed publications, are available in displayable softcopy form on CD-ROM in the following Softcopy Collection Kit:

  OS/390, SK2T-6700

You can read the softcopy books on CD-ROMs using these IBM licensed programs:

  BookManager READ/2 (program number 5601-454)
  BookManager READ/DOS (program number 5601-453)
  BookManager READ/6000 (program number 5765-086)

All the BookManager programs need a personal computer equipped with a CD-ROM disk drive (capable of reading disks formatted in the ISO 9660 standard) and a matching adapter and cable. For additional hardware and software information, refer to the documentation for the specific BookManager product you are using. Updates to books between releases are provided in softcopy only.

Online Message Facility
The Online Message Facility (OMF) is an OS/2 program that provides online access to information from BookManager softcopy books. It helps you diagnose problems without interrupting your work. You can retrieve the description of a message by clicking on a message number in a Communications Manager emulator window. Additional information about OMF is available on the Messages and Codes CD-ROM.


Other Publications
You might find these publications useful when you install the Tracker Agent:
Short title                                    Publication                                                            Order number
JCL Reference                                  MVS JCL Reference                                                      GC28-1654
JCL Reference                                  MVS SP5 JCL Reference                                                  GC28-1479
JCL User's Guide                               MVS JCL User's Guide                                                   GC28-1653
JCL User's Guide                               MVS SP5 JCL User's Guide                                               GC28-1473
SMP/E Reference                                System Modification Program Extended Reference                         SC28-1107
SMP/E User's Guide                             System Modification Program Extended User's Guide                      SC28-1302
SMP/E Messages                                 System Modification Program Extended Messages and Codes                GC28-1108
MVS TCP/IP User's Guide                        IBM Transmission Control Protocol/Internet Protocol for MVS:           SC31-6088
                                                 User's Guide
AIX TCP/IP User's Guide                        AIX Operating System TCP/IP User's Guide                               SC23-2300
Quick Start Guide                              IBM RISC System/6000 Quick Start Guide                                 SC23-2195
Task Index and Glossary                        Task Index and Glossary for IBM RISC System/6000                       GC23-2201
AIX Communications                             AIX Communications Concepts and Procedures for IBM RISC System/6000    GC23-2203
AIX Commands                                   AIX Commands Reference for IBM RISC System/6000                        GC23-2367
Tivoli GEM Installation and User's Guide       Tivoli Global Enterprise Manager: Installation and User's Guide        GC31-8474
Tivoli GEM Application Policy Manager          Tivoli Global Enterprise Manager: Application Policy Manager           GC31-5108
  User's Guide                                   User's Guide
Tivoli GEM Instrumentation Guide               Tivoli Global Enterprise Manager: Instrumentation Guide                GC31-5109
SAP R/3 User's Guide                           SAP R/3 User's Guide                                                   GC31-5147
Maestro Supplemental Documentation Set         Unison Maestro Supplemental Documentation Set                          SK3T-3566

Before you install the Tracker Agent, you should be familiar with the procedures for installing software and with system administration on the target operating environment. This information is provided by Hewlett-Packard Corporation for HP-UX systems, by Sun Microsystems Inc. for Sun** systems, and by IBM for AIX systems.


Summary of Tivoli OPC Version 2 Release 3 Enhancements

Job Scheduling Console
The new Tivoli Job Scheduling Console (JSC) is a Java-based, client/server application. The key advantages of the JSC are the ability to perform administration and operation tasks in a graphical manner and the ability to access multiple OPC controllers from a single console. The JSC can:

  Display lists of objects already defined to OPC, from the database and from the current plan, by using flexible filtering criteria
  Work with application descriptions, including jobs and their dependencies, time restrictions (input arrival time, deadline, duration), and run cycles
  Work with special resource and workstation definitions
  Modify occurrences, workstation status, and special resource information from the current plan

The JSC retains the OPC security model. Each data access request is validated by the controller as it is done currently for ISPF users. The JSC is a real-time interface with OPC and can be used concurrently with the ISPF interface. It is available for various UNIX platforms, Windows NT, and Windows 98. The OPC Connector, which is a backend component supporting the JSC, is available for various UNIX platforms and Windows NT.

Catalog Management — Data Availability
The new Catalog Management – Data Availability feature improves OPC performance for job restart and job log retrieval functions. Job runtime information (for example, the sysout datasets) is maintained locally on the tracked system. The controller retrieves this information only when needed for catalog management actions, eliminating the network and processing overhead associated with the transmission of superfluous data. The runtime information at the tracked system is managed by a new component, the OPC Data Store. With the OPC Data Store, the OPC Tracker processes are bypassed and remain dedicated to the time-critical job submission and tracking tasks. A new option lets you selectively determine how long job runtime information is kept in the Data Store; this is especially useful when a joblog archiving product is used concurrently with OPC.

OS/390 Workload Manager Support
OS/390 Workload Manager, when used in goal mode, provides new, policy-based management of deadlines for critical jobs. CPU-type operations can now be marked as critical in OPC. When such a critical operation is late according to the specified policy, OPC interfaces with Workload Manager to move the associated job to a higher-performance service class. The job thus receives additional system resources to reduce or eliminate the delay. Several policies are available to decide when a job is late, considering characteristics such as duration, deadline time, and latest start time.

OS/390 Automatic Restart Manager Support
OS/390 Automatic Restart Manager increases the availability of OPC components. In the event of a program failure, OPC components (for example, the controller, the OS/390 tracker, and the server) can now be restarted automatically by the Automatic Restart Manager.

Program Interface (PIF) Enhancements
The Program Interface (PIF) has been extended to increase the flexibility of OPC, giving user application programs broader access to OPC data. Tivoli OPC Version 2 Release 3 significantly enhances access to current plan data from the PIF by providing:
- Full support for special resources data
- Read access to special resource usage information for operations
- The ability to modify workstation open intervals
- The ability to modify the successor information for an operation.
New resource codes have been added to the Program Interface (PIF):
CPOPSRU   Current plan operation segment with information for the operation in relation to a special resource
CPSUC     Current plan successor segment
CSR       Current plan special resources
CSRCOM    Current plan special resource common segment
IVL       Current plan workstation interval segment

Enhancements for Non-OS/390 Tracker Agents
The OPC Tracker Agents for non-OS/390 platforms have been enhanced:
- A new version of the OPC Tracker Agent for OpenVMS is available. It runs in the native OpenVMS environment, removing the requirement to install the POSIX shell.
- The security features of the UNIX OPC Tracker Agents have been enhanced: stricter file permissions are now used for temporary work files.
- The installation process of the OPC Tracker Agent for OS/390 UNIX System Services has been simplified.

Usability Enhancements
New features increase the overall usability of the product, thus increasing user productivity:
- OPC can perform variable substitution within inline procedures, increasing the flexibility of the job setup feature.
- You can customize OPC so that jobs are submitted even when variables are not defined in the OPC variable tables. When variables are substituted outside OPC, this avoids duplicate variable definitions.
- During catalog management actions, OPC can delete datasets that have an expiration date.
- A new Modify command (JSUACT) starts or stops the job submission function. This enables automation products, for example Tivoli NetView, to control OPC job submission activity.
- The Mass Update utility has been enhanced with a new sample job that downloads all the applications belonging to a group into a sequential file for use as input to the Batch Loader utility, easing the batch administration of group applications.
- The sample library now contains the DSECT sections for the Program Interface (PIF) data areas. This eases the writing of PIF applications and the migration of existing PIF applications to new OPC releases.

New and Changed Installation Exits
User exit EQQUX001 has three new parameters:
NEWREC   Number of JCL lines in the new JCLAREA
NEWJCL   New JCLAREA
USDREC   Number of JCL lines used in the new JCLAREA

User exit EQQUX007 has the new extended status (PEXSTAT) as part of its parameter set. The Job Submission exit (installation exit 1) now allows changes to the size of the JCL being processed, giving users more flexibility to customize their operating environment. The Operation Status Change exit (installation exit 7) has been enhanced to receive extended status information, so full status information is available within the exit for more detailed processing. The sample library has two new samples: EQQCMX01 and EQQCMX05.

New and Changed Initialization Statements
Two initialization statements have been added to enhance JCL variable substitution:
VARFAIL   If VARFAIL is specified, JCL variable substitution errors are bypassed for the specified types, and variables are left unresolved in the submitted JCL.
VARPROC   Specifies whether variables must also be resolved in inline procedures.
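As an illustration, the two keywords might be coded as follows. This is a sketch only: it assumes the keywords are specified on the OPCOPTS statement, and the variable types shown are examples, not documented defaults.

```
/* Sketch: parent statement and type list are assumptions */
OPCOPTS  VARFAIL(&,%,?)   /* bypass substitution errors for these types */
         VARPROC(YES)     /* resolve variables in inline procedures too */
```

Consult Tivoli OPC Customization and Tuning for the authoritative syntax.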

Three initialization statements have been added to handle the OPC Data Store options:
FLOPTS    Defines the options for the FL (Fetch Job Log) task. A controller uses this statement when OPCOPTS DSTTASK(YES) is specified.
DSTOPTS   Specifies options for the OPC Data Store.
DSTUTIL   Specifies options for the Data Store batch utilities and the cleanup subtask.

Parameters have been added to, or changed in, the JOBOPTS statement to handle the new Data Store options:
JOBLOGRETRIEVAL   A new value, DELAYEDST, specifies that the job log is to be retrieved by means of the OPC Data Store.
DSTCLASS          A new parameter that defines the reserved held class to be used by the OPC Data Store associated with this tracker.
DSTFILTER         A new parameter that specifies whether the job-completion checker (JCC) requeues to the reserved Data Store classes only the sysouts belonging to those classes.

Parameters have been added to, or changed in, the OPCOPTS statement to handle the new catalog management functions:
DSTTASK   Specifies whether the OPC Data Store is to be used.
JCCTASK   A new value, DST, specifies that the JCC function is not needed but the Data Store is used.
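Taken together, the Data Store keywords described above might be coded as in the following sketch. The held class Q and the DSTFILTER value are illustrative assumptions, other required keywords are omitted, and JOBOPTS belongs in the tracker rather than the controller parameter member.

```
/* Controller side (sketch): activate the Data Store             */
OPCOPTS  DSTTASK(YES)                 /* use the OPC Data Store            */
         JCCTASK(DST)                 /* JCC not needed, Data Store used   */

/* Tracker side (sketch): route job logs through the Data Store */
JOBOPTS  JOBLOGRETRIEVAL(DELAYEDST)   /* retrieve job logs via Data Store  */
         DSTCLASS(Q)                  /* reserved held class (example)     */
         DSTFILTER(YES)               /* requeue only reserved-class sysouts */
```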

A parameter has been added to the OPCOPTS and SERVOPTS statements:
ARM   Activates automatic restart (through the Automatic Restart Manager) of a failed OPC component.

A parameter has been added to the OPCOPTS statement for the Workload Manager (WLM) support:
WLM   Defines the WLM options, that is, the generic profile for a critical job. The profile contains the WLM service class and policy.
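A sketch of these OPCOPTS additions follows. The WLM profile values are placeholders, and the exact keyword format may differ; treat this as an assumption, not documented syntax.

```
/* Sketch: service class and policy names are hypothetical */
OPCOPTS  ARM(YES)              /* restart failed components via ARM     */
         WLM(FASTCLAS,POLICY)  /* service class and policy for critical jobs */
```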

Version 2 Release 2 Summary
Instrumentation for Tivoli Global Enterprise Manager
Tivoli Global Enterprise Manager (GEM) is the industry's first solution for unifying the management of the cross-platform business applications that run businesses and make them competitive. Tivoli GEM helps you manage strategic applications from a unique business systems perspective, focusing your IT resources on keeping these systems working properly and productively. Tivoli OPC has been enhanced to support the Job Scheduling Business System of the Tivoli GEM Systems Management Business System. From the Tivoli GEM console, which provides a single point of management, a Tivoli OPC user has complete control of all the Tivoli OPC components, regardless of the platform on which they run. In more detail, the Tivoli OPC instrumentation for Tivoli GEM enables you to do the following:
- Show all the Tivoli OPC components, including controllers, standby controllers, OS/390 trackers, AS/400 tracker agents, and TCP/IP-connected tracker agents.
- Show the links between these components. This provides, at a glance, a check on the health of the connections. For example, an OS/390 tracker might be running but have no connection to the controller.
- For each component, manage a set of status parameters (monitors) specific to that component. These monitors might report the status of vital OPC controller data sets (such as the database, current plan, and long-term plan).
- Manage this set of monitors graphically. You can:
  – Ask for the value of a monitor
  – Be notified when the value of a monitor changes
  – Associate a severity (such as normal, warning, severe, or critical) with each monitor value
- Start or stop Tivoli OPC trackers without logging on to them.
- Know at a glance, in a sysplex environment, which controller is active and which is standby.
- Execute commands on Tivoli OPC components from a single point of control, regardless of the platform and operating system used for that component.

SAP R/3 support
Tivoli OPC has been enhanced to exploit the Extended Agent technology of the Tivoli Workload Scheduler product. This technology enables Tivoli OPC to interface with a number of third-party applications that can perform scheduling. Using this technology, you can now start and track a SAP R/3 job from Tivoli OPC. You can also retrieve and display the job log at the Tivoli OPC controller. This function requires the Tivoli OPC Tracker Agent for one of the following platforms: AIX, Digital UNIX, Sun Solaris, Windows NT, or HP-UX.

TCP/IP communication improvements
The TCP/IP communication component that enables the controller to communicate with the TCP/IP-connected tracker agents has been restructured to use the standard TCP/IP C-Socket interface.
This change enables Tivoli OPC to run on the latest OS/390 releases and provides for the use of standard TCP/IP features, such as the KEEPALIVE option.

Catalog management enhancements
The logic that Tivoli OPC uses when determining which catalog management actions to perform has been extended to manage the following situations:
- Some steps in a job are not executed, but are flushed. The datasets referred to in those steps are ignored by the catalog management function.
- A dataset referred to with disposition NEW in one step is also referred to in other steps. Logic to determine the action to perform in these cases has been added to the Catalog Management function.

Dataset Delete function (EQQDELDS) improvements
The Dataset Delete function has been enhanced to determine the correct action when a dataset referred to with disposition NEW in one step is also referred to in other steps. The Dataset Delete function has also been improved to:
- Delete datasets for which an expiration date was specified.
- Issue diagnostic information when the IDCAMS DELETE command or the DFHSM ARCHDEL command fails to delete a dataset.

Current plan occurrence limit removal
The maximum number of occurrences in the current plan has been increased from 32767 to 9999999. This enables you to manage the current plan more flexibly when you have large workloads.

Operations in AD limit removal
You can now define up to 255 operations in each Application Description, providing more flexibility in the definition of the workload.

AD and OI consistency check
OPC now enforces consistency between the Application Description and Operator Instruction databases. For instance, whenever an operation is deleted, the associated operator instructions are also deleted. Usability enhancements have also been implemented in the Application Description dialogs when defining operator instructions; for instance, you can now also access temporary operator instructions.

JCL editing from Application Description dialogs
You can now customize the Tivoli OPC dialogs so that a library management application used in your environment to manage production jobs can be invoked from the Application Description dialogs, increasing user productivity. New row commands have been added to invoke such an application from the Operation List panel while working with an Application Description.

OPC Control Language tool
The OPC Control Language (OCL) tool enables you to access and manipulate Tivoli OPC data by using a REXX-like language. Several macro functions are available that perform, in a single action, what would otherwise require several invocations of the OPC Program Interface functions. The OCL tool acts as an extension to the REXX language processor, so normal REXX statements can be coded together with OCL statements. The tool runs in a batch TSO session.

Tracker agents
New Tracker Agents are provided to control the workload on:
- Digital UNIX
- OS/390 Open Edition


SmartBatch coexistence
Tivoli OPC has been extensively tested to make sure that all features continue to work correctly when the production workload is under SmartBatch control.

Other enhancements to functions
- EQQZSUBX 16 MB limit removal: because it is no longer necessary to move the JCL buffer below the 16 MB line before submitting it to JES2 or JES3, this processing has been removed from Tivoli OPC.
- To improve the robustness of Tivoli OPC, the STIMERM macro is now invoked wherever the STIMER macro was previously invoked.
- The Tivoli OPC Job-Submit user exit (EQQUX001) has been improved by adding two new parameters: WorkstationID and ErrorCode. When ErrorCode is set, Tivoli OPC does not submit the job.
- The Tivoli OPC Operation-Status-Change user exit (EQQUX007) has been improved by adding the procstep name to the JOBAREA parameter. This provides for fully automated problem management.
- Debugging aids for performance problems: Tivoli OPC now produces statistics that trace all the activities performed during the job submission process. These statistics are especially useful when you tune your systems to maximize job throughput. You can dynamically activate and deactivate them by means of new MODIFY commands.

New and changed installation exits
User exit EQQUX001 has two new parameters:
RETCO    The error code
WSNAME   The workstation name of the submission process

User Exit EQQUX007 has a new field in the JobArea called procedure step name.

Changes to commands
The following modify commands have been added:
CPQSTA   Activates or deactivates CP lock statistic messaging
EVELIM   Sets a new value for the EVELIM keyword of the JTOPTS statement
EVESTA   Activates or deactivates EVENT statistic messaging
GENSTA   Activates or deactivates GS task statistic messaging
HB       Issues a heartbeat message for an OPC controller or for all trackers connected to that controller
JCLDBG   Activates or deactivates the JCL debugging trace
QUELEN   Sets a new value for the QUEUELEN keyword of the JTOPTS statement
STATIM   Sets a new value for the STATIM keyword of the JTOPTS statement
STATUS   Returns status information about the OPC controller and the tracker agents connected to it
WSASTA   Activates or deactivates WSA task statistic messaging
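At the console, commands like these are issued through the MVS MODIFY command against the controller started task. A hedged sketch follows, assuming a controller subsystem named OPCC (the name is site-specific):

```
F OPCC,STATUS
F OPCC,JCLDBG
F OPCC,HB
```

In this sketch, STATUS returns controller and tracker status, JCLDBG toggles the JCL debugging trace, and HB requests a heartbeat message, as described in the list above.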

New and changed initialization statements
The following values have been added to the STATMSG keyword of the JTOPTS statement:

EVELIM    Makes the event number criterion for statistics messaging customizable.
STATIM    Uses a time interval criterion to issue statistics messages.
WSATASK   Activates new statistics for the WSA task.

The following new values have been added to the SUBFAILACTION keyword of the JTOPTS statement:
XC, XE, XR   Specify how OPC must handle the values returned by the Job Submission Exit (EQQUX001) in the RETCO parameter.

A new keyword has been added to the BATCHOPT statement:
MAXOCCNUM   Sets the maximum number of occurrences in the current plan for the daily planning function.

A new keyword has been added to the JTOPTS statement:
MAXOCCNUM   Sets the maximum number of occurrences in the current plan for the dialog, ETT, Automatic Recovery, and PIF functions.
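The MAXOCCNUM keyword might be coded in both statements as in this sketch; the value 500000 is purely illustrative, and other required keywords are omitted.

```
/* Sketch: occurrence limits are example values only */
JTOPTS    MAXOCCNUM(500000)   /* limit for dialog, ETT, Automatic Recovery, PIF */
BATCHOPT  MAXOCCNUM(500000)   /* limit for the daily planning function          */
```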

Changes to programming interfaces
The OPC Programming Interface (PIF) has been extended as follows:
- A new subsegment, the Workstation Access Method Information (WSAM), has been added to the Workstation record.
- A new keyword, ADOICHK, has been added to the OPTIONS request to activate the consistency check between Application Description and Operator Instruction records.

Version 2 Release 1 Summary
Tivoli OPC Version 2 Release 1 became generally available in March 1997. Major enhancements compared to OPC/ESA Release 3.1 are described in the following sections.

Tracker agents
New Tracker Agents are provided to control the workload on:
- Digital OpenVMS
- Pyramid MIPS ABI

Shared parm library in sysplex environment
MVS controllers and trackers can share common controller and tracker initialization statements and started-task JCL, making it easier to install many OPC subsystems inside an MVS/ESA sysplex environment.

Controller configuration in sysplex environment
Tivoli OPC support of the MVS/ESA sysplex (base and parallel) has been extended to enable any one of many cloned controllers on different MVS images to switch from standby to active status. An OPC controller is started on each member of the XCF group. The first potential controller that becomes active is the active controller; the others are standby controllers. If the active controller fails, a standby controller running in another MVS/ESA image of the sysplex automatically takes over as the active controller.


Single system image
This enhancement allows OPC TSO dialog users and PIF users to be on a different MVS/ESA image from the OPC controller. Dialog users and PIF applications can also be on MVS systems outside the sysplex where the controller runs. Remote communication is based on APPC.

Extended dialog filter
The dialog filter has been extended to allow more specific search arguments and to define the interpretation of wildcard characters.

Reparsing of NOERROR
New operator commands allow the operator to dynamically update the NOERROR table using the NOERROR initialization statements defined in the OPC PARMLIB member, and to read the statements from a member of the EQQPARM DD concatenated libraries. In addition, a new initialization statement allows the inclusion of NOERROR statements from members of the EQQPARM DD concatenated libraries.

PIF extension
The Program Interface has been greatly extended to support almost all OPC database record types.

Job tracking log
This enhancement provides job-tracking records to user exit 11, on which an effective disaster recovery procedure can be based. Through exit 11, the customer receives job-tracking records and can send this data to a remote controller that, in case of failure of the active controller, takes over as controller.

GMT clock support improvement
The GMTOFFSET keyword of the OPCOPTS statement lets you define an offset between the GMT time set in the MVS system and the actual GMT time. The OPC controller uses the GMT clock to validate an OPC Tracker Agent trying to connect; this improvement addresses the need of some users to keep the MVS GMT clock independent of the actual GMT time while retaining the ability to use Tracker Agents.

Batch command interface tool
A batch command interface tool is supplied to perform most of the actions supported by the PIF by means of a batch command interface.
New and changed initialization statements
Initialization statements have been added and changed in Tivoli OPC Version 2. The following sections summarize the differences.

The INCLUDE statement
Added in Tivoli OPC Version 2, the INCLUDE statement lets you reduce the size of the parameter library member that contains the OPCOPTS and JTOPTS statements, and reduce the associated maintenance activities.
Table 1. The INCLUDE Statement
Keyword   Short description
NOERROR   Specifies that NOERROR information is read from other members of the EQQPARM library.
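A sketch of the INCLUDE statement in a controller parameter member follows; the member name NOERRLST is hypothetical.

```
/* Sketch: pull NOERROR definitions from another EQQPARM member */
INCLUDE  NOERROR(NOERRLST)
```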


The INIT statement Added in OPC/ESA Release 3.1, the INIT statement lets you define run-time options for processing requests from a PIF application. These settings override the values set by the INTFOPTS statement in EQQPARM. The statement is defined in a second parameter file that is identified by the EQQYPARM DD statement in the JCL procedure of the PIF application. In Tivoli OPC Version 2 the LUNAME keyword has been added.
Table 2. The INIT Statement
Keyword    Short description
CWBASE     Specifies the origin for the century window used by the PIF application
HIGHDATE   Specifies the high date presented to the PIF application in valid-to fields
LUNAME     Specifies a server or controller LU name for the PIF application
SUBSYS     Identifies the Tivoli OPC subsystem controller
TRACE      Specifies the level of trace information to write to the diagnostic file.

Changes to commands
These modify commands have been added:
NEWNOERR           Requests that the NOERROR statements be reprocessed.
NOERRMEM(member)   Requests that the NOERROR information be read from the specified member.
The MODIFY command has been extended to accept stop and start of the server started tasks:
F ssname,P=SERV
S ssname,P=SERV

Changes to programming interfaces
The Programming Interface is extended as follows:
- UPDATE is supported for calendars, periods, workstations, and all workstations closed.
- BROWSE and UPDATE are supported for ETT and special resources.
- The LIST request has been extended to support a new keyword, MATCHTYP, to specify whether generic search arguments (* and %) are to be treated as normal characters.
- A new keyword, ADVERS, has been added to the OPTIONS request to activate the support of AD versioning.

New and changed installation exits
Table 3 summarizes the changes to installation exits in Tivoli OPC Version 2.


Table 3. Changes to Installation Exits
Exit name   Short description of change
EQQUX001    Tivoli OPC Version 2 now also supports the addressing modes RMODE(24) and AMODE(31).
EQQUX011    Sample job tracking log write exit.

Messages Messages have been changed, deleted, and added in Tivoli OPC Version 2. Refer to Tivoli OPC Messages and Codes for the complete message text and descriptions. Note that in Version 2 the message text and explanations refer to the product as OPC/ESA.


Chapter 1. Overview
This chapter introduces the Tracker Agents and their relationship to Tivoli OPC. If you are not familiar with the product terminology or functions, read Tivoli OPC General Information.

Product Features
Tivoli OPC is a workload management tool for:
- Managing your workload on a variety of platforms from a single point of control
- Running jobs on the right day, at the right time
- Starting jobs in the correct order
- Resolving complex dependencies between jobs
- Taking into account business days and holidays across divisions, states, and countries
- Providing plans for the future workload to help you manage peak processing (year-end work, for example)
- Optimizing hardware resources by allocating work to specific machines
- Initiating automatic recovery actions in the event of hardware or software failure
- Maintaining logs of work that has run, available for viewing or post-processing.

Controllers
The controller is the focal point of your configuration. It uses the information in the database to determine which jobs to run, when they should run, and where they should run. The controller runs on an MVS system, and you must install at least one controller for your production systems. You can use the controller to provide a single, consistent control point for submitting and tracking your UNIX workload.


Tracker Agents
Many installations run business applications on a variety of UNIX platforms. Tivoli OPC provides Tracker Agents that can manage your workload in several operating system environments. At the time of publication, UNIX Tracker Agents are available for the HP-UX, SunOS, Sun Solaris, Silicon Graphics IRIX, Digital OpenVMS, Digital UNIX, AIX, and OS/390 operating systems. There is also a Tracker Agent for OS/400, and the base tracker for OS/390. OPC Tracker Agents are functionally equivalent to the base OS/390 tracker, except that the automatic dataset cleanup and job-completion checker (JCC) functions are not provided for non-MVS platforms. This means you can use Tivoli OPC functions such as automatic recovery and automatic job tailoring for your non-MVS workload. Refer to Tivoli OPC Planning and Scheduling the Workload for details about the functions provided by Tivoli OPC. IBM is committed to providing a comprehensive workload management solution. The availability of Tracker Agents for other platforms will not necessarily coincide with the availability of a new release or version of the product. Contact your IBM representative to obtain a list of the operating environments that have a Tracker Agent, or send a note to one of the electronic addresses listed at the front of this book.


The Installation Process
To understand the flow of the installation process, read through this book before you begin to install a Tracker Agent.
Table 4. Stages in the Installation Process
Stage 1. Read the documentation that comes with the distribution media. See “Using the Latest Install Information” on page 3.
Stage 2. Load the software for the Tracker Agent to the controller machine. See “Loading the Tracker Agent Software to MVS” on page 5.
Stage 3. Modify the controller parameters to specify the necessary Tracker Agents. See “Tivoli OPC Controller Initialization Statements” on page 9.
Stage 4. Plan your installation and create user IDs. See Chapter 3, “Planning your Tracker Agent Installation” on page 11.
Stage 5. Install and customize the Tracker Agent on each UNIX machine. See Chapter 4, “Installing and Customizing the Tracker Agent” on page 29.


Using the Latest Install Information
This book complements the Tivoli OPC Program Directory, which covers how to add the materials on the IBM distribution tape to your system. The Program Directory comes with the feature installation tape. It describes all of the installation materials and gives installation instructions specific to the feature number. If any differences exist between this book and the Program Directory, use the information in the Program Directory.


Chapter 2. Things to Be Done with the Tivoli OPC Controller
This chapter describes the installation tasks on MVS. These topics are included:
- Loading the Tracker Agent software to MVS
- Loading the Tracker Agent enabler software to MVS
- Tivoli OPC controller initialization statements.
The Tracker Agent software must be installed on an MVS system using the System Modification Program Extended (SMP/E) program product before it can be downloaded and installed on the UNIX machines that you want to connect to the Tivoli OPC controller. For more information on the installation procedure on MVS, refer to Tivoli OPC Installation Guide. For more information on the Tivoli OPC initialization statements, refer to Tivoli OPC Customization and Tuning.

Loading the Tracker Agent Software to MVS
To load the Tracker Agent software to MVS, process the software distribution tape using SMP/E. This creates or updates the necessary disk-resident libraries on your system. Table 5 describes the datasets that are created or updated by SMP/E.
Table 5. Tracker Agent Libraries Loaded by SMP/E
Distribution SMP/E ddname: AEQQExxx (load) (see note)
Target SMP/E ddname: SEQQExxx (load) (see note)
Description: Compressed installp image (AIX), update image (HP-UX), dstream file (Sun Solaris), tar file (SunOS, Silicon Graphics IRIX, OS/390, Digital UNIX), or vmsinstall file (Digital OpenVMS).

Note: Here xxx is the NLS enabling string; ENU is the English one.


Installing the Tracker Agent Software Using SMP/E
The following example shows the JCL that transfers data sets from tape to disk using SMP/E. If you need more information about how to use SMP/E, refer to SMP/E Reference. SMP/E JCL example
//INSTALL  JOB  statement parameters
//RECEIVE  EXEC SMPPROC
//SMPPTFIN DD DSN=SMPMCS,
//            DISP=SHR,
//            UNIT=unit,
//            VOL=SER=volser,
//            LABEL=(1,SL)
//SMPCNTL  DD *
  SET BOUNDARY(GLOBAL) OPTIONS(OPCOPT).
  RECEIVE SYSMODS SELECT(fmid).
/*
//APPLY    EXEC SMPPROC
//* -------------------------------------------
//* TARGET LIBRARIES
//* -------------------------------------------
//SEQQEENU DD DSN=OPCESA.INST.SEQQEENU,DISP=SHR
//* -------------------------------------------
//* SMP CONTROL FILE
//* -------------------------------------------
//SMPCNTL  DD *
  SET BOUNDARY(TZONOPC) OPTIONS(OPCOPT).
  APPLY JCLINREPORT SELECT(fmid) RETRY(YES).
/*
//ACCEPT   EXEC SMPPROC
//* -------------------------------------------
//* DISTRIBUTION LIBRARIES
//* -------------------------------------------
//AEQQEENU DD DSN=OPCESA.INST.AEQQEENU,DISP=SHR
//SMPCNTL  DD *
  SET BOUNDARY(DZONOPC) OPTIONS(OPCOPT).
  ACCEPT JCLINREPORT SELECT(fmid) RETRY(YES).
/*


Refer to the Program Directory for information about unit type, volume serial, and Tracker Agent FMID. You can use the names provided in the example or create your own names that follow your naming conventions.


Applying Tracker Agent Maintenance
When you have loaded the Tracker Agent software, apply any recommended maintenance described in the PSP.

Loading Tracker Agent Enabler Software to MVS
To load Tivoli OPC enabler software for the Tracker Agent, process the software distribution tape using SMP/E. This creates or updates the necessary disk-resident libraries on your system. Table 6 describes the data sets that are created or updated by SMP/E.
Table 6. Enabler Libraries Loaded by SMP/E for the Tracker Agent
Distribution SMP/E ddname: AEQQMOD0 (object)
Target SMP/E ddname: SEQQLMD0 (load)
Description: Load module (EQQTXTEN for AIX, EQQTXHEN for HP-UX, EQQTXSEN for Sun Solaris, EQQTXUEN for SunOS, EQQTXDEN for Digital OpenVMS or Digital UNIX, EQQTXPEN for Silicon Graphics IRIX, EQQTXOED for OS/390).


Installing the Enabler for the Tracker Agent Using SMP/E
The following example shows the JCL that transfers data sets from tape to disk using SMP/E. You must install enabler software in a library that is accessible to the controller. If you need more information about how to use SMP/E, refer to SMP/E Reference. SMP/E JCL example
//INSTALL  JOB  statement parameters
//RECEIVE  EXEC SMPPROC
//SMPPTFIN DD DSN=SMPMCS,
//            DISP=SHR,
//            UNIT=unit,
//            VOL=SER=volser,
//            LABEL=(1,SL)
//SMPCNTL  DD *
  SET BOUNDARY(GLOBAL) OPTIONS(OPCOPT).
  RECEIVE SYSMODS SELECT(fmid).
/*
//APPLY    EXEC SMPPROC
//* -------------------------------------------
//* TARGET LIBRARIES
//* -------------------------------------------
//SEQQLMD0 DD DSN=OPCESA.INST.SEQQLMD0,DISP=SHR
//* -------------------------------------------
//* SMP CONTROL FILE
//* -------------------------------------------
//SMPCNTL  DD *
  SET BOUNDARY(TZONOPC) OPTIONS(OPCOPT).
  APPLY JCLINREPORT SELECT(fmid) RETRY(YES).
/*
//ACCEPT   EXEC SMPPROC
//* -------------------------------------------
//* DISTRIBUTION LIBRARIES
//* -------------------------------------------
//AEQQMOD0 DD DSN=OPCESA.INST.AEQQMOD0,DISP=SHR
//SMPCNTL  DD *
  SET BOUNDARY(DZONOPC) OPTIONS(OPCOPT).
  ACCEPT JCLINREPORT SELECT(fmid) RETRY(YES).
/*

Refer to the Program Directory for information about unit type, volume serial, and enabler FMID information. You can use the names provided in the example or create your own names that follow your naming conventions.

Applying Maintenance for Tracker Agent Enabler
When you have loaded the enabler software for the Tracker Agent, apply any recommended maintenance described in the PSP.


Tivoli OPC Tracker Agents for AIX, UNIX, VMS, OS/390


Tivoli OPC Controller Initialization Statements
The Tivoli OPC controller ROUTOPTS initialization statement defines the Tivoli OPC configuration. Update the statement to include the UNIX machines on which you will run work scheduled by Tivoli OPC. Review these parameters of ROUTOPTS and set values according to your configuration:

CODEPAGE(host system codepage|IBM-037)
This keyword specifies the name of the host codepage. The value is used by Tracker Agents running on operating environments that use the ASCII character set to convert submitted input data to ASCII. Up to 8 characters can be defined. The default value, IBM-037, defines the EBCDIC codepage for U.S. English, Portuguese, and Canadian French.

TCP(destination,...,destination)
This keyword specifies the network addresses of all TCP/IP-connected Tracker Agents able to communicate with the controller for job-tracking purposes. Each destination consists of a destination name and an IP address separated by a colon (name:nnn.nnn.nnn.nnn). The name is 1–8 alphanumeric characters where the first character is alphabetic. The IP address consists of 4 numeric values separated by periods. Each value is in the range 1 to 255; leading zeros are not required. If the keyword is not defined, support for TCP/IP-connected Tracker Agents will not be activated.

TCPIPID(TCP/IP ID|TCPIP)
This keyword identifies the name of the TCP/IP address space on the MVS system where the controller is started. If you do not specify this keyword, the default value TCPIP is used.

TCPIPPORT(TCP/IP port|424)
This keyword defines the TCP/IP port number used by the controller. See “Verifying Host and Service Names” on page 17 for details about port numbers. The number is 1–5 numeric characters. The controller reserved port number is 424. This keyword must be specified if the TCP keyword has been defined and the default port number is not used; for example, if you start more than one controller on the same MVS image that uses TCP/IP, or if the port number is already in use. If you use a port number less than 1024, you must run the tracker as root.

TCPTIMEOUT(TCP/IP time-out interval|5)
This keyword specifies the time interval within which the controller expects a TCP/IP-connected Tracker Agent to respond to a submit request. If the Tracker Agent does not respond in two consecutive intervals, the session is terminated and workstations that reference the destination are set offline. When the tracker becomes active again, or if the time-out was caused by poor network response, the session will be reestablished automatically. The time-out processing comes into effect after the controller and the Tracker Agent have synchronized at startup. Specify a number of minutes from 1 to 60, or specify 0 if you do not require time-out processing. The default time-out interval is 5 minutes.


ROUTOPTS Example
ROUTOPTS TCP(MYAIX:9.52.50.16,YOURAIX:9.52.50.13)   (1)
         TCPIPID(TCP1)                              (2)
         CODEPAGE(IBM-278)                          (3)

(1) Tivoli OPC communicates with AIX machines running the Tracker Agent. The communication method is TCP/IP. Operations that specify a computer workstation with destination MYAIX are transmitted to IP address 9.52.50.16 for execution. Operations that specify workstations with a destination of YOURAIX are directed to IP address 9.52.50.13.
(2) TCP1 is the name of the TCP/IP address space on the MVS system where the controller is started.
(3) The codepage used on the MVS system where the controller is started is IBM-278, the codepage for Swedish and Finnish.

JTOPTS Example
JTOPTS WSFAILURE(LEAVE,REROUTE,IMMED)   (1)
       WSOFFLINE(LEAVE,REROUTE,IMMED)   (2)
       HIGHRC(0)                        (3)

(1) Actions to be taken when a workstation failure occurs.
(2) Actions to be taken when a workstation offline situation occurs.
(3) The highest return code generated in a job without causing the operation to abend is 0. Note: The default is 4. For non-MVS tracker agents, specify 0. You can also specify this return code for each operation in the AUTOMATIC OPTIONS section of the Application Description dialog.

OPCOPTS Example
OPCOPTS CATMGT(YES)     (1)
        STORELOG(ALL)   (2)

(1) Catalog management must be active.
(2) All job logs that are retrieved immediately are stored.


Chapter 3. Planning your Tracker Agent Installation
This chapter gives you an overview of how to install, configure, and customize the Tracker Agents. The Tracker Agent is flexible and lets you define a configuration to suit your needs. If your enterprise has a large number of networked UNIX machines, you should run the Tracker Agent on several machines. For availability, it is recommended that you start the tracker on at least two machines, and specify more than one host for each job. When the first destination (workstation) is unavailable, the controller can send the job to the next one available.

How you install the Tracker Agent depends on your configuration. If you have Network File System** (NFS) or Network Information System** (NIS**) installed, you will be able to carry out most of the installation from one machine; otherwise, you will need to install the Tracker Agent on every machine where the controller will start work. To install, you need superuser (root) authority. Depending on your network, you may need root authority on the other servers.

If you use IBM LoadLeveler, you can run the Tracker Agent on one machine and use LoadLeveler to distribute the workload among the networked machines. See Appendix D, “Using LoadLeveler” on page 107.

Certain programs for administration and operation of the Tracker Agent can only be run by the system administrator logged in with certain user IDs. Table 7 shows you the user IDs that you need, and “Creating a User Group and User IDs” on page 12 shows how to create them.
Table 7. Planning for Users
Create user   For ...                               Group   Recommended home directory
tracker       Running the tracker with Tivoli OPC   opc     /u/tracker

Even if you plan to run several instances of the Tracker Agent for the same type of controller on the same machine, you should run them all under the same user ID.

© Copyright IBM Corp. 1995, 1999


Planning

Creating a User Group and User IDs
As a security measure, the Tracker Agent requires that users belong to the opc user group. The administrator enforces this through the UNIX file-permission mechanism, making the Tracker Agent programs executable only by users in this group.

Creating the User Group
If you are running the Network Information System (NIS), the user IDs and group must be created on the NIS master, and the maps rebuilt before you continue to install.

AIX only
To create a user group:
1. Start SMIT.
2. Select Security and Users.
3. Select Groups.
4. Select Add a Group.
5. Fill in the relevant information. The group name must be opc.
6. Press Do to create the user group.

HP-UX only SAM can create the group for you. The group name must be opc.

Sun Solaris, SunOS, and Silicon Graphics IRIX Create a user group with the name opc, and check that the group ID (GID) is the same on all your platforms.

Digital UNIX Create a user group with the name opc, by using the addgroup command.

Digital OpenVMS only Refer to the corresponding section in Chapter 4, “Installing and Customizing the Tracker Agent” on page 29.

OS/390 only Contact your system administrator to create the user group OPC and the user ID TRACKER with root authority.
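The GID-consistency requirement mentioned above for Sun Solaris, SunOS, and Silicon Graphics IRIX can be checked with a short script. This is only a sketch: the sample group file and the GID value 200 are illustrative, not values mandated by the Tracker Agent; the real check would read /etc/group (or `ypcat group` under NIS) on each machine.

```shell
# Sketch: print the GID of the opc group so it can be compared across
# machines. A sample file stands in for /etc/group here; GID 200 is
# an illustrative value only.
group_file=./group.sample
printf 'staff:!:1:\nopc:!:200:tracker\n' > "$group_file"
opc_gid=$(awk -F: '$1 == "opc" { print $3 }' "$group_file")
echo "opc GID: $opc_gid"
```

Running the same check on every platform and comparing the printed GIDs verifies that the group ID is identical everywhere.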


Creating the User ID
AIX only
To create a new user ID:
1. Start SMIT.
2. Select Security and Users.
3. Select Users.
4. Select Add a User.
5. Fill in the user name and group name as in Table 7 on page 11.
6. Press Enter or select Do to create the user ID.

HP-UX only
To create a new user ID:
1. Login as root.
2. Enter the System Administration Manager with the sam command.
3. Select Users and Groups.
4. Select Users.
5. Select Actions and Add.
6. Fill in the relevant information:
   Login Name           As in Table 7 on page 11
   UID                  A spare ID (consistent across the network)
   Home Directory       As in Table 7 on page 11
   Primary Group Name   As in Table 7 on page 11
   Start-up Program     /bin/sh
   Login Environment    Shell (start-up program)

Sun Solaris only You can create a user ID with admintool.

SunOS and Silicon Graphics IRIX You can create a user ID by editing the local /etc/passwd file if you are not using NIS.

Digital UNIX Create a user ID with the name tracker, by using the adduser command.

Digital OpenVMS only Refer to the corresponding section in Chapter 4, “Installing and Customizing the Tracker Agent” on page 29.


OS/390 only Contact your system administrator to create the user group OPC and the user ID TRACKER with root authority.

Adding New Users to the opc Group
Users who are going to work with the Tracker Agent must belong to the opc user group.

AIX only
To add a user ID to a group:
1. Start SMIT.
2. Select Security and Users.
3. Select Users.
4. Select Change / Show characteristics of a User.
5. Type the user NAME.
6. Press Enter.
7. Add opc to the values in the Group set field. (Values are separated by commas.)
8. Press Do to add the user to the group.

Repeat this for all users of the Tracker Agent.

HP-UX only
To add a user ID to a group:
1. Start SAM.
2. Select Users and Groups.
3. Select Groups.
4. Select the opc group and Actions and Modify.
5. Add the users that are going to work with the Tracker Agent.

Sun Solaris only Add a user ID to the opc group for each user of the Tracker Agent.

SunOS, Silicon Graphics IRIX, and Digital UNIX Add a user ID to the opc group for each user of the Tracker Agent. The initial program will probably be /bin/sh or /bin/csh.

Digital OpenVMS only Refer to the corresponding section in Chapter 4, “Installing and Customizing the Tracker Agent” on page 29.


OS/390 only Contact your system administrator to create the user group OPC and the user ID TRACKER with root authority.

Kernel Considerations
SunOS only Before installing the Tracker Agent, make sure that the selected kernel is generated with these options:

options IPCMESSAGE
options IPCSEMAPHORE
options IPCSHMEM

These options include System V IPC facilities.

Planning Your Directory Layouts
This is the directory structure, omitting the log and temporary directories:
Table 8 (Page 1 of 2). Tracker Agent Directory Structure

Directory/File              Description
/usr/lpp/tracker/EQQPARM    Tracker configuration file
/usr/lpp/tracker/etc        Configuration parameter files and samples
/usr/lpp/tracker/bin
  eqq_daemon                Daemon
  eqqdr                     Data Router
  eqqew                     Event Writer
  eqqgmeth                  Access method subtask (works in Digital UNIX, HP-UX, Sun Solaris, and AIX environments for SAP R/3 support)
  eqqgs                     Generic Submittor
  eqqgssub                  Generic Subtask
  eqqls                     LoadLeveler Submittor
  eqqlsext                  LoadLeveler Tracker exit
  eqqclean                  Cleans up log files after the tracker has terminated abnormally
  eqqtr                     TCP Reader
  eqqtw                     TCP Writer
  eqqinit                   Script to initialize directories
  eqqperm                   Script to set file permissions
  eqqstart                  Script to start the tracker
  eqqstop                   Script to stop the tracker
  eqqshow                   Script to show status of the tracker
  eqqverify                 Utility to verify the configuration
  eqqcv80p                  Utility to convert scripts
  eqqfm                     File monitor utility
  eqqview                   Checkpoint file view utility
  eqqdelete                 Script to delete log files
  eqqmon                    Tracker monitor utility


Table 8 (Page 2 of 2). Tracker Agent Directory Structure

Directory/File              Description
/usr/lpp/tracker/methods    Access method directory, in which you install R3 batch access method for SAP R/3 support
/usr/lpp/tracker/catalog    Catalog message. Used by R3 batch access method
/usr/lpp/tracker/doc        ASCII copy of this book
/usr/lpp/tracker/nls        Message catalogs
  /msg/prime                Default (US English) message catalog
  /msg/EN_US                US English message catalog (not used)
  /loc/iconvTable           Code page converters
/usr/lpp/tracker/samples    Sample directory
  eop0                      Sample configuration file
  tracker.cmd               Sample LoadLeveler script
  ecf                       Parameters used by eqqfm

Note: The naming conventions of the product and home directories are those commonly used in AIX systems. You may have other conventions, such as /home or /users instead of /u, and /opt or /usr/packagename instead of /usr/lpp. You may need to alter the supplied sample scripts (such as eqqinit) accordingly.

If you have many systems, you can reduce your administration by planning the layout of your directories carefully. You can run several instances of the Tracker Agent with a single copy of the binary files, and you can even use the same binary files if you run the Tracker Agent with the Tivoli OPC controller. Use the environment variable EQQHOME to point to the home directory on the local file system. All the files in Table 8 on page 15 can be links to files on a common file system. If you have many instances of the Tracker Agent, you will simplify your administration if they all use the same binaries. For this reason, install the Tracker Agent image into the /usr/lpp/tracker directory, and then create links for the common files.

Besides the directories in Table 8 on page 15, the Tracker Agent uses log and temporary directories. See “Customizing the Directories” on page 55 for recommendations.

Notes:
1. The directory tree must be complete in order for the Tracker Agent to start. The base directory must contain all required subdirectories.
2. The iconvTable is empty for OS/390 and there is no sample LoadLeveler script.
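The shared-binaries layout described above can be sketched as follows. The sketch builds the tree under a scratch directory (./demo) instead of the real root so it can be tried safely; the instance home and link names are only illustrative.

```shell
# Sketch: one copy of the binaries under usr/lpp/tracker, with a
# per-instance home directory whose bin is a symbolic link to the
# shared copy. A scratch root ./demo stands in for / on the target.
root=./demo
mkdir -p "$root/usr/lpp/tracker/bin" "$root/u/tracker"
# Relative link: from demo/u/tracker, ../../usr/lpp/tracker/bin
# resolves to the shared binary directory.
ln -s ../../usr/lpp/tracker/bin "$root/u/tracker/bin"
EQQHOME=/u/tracker; export EQQHOME   # each instance points at its own home
ls -l "$root/u/tracker"
```

On a real system the same pattern is repeated for each instance home, so every instance sees the same binaries through its own EQQHOME.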


Verifying Host and Service Names
Specify the hostname-to-IP-address mapping in your /etc/hosts file. If NIS is running (not for OS/390), update /etc/hosts on the NIS master. To check whether NIS is running, use the ypwhich command: if it returns a hostname, you are using NIS. If ypwhich returns a message like “Domain name not bound,” you are not using NIS.


For OS/390 only:
Edit the /etc/rc.tcpip file. This file is processed at startup to initiate all TCP/IP-related processes. To add the Tracker Agent to the /etc/rc.tcpip file:
1. Login as superuser.
2. Edit the /etc/rc.tcpip file, using an editor such as OEDIT.
3. At the end of the file, add this line:
   /u/tracker/bin/eqqstart

4. Also edit the .profile to set the environment variables:

   EQQINSTANCE=muconfig
   EQQHOME=/u/tracker
   export EQQINSTANCE
   export EQQHOME

If you have a name server (you can check its name with the nslookup command), update the host names on that machine.

# The form for each entry is:
# <internet address> <official hostname> <aliases>
#
127.0.0.1      localhost loopback
9.52.53.254    troute taix
9.52.53.1      thp testhp thp
9.52.51.34     sun4.ldg.se.ibm.com sun4
9.52.51.47     hp5.ldg.se.ibm.com hp5
9.52.52.3      m23wn12.ldg.se.ibm.com m23wn12
...

Figure 1. An Extract from the /etc/hosts File
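To verify an /etc/hosts entry without depending on the resolver, the file can be scanned directly. A sketch follows, using a sample file that mirrors one entry from Figure 1; the real check would read /etc/hosts itself, and the hostname taix is taken from the extract only as an example.

```shell
# Sketch: look up a hostname in an /etc/hosts-style file and print the
# IP address of the first entry that lists it as a name or alias.
hosts_file=./hosts.sample
printf '# comment line\n127.0.0.1 localhost loopback\n9.52.53.254 troute taix\n' > "$hosts_file"
lookup_host() {
  awk -v h="$2" '!/^#/ { for (i = 2; i <= NF; i++) if ($i == h) print $1 }' "$1"
}
lookup_host "$hosts_file" taix
```

If the lookup prints nothing, the hostname is missing from the file and must be added before the tracker destination will resolve.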


Specify service names for the ports in your /etc/services file, in the directory on your local machine where TCP/IP is installed. You can then use these service names in the configuration parameter file instead of port numbers. If you prefer to use port numbers, you do not need to update the SERVICES file. If NIS is running (not for OS/390), update /etc/services on the NIS master. If you have a name server, update the service names on that machine. You must use port numbers above 1024 to avoid running the Tracker Agent with root authority. You should use port numbers much higher than this (for example, above 5000) to avoid conflict with other programs.



A Tivoli OPC controller needs one port, TCPIPPORT, which is also the tracker's controller port. The default port number is 424. Each Tracker Agent needs two ports:
– The tracker's controller port, which is also the controller's tracker port (TCPIPPORT).
– The tracker's local port, which must be unique on each machine; Tracker Agents on different machines can have the same local port number.

OPC Controller                 UNIX Tracker 1               UNIX Tracker 2
TCPIPPORT 424 <--------------> controller 424
TCPIPPORT 424 <--------------------------------------------> controller 424
                               local 5005                   local 5006

Figure 2. Example Configuration Showing Port Numbers

The arrows in Figure 2 link the controller's tracker port with the corresponding trackers' controller ports. Tracker Agents on the same machine must have different local port numbers. Figure 3 shows suitable entries in the UNIX tracker 1 file /etc/services; Figure 4 on page 19 shows suitable entries in the UNIX tracker 2 file /etc/services for this configuration.

...
route        520/udp    router routed
timed        525/udp    timeserver
tempo        526/tcp    newdate
courier      530/tcp    rpc
conference   531/tcp    chat
opctracker   424/tcp    # Tivoli OPC controller's tracker port TCPIPPORT
tracker1     5005/tcp   # Local port for UNIX tracker 1

Figure 3. An Extract from the /etc/services File on UNIX Tracker 1


...
route        520/udp    router routed
timed        525/udp    timeserver
tempo        526/tcp    newdate
courier      530/tcp    rpc
conference   531/tcp    chat
opctracker   424/tcp    # Tivoli OPC controller's tracker port TCPIPPORT
tracker2     5006/tcp   # Local port for UNIX tracker 2

Figure 4. An Extract from the /etc/services File on UNIX Tracker 2

Figure 5 shows the necessary entry in the configuration parameter file for UNIX tracker 1.
...
controller_portnr = opctracker    # or controller_portnr = 424
local_portnr = tracker1           # or local_portnr = 5005

Figure 5. Ports in the UNIX Tracker 1 Configuration File

Figure 6 shows the necessary entries in the configuration parameter file for UNIX tracker 2.
...
controller_portnr = opctracker    # or controller_portnr = 424
local_portnr = tracker2           # or local_portnr = 5006

Figure 6. Ports in the UNIX Tracker 2 Configuration File
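A quick way to confirm that the service names used in the configuration parameter file resolve to the intended ports is to scan the services file directly. This sketch uses a sample file containing the opctracker and tracker1 entries shown in Figure 3; the real check would read /etc/services.

```shell
# Sketch: map a service name to its port number, the same lookup the
# tracker configuration relies on /etc/services for. The sample file
# mirrors the entries from the figures above.
services_file=./services.sample
printf 'opctracker 424/tcp\ntracker1 5005/tcp\n' > "$services_file"
port_of() {
  awk -v s="$1" '$1 == s { split($2, a, "/"); print a[1] }' "$services_file"
}
port_of opctracker
port_of tracker1
```

Comparing the printed ports against TCPIPPORT and the intended local port catches mismatched /etc/services entries before the tracker is started.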

The Tivoli OPC controller subsystem must be started with this parameter:

ROUTOPTS TCPIPPORT(424) ...

The TCP/IP /etc/services file on the controllers must also contain entries for the Tracker Agent machines. The next task is verifying the TCP/IP environment. This is described in the following sections, depending on the type of operating system that you have:
1. “Verifying the TCP/IP Environment (AIX Only)”
2. “Verifying the TCP/IP Environment (HP-UX Only)” on page 23
3. “Verifying the TCP/IP Environment (Sun Solaris and SunOS only)” on page 25

Verifying the TCP/IP Environment (AIX Only)
Verify the TCP/IP setup on the Tracker Agent machine and communication with the controller machine before you start to install the Tracker Agent. The bosnet.tcpip image must be installed on the Tracker Agent and controller machines. Use the Systems Management Interface Tool (SMIT) to verify the images installed:
1. Start SMIT.
2. Select Software Maintenance and Installation.
3. Select Install/Update Software.
4. Select List the Installed Software.

Check for the TCP/IP install image:

COMMAND STATUS
Command: OK    stdout: yes    stderr: no

Before command completion, additional instructions may appear below.
[MORE...153]
  bosnet.tcpip.obj     3.2.0.325   bosnet Maintenance Level
  bsl.en_US.aix.loc    3.2.0.325   bsl Maintenance Level
  bsl.lat-1.fnt.loc    3.2.0.0     No Maintenance Level Applied.
  bsl.sv_SE.aix.loc    3.2.0.325   bsl Maintenance Level
  bsl.sv_SE.pc.loc     3.2.0.325
[MORE...235]

F1=Help    F2=Refresh    F3=Cancel    F6=Command
F8=Image   F9=Shell      F10=Exit

Note: The names may differ slightly depending on the version of the software that you have.

TCP/IP must be configured correctly and operational on the Tracker Agent machine. To verify that TCP/IP is configured correctly:
1. Start SMIT.
2. Select Communications Applications and Services.
3. Select TCP/IP.
4. Select Minimum Configuration & Startup.

Verify these fields with your network administrator:
– Host IP Address
– Network Subnet Mask
– Broadcast Address


The panel should look similar to:

Minimum Configuration & Startup

To Delete existing configuration data, please use Further Configuration menus

Type or select values in entry fields.
Press Enter AFTER making all desired changes.
                                                      [Entry Fields]
  HOSTNAME                                            [m23wn12]
  Internet ADDRESS (dotted decimal)                   [9.52.52.3]
  Network MASK (dotted decimal)                       [255.255.255.0]
  Network INTERFACE                                   tr0
  NAMESERVER
     Internet ADDRESS (dotted decimal)                [9.52.50.254]
     DOMAIN Name                                      [ldg.se.ibm.com.]
  Default GATEWAY Address                             [9.52.52.254]
  (dotted decimal or symbolic name)
  RING Speed                                          16               +
  START Now                                           no               +

F1=Help     F2=Refresh    F3=Cancel    F4=List
F5=Undo     F6=Command    F7=Edit      F8=Image
F9=Shell    F10=Exit      Enter=Do

Verifying that TCP/IP Is Operational
TCP/IP must be operational on the Tracker Agent machine. To verify this, use the ping command on the Tracker Agent machine, specifying the same machine as the destination. For example, to test machine m23wn12, with IP address 9.52.52.3:

ping -c 5 9.52.52.3
OR
ping -c 5 m23wn12

Use -c 5 to specify the number of packets to be echoed. The output should be similar to:

ldg2:/ldg/proj/opc/planit/AIX/src >> ping -c 5 m23wn12
PING m23wn12.ldg.se.ibm.com: (9.52.52.3): 56 data bytes
64 bytes from 9.52.52.3: icmp_seq=0 ttl=255 time=9 ms
64 bytes from 9.52.52.3: icmp_seq=1 ttl=255 time=6 ms
64 bytes from 9.52.52.3: icmp_seq=2 ttl=255 time=4 ms
64 bytes from 9.52.52.3: icmp_seq=3 ttl=255 time=4 ms
64 bytes from 9.52.52.3: icmp_seq=4 ttl=255 time=4 ms
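When scripting these checks, the packet-loss figure in the ping summary is the part worth testing. A sketch that extracts it from saved output follows; the sample line mirrors the statistics format shown in this chapter, and in practice the input would come from `ping -c 5 <host> > ping.out 2>&1`.

```shell
# Sketch: pull the packet-loss percentage out of a saved ping summary.
cat > ping.out <<'EOF'
----m23wn12.ldg.se.ibm.com PING Statistics----
5 packets transmitted, 5 packets received, 0% packet loss
EOF
loss=$(awk '/packet loss/ { for (i = 1; i <= NF; i++) if ($i ~ /%$/) print $i }' ping.out)
echo "packet loss: $loss"
```

A value other than 0% indicates a network problem that should be resolved before installing the Tracker Agent.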


Verifying the Connection to the Controller Machine
The controller machine must be able to respond to the Tracker Agent machine across the TCP/IP network. To test the connectivity, use ping from the Tracker Agent machine. For example, ping -c 5 9.52.52.13 should generate output similar to:

$ ping -c 5 ldg4
PING ldg4.ldg.se.ibm.com: (9.52.52.13): 56 data bytes
64 bytes from 9.52.52.13: icmp_seq=0 ttl=59 time=79 ms
64 bytes from 9.52.52.13: icmp_seq=1 ttl=59 time=68 ms
64 bytes from 9.52.52.13: icmp_seq=2 ttl=59 time=69 ms
64 bytes from 9.52.52.13: icmp_seq=3 ttl=59 time=67 ms
64 bytes from 9.52.52.13: icmp_seq=4 ttl=59 time=67 ms

----ldg4.ldg.se.ibm.com PING Statistics----
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 67/70/79 ms

Verifying the Connection from the Controller Machine
The Tracker Agent machine must be able to respond to the controller machine across the TCP/IP network. Check this as described in “Verifying the Connection to the Controller Machine,” but from the controller machine. If you have problems, there might be a problem with your network setup. Consult your system or network administrator.

Verifying the Network Routing
Verify the route to the controller. From the Tracker Agent machine, enter traceroute controllermachine. The generated output should be similar to:

traceroute to ldg4.ldg.se.ibm.com (9.52.52.13), 30 hops max, 40 byte packets
 1  ldgnames.ldg.se.ibm.com (9.52.50.254)  3 ms  28 ms  6 ms
 2  ldg4.ldg.se.ibm.com (9.52.52.13)  64 ms  78 ms  134 ms
$

If the host is not found, there may be a problem with your network setup. Consult your system or network administrator.

Continuing your installation ...
Continue reading from Chapter 4, “Installing and Customizing the Tracker Agent” on page 29.


Verifying the TCP/IP Environment (HP-UX Only)
Check that you have a /system/NETINET directory. Use these commands to check your connections:

$ hostname                                            (1)
hp5
$ /etc/ping hp5 8 1                                   (2)
PING hp5.ldg.se.ibm.com: 8 byte packets
8 bytes from 9.52.51.47: icmp_seq=0.

----hp5.ldg.se.ibm.com PING Statistics----
1 packets transmitted, 1 packets received, 0% packet loss
$ netstat -rn                                         (3)
Routing tables
Destination    Gateway        Flags   Refs   Use       Interface
127.0.0.1      127.0.0.1      UH      4      348265    lo0
default        9.52.51.3      UG      16     4185161   lan0
9.52.51        9.52.51.47     U       7      35227     lan0

The hostname command (1) gives you the name of your machine. Use this in the ping command (2) to give you the IP address of the machine (in this case, 9.52.51.47). The netstat command (3) should show at least one entry with a nonlocal interface (where the data in the Interface column does not begin with lo). The default gateway (9.52.51.3 in the above example) is used for destinations not listed in the Destination column.

TCP/IP must be configured correctly and be operational on the HP-UX machine. Use the ifconfig lan0 command to check the network status:

$ ifconfig lan0
lan0: flags=63<UP,BROADCAST,NOTRAILERS,RUNNING>
        inet 9.52.51.47 netmask ffffff00 broadcast 9.52.51.255

The response should say that the network is UP.
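The nonlocal-interface check on the netstat output can also be scripted. The sketch below works on a saved copy of a routing table in the format shown above; the addresses are the same illustrative ones, and on a real machine the input would be `netstat -rn > netstat.out`.

```shell
# Sketch: from saved netstat -rn output, print the default gateway and
# count routes that use a nonlocal interface (one not beginning "lo").
cat > netstat.out <<'EOF'
Destination    Gateway        Flags   Refs   Use       Interface
127.0.0.1      127.0.0.1      UH      4      348265    lo0
default        9.52.51.3      UG      16     4185161   lan0
9.52.51        9.52.51.47     U       7      35227     lan0
EOF
gateway=$(awk '$1 == "default" { print $2 }' netstat.out)
nonlocal=$(awk 'NR > 1 && $NF !~ /^lo/ { n++ } END { print n + 0 }' netstat.out)
echo "default gateway: $gateway (nonlocal routes: $nonlocal)"
```

A count of zero nonlocal routes means the machine has no usable network interface, which must be fixed before the tracker can reach the controller.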


Verifying the Connection to the Controller Machine
The controller machine must be able to respond to the Tracker Agent machine across the TCP/IP network. To test the connectivity from the Tracker Agent machine, use the ping command:

$ ping -v -o ldg2 64 2
PING ldg2.ldg.se.ibm.com: 64 byte packets
64 bytes from 9.52.50.201: icmp_seq=0. time=34. ms
64 bytes from 9.52.50.201: icmp_seq=1. time=32. ms

----ldg2.ldg.se.ibm.com PING Statistics----
2 packets transmitted, 2 packets received, 0% packet loss
round-trip (ms) min/avg/max = 32/33/34
2 packets sent via:
  9.52.52.3   - m23wn12.ldg.se.ibm.com
  9.52.50.254 - ldgnames.ldg.se.ibm.com
  9.52.60.3   - ldgkisrt.ldg.se.ibm.com
  9.52.51.3   - [ name lookup failed ]
  9.52.51.47  - hp5.ldg.se.ibm.com

Verifying the Connection from the Controller Machine
The Tracker Agent machine must be able to respond to the controller machine across the TCP/IP network. Test this as described in “Verifying the Connection to the Controller Machine,” but from the controller machine. If you have problems, there might be a problem with your network setup. Consult your system or network administrator. Continuing your installation ... Continue reading from Chapter 4, “Installing and Customizing the Tracker Agent” on page 29.


Verifying the TCP/IP Environment (Sun Solaris and SunOS only) Verifying the Gateway
TCP/IP must be configured correctly and operational on the Tracker Agent machine. To verify that TCP/IP is configured correctly:

netstat -rn

The output should be similar to:

/u/tracker netstat -rn
Routing Table:
Destination    Gateway        Flags   Ref   Use      Interface
127.0.0.1      127.0.0.1      UH      0     976265   lo0
9.52.51.0      9.52.51.49     U       3     92477    le0
224.0.0.0      9.52.51.49     U       3     0        le0
default        9.52.51.3      UG      0     7710

The default gateway (9.52.51.3 in the above example) is used for destinations not listed in the Destination column. Verify this with your network administrator.

Verifying that TCP/IP Is Operational
To verify that TCP/IP is operational:

ifconfig le0

Where le0 is the interface of the Tracker Agent machine. You can use the netstat command to find the interface of your machine. If TCP/IP is operational, your output will be similar to:

le0: flags=63<UP,BROADCAST,NOTRAILERS,RUNNING>
        inet 9.52.51.34 netmask ffffff00 broadcast 9.52.51.255

To verify the IP address:

ping 9.52.51.34

Where 9.52.51.34 is the IP address of the Tracker Agent machine. If TCP/IP is operational, your output will be similar to:

9.52.51.34 is alive

To verify that the local name server is operational:

ping sun4

Where sun4 is the local name of the Tracker Agent machine. You can use the hostname command to find the name of your machine. If TCP/IP is operational, your output will be similar to:

sun4.ldg.se.ibm.com is alive


Verifying the Connection to the Controller Machine
The controller machine must be able to respond to the Tracker Agent machine across the TCP/IP network. To test the connectivity from the Tracker Agent machine, use the ping controllermachine command. For example, ping ldg2 should generate output similar to: ldg2.ldg.se.ibm.com is alive

Verifying the Connection from the Controller Machine
The Tracker Agent machine must be able to respond to the controller machine across the TCP/IP network. Test this as described in “Verifying the Connection to the Controller Machine,” but from the controller machine.

Verifying the Network Routing
Verify the route to the controller machine. From the Tracker Agent machine, enter traceroute controllermachine. You must have root authority to use this command. The generated output should be similar to:

sun4# /usr/local/bin/traceroute ldg2
traceroute to ldg2.ldg.se.ibm.com (9.52.50.201), 30 hops max, 40 byte packets
 1  ldgenet.ldg.se.ibm.com (9.52.51.3)  3 ms  2 ms  2 ms
 2  ldgnames.ldg.se.ibm.com (9.52.50.254)  5 ms  5 ms  5 ms
 3  ldgmvs1.ldg.se.ibm.com (9.52.50.201)  76 ms  28 ms  28 ms
sun4#

If the controller is not found, there might be a problem with your network setup. Consult your system or network administrator.


Verifying the TCP/IP Environment (OS/390 only)
Contact your system administrator to verify that TCP/IP is installed, then check that it is running.

Verifying that TCP/IP Is Operational
Verify that the TCP/IP procedure is active. If the procedure has not started, contact your system administrator.

Verifying the Connection to the Controller Machine
To test the connectivity, use the TSO ping command on the OS/390 Open Edition Tracker Agent machine.

Verifying the Connection from the Controller Machine
To test the connectivity, use the TSO ping command on the OPC controller machine.


Chapter 4. Installing and Customizing the Tracker Agent
Your next task is to download the binary install image to the Tracker Agent machine, if the machine cannot already access the software. Otherwise, continue from “Creating Links between the Directories” on page 48.
Table 9. The Tracker Agent Installation Process
Stage   Description                                                     For more information ...
1       Verify the TCP/IP connections.                                  See page 19.
2       Check host and service names.                                   See page 17.
3       Download the software if the Tracker Agent machine cannot
        access the software.                                           See page 29.
4       Create links between directories and files.                     See page 48.
5       Customize the configuration parameter file.                     See page 48.
6       Create the log and temporary directories.                       See page 55.
7       Customize file permissions, if required.                        See page 56.

Unless otherwise specified, all steps must be performed as the superuser (root). If you do not have the root password, contact your systems support area.

Download the Tracker Agent Files from the Controller System
Skip this step if the Tracker Agent machine shares a file system with the controller machine. The Tracker Agent files must be transferred from the controller host. Any file transfer program can be used. It is recommended that you use the TCP/IP FTP program because this is a good way to test that the TCP/IP connection works properly (not applicable for OS/390). Note: For details of the procedure for applying Tracker PTFs, see Appendix N, “Applying Tracker Maintenance on Non-AIX Machines” on page 127.

AIX Only
You can send the Tracker Agent files to the Tracker Agent machine, or you can receive them from the controller machine.


Installation

To send files to the Tracker Agent machine, enter these commands on the controller machine (from the TSO command line if using MVS):

ftp aix1
user root
passwd xxxxx
binary
cd /usr/sys/inst.images
put 'OPCESA.INST.SEQQEENU(EQQTXAIX)' tracker.image.aix
quit

To receive files from the controller machine, enter these commands on the Tracker Agent machine:

cd /usr/sys/inst.images
ftp control
user opc
passwd xxxxx
binary
get 'OPCESA.INST.SEQQEENU(EQQTXAIX)' tracker.image.aix
quit

In these examples, the AIX machine is known as aix1, and the controller machine as control. You can receive the image to any directory. /usr/sys/inst.images is the recommended and default directory. tracker.image is an installp image.

Using SMIT to Install the Required Features
This section describes how to install the Tracker Agent using the SMIT install facility. Consult your administrator for assistance if you have not previously used this facility. There is more than one way to perform this task. For example, you can install all the features from the installation media, or you can copy the software to a hard disk for future installation. When installing the different features of the Tracker Agent, consider these points:

The programs need to be installed on only one machine if:
– The file system is network mounted with NFS.
– The installation directory is exported.
– The remote clients NFS-mount the install directory.

These network setups must be performed on every machine that uses the Tracker Agent.


Tivoli OPC Tracker Agents for AIX, UNIX, VMS, OS/390


Updating the .toc File for a First Install: If this is the first time the Tracker Agent has been installed on the machine, you must update the SMIT .toc file with the tracker information. To update your .toc file, enter:

inutoc /usr/sys/inst.images

If the Tracker Agent image is stored in another directory, use that directory instead of /usr/sys/inst.images.
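In a wrapper script you could make the .toc update conditional, running inutoc only when no table of contents exists yet. A hedged sketch (the function name is ours; inutoc is the AIX command named above):

```shell
# Hedged sketch: run inutoc only if the image directory has no .toc yet.
# The default directory is the one recommended in the text.
ensure_toc() {
  dir=${1:-/usr/sys/inst.images}
  if [ ! -f "$dir/.toc" ]; then
    inutoc "$dir"
  fi
}
```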

Reinstalling the Tracker Agent: If the Tracker Agent has previously been installed, set these values on the Install Software Products at Latest Available Level SMIT panel:

   Install Prerequisite Software: no
   Overwrite existing version: yes

Depending on the level of SMIT, you might have to remove all Tracker Agent files from the system when re-installing. To remove the old files:

cd /usr/lpp
rm -rf tracker

Use SMIT to Install Images
Note: The steps and panel names may vary slightly depending on the version of SMIT that you have. The sequence below is for a Version 3 system.

To use SMIT to install the required features, either from the installation media or the hard disk:
1. Start SMIT.
2. Select Software Installation & Maintenance.
3. Select Install / Update Software.
4. Select Install Software Products at Latest Available Level.
5. Enter the device or directory name where the installation media resides. Enter the full path to the directory containing the file that was downloaded from the controller machine, for example, /usr/sys/inst.images. If the installation is being performed from tape, enter the device name, for example, /dev/rmt0.

Chapter 4. Installing and Customizing the Tracker Agent


6. Position the pointer on the line SOFTWARE to install, and press F4.

                Install Software Products at Latest Available Level

 Type or select values in entry fields.
 Press Enter AFTER making all desired changes.

                                                        [Entry Fields]
   INPUT device / directory for software             /usr/sys/inst.images
   SOFTWARE to install                               []                     +
   Automatically install PREREQUISITE software?      yes                    +
   COMMIT software?                                  yes                    +
   SAVE replaced files?                              no                     +
   VERIFY Software?                                  no                     +
   EXTEND file systems if space needed?              yes                    +
   REMOVE input file after installation?             no                     +
   OVERWRITE existing version?                       no                     +
   ALTERNATE save directory                          []

 F1=Help     F2=Refresh    F3=Cancel     F4=List
 F5=Reset    F6=Command    F7=Edit       F8=Image
 F9=Shell    F10=Exit      Enter=Do

7. Select the required features from the list, position the pointer beside the package, and press F7.

                  Install Software at Latest Available Level

 Type or select values in entry fields.
 Press Enter AFTER making all desired changes.

                                                        [Entry Fields]
   INPUT device / directory for software             /ldg/proj/opc/trackit/>
   SOFTWARE to install                               [all]                  +
   Automatically install PREREQUISITE software?      yes                    +
   COMMIT software?                                  yes                    +
   SAVE replaced files?                              no                     +

   +------------------------------------------------------------------+
   |                       SOFTWARE to install                        |
   |                                                                  |
   | Move cursor to desired item and press F7.                        |
   |   ONE OR MORE items can be selected.                             |
   | Press Enter AFTER making all selections.                         |
   |                                                                  |
   | > 2.2.0.0  tracker                                           ALL |
   |   2.2.0.0  tracker.obj                                           |
   |                                                                  |
   | F1=Help    F2=Refresh    F3=Cancel    F7=Select                  |
   | F8=Image   F10=Exit      Enter=Do                                |
   +------------------------------------------------------------------+

8. Press Enter or select OK.


9. Select Do to start the installation. When the Tracker Agent install is complete you should see a panel similar to:

                               COMMAND STATUS

 Command: OK            stdout: yes            stderr: no

 Before command completion, additional instructions may appear below.

 [TOP]
 installp -qacFNXd/ldg/proj/opc/trackit/images \
     -f {File Containing Software} 2>&1

 Contents of {File Containing Software}:
     tracker 2.2.0.0.all

 installp: Performing requisite checking.
 (This may take several minutes.)

 installp: The following software products will be applied:
     tracker.obj at level 2.2.0.0

 installp: Requisite checking complete.
 [MORE...56]

 F1=Help      F2=Refresh    F3=Cancel    F6=Command
 F8=Image     F9=Shell      F10=Exit

Continuing your installation ... Continue reading from “Creating Links between the Directories” on page 48.


HP-UX Only
To send files to the Tracker Agent machine, enter these commands on the controller machine (from the TSO command line if using MVS):

ftp hp5
user root
passwd xxxxx
binary
cd /usr/lpp
put 'OPCESA.INST.SEQQEENU(EQQTXHP1)' tracker.image.hp1.Z
quit

Or, to receive files from the controller machine, enter this series of commands on the HP-UX machine:

cd /usr/lpp
ftp control
user opc
passwd xxxxx
binary
get 'OPCESA.INST.SEQQEENU(EQQTXHP1)' tracker.image.hp1.Z
quit

In these examples, the HP-UX machine is known as hp5, and the controller machine as control. You can receive the image to any directory; /usr/lpp is the recommended and default directory. The file is packaged in compressed format. Uncompress it before you continue with the installation process. To uncompress the file, enter:

uncompress tracker.image.hp1.Z

The file name is changed to tracker.image.hp1, which now contains an uncompressed update image.

Using Swinstall to Install the Required Features
This section describes how to install the Tracker Agent on HP-UX V10 or HP-UX V11, using the swinstall tool. Consult your administrator for assistance if you have not previously used this tool. When you have loaded and uncompressed the image file, use the swinstall tool to process it. Note that the OPC member to be loaded, for the Tracker Agent for HP-UX V10 or HP-UX V11, is EQQTXHP1.

To run the swinstall tool:
1. If the system is not in single-user mode, enter:
   $ /usr/sbin/shutdown
2. If the workstation is not already in graphical mode, restart VUE on a workstation console, to run swinstall in graphical mode:
   $ /usr/vuew/bin/vuerc
3. If the swagentd daemon is not already running, start it:
   $ /usr/sbin/swagentd


4. Run swinstall:
   $ /usr/sbin/swinstall
5. Specify, in the path field, the source of the software to be loaded:
   $ /usr/lpp/tracker/tracker.image.hp1
   When the tracker product name is shown in the window, you can double-click on it to see its content.
6. On the Actions menu, choose Mark for Install.
7. Choose Install (analysis).
8. When installation is complete, check that the correct tracker directories have been created with the correct owning user and group. To do this, enter the command:
   ls -las
9. Make sure that the owning user ( 1 ) is tracker and that the group ( 2 ) is opc. This association of owning user entries and group entries should happen automatically.

Now from a root userid, set EQQHOME to /usr/lpp/tracker and run the /usr/lpp/tracker/bin/eqqperm script to set file permissions. After this, check the /usr/lpp/tracker directory again to ensure that the userid and groupid are correct. To do this:

ls -la

This is an extract from the output:
$ cd /usr/lpp/tracker
$ ls -lg
               ( 1 )    ( 2 )
-rw-rw-r--   1 tracker  opc    446 Dec  8 10:46 EQQPARM
drwxrwxr-x   2 tracker  opc    512 Dec  1 06:38 bin/
drwxrwxr-x   4 tracker  opc    512 Dec  1 03:44 doc/
drwxrwxrwx   2 tracker  opc    512 Dec  8 12:01 log/
drwxrwxrwx   2 tracker  opc    512 Dec  8 12:01 etc/
drwxrwxrwx   2 tracker  opc    512 Dec  8 12:01 nls/
drwxrwxrwx   2 tracker  opc    512 Dec  8 12:01 samples/
drwxrwxrwx   2 tracker  opc    512 Dec  8 12:01 tmp/

Continuing your installation ... Continue reading from “Creating Links between the Directories” on page 48.


Sun Solaris Only
To send files to the Tracker Agent machine, enter these commands on the controller machine (from the TSO command line if using MVS):

ftp sun2
user yourid
passwd xxxxx
binary
cd /usr/lpp
put 'OPCESA.INST.SEQQEENU(EQQTXSOL)' tracker.image.sol.Z
quit

Receiving the Files from the Controller Machine
To receive files from the controller machine, enter this series of commands on the Tracker Agent machine (where control is the name of the controller machine):

cd /usr/lpp
ftp control
user yourid
passwd xxxxx
binary
get 'OPCESA.INST.SEQQEENU(EQQTXSOL)' tracker.image.sol.Z
quit

The file is packaged in compressed format. Uncompress it before you continue with the installation process. To uncompress the file, change to the /usr/lpp directory and enter:

uncompress tracker.image.sol.Z

When the file is uncompressed, a file called tracker.image.sol is placed in the /usr/lpp directory.

Installing the Required Features
This section describes how to install the Tracker Agent. There is more than one way to perform this task. For example, you can install all the features from the installation media, or you can copy the software to a hard disk for future installation. When installing the features of the Tracker Agent, consider these points:

The programs need to be installed on only one machine if:
– The file system is network mounted with NFS.
– The installation directory is exported.
– The remote clients NFS-mount the install directory.

These network setups must be performed on every machine that uses the Tracker Agent.


Installing the Tracker for the First Time: If this is the first time you have installed the Tracker Agent, use this command to install the files:

pkgadd -d /usr/lpp/tracker.image.sol

After the files have been installed, check the /usr/lpp/tracker directory to see that the files are there. To do this:

pkginfo -l tracker

An extract of the output is shown in Figure 7.

/u/tracker$ pkginfo -l tracker
   PKGINST:  tracker
      NAME:  OPC Tracker Agent for Solaris
  CATEGORY:  application
      ARCH:  sparc
   VERSION:  2.3
   BASEDIR:  /usr/lpp
    VENDOR:  IBM Rome Tivoli Lab
      DESC:  OPC Tracker Agent for Solaris
  INSTDATE:  Jul 02 1998 08:08
    STATUS:  completely installed
     FILES:  109 installed pathnames
              15 directories
              25 executables
               4 setuid/setgid executables
            5052 blocks used (approx)

Figure 7. Checking the Tracker Files on Solaris

Reinstalling the Tracker Agent: If the Tracker Agent has previously been installed, you should remove Tracker Agent files from the system before re-installing.

To check if the Tracker Agent has been installed:

pkginfo -l tracker

To remove the Tracker Agent package:

pkgrm tracker

Refer to Solaris documentation for more information about installing packages.

Continuing your installation ... Continue reading from “Creating Links between the Directories” on page 48.
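The check-remove-install sequence above can be collected into one sketch. This is an illustration under assumptions (the wrapper function name and the image path are ours); pkginfo, pkgrm, and pkgadd are the Solaris packaging tools named in the text.

```shell
# Hedged sketch: install the Solaris tracker package, first removing
# any previously installed copy, as the text describes.
IMAGE=/usr/lpp/tracker.image.sol

install_or_reinstall_tracker() {
  if pkginfo -q tracker 2>/dev/null; then
    pkgrm -n tracker            # remove the old package non-interactively
  fi
  pkgadd -d "$IMAGE" tracker
}
```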


SunOS Only
To send files to the Tracker Agent machine, enter these commands on the controller machine (from the TSO command line if using MVS):

ftp sun4
user yourid
passwd xxxxx
binary
cd /usr/lpp
put 'OPCESA.INST.SEQQEENU(EQQTXSUN)' tracker.tar.sun.Z
quit

Receiving the Files from MVS
To receive files from the MVS machine, enter this series of commands on the Tracker Agent machine (where control is the name of the controller machine):

cd /usr/lpp
ftp control
user yourid
passwd xxxxx
binary
get 'OPCESA.INST.SEQQEENU(EQQTXSUN)' tracker.tar.sun.Z
quit

The file is packaged in compressed format. Uncompress it before you continue with the installation process. To uncompress the file, change to the /usr/lpp directory and enter:

uncompress tracker.tar.sun.Z

When the file is uncompressed, a file called tracker.tar is placed in the /usr/lpp directory. To verify the contents of the file:

tar tvf /usr/lpp/tracker.tar.sun

Installing the Required Features
This section describes how to install the Tracker Agent. There is more than one way to perform this task. For example, you can install all the features from the installation media, or you can copy the software to a hard disk for future installation. When installing the features of the Tracker Agent, consider these points:

The programs need to be installed on only one machine if:
– The file system is network mounted with NFS.
– The installation directory is exported.
– The remote clients NFS-mount the install directory.

These network setups must be performed on every machine that uses the Tracker Agent.

Installing the Tracker for the First Time: If this is the first time you have installed the Tracker Agent, use this command to install the files:

tar xvof /usr/lpp/tracker.tar.sun

After the files have been extracted from the tar file, check the /usr/lpp/tracker directory to see that the files are there. To do this:

ls -lg


Now from a root userid, set EQQHOME to /usr/lpp/tracker and run the /usr/lpp/tracker/bin/eqqperm script to set file permissions. After this, check the /usr/lpp/tracker directory again to ensure that the userid and groupid are correct. To do this:

ls -la

An extract of the output is shown in Figure 8.

$ cd /usr/lpp/tracker
$ ls -lg
               ( 1 )    ( 2 )
-rw-rw-r--   1 tracker  opc    446 Dec  8 10:46 EQQPARM
drwxr-xr-x   2 tracker  opc    512 Dec  1 06:38 bin/
-rw-r--r--   1 tracker  opc    512 Dec  1 06:38 copyright/
drwxr-xr-x   4 tracker  opc    512 Dec  1 03:44 doc/
drwxr-xr-x   2 tracker  opc    512 Dec  8 12:01 log/
drwxr-xr-x   2 tracker  opc    512 Dec  8 12:01 etc/
drwxr-xr-x   2 tracker  opc    512 Dec  8 12:01 nls/
drwxr-xr-x   2 tracker  opc    512 Dec  8 12:01 samples/
drwxr-xr-x   2 tracker  opc    512 Dec  8 12:01 tmp/

Figure 8. Checking the Tracker Files on SunOS

Check that the owning user ( 1 ) is tracker, and that the group ( 2 ) is opc.

Reinstalling the Tracker Agent: If the Tracker Agent has previously been installed, you might have to remove all Tracker Agent files from the system when re-installing. To remove the old files:

cd /usr/lpp
rm -rf tracker



SGI IRIX Only
To send files to the Tracker Agent machine, enter these commands on the controller machine (from the TSO command line if using MVS):

ftp mips4
user yourid
passwd xxxxx
binary
cd /usr/lpp
put 'OPCESA.INST.SEQQEENU(EQQTXPYR)' tracker.tar.mips.Z
quit

Receiving the Files from MVS
To receive files from the MVS machine, enter this series of commands on the Tracker Agent machine (where control is the name of the controller machine):

cd /usr/lpp
ftp control
user yourid
passwd xxxxx
binary
get 'OPCESA.INST.SEQQEENU(EQQTXPYR)' tracker.tar.mips.Z
quit

The file is packaged in compressed format. Uncompress it before you continue with the installation process. To uncompress the file, change to the /usr/lpp directory and enter:

uncompress tracker.tar.mips.Z

When the file is uncompressed, a file called tracker.tar is placed in the /usr/lpp directory. To verify the contents of the file:

tar tvf /usr/lpp/tracker.tar.mips

Installing the Required Features
This section describes how to install the Tracker Agent. There is more than one way to perform this task. For example, you can install all the features from the installation media, or you can copy the software to a hard disk for future installation. When installing the features of the Tracker Agent, consider these points:

The programs need to be installed on only one machine if:
– The file system is network mounted with NFS.
– The installation directory is exported.
– The remote clients NFS-mount the install directory.

These network setups must be performed on every machine that uses the Tracker Agent.

Installing the Tracker for the First Time: If this is the first time you have installed the Tracker Agent, use this command to install the files:

tar xvof /usr/lpp/tracker.tar.mips

Now from a root userid, set EQQHOME to /usr/lpp/tracker and run the /usr/lpp/tracker/bin/eqqperm script to set file permissions.


After this, check the /usr/lpp/tracker directory again to ensure that the userid and groupid are correct. To do this:

ls -la

An extract of the output is shown in Figure 9.

$ cd /usr/lpp/tracker
$ ls -lg
               ( 1 )    ( 2 )
-rw-rw-r--   1 tracker  opc    446 Dec  8 10:46 EQQPARM
drwxrwxr-x   2 tracker  opc    512 Dec  1 06:38 bin/
-rw-r--r--   1 tracker  opc    512 Dec  1 06:38 copyright/
drwxrwxr-x   4 tracker  opc    512 Dec  1 03:44 doc/
drwxrwxrwx   2 tracker  opc    512 Dec  8 12:01 log/
drwxrwxrwx   2 tracker  opc    512 Dec  8 12:01 etc/
drwxrwxrwx   2 tracker  opc    512 Dec  8 12:01 nls/
drwxrwxrwx   2 tracker  opc    512 Dec  8 12:01 samples/
drwxrwxrwx   2 tracker  opc    512 Dec  8 12:01 tmp/

Figure 9. Checking the Tracker Files on SGI IRIX

Check that the owning user ( 1 ) is tracker, and that the group ( 2 ) is opc.

Reinstalling the Tracker Agent: If the Tracker Agent has previously been installed, you might have to remove all Tracker Agent files from the system when re-installing. To remove the old files:

cd /usr/lpp
rm -rf tracker

Note: In some environments it may be necessary to set the variable SYMBTEST before starting the tracker. For example, in a Silicon Graphics IRIX system:

export SYMBTEST=-h


Digital UNIX Only
To send files to the Tracker Agent machine, enter these commands on the controller machine (from the TSO command line if using MVS):

ftp decunix4
user yourid
passwd xxxxx
binary
cd /usr/lpp
put 'OPCESA.INST.SEQQEENU(EQQTXDUX)' tracker.tar.decunix.Z
quit

Receiving the Files from MVS
To receive files from the MVS machine, enter this series of commands on the Tracker Agent machine (where control is the name of the controller machine):

cd /usr/lpp
ftp control
user yourid
passwd xxxxx
binary
get 'OPCESA.INST.SEQQEENU(EQQTXDUX)' tracker.tar.decunix.Z
quit

The file is packaged in compressed format. Uncompress it before you continue with the installation process. To uncompress the file, change to the /usr/lpp directory and enter:

uncompress tracker.tar.decunix.Z

When the file is uncompressed, a file called tracker.tar is placed in the /usr/lpp directory. To verify the contents of the file:

tar tvf /usr/lpp/tracker.tar.decunix

Installing the Required Features
This section describes how to install the Tracker Agent. There is more than one way to perform this task. For example, you can install all the features from the installation media, or you can copy the software to a hard disk for future installation. When installing the features of the Tracker Agent, consider these points:

The programs need to be installed on only one machine if:
– The file system is network mounted with NFS.
– The installation directory is exported.
– The remote clients NFS-mount the install directory.

These network setups must be performed on every machine that uses the Tracker Agent.

Installing the Tracker for the First Time: If this is the first time you have installed the Tracker Agent, use this command to install the files:

tar xvof /usr/lpp/tracker.tar.decunix

After the files have been extracted from the tar file, check the /usr/lpp/tracker directory to see that the files are there. To do this:

ls -la


Now from a root userid, set EQQHOME to /usr/lpp/tracker and run the /usr/lpp/tracker/bin/eqqperm script to set file permissions. After this, check the /usr/lpp/tracker directory again to ensure that the userid and groupid are correct. To do this:

ls -la

An extract of the output is shown in Figure 10.

$ cd /usr/lpp/tracker
$ ls -lg
               ( 1 )    ( 2 )
-rw-rw-r--   1 tracker  opc    203 May 15  1998 EQQPARM
drwxr-xr-x   2 tracker  opc    512 Sep 13 11:01 bin
drwxr-xr-x   2 tracker  opc    512 Feb 24  1999 catalog
-rw-rw-r--   1 tracker  opc    257 May 15  1998 copyright
drwxr-xrwx   2 tracker  opc    512 May 15  1998 doc
drwxr-xrwx   2 tracker  opc    512 Sep 22 14:20 etc
drwxr-xrwx   2 tracker  opc    512 Sep 22 14:17 log
drwxr-xrwx   2 tracker  opc    512 Feb 24  1999 methods
drwxr-xrwx   4 tracker  opc    512 May 15  1998 nls
drwxr-xrwx   2 tracker  opc    512 Mar 15  1999 samples
drwxr-xrwx   3 tracker  opc    512 May 15  1998 tmp

Figure 10. Checking the Tracker Files on Digital UNIX

Check that the owning user ( 1 ) is tracker, and that the group ( 2 ) is opc.

Reinstalling the Tracker Agent: If the Tracker Agent has previously been installed, you might have to remove all Tracker Agent files from the system when re-installing. To remove the old files:

cd /usr/lpp
rm -rf tracker


Digital OpenVMS VAX/Alpha Only
To install the Tracker Agent, create a temporary directory in which to put the zip file, for example:

create/dir dka300:[.u.kit_install]
set def dka300:[.u.kit_install]

To send files to the Tracker Agent machine, enter these commands on the controller machine (from the TSO command line if using MVS):

ftp dec4
user yourid
passwd xxxxx
binary
cd dka300:[.u.kit_install]
put 'OPCESA.INST.SEQQEENU(EQQTXDEC)' TRACKER_IMAGE_VMS.ZIP
quit

Receiving the Files from MVS
To receive files from the MVS machine, enter this series of commands on the Tracker Agent machine (where control is the name of the controller machine):

set def dka300:[.u.kit_install]
ftp control
user yourid
passwd xxxxx
binary
get 'OPCESA.INST.SEQQEENU(EQQTXDEC)' TRACKER_IMAGE_VMS.ZIP
quit

The file is packaged in compressed format. Uncompress it, using unzip, before you continue with the installation process. From the directory where you received the file, enter:

unzip "-V" TRACKER_IMAGE_VMS.ZIP

When the file is uncompressed, a file called TRACKER_IMAGE_VMS is placed in the /pub/archiving directory.

Installing the Required Features
This section describes how to install the Tracker Agent. There is more than one way to perform this task. For example, you can install all the features from the installation media, or you can copy the software to a hard disk for future installation. When installing the features of the Tracker Agent on DEC platforms (VAX and Alpha), consider this point: the package contains the following three backup files:

– TRK020.A
– TRK020.B
– TRK020.C

The file TRK020.A contains a script, which you can use to restore from TRK020.B and TRK020.C. TRK020.B contains the DEC VAX files; TRK020.C contains the DEC Alpha files.

To proceed with the installation, you need a SYSTEM account.


Installing the Tracker for the First Time: If this is the first time you have installed the Tracker Agent, after the login, copy the three TRK020.* files into a temporary directory using the ftp command.

1. Choose a unique UIC for the TRACKER user to be created.
2. Verify the user UICs in the system; enter the following OpenVMS commands on the DCL command line:
   $ set def sys$system
   $ run authorize
   UAF> show /identifier/user=[ , ]
   (check the UICs and choose a new one to be used in the installation procedure)
   UAF> exit
3. Now enter the following DCL commands:
   $ set def sys$update     (to set the current directory)
   $ @vmsinstal             (to start the installation procedure)
4. Now follow these steps:
   a. At the prompt:
      * Where will the distribution volumes be mounted:
      specify the file path (for example, dka300:[.u.kit_install]).
   b. At the prompt:
      * Enter the product to be processed for first distribution volume set.
      specify the installing product: TRK020
   c. At the prompt:
      * Enter installation options you wish to use (none)
      press Enter. The following message is displayed:
      The following products will be processed: - TRK V2.0
      Beginning installation of TRK V2.0 at 18:40
5. Create a UIC for the new TRACKER user:
   a. At the prompt:
      * Enter unique UIC for the TRACKER USER [[450,350]]
      do one of the following:
      – Press Enter to accept the default UIC.
      – Specify a UIC for the new TRACKER user you are creating, in the format [[n,m]], and press Enter.
   b. At the prompt:
      * Enter device for TRACKER's user directory [OPCDEC$DKA300:]:
      specify the device on which to install the package. The default is OPCDEC$DKA300:. Either press Enter to accept it, or specify another device and press Enter.


   c. At the prompt:
      * Enter root directory for the TRACKER's user directory [[000000]]:
      The default is 000000. Either press Enter to accept it, or specify another root directory and press Enter.

The system then generates a message to inform the operator that the TRACKER user will be created with the password TRACKER, which you must modify at the first login.

6. As the OpenVMS Tracker Agent uses system resources, you might need to increase the system parameters PQL_* and GBLSECTIONS, depending on your use of the Tracker Agent. Normally the system default is valid for ALPHA systems. On VAX systems, you need at least the values shown for these parameters:

   Parameter Name:
   PQL_DASTLM      PQL_MASTLM      PQL_DBIOLM      PQL_MBIOLM
   PQL_DBYTLM      PQL_MBYTLM      PQL_DCPULM      PQL_MCPULM
   PQL_DDIOLM      PQL_MDIOLM      PQL_DFILLM      PQL_MFILLM
   PQL_DPGFLQUOTA  PQL_MPGFLQUOTA  PQL_DPRCLM      PQL_MPRCLM
   PQL_DTQELM      PQL_MTQELM      PQL_DWSDEFAULT  PQL_MWSDEFAULT
   PQL_DWSQUOTA    PQL_MWSQUOTA    PQL_DWSEXTENT   PQL_MWSEXTENT
   PQL_DENQLM      PQL_MENQLM      PQL_DJTQUOTA    PQL_MJTQUOTA

   Value:
   24 4 18 1 8192 65536 18 1 32 1 2 5 2 5 16/32 16 327 512 654 1 24 2 5 2 5 3 2 1 24

Operating the Tracker Agent
To operate the Tracker Agent:
1. Log in as user TRACKER. (The user TRACKER is created with the password "tracker".) The DCL prompt is displayed:
   $
   The installation program sets DCL as the shell for user TRACKER. The current directory is the directory specified during the installation process.
2. Customize the file setpath.com in the home directory, to set up the following environment variables:


   EQQDISK        The name of the disk that contains the Tracker Agent home directory
   EQQINSTANCE    The name of the Tracker Agent configuration file
   EQQHOME        The home directory, UNIX style
   EQQBIN

3. Create and customize the configuration file referred to by EQQINSTANCE in $EQQHOME/etc.
4. Execute the setpath file.
5. Run tracker commands, such as eqqverify, eqqstart, and eqqmon.


OS/390 Open Edition Only
To send Tracker Agent files from the Tivoli OPC controller to the OS/390 Open Edition Tracker Agent, enter the following command from the TSO command line:

OPUT 'OPCESA.INST.SEQQEENU(EQQTXOED)' '/usr/lpp/tracker.oe.tar.Z' BINARY

This command copies the member EQQTXOED, which contains the archive file, to the specified file in the file system. From the TSO command line, proceed to the shell using the OMVS command and move to the /usr/lpp directory. The archive file tracker.oe.tar.Z is packaged in compressed format. Uncompress it by entering the shell command:

/usr/lpp:>uncompress tracker.oe.tar.Z

The file name is changed to tracker.oe.tar. If this is the first installation, create the tracker directory under the /usr/lpp directory with superuser authority. If, instead, you are reinstalling the tracker agent, remove the old files from the system with the command:

/usr/lpp:>rm -rf tracker

Then extract the tracker agent files from the package into the /usr/lpp directory:

/usr/lpp:>tar -xvof tracker.oe.tar

Now from a root userid, set EQQHOME to /usr/lpp/tracker and run the /usr/lpp/tracker/bin/eqqperm script to set file permissions.

Attention: The tracker for OS/390 Open Edition cannot reach the same level of performance as the tracker for AIX.

Creating Links between the Directories
This step must be performed for all platforms except AIX and Digital OpenVMS. If you have standard directory tree naming conventions, you can create the required links using the sample script eqqinit. Ensure that EQQHOME is set to your home directory; if not, see “Home Directory” on page 49 for details. Run the script from the tracker user ID, using the -tracker parameter to create the links:

/usr/lpp/tracker/bin/eqqinit -tracker

See “eqqinit” on page 97 for further details of this sample script.


Customizing the Configuration Parameter File
| | Use the configuration parameter file to specify the configuration parameters for the Tracker Agent. Use a sample file in the samples directory to create your own configuration file. Edit the configuration file using an editor such as vi, ISHELL, or OEDIT.


Home Directory
You normally start the Tracker Agent under the tracker user ID, with /u/tracker as the home directory. You are strongly recommended to set two environment variables that tell the Tracker Agent binaries where the files are:

– The EQQHOME variable
– The EQQINSTANCE variable

Setting the EQQHOME Variable
This variable is the name of the home directory. To check the home directory, enter the following commands as the user ID under which the Tracker Agent runs:

cd
pwd

To set the environment variable EQQHOME in the Korn shell (ksh):

export EQQHOME=/u/tracker

To set the environment variable EQQHOME in the C shell (csh):

setenv EQQHOME /u/tracker

To set the environment variable EQQHOME in the Bourne shell (sh):

EQQHOME=/u/tracker
export EQQHOME

These examples assume that the home directory is /u/tracker. $EQQHOME points to the base tree structure shown in Table 8 on page 15. In the remainder of this book, the home directory is referred to as $EQQHOME. If /u/tracker is the home directory, for example, $EQQHOME/etc refers to /u/tracker/etc.
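Once EQQHOME is set, a quick sanity check that it points at a populated tracker tree can save debugging later. The sketch below is illustrative (the function name and the subset of directories checked are our choices, based on the tree shown earlier):

```shell
# Hedged sketch: verify that $EQQHOME (or a given directory) contains
# the expected tracker subdirectories before starting the agent.
check_eqqhome() {
  home=${1:-$EQQHOME}
  rc=0
  for d in bin etc log tmp; do
    if [ ! -d "$home/$d" ]; then
      echo "warning: $home/$d is missing" >&2
      rc=1
    fi
  done
  return $rc
}
```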

Setting the EQQINSTANCE Variable
This variable is the name of the configuration parameter file. Set it in the same way as the EQQHOME variable:

EQQINSTANCE=myconfig.file
export EQQINSTANCE

You can also specify the name of the configuration file using the -f flag on the eqqstart script or when you start the Tracker Agent directly. The -f flag takes precedence over the EQQINSTANCE variable. See “Starting the Tracker Agent” on page 62 for more information on starting the Tracker Agent. Put the configuration parameter file in the $EQQHOME/etc directory. Copy the configuration file to each machine where a Tracker Agent is installed, ensuring that the values are consistent.
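The precedence rule (the -f flag wins over EQQINSTANCE, and the file lives in $EQQHOME/etc) can be pictured as a small resolver. This mimics what an eqqstart-style wrapper might do; it is our illustration, not the shipped script.

```shell
# Hedged sketch of the configuration-file precedence described above:
# an explicit -f value wins; otherwise fall back to $EQQHOME/etc plus
# the file name in EQQINSTANCE.
resolve_config() {
  if [ -n "$1" ]; then
    echo "$1"                          # -f value: takes precedence
  else
    echo "$EQQHOME/etc/$EQQINSTANCE"
  fi
}
```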


Updating the PATH Variable
Add $EQQHOME/bin to the PATH environment variable.

Sun Solaris only: Add /usr/ucb to the PATH environment variable.
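In a ksh/sh profile the update might look like the sketch below; the uname test for the Solaris-only addition is our assumption about how you would automate it.

```shell
# Hedged sketch: prepend the tracker bin directory to PATH; on Sun
# Solaris also add /usr/ucb, as required above.
EQQHOME=${EQQHOME:-/u/tracker}
PATH=$EQQHOME/bin:$PATH
case "$(uname -s 2>/dev/null)" in
  SunOS) PATH=$PATH:/usr/ucb ;;
esac
export PATH
```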

Updating the Configuration Parameter File
The general syntax rules for the statements are:
– All keywords must be lowercase.
– All user-specified values must be in the format shown.
– A # sign can be used for comments. Blank lines are treated as comments.
– Unknown or invalid keywords are ignored.
– Each keyword must begin on a separate line. If a keyword definition cannot fit on a single line, use a backslash (\) as a continuation character.
– If you code a keyword more than once, the last value is used.
– Keyword values can include:
  – Previously defined configuration or environment variables ($variable)
  – Home directories in the format ~user
  – The logged-on-user home directory (~ on its own)
  – Service names, for port numbers
  – Host names, for IP addresses
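A minimal file that follows these rules might look like the sketch below. All of the values (addresses, ports, shell path) are placeholders of our own, not recommended settings; only keywords from the syntax diagram are used.

```shell
# Hedged sketch: write an example configuration parameter file that
# follows the syntax rules above. Every value here is illustrative.
cat > /tmp/tracker.conf <<'EOF'
# example Tracker Agent configuration
controller_ipaddr=9.12.34.56,opchost
controller_portnr=424
local_ipaddr=$HOST
local_portnr=4222
eqqshell=/bin/ksh
EOF
```

In practice you would copy a sample from the samples directory into $EQQHOME/etc rather than creating the file from scratch.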


┌─,───────────────────────┐ ──controller_ipaddr──=─────controller_IP_Address──┴───────────────────────── ──┬──────────────────────────────────────────┬────────────────────────────── └─controller_portnr=controller_Port_Number─┘ ──┬─────────────────────────┬─────────────────────────────────────────────── └─local_ipaddr=IP_Address─┘ ──┬──────────────────────────┬────────────────────────────────────────────── └─local_portnr=Port_Number─┘ ──┬────────────────────────┬──────────────────────────────────────────────── └─eqqshell=default_shell─┘ ──┬───────────────────┬───────────────────────────────────────────────────── └─eqqfilespace=nnnn─┘ ──┬──────────────┬────────────────────────────────────────────────────────── └─eqqmsgq=nnnn─┘ ──┬────────────────┬──────────────────────────────────────────────────────── └─eqqshmkey=nnnn─┘ ──┬────────────────────┬──────────────────────────────────────────────────── └─event_logsize=nnnn─┘ ──┬─────────────────────────────┬─────────────────────────────────────────── └─ew_check_file=event_logfile─┘ ──┬───────────────────────────────┬───────────────────────────────────────── └─local_codepage=local_codepage─┘ ──┬─────────────┬─────────────────────────────────────────────────────────── └─ipc_base=ID─┘ ──┬────────────────────────────────┬──────────────────────────────────────── │ ┌─NO,DELETE────────┐ │ └─job_log=─┼─NO,KEEP──────────┼──┘ ├─IMMEDIATE,KEEP───┤ ├─IMMEDIATE,DELETE─┤ ├─DELAYED,KEEP─────┤ └─DELAYED,DELETE───┘

Figure 11 (Part 1 of 2). Keyword Syntax

Chapter 4. Installing and Customizing the Tracker Agent



[controller_type=opc]
[trace_level={0 | 1 | 2 | 3 | 4}]
[num_submittors=nn]
[subnn_workstation_id=wsID]
[subnn_check_file=checkfile]
[subnn_subtype={GS | LS}]
[xxxxx_retry=nnnn]

Figure 11 (Part 2 of 2). Keyword Syntax

Keywords
controller_ipaddr=controller_IP_Address
   Specifies the IP addresses of the systems where the controllers are running. There can be up to 10 addresses, separated by commas; each is either a host name or in the format nnn.nnn.nnn.nnn, where nnn is in the range 1-254. This keyword is required: there is no default value. At startup, the Tracker Agent tries the first address in the list. If it is unable to make a connection, it tries the next address in the list, and so on. Only one controller can be connected at any one time.

controller_portnr=controller_Port_Number
   Specifies the port number, or a services name, that the Tracker Agent TCP-Writer connects to. See "Verifying Host and Service Names" on page 17 for help on defining ports.

local_ipaddr=IP_Address
   Specifies the IP address of the machine where the Tracker Agent is running. It can be in the format nnn.nnn.nnn.nnn, where nnn is in the range 1-254, an environment variable such as $HOST, or a host name.

local_portnr=Port_Number
   Specifies the port number that the Tracker Agent TCP-Reader binds a socket to. See "Verifying Host and Service Names" on page 17 for help on defining ports.

eqqshell=default_shell
   Specifies the shell that noninterpreted scripts run under. The default is the Korn shell, if there is one, and otherwise the Bourne shell (/bin/sh).


eqqfilespace=nnnn | 1000
   Specifies the minimum number of blocks of temporary file space. The size of a block depends on the Tracker Agent machine. If less space is available, the Tracker Agent closes down in an orderly way with a message. The default is 1000 blocks.

eqqmsgq=nnnn | 16384
   The maximum size, in bytes, of a message transferred to the controller. The default is 16384. If the kernel does not allow the size that you specify, the maximum allowable size is used, and a message is issued.

eqqshmkey=nnnn | 58871
   The key to the shared memory that the Tracker Agent will use. The default is 58871.

event_logsize=nnnn | 1000
   Specifies the number of events to be logged in the Event Writer event logfile. The value defined is the number of events, not the event record size. The default is 1000, and this value is also the minimum.

ew_check_file=event_logfile
   The name of the file used for logging events. If the file does not exist, it will be created. The default file name is ewriter_check. You can guarantee a unique network-wide name for the log file by specifying a name in this format:
   $HOST.$local_portnr.ew.check

local_codepage=local_codepage | ISO8859-1
   Specifies the ASCII codepage used. The default is ISO8859-1.

ipc_base=ID | A
   Specifies the unique ID character to be used for creating unique inter-process communication (IPC) numbers. The default value is A.

job_log=Log_Option
   Specifies how joblogs will be processed by the Tracker Agent. These combinations are valid:
   NO,DELETE
   NO,KEEP
   IMMEDIATE,DELETE
   IMMEDIATE,KEEP
   DELAYED,DELETE
   DELAYED,KEEP
   The values have these meanings:
   NO         Joblogs will not be sent to the controller.
   IMMEDIATE  Joblogs are immediately sent to the controller when the job ends.
   DELAYED    Joblogs are returned to the controller only if a Tivoli OPC dialog user requests the joblog.
   KEEP       Joblogs are stored on disk for all jobs.
   DELETE     Joblogs are deleted when the job ends if NO or IMMEDIATE is also specified.
When DELAYED is specified, the joblog is kept on disk until a retrieval request is received from the controller.


When saving joblogs on disk, consider the disk space available on the system. Jobs end in error if there is no space for the log. The default value is IMMEDIATE,DELETE; that is, joblogs are sent to the controller immediately and then deleted. Joblogs are never sent to the controller if the submittor uses LoadLeveler.

controller_type=opc
   Specifies the type of controller for this Tracker Agent:
   opc   The controller is OPC/ESA.
   There is no default value; opc must be specified.

trace_level=0 | 1 | 2 | 3 | 4
   Specifies the trace level for the component. The default is 0 (no trace). You do not need a trace for normal running.

num_submittors=nn | 1
   The number of submittors to initialize, from 1 to 56. By default, one submittor is started. Each submittor has a default subtype of GS, a default workstation ID of AXnn, and a default check file of AXnn.check, but you can override these defaults with the following parameters.

subnn_workstation_id=wsID | AXnn
   Connects submittor nn with a workstation. The value is a character string; only the first 4 characters are used. The default value is AXnn.

subnn_check_file=checkfile | wsID.check
   Connects submittor nn with the file used for job checkpointing and event logging. If the file does not exist, it will be created. The default file name is wsID.check. You can guarantee a unique network-wide name for each file by specifying a name in this format:
   wsID.$HOST.$local_portnr.check

subnn_subtype=GS | LS
   Controls whether a submittor uses LoadLeveler (LS) or not (GS). GS, which stands for generic submittor, is the default. A generic submittor is simply one that does not use LoadLeveler.

xxxxx_retry=nnnn | 60
   These are retry intervals, in seconds, for various Tracker Agent components. xxxxx can be:
   eqqtr   The interval that the Tracker Agent waits before attempting to communicate with the controller again after a TCP read attempt fails.
   eqqtw   The interval that the Tracker Agent waits before attempting to communicate with the controller again after a TCP write attempt fails.
   eqqdr   The interval that the Tracker Agent waits before attempting to connect and revalidate the connection to the controlling system.
   subnn   The interval that submittor nn waits before retrying an operation (for example, because the number of processes had reached the limit).


An example of a configuration file is shown in Figure 12.
# Configuration parameter file ------------------ Tracker for Tivoli OPC
controller_type = opc
trace_level = 0                  # global trace level 0-4, default is 0

# Configuration parameter file ------------------ IP addresses
controller_ipaddr = 9.52.52.3    # REQUIRED
# controller_ipaddr = mvs1       # if using an entry in /etc/hosts
# could be up to 10 IP addresses separated by ,
local_ipaddr = 9.52.51.49        # local SunOS ip address

# Configuration parameter file ------------------ Tracker ports
controller_portnr = 2005         # must match the Tivoli OPC TCPIPPORT parameter
# controller_portnr = agent1     # if using a service name from /etc/services
local_portnr = 2051
# local_portnr = sun1            # if using a services name

# Configuration parameter file ------------------ General parameters
eqqfilespace = 5000              # tracker checks that this space is available
local_codepage = ISO8859-1       # default is ISO8859-1
job_log = immediate,keep         # default is immediate,delete
event_logsize = 1000             # default is 1000
ew_check_file = $HOST.$local_portnr.ew.check       # to be unique
ipc_base = A                     # default is A

# Configuration parameter file ------------------ Tracker submittors
num_submittors = 1               # default is 1
sub01_subtype = gs               # default is gs
sub01_check_file = SN01.$HOST.$local_portnr.check  # to be unique
sub01_workstation_id = SN01      # must match workstation name in Tivoli OPC
# loadleveler submittor
# sub02_subtype = ls
# sub02_check_file = SN02.check
# sub02_workstation_id = SN02    # must match workstation name in Tivoli OPC

Figure 12. Example of a Configuration File

After editing the configuration parameter file, always check it using eqqverify, as described in “Checking the Configuration Parameter File” on page 73. Make especially sure that the values using configuration or environment variables are correctly substituted—there is no error message if the variable is not set or wrongly set.

Customizing the Directories
By default, the Tracker Agent creates the log, temporary, and trace files in the $EQQHOME/log and $EQQHOME/tmp directories. If the Tracker Agent home directory is NFS mounted, these directories will be the same for every Tracker Agent. This can cause performance problems. Also, if the network connection to the NFS server is lost, the Tracker Agent cannot function fully. Put these directories on a local file system (or with a symbolic link to a local file system) to improve log performance. If you run several instances of the Tracker Agent, and they share the same directory, use variables in the configuration parameter file to ensure that they do not use the same checkpoint and log files.


If the Tracker Agent is running from an NFS-mounted file system, it is recommended that the log and temporary directories are configured on the local file system. The eqqinit command (see Appendix B, "Utilities and Samples" on page 93) initializes a directory on the local machine:

eqqinit -v

This command must be run as root. The local file system must have write privileges for everyone, including a root user who is logged in across the network. The recommended name of the local directory is /var/tracker. You might require administrator privileges to create the /var directory, if it does not exist, and to create the links.

Note: The /tmp directory is not suitable, because this file system is frequently cleaned when the system boots, which would leave the Tracker Agent unable to recreate its internal status.

A job that writes too much output can fill up the allocated space. To protect the system, use a logical volume for the tmp and log directories, where this is supported, or set up a separate file system for them. If this fills up, the Tracker Agent stops submitting jobs, but the operating system continues to work. You can use SMIT to create a logical volume.

The log directory includes an event writer checkpoint file, a message log (eqqmsglog), and a trace log (EQQtrc.log) for each Tracker Agent instance, and a submittor checkpoint file for each submittor instance. See "Checking Files in the Log and Temporary Directories" on page 72 for a description of the files in these directories.
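As an illustration of what such a relocation involves (a sketch only; the relocate function is not part of the product, the paths /u/tracker and /var/tracker are the assumed defaults from this chapter, and eqqinit remains the supported way to do this):

```shell
#!/bin/sh
# Sketch: replace the log and tmp directories in the tracker home
# directory with symbolic links to a local file system.
relocate() {                 # usage: relocate <tracker-home> <local-dir>
    home=$1; dest=$2
    mkdir -p "$home"
    for d in log tmp; do
        mkdir -p "$dest/$d"
        chmod 777 "$dest/$d"
        if [ -d "$home/$d" ] && [ ! -L "$home/$d" ]; then
            mv "$home/$d"/* "$dest/$d" 2>/dev/null   # preserve existing files
            rmdir "$home/$d"
        fi
        ln -s "$dest/$d" "$home/$d"
    done
}

# The assumed default locations; run as root on the tracker machine.
[ "$(id -u)" = "0" ] && relocate /u/tracker /var/tracker \
    || echo "run as root to relocate the system directories"
```

After this, the Tracker Agent still opens $EQQHOME/log and $EQQHOME/tmp, but the files actually reside on the local disk.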

Customizing File Permissions
Some of the Tracker Agent components are designed to run with root authority. These components are:

The TCP Reader
The Generic subtask
The LoadLeveler submittor.

If you use port numbers lower than 1025, the TCP Reader process must have root authority. If port numbers are defined with values greater than 1024, the TCP Reader process does not need root authority.

Running without Root Authority
To update the TCP Reader to run without root authority, enter the following command as root:

chown tracker $EQQHOME/bin/eqqtr

The generic and LoadLeveler submit processes must also run as root if the user ID under which submitted jobs should be started is supplied by the controller. If the Tracker Agent is not required to run jobs under other user IDs, the submittors can also be updated to run with normal user authority.


To update the submittor processes to run without root authority, enter the following commands as root:

chown tracker $EQQHOME/bin/eqqls
chown tracker $EQQHOME/bin/eqqgssub

Restoring Root Authority
To update the TCP Reader and submittor processes to run with root authority, enter the following commands as root:

chown root $EQQHOME/bin/eqqtr
chmod u+s $EQQHOME/bin/eqqtr
chown root $EQQHOME/bin/eqqls
chmod u+s $EQQHOME/bin/eqqls
chown root $EQQHOME/bin/eqqgssub
chmod u+s $EQQHOME/bin/eqqgssub
chown root $EQQHOME/bin/eqqgmeth
chmod u+s $EQQHOME/bin/eqqgmeth

Restrictions and Dependencies on System Software
This section outlines restrictions and system dependencies that you need to consider.

NFS Restrictions
When running the Tracker Agent on NFS mounted directories, the user ID running the Tracker Agent must have write access to the file system. If the Tracker Agent is running on an NFS mounted file system, the superuser must have write access to the file system.

Number of Processes per User
If the Tracker Agent is running under a user ID other than root, or many jobs are run under one user ID, the number of processes per user ID should be increased.

AIX only
To set this parameter:
1. Start SMIT.
2. Select System Environments.
3. Select Change / Show characteristics of Operating System.
4. Select Maximum number of PROCESSES allowed per user.

HP-UX only
Use this method:
1. Log in as root.
2. Enter the System Administration Manager with the sam command.
3. Select Kernel Configuration.
4. Select Configurable Parameters.
5. Change the maxuprc value.


Sun Solaris and SunOS only
The system administrator should update this value in the kernel configuration.

Coordinating Clock Values
The value of Greenwich Mean Time (GMT) must be approximately the same for the Tracker Agent machine and the controlling system. If you set GMT as local time, both the Tracker Agent and controller environments must set GMT as local time. The Tracker Agent will not be able to connect to the controller if the GMT value for the Tracker Agent machine is not within 60 minutes (plus or minus) of GMT on the controlling system. If machines are in different time zones, these times must be set correctly on the different machines, or coordinated through a network time service.
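A quick way to compare the clocks (an illustrative check, not a product command) is to print the UTC time on each machine and compare the results:

```shell
# Print the current UTC (GMT) time; run this on the tracker machine
# and compare it with the GMT value on the controlling system.
# The two values should agree to well within 60 minutes.
date -u
```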


Chapter 5. Operation
This chapter contains general information to help you control the operation and behavior of the Tracker Agent. To run scripts under the Tracker Agent, you do not have to change them, but you should read these sections first.

Running Scripts for Tivoli OPC

Storing Scripts
The controller schedules scripts to be executed by the Tracker Agent in the normal Tivoli OPC way: a script is an operation, which is part of an application. Refer to Planning and Scheduling the Workload for details of creating application descriptions. You can store the scripts in the controller EQQJBLIB dataset, or retrieve them with the controller EQQUX002 exit, which is described in Customization and Tuning. If you edit the member using ISPF, make sure that you have numbers set off (UNNUM), or the editor will add sequence numbers in columns 73-80, which will cause errors.

Note: The scripts residing in the MVS host dataset EQQJBLIB must have a logical record length of 80 (LRECL=80). The tool eqqcv80p is provided to facilitate the 80-byte formatting. For details of the eqqcv80p utility, see "eqqcv80p" on page 98.

Writing Scripts
Scripts can contain Tivoli OPC variable substitution and automatic recovery directives, which are described in Planning and Scheduling the Workload, but they cannot use MVS-specific functions such as catalog management and step-level restart.

Determining the Shell that Scripts Run Under
There are two ways to determine the shell that scripts run under. The value of the eqqshell keyword in the EQQPARM file determines the shell that noninterpreted scripts run under. If you do not give this keyword a value, it takes the default, which is /bin/ksh (the Korn shell) for AIX systems, and /bin/sh (the Bourne shell) for other UNIX systems. Interpreted script files, that is, files that begin with the line:

#!pathname

run under the shell indicated by the pathname.
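For example, this interpreted script (an illustrative sketch) always runs under /bin/sh, whatever the eqqshell setting:

```shell
#!/bin/sh
# The "#!" line, not the eqqshell keyword, selects the shell here.
echo "running under /bin/sh"
```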

© Copyright IBM Corp. 1995, 1999


Operation

Specifying a User ID
If the controller supplies user IDs for submitted jobs, the user ID must exist on the Tracker Agent machine. Ensure that the user ID is supplied in the correct format, with lowercase and uppercase characters as defined on the Tracker Agent machine. You cannot specify a user ID if the script will run under LoadLeveler: LoadLeveler runs all scripts under its own user ID.

Getting Output from Scripts
If an empty script is sent to the Tracker Agent, or a script that is too long, the operation completes with error code JCLI. If there is an environment error during execution of the script, the operation completes with error code JCL.

If you use the generic submittor, you can browse the standard output and standard error files from the script (the job log) using the controller dialogs. You cannot browse the job log of scripts submitted using LoadLeveler.

If you have very large script output, check the event_logsize configuration parameter. Every 512 bytes of output (approximately) causes an event, so the parameter must be set large enough for the largest expected output. If the output is too big, the Tracker Agent writes an error message to the message log. The event log wraps around, and the Tracker Agent must scan the whole file for events, so do not make the file unnecessarily large, or this will impact performance.

There is a limit of approximately 64 KB on the job log output that can be retrieved by the Tivoli OPC controller.
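As an illustration of the sizing arithmetic (the 2 MB figure is an assumed maximum, not a product default): a job that can write up to 2 MB of output generates about 2097152/512 = 4096 events, so event_logsize would need to be at least 4096.

```shell
# Illustrative sizing: one event per (approximately) 512 bytes of output.
max_output_bytes=2097152              # assumed largest job output (2 MB)
events_needed=$((max_output_bytes / 512))
echo "$events_needed"                 # prints 4096
```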


Testing for Errors from Commands
The generic submittor monitors the return code from the script, and the return code sent to the controller is the return code from the execution of the script. If you have a multiline script like:

date
touch /tmp/file
date

and the touch command fails, the return code in the shell is set. On the next command, date, the return code is reset to 0, so the return code from the touch is gone and the script will return 0 (job successful). If you want to verify each step in the script, add tests after each call in the script to verify the shell return code:

date
(test rc)  - if rc nonzero exit with rc
touch /tmp/file
(test rc)  - if rc nonzero exit with rc
date
(test rc)  - if rc nonzero exit with rc
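In Bourne shell syntax, the checks above can be sketched like this (the run_job wrapper and the temporary file name are illustrative, not part of the product):

```shell
#!/bin/sh
# Stop at the first failing command, propagating its return code.
run_job() {
    date                   || return $?
    touch /tmp/eqqtest.$$  || return $?
    rm /tmp/eqqtest.$$     || return $?
    date                   || return $?
}
run_job
echo "final rc=$?"
```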

The test depends on the shell used to run the job; the syntax for /bin/sh is different from that for /bin/csh, for example. If you want to monitor a single command directly, specify the command as a single-line script. For scripts that are only one line long, the submittor monitors the actual command, and the return code sent to the controller is the return code from the execution of the command. In this case, the command is run using a standard UNIX exec, so you cannot use shell syntax.

Testing return codes will only work, of course, if the command returns a nonzero code when it fails and a zero code when it works. If you are not sure, try the command from the UNIX command line, and echo the return code from the shell.

If the script is more than one line, the generic submittor submits the script and monitors the shell for a return code. This means that, in very rare cases, the script can have run without error, but an error in the shell can result in an error return code.

Note that if a command or program exits with a code of 256 or higher, the value is reduced modulo 256. Exit codes that are exact multiples of 256 are therefore treated as return code zero; for example, exit code 769 (256*3 + 1) is treated as return code 0001, and so on.

Note: Make sure that the correct code page is set for your terminal emulator. If the code page is incorrect, characters such as £, $, and # in scripts sent from the Tivoli OPC controller might be mistranslated, causing jobs not to run correctly.
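The modulo-256 behavior described above is easy to demonstrate in a shell:

```shell
# An exit value of 769 (256*3 + 1) is seen by the parent as 1,
# and 512 (256*2) as 0.
sh -c 'exit 769'; echo $?    # prints 1
sh -c 'exit 512'; echo $?    # prints 0
```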


Specifying the Path
The default path, and the sequence in which the libraries are searched for commands to be executed, depends on the user ID that the tracker is started with. To be sure that a program or script is loaded from the correct library, it is advisable to specify the full path in the script or command.

Controlling the Tracker Agent
These topics are presented:

Starting the Tracker Agent
Checking the Tracker Agent status
Stopping the Tracker Agent.

Starting the Tracker Agent
A sample script is provided to start the Tracker Agent. Log in as the Tracker Agent user ID (normally tracker), and enter:

eqqstart [-f filename]

For Digital OpenVMS only, enter:

@eqqstart

You can use the -f flag to specify the configuration file. This overrides the EQQINSTANCE environment variable. You must also set the EQQHOME environment variable to point to the home directory. See "Customizing the Configuration Parameter File" on page 48.

The method for automatically starting the Tracker Agent depends on your operating system:

For AIX only
Edit the /etc/rc.tcpip file. This file is processed at startup to initiate all TCP/IP related processes. To add the Tracker Agent to the /etc/rc.tcpip file:
1. Log in as root.
2. Edit /etc/rc.tcpip, using an editor such as vi.
3. At the bottom of the file, add this section:

EQQINSTANCE=myconfig
EQQHOME=/u/tracker
export EQQINSTANCE
export EQQHOME
/u/tracker/bin/eqqstart

Attention: Daemons cannot control TTY consoles. Therefore, if you are running applications that access TTY consoles, either start the AIX Tracker by issuing the eqqstart command, or use local applications to avoid the need for console access.



For OS/390 only
Edit the /etc/rc.tcpip file. This file is processed at startup to initiate all TCP/IP related processes. To add the Tracker Agent to the /etc/rc.tcpip file:
1. Log in as superuser.
2. Edit the /etc/rc.tcpip file, using an editor such as OEDIT.
3. At the bottom of the file, add this section:

/u/tracker/bin/eqqstart

4. Also edit .profile to set the environment variables:

EQQINSTANCE=myconfig
EQQHOME=/u/tracker
export EQQINSTANCE
export EQQHOME

For HP-UX 10 and HP-UX 11 only
All startup files are in the /sbin directory, which contains a series of init-state directories:

/sbin/rc1.d
/sbin/rc2.d
/sbin/rc3.d

and so on. The files in each of these directories are processed at startup and shutdown, in sequence (first the rc1.d files, then rc2.d, and so on). The files follow a naming convention: files named snnnxxxxx, where nnn is a numeric value and xxxxx is an informative name, are processed at startup in sequence from s001 to s999; files named knnnxxxxx are processed at shutdown. With these premises, follow these steps:
1. Log in as root.
2. Create a file snnnopc in the appropriate /sbin/rcn.d directory containing these instructions:

EQQINSTANCE=myconfig
EQQHOME=/u/tracker
export EQQINSTANCE
export EQQHOME
if [ -x $EQQHOME/bin/eqqstart ]; then
    $EQQHOME/bin/eqqstart
fi

3. Make this file executable, using the command:

chmod 777 snnnopc


For Sun Solaris only
Create a file called S99ibm.tracker in the /etc/rc2.d directory. This file is processed at startup to initiate all TCP/IP related processes. To create the S99ibm.tracker file:
1. Log in as root.
2. Create the /etc/rc2.d directory if it does not exist.
3. Create the S99ibm.tracker file in this directory and add this:
#!/bin/sh
# start/stop IBM Tracker during system startup/shutdown
# Copyright International Business Machines, Corp. 1995
EQQINSTANCE=myconfig
EQQHOME=/u/tracker
export EQQINSTANCE
export EQQHOME
case $1 in
    start)
        $EQQHOME/bin/eqqstart
        ;;
    stop)
        $EQQHOME/bin/eqqstop
        ;;
    *)
        echo "Unknown tracker start/stop command: $1"
        ;;
esac

For SunOS only
Edit the /etc/rc.local file. This file is processed at startup to initiate all TCP/IP related processes. To add the Tracker Agent to the /etc/rc.local file:
1. Log in as root.
2. Edit /etc/rc.local, using an editor such as vi.
3. At the bottom of the file, add this section:

EQQINSTANCE=myconfig
EQQHOME=/u/tracker
export EQQINSTANCE
export EQQHOME
$EQQHOME/bin/eqqstart

These methods start the tracker as root, using the configuration parameter file myconfig. You can also start the Tracker Agent under a user ID other than root when you start the workstation. The following example applies to the AIX, HP-UX, and Sun Solaris platforms:
1. Add to the file /etc/inittab the line:

tracker:2:once:/home/tracker/bin/start.tracker

2. Add to the script file /home/tracker/bin/start.tracker the line:

su - tracker -c /home/tracker/bin/eqqstart


When you reboot the workstation, the Tracker Agent processes start under the userid tracker.

Checking Tracker Status
Use the eqqshow command to determine the current state of the Tracker Agent.

Shutting Down the Tracker Agent
The Tracker Agent can be shut down in an orderly way by using the eqqstop command on a command line.

Note: No confirmation is requested when using this command.

Only the Tracker Agent administrator or root user can shut down the Tracker Agent. If the Tracker Agent has been started as root, only the root user can request shutdown.

For Digital OpenVMS only
Stop the tracker by running eqqmon:

$ run [.bin]eqqmon

and selecting option 5.

Modify Commands
There is no support for interactive commands to communicate with the Tracker Agent or its individual components. The only command that can be used is the UNIX kill command, which terminates a process. This command can also be used to stop the Tracker Agent. The command syntax is:

kill -1 <pid>

where <pid> is the process ID of eqq_daemon.

For AIX, HP-UX, and Sun Solaris
This can be determined with the command:
ps -ef | grep eqq_daemon

For SunOS only
This can be determined with the command:
ps -aucx | grep eqq_daemon

The signals passed for termination should be -HUP (-1) or -TERM (-15). This shuts down the entire Tracker Agent system, because the daemon process sends a -HUP to each process, waits five seconds, and then repeats the procedure with -KILL. It is recommended that you use the eqqstop command to shut down the Tracker Agent.
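Putting this together as a small script (an illustrative sketch; the bracketed awk pattern simply avoids matching the search command itself):

```shell
#!/bin/sh
# Find the process ID of eqq_daemon and send it SIGHUP (-1).
pid=$(ps -ef | awk '/[e]qq_daemon/ { print $2; exit }')
if [ -n "$pid" ]; then
    kill -1 "$pid"
else
    echo "eqq_daemon is not running"
fi
```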

Dealing with Temporary and Log Files
Temporary file names include the token number assigned for the operation, the job name, and the submission date and time. The submittor creates these temporary files, and removes them only if the job ends with a zero final return code and you do not specify KEEP in the job_log configuration parameter. The KEEP option can be useful for debugging.


To keep enough space in the log directory, periodically run the eqqdelete script to remove files, or remove all the files using the rm -rf command.

Note: Be careful if you have several Tracker Agents using the same temporary space.

Checking Disk Space
Make sure there is sufficient disk space on the partition:

$ cd ~tracker      (if this is the home directory)
$ cd log

Command for AIX, showing 4MB free:

$ df .
Filesystem    Total KB    free  %used   iused  %iused  Mounted on
/dev/hd9var       8192    3856    52%     231     11%  /var

Command for HP-UX, showing about 66MB free:

$ bdf .
Filesystem            kbytes     used   avail  capacity  Mounted on
/dev/dsk/c201d6t0    1818624  1750924   67700       96%  /nfs/home/m

Command for Sun Solaris and SunOS, showing about 120MB free:

$ df .
Filesystem    kbytes    used   avail  capacity  Mounted on
/dev/sd0h     457926  290005  121634       70%  /opt

About 20MB should be enough for the log files. If there are many scripts, the partition size should be increased.

Restarting after an Abnormal Termination
Use the eqqclean command to tidy the temporary files.

Sometimes, depending on how the Tracker Agent terminates, the shared memory key might not be deleted, preventing a subsequent restart of the Tracker Agent. When this happens and the Tracker Agent is restarted, the security mechanism prevents the Tracker Agent from starting. A warning message informs you that a Tracker Agent might already be active, and a message describing a shared memory allocation failure is issued to the message log. If the problem is not due to a tracker with the same key already running, you must remove the shared memory segment before the Tracker Agent can restart. To do this:
1. Enter the command:
   ipcs -mob
2. In the generated output, find the identifier of the segment with:
   Owner    Group   Size
   tracker  opc     6048


3. Remove the segment using the ipcrm -m <identifier> command.
4. Perform the first step again to ensure that the segment is no longer listed.
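Steps 1 to 4 can be sketched as a small script (illustrative only; the owner name tracker is the assumed Tracker Agent user ID, and the column positions of ipcs output vary between platforms):

```shell
#!/bin/sh
# Remove any shared memory segments owned by the "tracker" user,
# then list again to confirm none remain.
for id in $(ipcs -m | awk '/tracker/ { print $2 }'); do
    ipcrm -m "$id"
done
ipcs -m | grep tracker || echo "no tracker-owned segments remain"
```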


Chapter 6. Diagnosing Problems
This chapter contains information to help you diagnose problems with the Tracker Agent. See Appendix B, "Utilities and Samples" on page 93 for the full syntax of any commands mentioned. See Appendix A, "Messages" on page 77 for more information about messages.

Problems will usually be of one of these kinds:

Communication between the Tracker Agent and the controller. Follow the procedures in "Verifying that TCP/IP Is Operational" on page 25, and see "Checking the Configuration Parameter File" on page 73.

Failure to start the Tracker Agent. Check the parameter file ("Checking the Configuration Parameter File" on page 73) and reset the Tracker Agent ("Resetting the Tracker Agent" on page 76).

Failure of submitted jobs. Check the log files ("Checking Files in the Log and Temporary Directories" on page 72).

Abnormal termination of the Tracker Agent. See "Exit Codes" on page 70.

Nothing is happening. See "Dealing with a Hung Tracker Agent" on page 71.

Performance problems. See "Tuning and Performance" on page 76.
Table 10 (Page 1 of 2). Symptoms and Required Actions for Common Problems

Symptom: A number of jobs ends in error.
Required action: The Tracker Agent probably terminated while the jobs were executing. The status of jobs cannot be determined if the Tracker Agent is not started. Rather than leaving the operations in Started status forever, the operations are set to Error status to highlight the problem. Manually check the status of the jobs, and change the status in the controller to the current status. Ensure the Tracker Agent is restarted.

Symptom: A controller never becomes active.
Required action: Check that TCP/IP is active on the controller. Check that the TCPIPPORT parameter value is valid on the controller. Check that there is free disk space in the Tracker Agent tmp and log directories. Check the Tracker Agent status from the controller machine, and run eqqverify on the Tracker Agent machine.

Symptom: A process continually terminates.
Required action: Check the message log for error messages describing the problem. If there are none, run eqqverify. If the problem cannot be identified, set the trace level to 4, restart the Tracker Agent to gather trace information, and contact your IBM representative.

Symptom: Cannot connect with Errno=78.
Required action: The Tracker Agent cannot locate the controller. There might be a problem with the network setup of the machine. Contact your system or network administrator.

Symptom: Cannot connect with Errno=79.
Required action: The controller is not responding to the Tracker Agent. Check that:
- The controller is active.
- The tracker machine is defined in the controller database.
- controller_ipaddr and controller_portnr are correct.


Fixing Problems

Table 10 (Page 2 of 2). Symptoms and Required Actions for Common Problems

Symptom: Tracker machine (workstation) not active in the controller.
Required action: If the TCP/IP conversation is active, this normally means that the Tracker Agent and the controller have not completed synchronization processing. Check the Tracker Agent status from the controller machine. Check that the controller parameter (TCPIPPORT) matches the controller_portnr parameter in the Tracker Agent configuration parameter file. Check the status of the tracker using the eqqshow command.

Exit Codes
The Tracker Agent sets exit codes according to the reason for termination. You can see the exit code:
In the Korn shell (ksh), using $?
In the Bourne shell (sh), using $?
In the C shell (csh), using the status variable.

These exit codes are possible:

Code Reason
1    The environment could not be initialized. This includes configuration errors.
2    Error in the parameters specified at invocation.
3    Trace mode could not be set.
4    A needed shared memory segment already exists. This could indicate an already running Tracker Agent that is using the same configuration parameter file and shared memory key. See "Restarting after an Abnormal Termination" on page 66.
5    The Tracker Agent could not attach to its shared memory segment.
6    Memory needed for restart information could not be allocated.
7    The main loop has terminated for a reason other than an exceeded process time quota or an exited Tracker Agent component. The reason for this condition is entered into the message log, both as text and as a numeric code. See "Return Codes" on page 71 for details about the return codes.
8    A restartable condition has occurred. Restarts are performed according to the setting of the -v flag. A message describing the error is written to the message log.
9    The semaphore ID needed for component signalling cannot be retrieved.
10   Semaphore initialization cannot be performed.
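The exit code can be examined from any Bourne-family shell in the usual way. The sketch below is illustrative only: the describe_exit helper is not a product command, and "false" stands in for a failing Tracker Agent invocation.

```shell
#!/bin/sh
# Illustrative only: run a command and describe its exit code the way you
# would after starting the Tracker Agent daemon. describe_exit is a
# hypothetical helper; substitute the real daemon invocation in practice.
describe_exit() {
    "$@"
    rc=$?
    case "$rc" in
      0) echo "daemon started" ;;
      4) echo "shared memory segment already exists - is another tracker running?" ;;
      *) echo "tracker exited with code $rc" ;;
    esac
}
```

For example, "describe_exit false" prints "tracker exited with code 1", because false returns exit code 1.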

70

Tivoli OPC Tracker Agents for AIX, UNIX, VMS, OS/390


Return Codes
The following codes are placed in the message log when the Tracker Agent experiences a fatal condition after it has started main processing. The exit code associated with these return codes is 7; if a restart is to be attempted, the exit code is 8.

Code Reason
1    A Tracker Agent component time quota has been exceeded. A restart attempt will be performed.
2    There is no place in shared memory for a new process. A restart attempt will not be performed.
4    A Tracker Agent component could not be started. A restart attempt will not be performed.
8    Parsing of the configuration failed. The names of the Tracker Agent components could not be accessed. A restart attempt will not be performed.
16   There are no executables to manage. A restart attempt will not be performed.
32   A Tracker Agent component has exited. A restart attempt will be performed.
64   A message queue could not be deleted. A restart attempt will not be performed.
128  A message queue could not be created. A restart attempt will not be performed.

General Troubleshooting
This section contains general troubleshooting hints. If you suspect you have a problem with the Tracker Agent, check the conditions that apply to your installation before calling your IBM representative. You can use the eqqapars command to collect diagnostic information for IBM.

Dealing with a Hung Tracker Agent
If the Tracker Agent is hung and needs to be terminated:
1. Find the process ID of the daemon by using the UNIX command:
   ps -def | grep eqq
2. Use the kill command to kill the process:
   kill -HUP <pid>
You must be running as root or as the user ID that owns the daemon process to do this. Investigate the reason for the failure before attempting to restart the daemon. If the problem continues, restart the Tracker Agent with trace level 4 to collect diagnostic information, and contact your IBM representative.
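The two steps can be combined into one portable sketch. This is not a product script: hup_tracker is a hypothetical helper, "ps -ef" is used for portability, and the daemon process name eqq_daemon is taken from the bin directory listing later in this chapter.

```shell
#!/bin/sh
# Hedged sketch of the procedure above: find the daemon's PID and send it
# SIGHUP. The bracketed first letter stops grep from matching itself.
hup_tracker() {
    pid=`ps -ef | grep "[e]qq_daemon" | awk '{print $2}'`
    if [ -n "$pid" ]; then
        kill -HUP $pid && echo "sent SIGHUP to $pid"
    else
        echo "no tracker daemon found"
    fi
}
```

On a machine where no daemon is running, the helper simply reports that nothing was found, so it is safe to run before deciding what to kill.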

Chapter 6. Diagnosing Problems

71


For Digital OpenVMS only Check the eqqout.dat file to see any errors resulting from the tracker execution. If you find an exceeded quota error, contact your systems administrator to tune the system quotas. The Tracker Agent uses the PQL_ quotas.

Checking Files in the Log and Temporary Directories
These files can be very useful in diagnosing a problem.

Checking the Log Directory
If nothing is in the log directory, check the file permissions. If it is a link, change directory (cd) to the link and check the directory. The file permissions for the log and tmp directories should be 644. If they are not, set them to 644:

chmod 644 /u/tracker/log    (if this is your directory name)

Checking the Message Log File
Check the $EQQHOME/log directory for information. The message log, eqqmsglog, contains information, warning, and error messages. See Appendix A, "Messages" on page 77 for a description of the messages generated by the Tracker Agent.

Event Logfile
This file (its real name is specified in the ew_check_file configuration parameter, but is typically called ewriter.check) contains all events logged by the Tracker Agent. The log file contains either a complete tracker internal message structure or a joblog data record. Its function is the same as the Tivoli OPC event dataset. The file contains binary data. Use the eqqview command to browse it.

Trace Files
Trace information is generated when the trace level in the configuration parameter file is greater than 0. The file names are EQQenv.log, EQQpgm.log, and EQQtrc.log. You do not need to set any trace level unless you suspect there is a problem with the Tracker Agent. The EQQxxxx.ENV, EQQxxxx.PGM, and EQQxxxx.TRC files contain trace data for the individual components when the trace level is greater than 0, or when there is a severe error. You only see these files when a process is running, or if a process ends without cleaning up. The data is normally appended to the EQQenv.log, EQQpgm.log, and EQQtrc.log files when the process ends normally.

Checking the Other Files
EQQenv.log   This file contains environment-related trace data for the Tracker Agent if the trace level is greater than 0.

EQQpgm.log   This file contains the program log for the Tracker Agent. Program errors are reported in the program log.

EQQtrc.log   This file contains the trace log for the Tracker Agent if the trace level is greater than 0.


EQQxxxx.ENV   These files contain environment-related trace data for the individual components when the trace level is greater than 0, or when there is a severe error. You only see these files when a process is running, or if a process ends without cleaning up. This data is normally appended to EQQenv.log when a process ends, and the file is deleted.

EQQxxxx.PGM   These files contain program trace data for the individual components when the trace level is greater than 0, or when there is a severe error. You only see these files when a process is running, or if a process ends without cleaning up. This data is normally appended to EQQpgm.log when a process ends, and the file is deleted.

EQQxxxx.TRC   These files contain trace data for the individual components when the trace level is greater than 0, or when there is a severe error. You only see these files when a process is running, or if a process ends without cleaning up. This data is normally appended to EQQtrc.log when a process ends, and the file is deleted.

eqqmsglog     This file contains error messages and other auditing information.

You can choose different names for the following files when you code the configuration parameter file:

ewriter.check   This file (its real name is specified in the ew_check_file configuration parameter) contains event records for the event writer.

wsID.check      This file (its real name is specified in the subnn_check_file configuration parameter) contains checkpoint records for the submittor wsID, number nn.

You can run the eqqdelete command, for example once a day, to delete log files. You can do this manually or by scheduling a batch job that runs regularly.
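One way to schedule such a regular cleanup is a crontab entry for the tracker user. The entry below is illustrative, not from the manual: the 02:00 time and the /u/tracker install path are assumptions to adapt to your installation.

```shell
# Illustrative crontab entry (assumed schedule and path): run eqqdelete
# every night at 02:00 to delete old Tracker Agent log files.
0 2 * * * /u/tracker/bin/eqqdelete
```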

Job Output Files
Standard output and standard error listings of submitted jobs are saved and stored temporarily in the tmp directory, according to the job_log keyword in the configuration parameter file. The temporary file names include the job name, the submission date and time, and an 8-character hexadecimal token that is unique for every job. The file name is built from this information, concatenated with the suffix .OUT for stdout. A job with job name "UNIXDIR" submitted at 10:00 on July 01, 1999 with token number 10 will therefore build a stdout file name of UNIXDIR__990701_1000_0000000A.OUT.
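The naming scheme can be sketched with printf. Note the caveats: job_output_name is a hypothetical helper, and the exact separator layout is reconstructed from a partially garbled example in this section, so treat it as an approximation of the real format.

```shell
#!/bin/sh
# Hedged sketch of the temporary file naming scheme described above:
# job name, submission date and time, and the per-job token printed as
# 8 hexadecimal digits, joined with the .OUT suffix for stdout.
job_output_name() {   # usage: job_output_name <jobname> <yymmdd_hhmm> <token>
    printf '%s__%s_%08X.OUT\n' "$1" "$2" "$3"
}
```

For example, "job_output_name UNIXDIR 990701_1000 10" prints UNIXDIR__990701_1000_0000000A.OUT.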

Checking the Configuration Parameter File
Specify the configuration parameter file using the EQQINSTANCE variable, or using the -f flag when you start the Tracker Agent. It is in the $EQQHOME/etc directory. Use the eqqverify command to check the configuration parameter file for syntax and consistency. It writes messages to the message log and displays the current settings at your terminal. Always rerun eqqverify after: Editing the configuration parameter file.


Deleting log files.
Changing the user ID.
Re-installing the Tracker Agent.
Changing the Tracker Agent directory links.

To run eqqverify, enter:

$EQQHOME/bin/eqqverify [-f filename]

You can omit -f filename if the file is $EQQHOME/etc/EQQPARM, or if the EQQINSTANCE variable is set. Messages will be written to the screen and to the $EQQHOME/log/eqqmsglog file. If you get error messages, check:

File permissions on $EQQHOME/etc
File permissions on $EQQHOME/tmp
File permissions on $EQQHOME/log
File permissions on $EQQHOME/log/eqqmsglog
File permissions on $EQQHOME/nls

Also check the parameters, especially:
That the Tracker Agent parameter controller_portnr matches the controller parameter (TCPIPPORT).
The IP address parameters.
The workstation name parameter subnn_workstation_id.
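Those permission checks can be gathered into one loop. This helper is not a product command; it merely lists the permission bits of the paths that eqqverify error messages commonly point at, under whatever home directory you pass it.

```shell
#!/bin/sh
# Illustrative helper (not a product command): show the permissions of the
# directories and files that eqqverify error messages commonly point at.
check_perms() {   # usage: check_perms <tracker home directory>
    for p in etc tmp log log/eqqmsglog nls; do
        ls -ld "$1/$p"
    done
}
```

Typical use is "check_perms $EQQHOME", then comparing the output against the permissions shown under "Checking File Permissions" below.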

Checking File Permissions
Ensure the files in $EQQHOME have the correct file permissions. The home directory should be similar to:

$ ls -la /u/tracker
lrwxrwxrwx   1 tracker  opc     2 Aug  4 12:09 bin@ -> /usr/lpp/tracker/bin/
lrwxrwxrwx   1 tracker  opc     2 Aug  4 12:09 doc@ -> /usr/lpp/tracker/doc/
drwxrwxr-x   5 tracker  opc   512 Jul 18 19:13 info/
lrwxrwxrwx   1 tracker  opc    16 Aug  4 12:11 log@ -> /var/tracker/log
lrwxrwxrwx   1 tracker  opc     2 Aug  4 12:09 nls@ -> /usr/lpp/tracker/nls/
lrwxrwxrwx   1 tracker  opc    24 Aug  4 12:09 samples@ -> /usr/lpp/tracker/samples/
lrwxrwxrwx   1 tracker  opc    16 Aug  4 12:11 tmp@ -> /var/tracker/tmp

The arrow (->) shows that the files are linked.
$ ls -la /usr/lpp/tracker/bin
lrwxrwxrwx   1 tracker  opc    43 Mar  9 11:48 eqq_daemon@ -> /usr/lpp/tracker/bin/eqq_daemon
lrwxrwxrwx   1 tracker  opc    41 Mar  9 11:48 eqqapars@ -> /usr/lpp/tracker/bin/eqqapars
lrwxrwxrwx   1 tracker  opc    41 Mar  9 11:48 eqqclean@ -> /usr/lpp/tracker/bin/eqqclean
lrwxrwxrwx   1 tracker  opc    41 Mar  9 11:48 eqqcv80p@ -> /usr/lpp/tracker/bin/eqqcv80p
lrwxrwxrwx   1 tracker  opc    42 Mar  9 11:48 eqqdelete@ -> /usr/lpp/tracker/bin/eqqdelete
lrwxrwxrwx   1 tracker  opc    38 Mar  9 11:48 eqqdr@ -> /usr/lpp/tracker/bin/eqqdr
lrwxrwxrwx   1 tracker  opc    38 Mar  9 11:48 eqqew@ -> /usr/lpp/tracker/bin/eqqew
lrwxrwxrwx   1 tracker  opc    38 Mar  9 11:48 eqqfm@ -> /usr/lpp/tracker/bin/eqqfm
...


Checking the Tracker User ID
Check that the user ID tracker is defined, and that the group ID opc is defined correctly. See “Creating a User Group and User IDs” on page 12 for more information.

Checking the NFS File System
If you run the Tracker Agent on an NFS-mounted file system, ensure the tmp and log directories are allocated locally on each machine. Also ensure that the root user has write access to the NFS-mounted file system.

Checking the NIS Master
If you are running NIS, ensure the tracker user ID and group are defined on the NIS master host, and that the NIS maps are updated.

Checking the Name Server
If you run a name server, ensure the name server has been updated with entries for both the Tracker Agent machine and the controller machine.

Checking Duplicate Port Definitions
Check the /etc/services file on the Tracker Agent machine for duplicate port numbers. Check that controller_portnr and local_portnr in the configuration parameter file are set to unused ports. On the controller machine, check that the same port numbers are defined. Tracker Agent ports are unavailable if they are specified in the /etc/inetd.conf file.
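A quick way to spot duplicates is to look for port/protocol pairs that appear more than once. The helper below is illustrative, not a product command, and the port numbers in the usage example are made up.

```shell
#!/bin/sh
# Illustrative check (not a product command): print port/protocol pairs
# that occur more than once in a services-format file, for example
# /etc/services on the Tracker Agent machine.
dup_ports() {   # usage: dup_ports <services file>
    awk '$2 ~ /^[0-9]+\// {print $2}' "$1" | sort | uniq -d
}
```

Running "dup_ports /etc/services" prints one line per duplicated port/protocol pair; no output means no duplicates were found.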

Defining local_ipaddr if Multiple Interfaces
If the Tracker Agent machine has multiple network cards, the address used for the connection is unpredictable. Set the parameter local_ipaddr in the configuration parameter file.

Fixing Problems with Symbolic Links
When the Tracker Agent was installed, you probably used the eqqinit script to set up the symbolic links for the Tracker Agent. Normally, you need to do this only once, but if the links are deleted, run eqqinit again. If you use the ls -las command on the $EQQHOME/bin directory for the Tracker Agent, the output should be similar to:

lrwxrwxrwx   1 tracker  opc     2 Jul 15 19:18 bin -> /usr/lpp/tracker/bin

If there is a problem with the links (if they already exist, or you need to update the destination file system), you can reset them manually. To do this, log in as the tracker user and enter:

rm -rf log
rm -rf tmp
ln -s /usr/lpp/tracker/tmp
ln -s /usr/lpp/tracker/log


Resetting the Tracker Agent
Sometimes you may need to tidy the log files before the Tracker Agent will start properly:
1. Run eqqstop to stop the Tracker Agent.
2. Run eqqclean to tidy up the log files. If you use nonstandard log file names, change the eqqclean script to use your file names. Check that it really has removed the event history file with the name specified in the ew_check_file configuration parameter. If you specified a name beginning with an environment variable, such as $HOST.checkfile, and this variable is not set, the file is called .checkfile. Such a file is invisible to the ls command but visible to ls -las, so use ls -las to be sure, and use the eqqverify utility to check that variables are resolved.
3. Run eqqstart.

To completely reset the tracker:
1. Set EQQHOME.
2. Change to the log directory and remove everything.
3. Change to the tmp directory and remove everything.
4. Run eqqverify.
5. Check the message log in $EQQHOME/log/eqqmsglog.
6. Run eqqstart.

If the Tracker Agent is terminated without being restarted, run eqqclean and manually remove any files not related to the tracker from the log, tmp, and tmp/save directories. Then restart the Tracker Agent.

Checking IPC Queues
If you run the tracker under root, and then under another user, you might need to check that old queues have not been left behind:
1. Run the ipcs -a command.
2. Check for queue keys owned by the tracker that have a qbytes size equal to the value specified by the eqqmsgq configuration parameter.

Tuning and Performance
Data is sent to the controller in blocks whose size is set by the eqqmsgq keyword. You can find the value of eqqmsgq by running the eqqverify command. Should output exceed the maximum size, the Tracker Agent splits it into two or more chunks and sends these in sequence.
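The chunking behavior can be illustrated with the standard split utility. This is a sketch of the principle only: the real block size comes from the eqqmsgq keyword, and both the 4096-byte size and the "chunk." prefix here are arbitrary stand-ins.

```shell
#!/bin/sh
# Illustrative only: splitting output into fixed-size blocks, the way the
# Tracker Agent sends oversized data to the controller in sequence.
# The block size and file prefix are stand-ins, not product values.
send_in_chunks() {   # usage: send_in_chunks <file> <blocksize>
    split -b "$2" "$1" chunk.
    ls chunk.*       # each piece would be sent in sequence
}
```

For example, a 10000-byte file split with a 4096-byte block size yields three chunks: two full blocks and one remainder.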


Appendix A. Messages
EQQTX10I Process: READY event from component component number1 of number2
Explanation: The component with ID component has completed its initial set-up and is ready for events. System action: Processing continues. System administrator response: None.

EQQTX11I Process: is ending
Explanation: The process Process has been requested to stop by the tracker daemon. System action: The process terminates. System administrator response: None.

EQQTX12I Process: Completed Initialization.
Explanation: The process Process has received and successfully processed configuration data. System action: The process is operational and waits for work. System administrator response: None.

EQQTX13I Process: Initialized tracker object.
Explanation: The process Process has initialized the tracker. System action: The process is operational and waits for work. System administrator response: None.

EQQTX14I Process: Setting field keyword value to value
Explanation: During initialization of the process Process, the keyword identified in the message is not defined in the configuration parameter file, or the value defined for the keyword is not valid. The default value for the keyword will be used. System action: Initialization processing continues. System administrator response: None.

EQQTX15I Process: Verified configuration file
Explanation: The configuration parameters for the process Process have been successfully verified. System action: The Tracker Agent will create the process. System administrator response: None.

EQQTX16I Process: IP address for controller Number is ip_addr
Explanation: Up to 10 IP addresses can be defined for the controller. When the Tracker Agent starts, it attempts to connect to the first controller address in the list. If the connection cannot be made, the Tracker Agent retries 5 times (at intervals specified in the eqqdr_retry parameter) before trying to connect to the next controller address in the list. If there is only one controller, the Tracker Agent continues to attempt connection until the Tracker Agent is stopped or the connection is successful. System action: Processing continues. System administrator response: None.

EQQTX17I Process: IP address for local machine is ip_addr
Explanation: The IP address for the local machine can either be specified in the configuration parameter file, or the Tracker Agent can determine the address automatically. When there is more than one IP address defined for the machine, the address determined by the Tracker Agent is not predictable. The preferred address should be defined in the configuration parameter file. System action: Processing continues. System administrator response: Verify that the IP address is correct. If your machine has more than one address defined, specify the preferred address in the configuration parameter file.

EQQTX18I Process: Port number for the controller is port_nr
Explanation: The TCP/IP port number defined for the controller is port_nr. The Tracker Agent will attempt to connect to this port. System action: Processing continues. System administrator response: None.

EQQTX19I Process: Port number for the tracker is port_nr
Explanation: The TCP/IP port number defined for the Tracker Agent is port_nr. The controller will attempt to connect to this port. System action: Processing continues. System administrator response: None.


EQQTX20I Process: Local codepage is codepage
Explanation: The local codepage is defined as codepage; the default is ISO8859-1. System action: Processing continues. System administrator response: None.

EQQTX21I Process: Host codepage is codepage
Explanation: The host codepage is defined as codepage; the default is IBM-037. System action: Processing continues. System administrator response: None.

EQQTX22I Process: Job jobname submitted to LL, job_id is job_id
Explanation: The LoadLeveler submittor has successfully submitted the job jobname to LoadLeveler. System action: Processing continues. System administrator response: None.

EQQTX23I Process: TCP/IP connection to controller cccccccc value ddddddddddddd established
Explanation: The Tracker Agent has established a TCP/IP connection to the cccccccc controller successfully. Communication is handled by the TCP-Reader and the TCP-Writer. System action: Processing continues. System administrator response: None.

EQQTX24I Process: Trying to establish a connection to controller cccccccc
Explanation: All processes are started and the Tracker Agent will attempt to connect to the cccccccc controller. System action: Processing continues. System administrator response: None.

EQQTX25I Process: Event logfile has been formatted
Explanation: The Event Writer has successfully formatted the event logfile. System action: Processing continues. System administrator response: None.

EQQTX26I Process: Event logfile will be formatted
Explanation: The Event Writer has detected that either a new event logfile has been allocated, or the event_logsize has changed since the previous start of the Tracker Agent. The event logfile will be re-formatted; old data is erased. System action: Processing continues. System administrator response: None.

EQQTX27I Process: Controller type is type
Explanation: This message informs you that the controller is OPC/ESA. System action: Processing continues. System administrator response: None.

EQQTX28I Process: Sending submit checkpoint status for workstation ws to controller
Explanation: The Tracker Agent has received a request for synchronization for the workstation ws. This message is issued after the Tracker Agent and the controller have established a connection and the Tracker Agent is ready to receive job submit requests. System action: Processing continues, waiting for job submit requests. System administrator response: None.

EQQTX29I Process: Killed process process errno=error
Explanation: The tracker has processed a kill request from the controller. System action: Processing continues. System administrator response: None.

EQQTX30I Process: Trace level is set to trace_level
Explanation: The trace level determines how much trace information is printed for servicing the code. The default is 0, no trace information. System action: Processing continues. If the trace_level is greater than 0, the Tracker Agent will generate trace information. This can impact the performance of the Tracker Agent. System administrator response: None.

EQQTX31I Process: Log filename is filename
Explanation: Process will use this file to log internal events. System action: Processing continues. System administrator response: None.


EQQTX32I Process: Size for the event logfile is logsize
Explanation: The Tracker Agent will log this number of events in the event logfile for internal use by the Event Writer. If the event logfile already exists and the value has changed since the previous start of the Tracker Agent, the logfile will be re-formatted, erasing old data. The default value is 1000. System action: Processing continues. System administrator response: None.

EQQTX33I Process: Directory to store temporary files is directory
Explanation: The Tracker Agent will use this directory to create, store, and remove temporary files. Job logs and script data are written to this directory. System action: Processing continues. System administrator response: None.

EQQTX34I Process: Tracker message catalog is msgcat
Explanation: The Tracker Agent will use the message catalog msgcat for messages. The full path to the message catalog is displayed. The default value for the message catalog file is $EQQHOME/nls/msg/$LANG/eqqmsgcat.cat. System action: The Tracker Agent will open the message catalog. System administrator response: None.

EQQTX35I Process: Parameter file is parmfile
Explanation: The Tracker Agent will use the file parmfile for configuration parameter information. The full path to the configuration parameter file is displayed. The default value for the configuration parameter file is $EQQHOME/EQQPARM. If the environment variable $EQQHOME is not set, the Tracker Agent uses the home directory of the user ID running the Tracker Agent. If the environment variable $EQQINSTANCE is set, the Tracker Agent will use this value for the configuration parameter file. System action: The Tracker Agent will start to verify the configuration parameters. System administrator response: None.

EQQTX36I Process: Job Log is job_log
Explanation: The Tracker Agent will use this option when processing stdout and stderr for jobs submitted by the generic submittor. System action: Processing continues. System administrator response: None.

EQQTX37I Process: Workstation ID is workstation for this submittor
Explanation: This submittor is processing operations for the workstation workstation in the controller. System action: Processing continues. System administrator response: None.

EQQTX38I Process: All components active sending ID event.
Explanation: The data router has verified that all tracker components are active, and will attempt to connect to the controller. System action: Processing continues. System administrator response: None.

EQQTX39I Process: Shutting down active tracker processes
Explanation: The Tracker Agent has either been requested to shut down by a kill command, or has detected an error in a process and will restart the daemon and all processes. System action: Termination continues. System administrator response: None.

EQQTX3AI Process: Restart of tracker will not be performed. Exiting
Explanation: The Tracker Agent has been requested to terminate. The exit code and return code do not allow the Tracker Agent to automatically restart. System action: The Tracker Agent will terminate. System administrator response: See Chapter 6, "Diagnosing Problems" on page 69 for a description of the Tracker Agent exit codes.

EQQTX3BI Process: Submitting tracker component comp
Explanation: During initialization, the Tracker Agent has determined that the component started is comp. There should be at least 5 components started. System action: Processing continues. System administrator response: None.


EQQTX3CI Process: Waiting num seconds before restarting tracker
Explanation: The Tracker Agent process process has been terminated. It will be automatically restarted in num seconds. System action: Processing continues. System administrator response: None.

EQQTX3DI Process: Initializing num Objects with numsub submittors
Explanation: Process has initialized the required number of objects. Each Tracker Agent process has an internal object to store information. The total number of objects is the number of Tracker Agent processes plus one for each submittor defined. System action: Processing continues. System administrator response: None.

EQQTX3EI Process: Directory to store log files is directory
Explanation: The Tracker Agent will create and store logfiles in this directory. The event logfile, submit checkpoint file, message log file, and trace files are stored in this directory. System action: Processing continues. System administrator response: None.

EQQTX3FI Process: Found component comp
Explanation: Process has verified that the component exists in the current configuration. A message is issued for each Tracker Agent process plus one for each submittor defined. System action: Processing continues. System administrator response: None.

EQQTX3GI Process: Shared memory operation optype - successful
Explanation: Process has completed a shared memory operation. The operation could be create, attach, or drop. System action: Processing continues. System administrator response: None.

EQQTX3HI Process: Attempting to allocate shared memory size num
Explanation: Process will attempt to allocate a shared memory segment to store configuration parameter information. System action: Processing continues. System administrator response: None.

EQQTX3JI Process: Allocated shared memory key key
Explanation: Process has allocated a shared memory segment with the key key. This key is used to identify the configuration information for each machine. System action: Processing continues. System administrator response: None.

EQQTX3KI Process: Attached data segment key
Explanation: Process has attached to the shared memory segment with key key to obtain configuration information. System action: Processing continues. System administrator response: None.

EQQTX3LI Process: Initialized component
Explanation: Process has successfully initialized component. System action: Processing continues. System administrator response: None.

EQQTX3MI Process: Current system level level version version
Explanation: This message displays the software level. System action: Processing continues. System administrator response: None.

EQQTX3NI Process: Closing component component
Explanation: Process has received a shutdown request. A terminating message will be issued for each component. System action: Termination continues. System administrator response: None.


EQQTX3OI Process: RFW for submittor submittor. Current Agent Limit=limit
Explanation: The controller has connected to the tracker and is requesting information about active workstations. System action: Processing continues. System administrator response: None.

EQQTX3PI Process: SYNCH for submittor. Current Agent Status status
Explanation: The controller has verified all workstations and now resets the submittor to handle new work. System action: Processing continues. System administrator response: None.

EQQTX3QI Process: Set initial READY timeout to tttt seconds
Explanation: The data router sets an initial timeout for all components to become active. All components must respond within tttt seconds or the tracker will end. System action: Processing continues. System administrator response: None.

EQQTX3RI Process: ID event status reader reader writer writer
Explanation: The data router has processed an ID event. Both the TCP-Reader and the TCP-Writer must signal ID events to the data router. System action: Processing continues. System administrator response: None.

EQQTX3SI Process: setting Workstation workstation status to status
Explanation: This shows the new status of the submittor. System action: Processing continues. System administrator response: None.

EQQTX3TI Process: bound socket to controller controller on port port
Explanation: This shows which port has been used to connect to the controller. System action: Processing continues. System administrator response: None.

EQQTX41W Process: environ error setting UID root group=group user=user jobname=jobname
Explanation: This message shows an environment error with user IDs. The subtask usually runs with root authority. System action: If the process cannot switch to the required user ID, the job will not run. System administrator response: Check the user ID. Check that the submittor can run with root authority.

EQQTX42W Process: Connection closed by controller
Explanation: The controller has closed the connection. It might be down. System action: The Tracker Agent attempts to reconnect at intervals specified in the eqqdr_retry parameter. System administrator response: Check the status of the controller, and restart it if necessary.

EQQTX43W Process: environ error initgroups=igroups user=user jobname=jobname errno=errno
Explanation: The submittor could not initialize secondary groups. System action: This could cause a problem running jobs that need these groups. System administrator response: Check that the subtask has root authority. Check that the secondary groups in /etc/group match groups on separate machines. Check the error code (listed in /usr/include/sys/errno.h).

EQQTX44W Process: Connect to controller on ip_addr failed, errno = error.
Explanation: During startup, the Tracker Agent failed to connect to the controller. It might be down, or the IP address defined in the configuration file might not be correct. System action: The Tracker Agent attempts to reconnect according to the retry parameters. System administrator response: Check that the IP address defined for the controller is correct. Check the error code (listed in /usr/include/sys/errno.h).


EQQTX45W Process: Job jobname has been processed to status status Explanation: Process was unable to post the Event Writer with the status of the job jobname. This can happen if the Tracker Agent daemon is killed while there are jobs executing. System action: Status for this job is not reported accurately. System administrator response: The Tracker Agent is probably no longer active. Restart it. Check the job status in the controller. It should have been set to Ended status. Manually set the status for the operation. EQQTX46W Process: Job Log is currently disabled but received a Job Log Request Explanation: Job log handling for the Tracker Agent is currently disabled but the controller has sent a request for the Tracker Agent to retrieve a job log. System action: The request is ignored and processing continues. System administrator response: If job logs should be sent to the controller, set the Job_log keyword to delayed or immediate in the configuration file and restart the Tracker Agent. The user requesting the job log for this operation will be informed that is not available. EQQTX47W Process: System call fork failed, will try again in tttt seconds Explanation: The submittor tried to create a child process but failed. This can happen if too many processes are currently executing for the user running the Tracker Agent. System action: The submittor will try again after tttt seconds. System administrator response: If this message is issued regularly, check for the limit of active processes defined for the user running the Tracker Agent, increase the limit if it is too low. EQQTX48W Process: Could not change to home directory for user user, jobname is jobname Explanation: The home directory could not be found in the system for the user user. The job will be executed using the directory where the Tracker Agent is running. System action: Processing continues. System administrator response: Check the home directory for the user, and create one if necessary.

EQQTX49W Process: Diskspace available in filesystem filesystem might not be enough
Explanation: The Tracker Agent has determined that there might be too little disk space for the Tracker Agent functions to run correctly. The amount of space is reviewed by the Tracker Agent at startup, and also after every 100 job submits.
System action: Processing continues.
System administrator response: Check the disk space available for the file system by using the df filesystem command.

EQQTX50W Process: Number number of events have been lost
Explanation: The Tracker Agent has detected that events have been lost. This can happen if communication to the controller is inactive while many jobs are executing, and the event logfile is too small.
System action: Processing continues.
System administrator response: Check the status of all jobs on the submittors handled by this Tracker Agent. Review the size of the event logfile. It must be large enough to allow network outages without losing the status of jobs running on the system.

EQQTX51W Process: Invalid controller type type
Explanation: The controller type must be OPC.
System action: Processing continues.
System administrator response: Check the configuration parameter file.

EQQTX52W Process: Tracker will now try to connect to the controller on ip_addr
Explanation: More than one IP address has been defined for the controller. The Tracker Agent has not been able to connect to a controller, and will switch to the next IP address defined.
System action: Processing continues.
System administrator response: Check the defined IP addresses in the configuration file. Ensure one of the controllers in the list is started.

EQQTX53W Process: Tracker exiting with reason code code - msg
Explanation: The Tracker Agent is exiting with the code defined in the message.
System action: Termination continues.
System administrator response: See Chapter 6, “Diagnosing Problems” on page 69 for a description of reasons for Tracker Agent exit codes.
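The df check suggested for EQQTX49W can also be scripted. A sketch, assuming a POSIX df; EQQHOME is the variable this manual uses for the tracker home directory, and /tmp is only a stand-in fallback:

```shell
# Report free space in the filesystem holding the Tracker Agent home
# directory (-P keeps each filesystem on one line for parsing)
fs=${EQQHOME:-/tmp}
avail=$(df -kP "$fs" | awk 'NR==2 {print $4}')
echo "available KB in filesystem of $fs: $avail"
```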

82

Tivoli OPC Tracker Agents for AIX, UNIX, VMS, OS/390

EQQTX54W Process: Tracker restarting with reason code code - msg
Explanation: The Tracker Agent is restarting with the reason code defined in the message.
System action: Restart processing continues.
System administrator response: See Chapter 6, “Diagnosing Problems” on page 69 for a description of reasons for Tracker Agent exit codes.

EQQTX55W Process: Process group could not be changed, reason rsnc
Explanation: The process group could not be changed.
System action: Processing continues.
System administrator response: Correct the error and restart the Tracker Agent. See Chapter 6, “Diagnosing Problems” on page 69 for a description of reasons for Tracker Agent exit codes.

EQQTX56W Process: The number of components running exceeds maximum limit. Unpredictable events might occur, reason code
Explanation: The number of Tracker Agent components, specifically the number of submittor tasks, exceeds the maximum number of components allowed for the system.
System action: Processing continues.
System administrator response: When convenient, stop the Tracker Agent, decrease the number of submittor tasks, and restart the Tracker Agent.

EQQTX57W Process: The tracker submit process reported error submitting jobname
Explanation: An error was encountered when the Tracker Agent attempted to submit the job using the fork system call.
System action: The job is not submitted. Processing continues.
System administrator response: The job will be reported back to the controller in Error status. Determine the reason for the failure and restart the operation.

EQQTX58W Process: Received signal sig
Explanation: Process has received the signal identified in the message.
System action: Depends on the signal received.
System administrator response: Take the appropriate action depending on the signal received.

EQQTX59W Process: Restart time value assignment failed. This might cause unpredictable restart behavior
Explanation: The Tracker Agent was unable to determine the restart interval required. It might be defined incorrectly.
System action: Processing continues.
System administrator response: When convenient, restart the Tracker Agent with a valid restart interval.

EQQTX5AW Process: Message catalog could not be opened
Explanation: Process could not open the message catalog. The Tracker Agent will continue to run, but messages will not be written to the message log.
System action: Processing continues.
System administrator response: Check the default message catalog defined for $EQQHOME/nls/msg/$LANG/eqqmgcat.cat. Ensure the file exists and is accessible to the Tracker Agent. Restart the Tracker Agent when the problem is resolved.

EQQTX5BW Process: Job jobname with LL Jobid job_id has been removed
Explanation: The LoadLeveler submittor task submitted the job jobname to LoadLeveler successfully. LoadLeveler has removed the job from its queue.
System action: The job is reported in Error status to the controller. Processing continues.
System administrator response: Determine why LoadLeveler removed the job.

EQQTX5CW Process: Initializing field field with default value value
Explanation: Field field was not specified, or an invalid value was specified. The field is initialized with the default value. This information is displayed so that you can verify that the Tracker Agent is running with the correct parameters.
System action: Verification of the parameter file continues.
System administrator response: If required, shut down the Tracker Agent, set the correct value, and restart the Tracker Agent.
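The catalog check for EQQTX5AW can be done in one short script. A sketch; the EQQHOME and LANG defaults shown here are illustrative placeholders, not values mandated by the product:

```shell
# EQQTX5AW: verify the default message catalog exists and is readable
EQQHOME=${EQQHOME:-/u/tracker}     # example default, not a product default
LANG=${LANG:-C}
cat_file="$EQQHOME/nls/msg/$LANG/eqqmgcat.cat"
if [ -r "$cat_file" ]; then
  status=readable
else
  status="missing or unreadable"
fi
echo "$cat_file: $status"
```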

Appendix A. Messages

EQQTX5DW Process: Field field Size size less than minimum min
Explanation: A field in the parameter file has been detected with a value less than the required minimum value.
System action: The default value for the field will be used. Processing continues.
System administrator response: Update the parameter file with a value greater than the minimum min and restart the Tracker Agent when it is convenient to do so.

EQQTX5EW Process: Field field Size size greater than maximum max
Explanation: A field in the parameter file has been detected with a value greater than the required maximum value.
System action: The default value for the field will be used. Processing continues.
System administrator response: Update the parameter file with a value less than the maximum max and restart the Tracker Agent when it is convenient to do so.

EQQTX5FW Process: Tracker will be reinitialized in tttt seconds
Explanation: The connection to the controller has been lost. The Tracker Agent will attempt to reconnect to the controller in tttt seconds.
System action: The Tracker Agent is reinitialized.
System administrator response: Ensure TCP/IP is active on both the Tracker Agent and controller systems.

EQQTX5GW Process: Resetting controller connect timeout to tttt seconds
Explanation: The data router will wait tttt seconds before retrying the connection to the controller.
System action: The data router will try to reconnect.
System administrator response: If the controller is not available, restart it.

EQQTX5HW Process: Received job request for unknown WS workstation
Explanation: A job needs a submittor associated with the workstation workstation, but this workstation is not defined in the configuration parameter file.
System action: The job is reported in Error status to the controller.
System administrator response: Check that the workstation is defined in the configuration parameter file.

EQQTX61E Process: Job jobname could not be executed, exec system call failed, errno = error
Explanation: The exec system call has failed with the error code supplied.
System action: Process terminates. If the process is the generic submittor, the Tracker Agent will be shut down. The job jobname is not submitted.
System administrator response: Check the error code (listed in /usr/include/sys/errno.h).

EQQTX62E Process: Job jobname terminated due to the signal sig
Explanation: The job submitted by the Tracker Agent has been terminated by the signal sig.
System action: The job is reported in Error status to the controller.
System administrator response: Check the status of the operation in the controller. If required, set the correct status.

EQQTX63E Process: Job jobname could not be submitted to LL
Explanation: Job jobname could not be submitted to LoadLeveler because the argument list containing job information became too long. This can happen if the Tracker Agent log and tmp directories have long path names.
System action: The job is reported in Error status. Processing continues.
System administrator response: Check the path length for the Tracker Agent directories. Use a shorter path name if possible and restart the tracker.

EQQTX64E Process: Directory directory cannot be accessed
Explanation: The directory directory is not accessible to the process. The directory must exist and have the required file permissions.
System action: Processing will terminate after the configuration parameter file is processed.
System administrator response: Verify that the directory exists and that the file permissions are correct. Restart the Tracker Agent.
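Many of the responses above say to check the error code listed in /usr/include/sys/errno.h. Rather than reading the header by hand, the number can be translated to its text directly. A sketch that shells out to python3 for the translation (python3 is an assumption of this example, not a tracker prerequisite; errno 13, EACCES, is just a sample value):

```shell
# Translate a numeric errno to its message text
err=13
msg=$(python3 -c 'import os, sys; print(os.strerror(int(sys.argv[1])))' "$err")
echo "errno $err means: $msg"
```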

EQQTX65E Process: Cannot determine local IP address, check system configuration
Explanation: Process could not determine the IP address for the local machine.
System action: Process terminates.
System administrator response: Ensure the local machine is defined in the network. When the problem is resolved, restart the Tracker Agent.

EQQTX66E Process: Connect request refused, unknown controller IP address ip_addr
Explanation: An unknown client has attempted to connect to the Tracker Agent.
System action: Process terminates.
System administrator response: Ensure the IP address defined for the controller in the configuration parameter file is correct.

EQQTX67E Process: Unknown type found with value value
Explanation: An unknown value was detected in the configuration parameter file.
System action: Processing will terminate after the configuration parameter file is processed.
System administrator response: Update the configuration parameter file with the correct value and restart the Tracker Agent.

EQQTX68E Process: Required field field not specified
Explanation: The required field field was not specified in the configuration parameter file.
System action: Processing will terminate after the configuration parameter file is processed.
System administrator response: Update the configuration parameter file with the required field.

EQQTX69E Process: Internal IPC message queue open/read/write error, rc = rc
Explanation: Process failed to initialize a message queue.
System action: The Tracker Agent is terminated and restarted.
System administrator response: Ensure there is not another Tracker Agent already started. Check the error code (listed in /usr/include/sys/errno.h).

EQQTX70E Process: File file can not be accessed
Explanation: File file is not accessible to the process. The file must exist and have the correct file permissions.
System action: Processing will terminate after the configuration parameter file is processed.
System administrator response: Verify that the file exists and that the file permissions are correct. Restart the Tracker Agent when the problem is resolved.

EQQTX71E Process: I/O error on file file, errno = error
Explanation: File I/O operation on the file file failed.
System action: Process terminates.
System administrator response: Check the error code (listed in /usr/include/sys/errno.h).

EQQTX72E Process: Workstation workstation is not defined
Explanation: The workstation is not defined in the configuration parameter file.
System action: Process terminates.
System administrator response: Check that there is a submittor associated with this workstation.

EQQTX73E Process: ID verification timed out
Explanation: During Tracker Agent initialization, the ID event was not received back from the controller within the specified timeout period. The controller might have been shut down.
System action: The Tracker Agent is terminated and restarted.
System administrator response: Ensure the controller is active.

EQQTX74E Process: Conversion table for codepages codepage and codepage could not be created
Explanation: The codepages defined in the configuration parameter file for either the Tracker Agent or the controller are not known on your system, or the codepage conversion table required could not be found.
System action: Process terminates.
System administrator response: Verify the codepages defined are valid and installed on your system. Restart the Tracker Agent when the problem is resolved.

EQQTX75E Process: Job jobname will not be executed, script data is too large
Explanation: The Tracker Agent was unable to forward script data received for job jobname to a submittor task because the script received from the controller is too large. The maximum size of script data handled by the Tracker Agent is 15 800 bytes, excluding trailing blanks.
System action: The job is not started. It will be reported in Error status to the controller.
System administrator response: Split the script for the job into two or more scripts and resubmit the job.

EQQTX76E Process: Could not get hostname for the calling controller, errno = error
Explanation: The hostname for the calling host where the controller is started could not be determined. An unknown client might have tried to connect to the Tracker Agent.
System action: The Tracker Agent terminates.
System administrator response: Ensure the IP address defined in the configuration file is correct. Check the error code (listed in /usr/include/sys/errno.h).

EQQTX77E Process: Connection broken with the controller on ip_addr
Explanation: The connection to the controller has been broken. The controller might have been shut down.
System action: The Tracker Agent is terminated and restarted.
System administrator response: Ensure the controller is started. Also verify that TCP/IP is started on both the Tracker Agent and controller systems.

EQQTX78E Process: ID time values mismatch
Explanation: The Tracker Agent has been unable to authenticate the ID event returned from the controller. This can happen if an unauthorized client attempts to connect to the Tracker Agent.
System action: The Tracker Agent is terminated and restarted.
System administrator response: Verify the configuration parameters for both the Tracker Agent and the controller. Ensure IP addresses and port numbers are correct.

EQQTX79E Process: Field field value value is not unique
Explanation: Values for field field must be unique. For example, two submittors cannot use the same checkpoint file.
System action: Processing will terminate after the configuration parameter file is processed.
System administrator response: Update the configuration parameter file with a unique value for the field.

EQQTX80E Process: Could not run setuid, job jobname, user user, uid uid, errno = error
Explanation: A request to set the user ID for job jobname to uid failed.
System action: The job is reported in Error status to the controller.
System administrator response: Check the error code (listed in /usr/include/sys/errno.h). Verify that the user ID supplied by the controller is defined on the Tracker Agent system.

EQQTX81E Process: Could not run setgid, job jobname, user user, gid gid, errno = error
Explanation: A request to set the user ID for job jobname failed because the group ID (GID) could not be set for the supplied user ID.
System action: The job is reported in Error status to the controller.
System administrator response: Verify that the user ID supplied by the controller is defined on the Tracker Agent system. Check the error code (listed in /usr/include/sys/errno.h).

EQQTX82E Process: Could not run chown to uid uid, gid gid, job jobname, user user
Explanation: A request to set the user ID for job jobname to uid failed.
System action: The job is reported in Error status to the controller.
System administrator response: Verify that the user ID specified has a primary group defined. The process Process must be owned by root. The SUID bit must be on if the user ID supplied by the controller is not the same user ID as is currently being used by the Tracker Agent.
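The responses for EQQTX82E (and EQQTX84E below) both hinge on the tracker executable being owned by root with the SUID bit on. A sketch of the same ownership and SUID check applied to an arbitrary path; /bin/ls stands in here for the real tracker binary:

```shell
# Check owner and SUID bit of an executable (substitute the tracker binary)
f=/bin/ls
perms=$(ls -l "$f" | awk '{print $1}')
owner=$(ls -l "$f" | awk '{print $3}')
case $perms in
  ???s*) suid=on ;;    # "s" in the user-execute position means SUID
  *)     suid=off ;;
esac
echo "$f: owner=$owner suid=$suid"
```

Turning the bit on, when required, is `chmod u+s` issued by root.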

EQQTX83E Process: Errors found while sending data to the controller on ip_addr
Explanation: An error was encountered while sending data to the controller.
System action: The Tracker Agent is terminated and restarted.
System administrator response: Ensure TCP/IP is functioning correctly on both the Tracker Agent and controller systems.

EQQTX84E Process: Failed to bind a socket to the port port_nr, errno = error
Explanation: Binding a socket to the local port number failed.
System action: Process will try again.
System administrator response: If you are using the default port number, or any other port number under 1025, ensure the Process is owned by root and that the SUID bit is on. Check the error code (listed in /usr/include/sys/errno.h).

EQQTX85E Process: Could not open/create file file, errno = error
Explanation: File file could not be opened or created.
System action: The tracker is terminated.
System administrator response: Check that there is sufficient disk space available for the file system where temporary and log files are stored. Look up the error code in the /usr/include/sys/errno.h file.

EQQTX86E Process: Configuration file not verified
Explanation: Errors were detected in the configuration file. One or more fields could not be validated.
System action: The Tracker Agent is terminated.
System administrator response: Update the required fields in the configuration parameter file, and restart the Tracker Agent.

EQQTX87E Process: Component component not specified
Explanation: Component component could not be found while processing the configuration parameter file.
System action: Processing will terminate after the configuration parameter file is processed.
System administrator response: Update the required fields in the configuration parameter file, and restart the Tracker Agent.

EQQTX88E Process: Errors from LL llsubmit, jobname jobname, rc = rc
Explanation: Submit to LoadLeveler failed; LoadLeveler might not be active.
System action: Job jobname is not executed and is reported in Error status to the controller.
System administrator response: Ensure LoadLeveler is started.

EQQTX89E Process: User user does not exist, job jobname is not submitted
Explanation: Username user does not exist in the system.
System action: The job jobname is reported in Error status to the controller.
System administrator response: Define the user ID if necessary.

EQQTX90E Process: file is not a regular file
Explanation: The file must be a regular file. It cannot be a directory, FIFO, or block device.
System action: Processing will be terminated after the configuration parameter file has been processed.
System administrator response: Ensure that file file is a regular file.

EQQTX91E Process: Error Reading file file
Explanation: The file cannot be read. The file might not exist, or could not be read with the current file permissions.
System action: Processing will be terminated after the configuration parameter file is processed.
System administrator response: Create the file or update the environment variable EQQHOME and restart the Tracker Agent.

EQQTX92E Process: Error loading the parameter file file
Explanation: A syntax error was detected when loading the parameter file file.
System action: Processing terminates.
System administrator response: Check the parameter file for variables without associated values. Update the configuration parameter file and restart the Tracker Agent.

EQQTX93E Process: Error can not kill process process errno=error
Explanation: The tracker was requested to kill the process, but could not.
System action: Processing continues.
System administrator response: Check that the process is inactive. Check the error code (listed in /usr/include/sys/errno.h). Check that the subtask has authority.

EQQTX94E Process: Cannot find IPC queue queue
Explanation: The Tracker Agent verifies the IPC queues when the parameter file is read. During this verification the Tracker Agent could not find a required IPC queue.
System action: Processing terminates after the configuration parameter file is processed.
System administrator response: Contact your IBM representative.

EQQTX95E Process: Workstation ID workstation must be nnnn characters long
Explanation: The ID must be more than 1 and less than 5 characters long. The first character must be alphabetic.
System action: Processing will terminate.
System administrator response: Rename the workstation and restart the tracker.

EQQTX96E Process: proc failed. Reason code is code and the tokens are tokens
Explanation: This message is normally associated with an I/O error.
System action: Depending on the severity of the reason, processing might be terminated. If the message is received after the Tracker Agent has executed for some time, it usually indicates a less severe error, and the Tracker Agent will continue to execute. If the error occurs during Tracker Agent startup, processing is terminated.
System administrator response: See Chapter 6, “Diagnosing Problems” on page 69 for a description of reasons for Tracker Agent exit codes. Correct the error and restart the Tracker Agent.
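When checking whether the process named by EQQTX93E is really inactive, signal 0 is useful: it tests existence and signal permission without delivering anything. A sketch, using this shell's own PID as a safe example target:

```shell
# kill -0 probes a PID without signalling it; failure means the process
# is gone or the caller lacks authority to signal it
pid=$$          # this shell's own PID, as an example target
alive=no
kill -0 "$pid" 2>/dev/null && alive=yes
echo "process $pid alive: $alive"
```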

EQQTX97E Process: Program instance number num can not be invoked, reason rsnc
Explanation: The program instance reported in the message could not be invoked. The fork system call failed.
System action: Processing is terminated.
System administrator response: Ensure the maximum number of processes defined for the system and the user has not been exceeded. Also check that there is sufficient virtual memory on the machine to satisfy the request. Correct the error and restart the Tracker Agent. See Chapter 6, “Diagnosing Problems” on page 69 for a description of reasons for Tracker Agent exit codes.

EQQTX98E Process: Environment variable var not set
Explanation: Environment variable var is not specified and the user ID tracker does not exist. The process cannot find the directory tree structure.
System action: Processing terminates.
System administrator response: Set the environment variable EQQHOME or define the correct user ID, and restart the tracker.

EQQTX99E Process: Loading of parsed data failed, reason rsnc, tokens token
Explanation: Process could not load the parsed data.
System action: Processing terminates.
System administrator response: Examine the message log for previous messages which describe the problem. Correct the error and restart the Tracker Agent. See Chapter 6, “Diagnosing Problems” on page 69 for a description of reasons for Tracker Agent exit codes.

EQQTX9AE Process: Set environment host type failed, reason rsnc, tokens token
Explanation: Process could not set the environment host type.
System action: Processing terminates.
System administrator response: Examine the message log for previous messages which describe the problem. Correct the error and restart the Tracker Agent. See Chapter 6, “Diagnosing Problems” on page 69 for a description of reasons for Tracker Agent exit codes.

EQQTX9BE Process: Dropping of parsed data failed, reason rsnc
Explanation: Process was unable to drop the parsed data.
System action: Processing terminates.
System administrator response: Examine the message log for previous messages which describe the problem. Correct the error and restart the Tracker Agent. See Chapter 6, “Diagnosing Problems” on page 69 for a description of reasons for Tracker Agent exit codes.

EQQTX9CE Process: Environment termination failed, reason rsnc, tokens token
Explanation: During daemon shutdown, the tracing environment could not be released.
System action: Termination processing continues.
System administrator response: Examine the message and trace logs for previous messages which describe the problem. See Chapter 6, “Diagnosing Problems” on page 69 for a description of reasons for Tracker Agent exit codes.

EQQTX9DE Process: Process not created. Too many processes active. Tokens fromtok totok
Explanation: A Tracker Agent component could not be started because the maximum number of concurrent processes has been reached.
System action: The Tracker Agent terminates.
System administrator response: Either decrease the number of submittor tasks defined for the Tracker Agent, or increase the maximum number of concurrent processes.

EQQTX9EE Process: Process could not be added to process control table
Explanation: There was insufficient space available in control blocks to add the process.
System action: Processing continues.
System administrator response: Decrease the number of submittor tasks defined and restart the Tracker Agent.

EQQTX9FE Process: Executable name could not be submitted, reason reason
Explanation: The executable subroutine identified in the message text could not be started.
System action: Processing continues.
System administrator response: Check the name, path, and file permissions for the executable.

EQQTX9GE Process: Process proc is not activated and will be discarded. Tokens token1 token2 - token3 - token4
Explanation: The process could not be registered within the timeout value.
System action: Processing continues. The component will be killed.
System administrator response: Check the message log for previous messages that describe the problem. If possible, increase the value of -I specified when the daemon is started.

EQQTX9HE Process: Shared memory operation operation - could not be performed
Explanation: The shared memory operation create, attach, or drop could not be completed. Check the message log for other messages that describe the problem.
System action: Processing terminates.
System administrator response: Check if another daemon process is already running.

EQQTX9IE Process: The parm parameter can only be specified once
Explanation: A flag has been specified multiple times on the start command.
System action: The process terminates.
System administrator response: Restart the Tracker Agent with the correct flags.

EQQTX9JE Process: An invalid parameter has been passed as parm
Explanation: An invalid flag was specified on the start command.
System action: The process terminates.
System administrator response: Restart the Tracker Agent with the correct flags.

EQQTX9KE Process: The minimum number passed as flag is num
Explanation: An invalid value was specified for a flag on the start command. The value specified is below the minimum value.
System action: The process terminates.
System administrator response: Restart the Tracker Agent with the correct flags.

EQQTX9LE Process: The maximum number passed as flag is num
Explanation: An invalid value was specified for a flag on the start command. The value specified is greater than the maximum value.
System action: The process terminates.
System administrator response: Restart the Tracker Agent with the correct flags.

EQQTX9ME Process: directory is not a directory
Explanation: The file name directory is not a directory. The file name must be a directory and have the required permissions.
System action: Processing will terminate after the configuration parameter file is processed.
System administrator response: Verify that the directory exists and that the file permissions are correct.

EQQTX9NE Process: Module module variable pointer = NULL
Explanation: Internal processing error detected in module. A variable pointer was detected as NULL when it should have a value.
System action: Processing terminates.
System administrator response: Contact your IBM representative.

EQQTX9OE Process: Cannot read the parameter file file
Explanation: The configuration parameter file could not be read. The file must exist and have read permission from the current process.
System action: Processing terminates.
System administrator response: Verify that the file exists, and that the user ID running the Tracker Agent has read access.

EQQTX9PE Process: Error Initializing component rc = int
Explanation: The component could not be initialized.
System action: Processing terminates.
System administrator response: Check previous messages in the message log for messages describing the reason for this error. Update files, or permissions, and restart the Tracker Agent.

EQQTX9QE Process: Unable to find error messages. Contact system administrator
Explanation: The process could not initialize the message system.
System action: Processing terminates.
System administrator response: Check the directory structure, the environment variable, and the Tracker Agent user ID. Restart the Tracker Agent when the problem is corrected.

EQQTX9RE Process: The passwd file for user ID user could not be opened. Check installation or set EQQHOME
Explanation: The environment variable EQQHOME was not set. The Tracker Agent defaults to the home directory of the Tracker Agent user ID. If this user ID does not exist or the home directory is inaccessible, the Tracker Agent cannot find the required files.
System action: Processing terminates.
System administrator response: Set the environment variable EQQHOME to point to an alternative tree or define the tracker user ID. Restart the Tracker Agent when the problem is corrected.

EQQTX9SE Process: Cannot export the environment variable var
Explanation: The tracker could not initialize the internal environment variable.
System action: Processing terminates.
System administrator response: Check the parameters in the configuration parameter file and run eqqverify.

EQQTX9TE Process: Could not initialize message system: catalog cat log log
Explanation: Files required for writing messages could not be found, or the access permissions on the files were incorrect.
System action: Processing will terminate after the configuration parameter file is processed.
System administrator response: Check that the files and the access permissions are correct. Restart the Tracker Agent when the problem is corrected.

EQQTX9UE Process: Required field Host IP Address not specified in parameter file: file
Explanation: The IP address for the controller host was not found in the parameter file. The IP address for the controller is required.
System action: Processing will terminate after the configuration parameter file is processed.

System administrator response: Update the field controller_ipaddr in the parameter file. The value of this field is the IP address of the controller. Restart the Tracker Agent when the problem is corrected.

EQQTX9VE Process: Did not receive ID from components in tttt seconds
Explanation: The timeout period tttt expired without the data router receiving the ID event.
System action: Processing terminates.
System administrator response: Check the configuration parameter file and the message log.

EQQTX9XE Process: Log file logfile size nnnn too large for Event logfile
Explanation: A job log was found that is greater than the size of the event file.
System action: Processing continues, but the job log is not available.
System administrator response: Increase the event writer dataset size, and check that the job output is not excessive.

EQQTX9YE Process: Diskspace available in directory directory less than minimum minimum
Explanation: There is too little filespace.
System action: An Offline event is sent to the controller. The job fails with code OSPC.
System administrator response: Increase the space available in the directory. Stop and restart the tracker.

EQQTX9ZE Process: Job job could not be executed, environment error, error = error
Explanation: The environment could not be initialized for the job.
System action: The job does not run.
System administrator response: Check the user (UID), the group (GID), and the file permissions.

EQQTXA0E Process: Job job not executed. Tracker ending due to lack of filespace.
Explanation: There is too little filespace.
System action: Processing terminates.
System administrator response: Stop the tracker, increase the log space, and restart the tracker.
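EQQTX98E and EQQTX9RE both describe the same resolution order: the tracker uses EQQHOME if set, otherwise the home directory of the tracker user ID. A sketch of that lookup; the user name "tracker" follows the wording of those messages, and `getent` is an assumption of this example (it is common on Linux, not guaranteed on every UNIX variant):

```shell
# Resolve the tracker directory tree: EQQHOME first, then the home
# directory of the "tracker" user ID from the passwd database
user=tracker
home=${EQQHOME:-$(getent passwd "$user" 2>/dev/null | cut -d: -f6)}
if [ -n "$home" ]; then
  resolved=yes
  echo "tracker home resolves to: $home"
else
  resolved=no
  echo "EQQHOME unset and user $user not defined; the tracker cannot start"
fi
```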

EQQTXA1E Process: I/O error sending data to controller size=size nbytes=nbytes error=error

Explanation: The connection between the tracker and the controller has been lost.

System action: The tracker tries to reconnect.

System administrator response: Check if the controller has gone down.

EQQTXA2E Process: creating socket to controller controller failed errno=error

Explanation: The Tracker Agent could not connect to the controller.

System action: Processing terminates.

System administrator response: Check the port numbers and address in the controller and Tracker Agent configuration parameter files.

EQQTXA3E Process: Invalid data data read from controller

Explanation: The tracker does not recognize incoming data.

System action: The connection is closed.

System administrator response: Check the controller and the TCP/IP connection.

EQQTXA4E Process: Error validating ID event data data

Explanation: There is invalid data from the controller.

System action: Processing terminates.

System administrator response: Check the synchronization of clocks, restart the tracker, and refresh the controller.

EQQTXA5E Process: Socket operation operation failed sockfd=socket errno=error

Explanation: The tracker found an error when communicating with the controller.

System action: Processing terminates.

System administrator response: Check the socket operation (bind/listen). Check the error code (listed in /usr/include/sys/errno.h) to determine the cause. Stop and restart the tracker.

Appendix A. Messages


Appendix B. Utilities and Samples
The bin and samples directories supplied with the Tracker Agent contain utilities and samples to help you operate and maintain the Tracker Agent.

Utility Programs and Scripts
These utility programs and scripts are located in the bin directory. If you need to modify a script, save a copy of the original in the samples directory. Note: Some utilities take a -f parameter, which you can use to specify the configuration parameter file name if you do not use the EQQINSTANCE environment variable.

eqqstart
This script starts the Tracker Agent. If the environment variable EQQHOME is set, the Tracker Agent starts from $EQQHOME/bin. Otherwise, the Tracker Agent starts from ~tracker. The flags for start-up are:

-d|D  Engine timeout, in minutes. When a process signals that it is active, this value specifies the maximum time allowed before it must send another active or idle signal. If the default value is used, a process cannot be active for more than five minutes without sending another active or idle signal, or the Tracker Agent will automatically shut down and restart.

-i|I  Maximum inactivity time for the server. This is the time allowed for a process to become active. Change it only in special cases.

-f|F  The configuration parameter file name. You need not specify this if you have set the EQQINSTANCE variable, or if the configuration parameter file is $EQQHOME/EQQPARM.

-r|R  Time between automatic restarts, in minutes. At these intervals (for example, once a day), the daemon will refresh each process.

-s|S  Sleep between restarts, in seconds. It takes about five seconds to complete a clean shutdown of the Tracker Agent processes. Sometimes it can be useful to delay the time between automatic restart attempts.

-v|V  The number of times, within a number of minutes, a process is allowed to restart before the Tracker Agent terminates abnormally. For example, the value -v4:10 specifies that the Tracker Agent will restart up to 4 times in a 10-minute period. There are two reasons for restart:

A process has been active more than 5 minutes without sending a signal.
© Copyright IBM Corp. 1995, 1999



A process has abnormally terminated.

The eqqstart sample script passes these flags unaltered to the Tracker Agent daemon. You can change the eqqstart sample script if you want to change the defaults. The flags can be entered in either uppercase or lowercase; there is no difference in how they are treated.

Syntax:
  eqqstart [-f filename] [-i inactive-limit] [-v restart-limit] [-d timeout-limit] [-r refresh-interval] [-s restart-delay]

You can specify these values for the flags:
Table 11. Values of Tracker Agent flags
Flag   Min    Max        Default
d|D    1      600        5
f|F    -      -          -
i|I    1      600        30
r|R    1      65535      0
s|S    0      60         0
v|V    1:1    200:2880   1:1

eqqverify
This utility shows the Tracker Agent settings. The Tracker Agent configuration parameters are written to $EQQHOME/log/eqqmsglog. The verified values are also written to the terminal. If a parameter cannot be verified, a message is generated describing the problem.

Syntax:
  eqqverify [-f filename]

eqqstop
This script stops the Tracker Agent. The current tracker processes are searched for the daemon. The daemon is sent a SIGHUP (signal 1) and exits normally.

Syntax:
  eqqstop [-f filename]



eqqfm
You can run this utility program to check for file arrivals. The program checks for the existence and size of files you define in an event-control file (ECF), which is passed to the program as a parameter. There is a sample ECF in the samples directory. The program supports two types of files: flag files, where the size is not verified, and data files, where the size is checked. Checking is performed at 60-second intervals. If the size is the same as at the last check, the file is considered stable and is checked against the ECF values to verify that it is larger than the size specified.

Syntax:
  eqqfm ECF_file

Parameter:

ECF_file Specifies the name of an ECF file.
The format of the file is:

% <time_trigger>
<type> <filename> <size>

<time_trigger> has a format of yyyy:mm:dd:hour:min where:
  'yyyy' must be within 0 - 9999
  'mm' must be within 1 - 12
  'dd' must be in the range allowed for the month and year
  'hour' must be within 0 - 24
  'min' must be within 0 - 59
<type> can be 0 (flag file) or 1 (data file)
<filename> is the file name, including the path
<size> is the expected size for data files, or any number for flag files

For example:

% 1994:04:13:10:00
0 /u/tracker/file1
1 /u/tracker/file2 10

This example tells eqqfm to check for the existence of file1 and file2. The size of file2 must be greater than 10 bytes. The program returns:

0  The file exists and matches the criteria specified.
1  The time expired before the file arrived.
2  The file has arrived, but is smaller than expected.
5  The format of time_trigger is incorrect.

You can include the utility as the first program in a script, and continue processing based on the return code.
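For example, a wrapper script along the following lines could gate the rest of a job on the eqqfm return code. The ECF path is hypothetical, and the sketch simulates a return code of 0 when eqqfm is not installed, so that it stays runnable:

```shell
#!/bin/sh
# Branch on the eqqfm return code before running the real work.
# /u/tracker/my.ecf is a hypothetical ECF file name.
if command -v eqqfm >/dev/null 2>&1; then
  eqqfm /u/tracker/my.ecf
  rc=$?
else
  rc=0    # simulated "file arrived" result, for illustration only
fi
case $rc in
  0) status="file arrived and matches the criteria" ;;
  1) status="time expired before the file arrived" ;;
  2) status="file arrived but is smaller than expected" ;;
  5) status="time_trigger format is incorrect" ;;
  *) status="unexpected return code $rc" ;;
esac
echo "$status"
# continue with the real processing only when $rc is 0
```
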



eqqdelete
This script can be used to delete old log files according to the age of the files. It can be run periodically to clean up the disk. There is one parameter: the age in days. All Tracker Agent log files older than the value specified are deleted. The utility looks in the $EQQHOME/log directory.

Syntax:
  eqqdelete [age]

Parameter:

age
  All Tracker Agent log files older than this value, in days, are deleted. If no value is given (the default is 0), the eqqdelete script deletes all the files in the log directory.

Sample output:

$ eqqdelete 1
Cleaning from EQQHOME /u/tracker
Cleaning files from /u/tracker older than 1 days
Cleaning /u/tracker/log
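The effect of eqqdelete is similar to an age-based find cleanup. This dry-run sketch is ours, not the shipped script; it works on a scratch directory and only counts what would be deleted:

```shell
# Dry-run illustration of eqqdelete's selection rule: files under the
# log directory older than AGE days. A scratch directory stands in for
# $EQQHOME/log, and matches are counted rather than removed.
logdir=$(mktemp -d)
touch "$logdir/eqqmsglog"          # a brand-new file, so it never matches
age=1
matches=$(find "$logdir" -type f -mtime +"$age" -print | wc -l | tr -d ' ')
echo "files older than $age day(s): $matches"
rm -rf "$logdir"
```
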

eqqview
Use this utility to browse the event logfile and the submit checkpoint file.

Syntax:
  eqqview {-e eventlog | -s ckptfile}

Parameters:

-e eventlog
  Enter this parameter to browse the event logfile. You are prompted for the record number you want to view. The default is 1. The next record is displayed when you press Enter. Enter q to quit at any time.

-s ckptfile
  Enter this parameter to view the last checkpointed submit sequence number.

The environment variable EQQHOME must point to the current tracker environment. eqqview tries to open the named file in $EQQHOME/log (replace eventlog and ckptfile with the names specified in the configuration parameter file).



eqqinit
Attention: Do not run eqqinit for the Digital OpenVMS Tracker Agent.

Use this script to create a local directory, or to create symbolic links to the directory where the software is installed. If you do not use a conventional directory file structure, you can edit this script file.

Syntax:
  eqqinit {-v | -t}

Parameters:

-v
  Enter this parameter to configure the local (log and tmp) directories. Run this as root.

-t
  Enter this parameter to make symbolic links to the directories where the software is installed. Run this under the tracker user ID.

eqqclean
This script tidies the log files after the tracker has terminated abnormally.

Syntax:
  eqqclean

eqqperm
This script sets file permissions. It is necessary when you install the SunOS Tracker Agent, but optional for the others.

Syntax:
  eqqperm



eqqcv80p
This utility converts scripts so that they can be submitted using the Tivoli OPC controller. You can store your scripts on the Tracker Agent machine, or on MVS in the Tivoli OPC EQQJBLIB concatenation. If you store the scripts in the EQQJBLIB datasets, the data must be in a PDS with a fixed record length of 80 bytes. If you have scripts with longer data lines you can still store the data on MVS, if you use this utility to convert the script data to 80-byte records. Alternatively, you can edit the script manually. The eqqcv80p utility reads data from a file and breaks down lines exceeding 80 bytes into 80-byte records. Changed records are marked with a / in column 80. If column 80 already has a /, column 1 of the next line is marked with a /. Lines shorter than 80 bytes are padded with blanks.

Syntax:
  eqqcv80p [-l length] [-n] [-f infile | < infile] [> outfile]

Parameters:

-l length
  The number of columns in a row. If not specified, a value of 80 is used.

-n
  If specified, no newline character is inserted at the end of the rows in the output file. The default is to insert newline characters.

-f infile | < infile
  The name of the input file. If you do not specify -f, the utility expects input from stdin.

> outfile
  Specify this to redirect the stdout output to a file.
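The splitting rule can be approximated in a few lines of awk. This sketch is ours and ignores the corner case where column 80 already holds a / (the real utility then marks column 1 of the next line instead):

```shell
# Approximate the eqqcv80p record split: each piece carries 79 bytes of
# data plus "/" in column 80 to mark continuation; the final piece is
# blank-padded to a full 80 bytes.
split80() {
  awk '{
    line = $0
    while (length(line) > 80) {
      printf "%-79s/\n", substr(line, 1, 79)
      line = substr(line, 80)
    }
    printf "%-80s\n", line
  }'
}
out=$(printf '%0100d\n' 0 | split80)   # feed one 100-byte sample record
echo "$out"
```

Each output record is exactly 80 bytes wide: the first ends with the / continuation marker, and the remainder is padded with blanks.
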

eqqshow
This script shows the status of the Tracker Agent. The daemon process, the Tracker Agent processes, shared memory, and message queues are displayed.

Syntax:
  eqqshow [-f filename]



Samples
The samples directory contains these files:

File name     Description
ecf           Sample event control file used by the eqqfm utility.
tracker.cmd   Sample LoadLeveler script. See "Sample LoadLeveler script" on page 108 for a listing.

The etc directory contains these files:

File name   Description
EOP0        Sample configuration parameter file for a Tivoli OPC controller.


Appendix C. Enabling the Pulse Functions
This appendix explains how to enable the Pulse functions. To do this, you set up the KEEPALIVE parameters on the controller image and on the tracker agent machines for the AIX, UNIX, VMS, and OS/390 Open Edition platforms. To activate the SO_KEEPALIVE option on the tracker agent machine, you must configure the KEEPALIVE parameters. These parameters differ for each operating system. When you specify the SO_KEEPALIVE option, TCP/IP periodically sends packets to check that the other end of the connection is still available. If the other end of the connection is not available, it terminates the channel. Note: To enable automatic recovery of communication between the controller and a tracker that is using the KEEPALIVE functionality, perform any corrective action for network problems after the expiration of the specified KEEPALIVE timeout time. By that time, the controller and tracker will have registered the loss of the remote partner. Only then can the controller and tracker recover communication after the network problem has been fixed. You can define time intervals to control the behavior of the SO_KEEPALIVE option. When you change the time interval, only those TCP/IP channels started after the change are affected. The value that you choose for the time interval should be less than the value of the disconnect interval for the channel. Make sure you choose an appropriate value for the time interval: too high a value may not be useful, whereas too low a value may create a lot of traffic in the network.


Setting Up the Controller Machine and OS/390 OE System
Add the following statement to the TCP/IP profile:

KEEPALIVEOPTIONS
  INTERVAL mmm
  SENDGARBAGE TRUE
ENDKEEPALIVEOPTIONS

where mmm is the idle connection interval, in minutes. It defaults to 120 minutes.

Setting Up an AIX System
To implement the KEEPALIVE functionality, use the no command to configure the following TCP/IP parameters in the kernel:

tcp_keepidle
  Specifies the length of time to keep the connection active, measured in half-seconds. The default is 14400 half-seconds (7200 seconds, or 2 hours).

tcp_keepinit
  Sets the initial timeout value for a TCP connection. This value is defined in half-second increments, and defaults to 150 (75 seconds).

tcp_keepintvl
  Specifies the interval, measured in half-seconds, between packets sent to validate the connection. The default is 150 half-seconds (75 seconds).

The no command operates only on the currently running kernel; it must be run again after each startup or after the network has been configured.


Attention: The no command performs no range checking. Because it accepts any value for these variables, incorrect use can make the system inoperable.

Syntax:
  no -o Option[=NewValue]

The -o flag both sets and displays an option value. Example:

no -o tcp_keepidle=240

This sets the keepalive idle time to 2 minutes (240 half-seconds).
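Because these values are expressed in half-second units, it is easy to set the wrong magnitude. This small conversion sketch (variable names are ours) prints the no command rather than running it, since no changes the running kernel and requires root:

```shell
# Convert a desired idle time in seconds to the half-second value that
# tcp_keepidle expects, then show the resulting command.
idle_seconds=120                     # 2 minutes
idle_halfsec=$((idle_seconds * 2))   # no(1) counts in half-seconds
echo "no -o tcp_keepidle=$idle_halfsec"
```

This prints no -o tcp_keepidle=240, matching the 2-minute example above.
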

Setting Up an HP System
The following TCP/IP parameters should be configured to implement the KEEPALIVE functionality:

tcp_keepstart
  Valid range: 5 - 12000 seconds
  Default: 7200 seconds
  Description: Specifies the number of seconds that a TCP connection can be idle (that is, no packets are received) before keepalive packets are sent in an attempt to elicit a response. After a packet is received, further keepalive packets are sent only if the connection is again idle for this period of time.

tcp_keepfreq
  Valid range: 5 - 2000 seconds
  Default: 75 seconds
  Description: Specifies the interval in seconds at which keepalive packets are sent on a TCP connection once they have been started. The receipt of a packet stops the sending of keepalive packets. If you are increasing both tcp_keepfreq and tcp_keepstop, increase tcp_keepstop first. If you are decreasing both, decrease tcp_keepfreq first.

tcp_keepstop
  Valid range: 10 - 4000 seconds
  Default: 600 seconds
  Description: Specifies the number of seconds keepalive packets are sent on a TCP connection without the receipt of a packet, after which the connection is dropped. If you are increasing both tcp_keepfreq and tcp_keepstop, increase tcp_keepstop first. If you are decreasing both, decrease tcp_keepfreq first.

From HP-UX Version 10.01, the nettune command, located in /usr/contrib/bin, is available to set these TCP/IP parameters.

Syntax:
  nettune -s Object Value


Example: nettune -s tcp_keepfreq 24

Setting Up a SunOS System
Attention: The kernel must be recompiled and replaced when you configure the KEEPALIVE parameter. It is therefore recommended that you do not use the KEEPALIVE implementation on a SunOS system.

To implement the KEEPALIVE functionality on this system, configure the following TCP/IP parameters in the kernel:

tcp_keepidle
  Determines how frequently to test whether an idle connection is still alive. The default value is 7200 seconds (2 hours).

tcp_keepintvl
  Determines how frequently to check an idle connection if the first check has failed. The default value is 75 seconds.

These parameters are set as standard C declarations in the file /sys/netinet/in_proto.c. After you modify them, you must rebuild the kernel.

Recompiling the Kernel
To recompile the kernel:

1. # more /etc/motd
   Show the current kernel.
2. # arch -k
   Show the architecture.
3. # cd /usr/kvm/sys/sun4m/conf
4. # cp GENERIC GENERIC_NEW
   Copy the kernel configuration and use the new version.
5. # config GENERIC_NEW
   Allocate the directory ../GENERIC_NEW.
6. # cd ../GENERIC_NEW
7. # make
   Create the file vmunix.
8. # mv /vmunix /vmunix.orig
   Save the original vmunix file.
9. # cp vmunix /vmunix
   Copy the new kernel file to the root directory.
10. Reboot the machine to verify that everything is correct.

If you experience any problems, reboot with the old vmunix, using the command:

> b vmunix.orig -s


Setting Up a Sun Solaris System
To implement the KEEPALIVE functionality on a Sun Solaris system, configure the following TCP/IP parameter:

tcp_keepalive_interval
  Determines how frequently to test whether an idle connection is still alive. The default value is 7200000 ms (2 hours).

Use the ndd command to tune this TCP/IP parameter. To list all the appropriate variables, supply the ndd command with the driver name and a ?. For example:

% ndd /dev/tcp \?

Note: Under Solaris 2.5 or higher, you will need to be user root to display these variables.

You can change ndd variables by supplying the -set option, the variable name, and the value. For example:

% ndd -set /dev/tcp tcp_keepalive_interval 1

To set an ndd variable each time you boot the system, add a line for it in the file /etc/rc2.d/S69inet, as follows:

% cat /etc/rc2.d/S69inet
.
.
.
#
# Set configurable parameters.
#
ndd -set /dev/tcp tcp_keepalive_interval 1
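Because tcp_keepalive_interval is expressed in milliseconds, converting from a human-readable unit first avoids mistakes. This fragment (variable names are ours) prints the ndd command rather than executing it, since setting the variable requires root:

```shell
# Express a 2-hour keepalive interval in the milliseconds that
# tcp_keepalive_interval expects, then show the resulting command.
minutes=120
interval_ms=$((minutes * 60 * 1000))
echo "ndd -set /dev/tcp tcp_keepalive_interval $interval_ms"
```

This prints the default of 7200000 ms (2 hours).
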

Setting Up a MIPS ABI System
Attention: Changes to tcp_keepidle directly affect the system kernel. It is therefore not recommended that you use the KEEPALIVE implementation on a MIPS ABI system.

The only parameter to set is tcp_keepidle. To set it, edit the following file:

/var/sysgen/master.d/bsd

You will need at least one of the older network patches to find tcp_keepidle. The IRIX 5.3 Recommended Patch Set is recommended.

Setting Up a Digital OpenVMS System
The KEEPALIVE function is not available on Digital OpenVMS systems.


Setting Up a Digital UNIX System
Modify the Internet Subsystem (inet) KEEPALIVE attributes defined in the /etc/sysconfigtab file, using the dxkerneltuner or sysconfig command.

tcp_keepalive_default
  When set to 1, enables TCP keepalive for all sockets. Use this attribute to override programs that do not set keepalive on their own or for which you do not have access to the application sources. Default value: 0 (disabled).

tcp_keepcnt
  The maximum number of keepalive probes that can be sent before a connection is dropped. Default value: 8 probes.

tcp_keepidle
  Idle time before the first keepalive probe. Default value: 2 hours (in increments of 0.5 seconds).

tcp_keepinit
  Initial connect timeout. Default value: 75 seconds (in units of 0.5 seconds).

tcp_keepintvl
  The time between keepalive probes. Default value: 75 seconds (in increments of 0.5 seconds).
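In /etc/sysconfigtab these attributes take the usual stanza form. The values below are illustrative, not recommendations:

```
inet:
    tcp_keepalive_default = 1
    tcp_keepidle = 14400
    tcp_keepintvl = 150
```

Depending on your system level, a sysconfig reconfigure of the inet subsystem or a reboot is needed before the new values take effect.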


Appendix D. Using LoadLeveler
The Tracker Agent is supplied already linked with LoadLeveler modules at the Version 1 Release 2 base level. Before applying service to LoadLeveler, check the documentation to see if you must also apply service updates to the LoadLeveler submittor.

Ensure that the user loadl has permission to execute the Tracker Agent exit program (eqqlsext). loadl should also have read and write permissions to the directory for temporary and log files.

If you use the Tivoli OPC job-submit exit (EQQUX001) to supply the user ID of jobs to be submitted by the Tracker Agent, the LoadLeveler submittor must run as root. This can be done by changing the owner of the LoadLeveler submittor to root. You must also set the SUID bit on. To do this, login as root and enter:

chown root: eqqls
chmod u+s eqqls

For AIX only: If the LoadLeveler submittor is used, you must set the environment variable LIBPATH for the Tracker Agent to include the LoadLeveler shared library. The Tracker Agent cannot start the LoadLeveler submittor if this is not set. Set LIBPATH using this command:

export LIBPATH=$LIBPATH:/usr/lpp/LoadL/nfs/lib

If LoadLeveler is not installed in the standard directory, adjust the path to the LoadLeveler directory. Then create (still as root) a link for the LoadLeveler shared library, such as:

/usr/lib/libllapi.a -> /usr/lpp/LoadL/lib/libllapi.a


This submittor is based on LoadLeveler and takes advantage of load balancing in a large network of UNIX machines. The LoadLeveler submittor passes every job (LoadLeveler script) to LoadLeveler using the LoadLeveler application program interface (API), llsubmit. The LoadLeveler API submits the job and calls the Tracker Agent exit program to report job status to the Event Writer. Job logs from jobs submitted by the LoadLeveler submittor cannot be captured and returned to the controller, because LoadLeveler does not support this. This submittor uses only one fork system call. Child processes do setuid/setgid if the user ID is supplied by the controller. When the submit is successful, a message is written to the message log file with the returned LoadLeveler job ID. The parent process continues doing fork without waiting for child processes to finish.

Sample LoadLeveler script
This is the file tracker.cmd in the samples directory:

#!/bin/ksh
# @ job_name = tracker
# @ input = /dev/null
# @ output = $(Executable).OUT
# @ error = $(Executable).OUT
# @ notification = never
# @ checkpoint = no
# @ restart = no
# @ queue
echo "Start LL tracker"
date
uname -a
echo "End LL tracker"

Restrictions
You cannot browse the job log of scripts submitted using LoadLeveler.

When the Tracker Agent is started as root, the LoadLeveler submittor uses the tracker user ID. LoadLeveler does not support the submission of jobs using the root user ID.

The controller cannot specify a user ID if the script will run under LoadLeveler: LoadLeveler runs all scripts under its own user ID.


Appendix E. EBCDIC and ASCII Codepage Tables
These tables convert between ISO (ASCII) and EBCDIC single-byte stateless code sets. The following types of conversions are supported: PC to/from ISO, PC to/from EBCDIC, and ISO to/from EBCDIC. Conversion is provided between compatible Latin-1 code sets: double-byte character set conversion is not supported. Conversion tables in the iconvTable directory are created by the genxlt command.
Table 12. Codepage Compatibility
Languages                                    ISO (ASCII)   EBCDIC
U.S. English, Portuguese, Canadian French    ISO8859-1     IBM-037
Danish, Norwegian                            ISO8859-1     IBM-277
Finnish, Swedish                             ISO8859-1     IBM-278
Italian                                      ISO8859-1     IBM-280
Japanese                                     ISO8859-1     IBM-281
Spanish                                      ISO8859-1     IBM-284
U.K. English                                 ISO8859-1     IBM-285
German                                       ISO8859-1     IBM-273
French                                       ISO8859-1     IBM-297
Belgian, Swiss German                        ISO8859-1     IBM-500

A character that exists in the source code set but does not exist in the target code set is converted to a converter-defined substitute character by the iconvTable converters found in the $EQQHOME/nls/loc/iconvTable directory.


Appendix F. Machine and Program Requirements for AIX Systems
This appendix describes the hardware and software needed to operate the Tracker Agent under AIX. Attention: This does not include the hardware and software necessary to run Tivoli OPC or the OPC Tracker Agent enabler.

Hardware Requirements
| The Tracker Agent for AIX/6000 requires a RISC System/6000 computer, with a minimum of 3 MB of RAM, capable of running AIX/6000 Version 4.2.1 or later. 8 MB of RAM is recommended for performance reasons. The Tracker Agent also requires:
   Approximately 20 MB of disk space for the components
   Additional space on the local hard disk for the log files and other temporary data it generates. The volume of data is highly dependent on the volume and output of jobs managed by the Tracker Agent.

Software Requirements
| The Tracker Agent requires AIX Version 4.2.1 or later.


Appendix G. Machine and Program Requirements for HP-UX Systems
This appendix describes the hardware and software needed to operate the Tracker Agent under HP-UX. Attention: This does not include the hardware and software necessary to run Tivoli OPC or the OPC Tracker Agent enabler.

Hardware Requirements
| The Tracker Agent for HP-UX requires a 700-series computer, capable of running HP-UX Version 10 or Version 11. It uses between 500 KB and 1 MB of memory. The Tracker Agent also requires:
   Approximately 20 MB of disk space for the components
   Additional space on the local hard disk for the log files and other temporary data it generates. The volume of data is highly dependent on the volume and output of jobs managed by the Tracker Agent.

Software Requirements
| | The Tracker Agent requires the Hewlett-Packard HP–UX Operating System Version 10 or Version 11.


Appendix H. Machine and Program Requirements for Solaris Systems
This appendix describes the hardware and software needed to operate the Tracker Agent under Solaris. Attention: This does not include the hardware and software necessary to run Tivoli OPC or the OPC Tracker Agent enabler.

Hardware Requirements
The Tracker Agent for Sun Solaris requires a SPARC** computer capable of running Solaris Version 2 Release 3. It uses between 500 KB and 1 MB of memory. The Tracker Agent also requires:
   Approximately 20 MB of disk space for the components
   Additional space on the local hard disk for the log files and other temporary data it generates. The volume of data is highly dependent on the volume and output of jobs managed by the Tracker Agent.

Software Requirements
The Tracker Agent requires the following software: Sun Solaris Version 2 Release 3 or later.


Appendix I. Machine and Program Requirements for SunOS Systems
This appendix describes the hardware and software needed to operate the Tracker Agent under SunOS. Attention: This does not include the hardware and software necessary to run Tivoli OPC or the OPC Tracker Agent enabler.

Hardware Requirements
The Tracker Agent for SunOS requires a SPARC computer capable of running SunOS Version 4 Release 1 Modification Level 3 (Sun Solaris Version 1 Release 1 Modification Level 1). It uses between 500 KB and 1 MB of memory. The Tracker Agent also requires:
   Approximately 20 MB of disk space for the components
   Additional space on the local hard disk for the log files and other temporary data it generates. The volume of data is highly dependent on the volume and output of jobs managed by the Tracker Agent.

Software Requirements
The Tracker Agent requires the following software:
|   SunOS Version 4 Release 1 Modification Level 3 (Sun Solaris Version 1 Release 1 Modification Level 1)


Appendix J. Machine and Program Requirements for Digital OpenVMS Systems
This appendix describes the hardware and software needed to operate the Tracker Agent under Digital OpenVMS. Attention: This does not include the hardware and software necessary to run Tivoli OPC or the OPC Tracker Agent enabler.

Hardware Requirements
The Tracker Agent for Digital OpenVMS requires one of the following:
   A DEC VAX computer, with at least 16 MB of disk space, capable of running OpenVMS Version 7.0 or 7.1
   A DEC Alpha computer, with at least 32 MB of disk space, capable of running OpenVMS Version 7.0 or 7.1
The Tracker Agent also requires:
   Approximately 1 MB of memory
   Additional space on the local hard disk for the log files and other temporary data it generates. The volume of data is highly dependent on the volume and output of jobs managed by the Tracker Agent.

Software Requirements
The Tracker Agent requires the following software: OpenVMS Version 7.0 or 7.1



Appendix K. Machine and Program Requirements for Silicon Graphics IRIX Systems
This appendix describes the hardware and software needed to operate the Tracker Agent under Silicon Graphics IRIX. Attention: This does not include the hardware and software necessary to run Tivoli OPC or the OPC Tracker Agent enabler.

Hardware Requirements
| The Tracker Agent for Silicon Graphics IRIX requires an SGI Indigo2 Family computer capable of running Silicon Graphics IRIX Version 5.3. The Tracker Agent also requires:
   Approximately 1 MB of memory
   Additional space on the local hard disk for the log files and other temporary data it generates. The volume of data is highly dependent on the volume and output of jobs managed by the Tracker Agent.

Software Requirements
The Tracker Agent requires the following software: Silicon Graphics IRIX Version 5.3.


Appendix L. Machine and Program Requirements for Digital UNIX
This appendix describes the hardware and software needed to operate the Tracker Agent under Digital UNIX. Attention: This does not include the hardware and software necessary to run Tivoli OPC or the OPC Tracker Agent enabler.

Hardware Requirements
The Tracker Agent for Digital UNIX requires:
|   A DEC Alpha computer, with at least 32 MB of disk space, capable of running Digital UNIX Version 4.0D or later.
The Tracker Agent also requires:
   Approximately 1 MB of memory
   Additional space on the local hard disk for the log files and other temporary data it generates. The volume of data is highly dependent on the volume and output of jobs managed by the Tracker Agent.

Software Requirements
The Tracker Agent requires the following software: | Digital UNIX Version 4.0D or later


Appendix M. Machine and Program Requirements for OS/390
This appendix describes the hardware and software needed to operate the Tracker Agent under OS/390. Attention: This does not include the hardware and software necessary to run Tivoli OPC or the OPC Tracker Agent enabler.

Hardware Requirements
The Tracker Agent for OS/390 Open Edition runs on any IBM hardware configuration supported by OS/390 Version 1 Release 3. The Tracker Agent requires:
  Approximately 20 MB of file system space
  A display terminal supported by ISPF Version 4 or later, to invoke and run the OPC host dialogs

Software Requirements
The Tracker Agent requires the following software:
  OS/390 Version 1 Release 3 or later, with Open Edition services
  TCP/IP Version 3.2

Appendix N. Applying Tracker Maintenance on Non-AIX Machines
The tracker PTFs (or fixes) are shipped as compressed tar files. The naming convention for non-AIX PTFs is:
  /usr/lpp/tracker/images/tracker.ptf.<platform>.Z
where <platform> is a shortform name for the remote tracker, that is:
  hp     For the HP-UX tracker agent
  sun    For the SunOS tracker agent
  sol    For the Sun Solaris tracker agent
  dux    For the Digital UNIX tracker agent
  mips   For the Silicon Graphics IRIX tracker agent
  omvs   For the OS/390 Open Edition tracker agent
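As an illustration only, the naming convention above can be captured in a small shell helper. This helper is hypothetical (it is not shipped with the product); the path and shortform names are the ones documented in this appendix:

```shell
# Hypothetical helper: map a tracker platform shortform to its PTF image path.
ptf_image() {
  case "$1" in
    hp|sun|sol|dux|mips|omvs)
      echo "/usr/lpp/tracker/images/tracker.ptf.$1.Z"
      ;;
    *)
      echo "unknown platform shortform: $1" >&2
      return 1
      ;;
  esac
}
```

For example, `ptf_image dux` prints /usr/lpp/tracker/images/tracker.ptf.dux.Z, and a shortform not in the documented list is rejected.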

To install the PTFs on a non-AIX tracker machine, follow this procedure:
  cd /usr/lpp            (the directory above the tracker subdirectory tree)
  ftp controller
  binary
  get OPCDATASET(OPCMEMBER) tracker.ptf.<platform>.Z
For OPC, replace OPCDATASET and OPCMEMBER with the correct information for the remote non-AIX tracker you are installing. To extract the compressed tar file on the non-AIX remote tracker machine:
  uncompress tracker.ptf.<platform>.Z
  tar xvof tracker.ptf.<platform>
The tracker subdirectories are updated with the binaries from the PTF.
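A minimal sketch of the same procedure as a shell function, assuming a reachable controller host and a batch-capable ftp client. OPCDATASET and OPCMEMBER remain placeholders that you must replace, exactly as the text says; to keep the sketch verifiable it only emits the command sequence rather than running it:

```shell
# Hypothetical sketch: emit the command sequence for installing a non-AIX
# tracker PTF. $1 = controller host name, $2 = platform shortform
# (hp, sun, sol, dux, mips, or omvs). OPCDATASET(OPCMEMBER) is a placeholder.
ptf_install_commands() {
  cat <<EOF
cd /usr/lpp
ftp $1
binary
get OPCDATASET(OPCMEMBER) tracker.ptf.$2.Z
uncompress tracker.ptf.$2.Z
tar xvof tracker.ptf.$2
EOF
}
```

Running `ptf_install_commands controller dux` lists the documented steps with the Digital UNIX image name filled in.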

Glossary A
ABARS. See Aggregate Backup and Recovery Support. active application description. An application description that is complete and ready for use in planning or scheduling. actual duration. At a workstation, the actual time in hours and minutes it takes to process an operation from start to finish. AD. See application description. adjusted quantity. The current quantity of a special resource, taking the deviation into account. Advanced Program-to-Program Communications (APPC). An implementation of the Systems Network Architecture (SNA) logical unit (LU) 6.2 protocol that allows interconnected systems to communicate and share the processing of programs. Aggregate Backup and Recovery Support (ABARS). A DFHSM facility that manages backup and recovery of user-defined data set groups (aggregates). Aggregate backup copies and related control information are written as portable data and control files on 3480 or 3420 volumes. alert. Two Workload Monitor/2 objects, Operations List and Workstations List, can be used to monitor a Tivoli OPC subsystem and notify you if alert conditions are met. The alert can be a sound (Beep), or a message in a window (Message). The Details view of the Plan object must be open to monitor for plan alerts. The List or Icons views of the Operations List object must be open to monitor for operation alerts. all-days cyclic period. A cyclic period where all days are counted when calculating the interval. APAR. Authorized program analysis report. A report of a problem that is suspected to be caused by a defect in a current, unaltered release of a program. API. See application programming interface. APPC. See Advanced Program-to-Program Communications. application. A measurable and controllable unit of work that completes a specific user task, such as the running of payroll or financial statements. The smallest entity that an application can be broken down into is an operation. Generally, several related operations make up an application.
application description (AD). A database description of an application. application group. Type of application description which holds run cycle and calendar information for standard applications or job descriptions which have been defined as a member of the group. application ID. The name of an application. (For example, PAYROLL or DAILYJOBS.) application programming interface (API). A formally-defined programming language interface between an IBM system control program or a licensed program and the user of a program. application transaction program (ATP). A program that uses the Advanced Program-to-Program Communications (APPC) application programming interface (API) to communicate with a partner program at a remote node. application version. See versions. ATP. See application transaction program. authority. The ability to access a protected resource. authority group. A name used to generate a RACF resource name for authority checking. automatic events. Events recognized by or triggered by an executing program. Automatic events are usually generated by Tivoli OPC tracking programs but can also be created by a user-defined program. automatic hold/release. Function used to control jobs that are submitted outside Tivoli OPC. It allows you to define whether such jobs should be automatically released at the appropriate time if placed in HOLD status when submitted. automatic job and started-task recovery. A Tivoli OPC function that lets you specify, in advance, alternative recovery strategies for operations that end in error. automatic-reporting workstation. A workstation (for example, a processor or printer) that reports events (the starting and stopping of operations) in real time to Tivoli OPC.

availability. The degree to which a system (and in Tivoli OPC, an application) or resource is ready when needed to process data.

B
batch loader. A Tivoli OPC batch program that you can use to create and update information in the application-description and operator-instruction databases. BMP. Batch message processing. buffer. A memory area reserved for performing input/output (I/O) operations.

C
calendar. The data that defines the operation department's work time in terms of work days and free days. capacity. The actual number of parallel servers and workstation resources available during a specified open interval. capacity ceiling. The maximum number of operations that a workstation can handle simultaneously. catalog. A directory of files and libraries, with reference to their locations. A catalog may contain other information such as the types of devices in which the files are stored, passwords, blocking factors. catalog management. Catalog management is a recovery function of Tivoli OPC, which handles the deleting or uncataloging of datasets created in a job operation that ends in error. CICS. Customer Information Control System. closed workstation. A workstation that is unavailable to process work for a specific time, day, or period. Common Programming Interface (CPI). A consistent set of specifications for languages, commands, and calls to enable applications to be developed across all Systems Application Architecture (SAA) environments. complete (C). The status of an operation indicating that it has finished processing. completion code. A Tivoli OPC system code that indicates how the processing of an operation ended at a workstation. See error code. complex of processors. A JES2 Multi-Access Spool system or a JES3 system with more than one processor. computer workstation. (1) A workstation that performs MVS processing of jobs and started-task operations, and that usually reports status to Tivoli OPC automatically. (2) A processor used as a workstation. It can refer to single processors or multiprocessor complexes serving a single job queue (for example, JES2 or JES3 systems). contingency plan. A plan for emergency response, backup procedures, and post-disaster recovery. Synonymous with disaster recovery plan, emergency plan. controller. The Tivoli OPC component that runs on the controlling system, and that contains the Tivoli OPC tasks that manage the Tivoli OPC plans and databases. controlling system. The system that the controller runs on. control on servers. If a workstation is defined with control on servers, OPC/ESA will not start more operations at the workstation than there are available servers. conversation. In Advanced Program-to-Program Communications (APPC), a connection between two transaction programs over a logical unit-logical unit (LU-LU) session that allows them to communicate with each other while processing a transaction. conversation verb. In Advanced Program-to-Program Communications (APPC), one of the verbs a transaction program issues to perform transactions with a remote program. CP. See current plan. CPI. See Common Programming Interface. CPI-C. Common Programming Interface for Communications. See also Common Programming Interface. cross-system coupling facility (XCF). MVS components and licensed programs use the XCF services to provide additional functions in a SYSPLEX. critical path. The route, within a network, with the least slack time. current plan (CP). A detailed plan of system activity that covers a period of at least 1 minute, and not more than 21 days. A current plan typically covers 1 or 2 days. cyclic interval. The number of days in a cyclic period. cyclic period. A period that represents a constant number of days. There are two types of cyclic periods: Work-days-only cyclic period, where only the work days are counted when calculating the number of days in the period. All-days cyclic period, where all days are counted.

D
daily planning. The process of creating a current plan. DASD. Direct access storage device. database. A collection of data that is fundamental to a system. Tivoli OPC uses six databases: calendar, period, workstation description, JCL variable table, application description, and operator instruction. Data Facility Hierarchical Storage Manager (DFHSM). A licensed MVS program which provides automatic and command functions that manage user storage space and data recovery. Data Facility Systems Management Subsystem/MVS (DFSMS/MVS). A group of licensed MVS programs which transform system environments from user-managed DASD volumes to administrator-controlled, system-managed data sets. Data Lookaside Facility (DLF). The MVS/ESA component that manages Hiperbatch objects. data processing center (DP center). A center or department, including computer systems and associated personnel, that performs input, processing, storage, output, and control functions to accomplish a sequence of operations on data.

Data Store. The Tivoli OPC component managing the job runtime information at the tracked system. It is dedicated to the storing and possible retrieval of sysout datasets belonging to OPC-submitted jobs, to optimize the sysout availability. DB2. DATABASE 2. DBCS. Double-byte character set. ddname. Data definition name. deadline. See deadline date and deadline time. deadline date. The latest date by which an occurrence must be complete. deadline time. The latest time by which an occurrence must be complete. deadline WTO message. You can specify that Tivoli OPC issue an operator message (EQQW776I) when a started operation has not been marked as completed before the deadline time. In addition to the standard message, the user-defined text that describes the operation is issued as part of the WTO. default calendar. (1) A calendar that you have defined for Tivoli OPC to use when you do not specify a calendar in an application description. (2) A calendar that Tivoli OPC uses if you have neither specified a calendar in an application description, nor defined your own default calendar. dependency. A relationship between two operations in which the first operation must successfully finish before the second operation can begin. descriptive text. User-written text describing the operation. This text is also issued as part of the write-to-operator message if the operation has been started, exceeds its deadline, and has the deadline write-to-operator (WTO) option specified. Details notebook. See Details view. Details view. A view of a Workload Monitor/2 object showing details about the object. The Details view of the Plan object shows information about the current plan. The Details view of the Operation object shows information about the selected operation. The Details view of the Workstation object shows information about the selected workstation. deviation. A temporary variation in the quantity of a special resource. DFHSM. See Data Facility Hierarchical Storage Manager. DFSMS/MVS. See Data Facility Systems Management Subsystem/MVS. dialog. The user's online interface with Tivoli OPC.

Disaster Recovery Plan (DRP). A plan for emergency response, backup procedures, and post-disaster recovery. Synonymous with contingency plan, emergency plan. DLF. See Data Lookaside Facility. DP center. See data processing center. DRP. See Disaster Recovery Plan. duration. The length of time an operation is active at a workstation.


E
end user. A person who uses the services of the data processing center. ended-in-error (E). The Tivoli OPC reporting status for an operation that has ended in error at a workstation. error code. A code set by Tivoli OPC to describe how the processing of an operation ended at a computer workstation. ETT. See event-triggered tracking. estimated duration. The estimated length of time an operation will use a workstation. This is initially based on a value that is provided when the operation is defined, but can be adjusted automatically by Tivoli OPC's feedback mechanism to reflect actual durations. event. An action that changes an operation's status and changes the current plan. event manager. The Tivoli OPC function that processes all tracking events and determines which of these are Tivoli OPC-related. event reader. A Tivoli OPC task that reads event records from an event dataset. event tracking. A function of Tivoli OPC that follows events in the operations department in real time and records status changes in the current plan. event-triggered tracking (ETT). A component of Tivoli OPC that waits for specific events to occur, and then adds a predefined application to the current plan. ETT recognizes two types of events: the reader event, which occurs when a job enters the JES reader, and the resource event, which occurs when the availability status of a special resource is set to “yes”. event writer. A Tivoli OPC task that writes event records in an event dataset. exclusive resource. A resource that can be used by only one operation at a time. expected arrival time. The time when an operation is expected to arrive at a workstation. It can be calculated by daily planning or specified in the long-term plan. extended status code. Together with the normal status codes, Tivoli OPC maintains extended status codes that provide additional information about the status of operations. The extended status code is not always present. external dependency. 
A relationship between two occurrences, in which an operation in the first occurrence (the predecessor) must successfully finish before an operation in the second occurrence (the successor) can begin processing.

F
feedback limit. A numeric value in the range 100–999 that defines the limits within which actual data that is collected in tracking is fed back and used by Tivoli OPC. filter criteria. Input values that are used to limit the mass update of applications to only those specified. This term is used in the Tivoli OPC ISPF dialogs. first critical operation. An operation of an occurrence that has the earliest latest-start-time. The first critical operation of an occurrence determines the critical path. first operation. (1) An operation in an occurrence that has no internal predecessor. (2) The start node in a network. fixed resources. A set of resource names used to check the authority of users to access the Tivoli OPC dialogs. form number. A user-defined code that identifies the type of paper to be used for an operation on a printer workstation. Tivoli OPC can use the form number to identify the different print operations belonging to one job. free day. Any day that is not a work day. free-day rule. A rule that determines how Tivoli OPC will treat free days when the application run day falls on a free day.

G
general workstation. A workstation where activities other than printing and processing are carried out. A general workstation reporting to Tivoli OPC is usually manual, but it can also be automatic. Manual activities can include data entry and job setup. generic alert. An alert that is broadcast by Tivoli OPC, and collected by NetView, when an operation ends in error. You can specify this as an option when defining application descriptions. global search character. In Tivoli OPC, a percent sign (%), which represents any single character, or an asterisk (*), which represents any character string of any length. global variable table. The JCL variable table that Tivoli OPC checks for a variable substitution value if no

value is found in the specific JCL variable table that is associated with the operation. Graph view. (1) A view of the Workload Monitor/2 Workstation object. Shows the total number of operations with different statuses for a single workstation. (2) In the Graphical User Interface for Application Description, a view of the operations that make up an application. It shows the workstation where each operation is run, and dependencies between the operations. Graphs view. A view of the Workload Monitor/2 Workstations List object. Shows the total number of operations with different statuses for each of the workstations that are included in the object. group definition. The application group to which the application description or job description is a member.

H
highest return code. A numeric value in the range 0–4095. If this return code is exceeded during job processing, the job will be reported as ended-in-error. Hiperbatch. The MVS/ESA facility that stores VSAM and QSAM data in Hiperspace for access by multiple jobs. The facility can significantly reduce the execution time of certain batch streams that access VSAM and QSAM data sets. Hot standby. Using the MVS/ESA cross-system coupling facility (XCF), you can include one or more standby controllers in your configuration. A standby system can take over the functions of a controller if the controller fails or if the MVS/ESA system that it was active on fails.

I
Icons view. The Workload Monitor/2 objects, Workstations List and Operations List, contain other objects. The Icons view shows an icon for each contained object. IMS. Information Management System. incident log. An optional function available under the job completion checker. initiator/terminator. The job scheduler function that selects jobs and job steps to be executed, allocates input/output devices for them, places them under task control, and at completion of the job, supplies control information for writing job output on a system output unit. in-progress operation. An operation with a status of A, R, *, I, E, or S. input arrival time (IAT). The user-defined date and time when an operation or an application is planned to be ready for processing. intermediate start. The date and time an operation started after processing was interrupted. internal date. Internally, Tivoli OPC uses a two-digit year format when handling dates. In order to handle dates before and after 31 December 1999 correctly, Tivoli OPC uses an origin year of 72 for the internal century window. This means that internally the year 1972 is represented as 00 and 2071 is represented as 99. internal dependency. A relationship between two operations within an occurrence, in which the first operation (the predecessor) must successfully finish before the second operation (the successor) can begin. interrupted (I). A Tivoli OPC reporting status for an operation that indicates that the operation has been interrupted while processing. ISPF. Interactive System Productivity Facility.

J
JCC. See job completion checker. JCL. Job control language. A problem-oriented language designed to express statements in a job that are used to identify the job or describe its requirements to an operating system. JCL tailoring. Tivoli OPC provides automatic JCL tailoring facilities, which enable jobs to be automatically edited using information that is provided at job setup or submit. JCL variable table. A group of related JCL variables. See variable table. JES. Job entry subsystem. A system facility for spooling, job queuing, and managing I/O. job. (1) A set of data that completely defines a unit of work for a computer. A job usually includes all necessary computer programs, linkages, files, and instructions to the operating system. (2) In Tivoli OPC, an operation performed at a computer workstation. job class. Any one of a number of job categories that can be defined. By classifying jobs and directing initiators to initiate specific classes of jobs, it is possible to control a mixture of jobs that can be run concurrently.

job-completion checker (JCC). An optional function of Tivoli OPC that allows extended checking of the results from CPU operations. job description. A single processor (job or started-task) operation and its dependencies. Job Description dialog. The ISPF dialog used to create job descriptions. job ID. The JES job ID of the job associated with the operation. job name. The name of the job associated with an operation. The job name is assigned in the JOB statement of a job. It identifies the job to the system. job preparation. Job preparation involves modifying jobs in preparation for processing. This can be performed manually, by a job preparer, or automatically by Tivoli OPC JCL tailoring functions. job setup. The preparation of a set of JCL statements for a job at a job setup workstation. Job setup can be performed manually by an operator, or automatically by Tivoli OPC. job setup workstation. A general workstation defined with the job setup option. A job setup workstation lets you modify your job or STC JCL before execution. job submission. A Tivoli OPC process that presents jobs to MVS for running on a Tivoli OPC-defined workstation once the scheduling criteria for the operation is met. job tracking. A Tivoli OPC process that communicates with operating systems that control computer workstations. JS. The JCL repository dataset.

K
kanji. A character set for the Japanese language.

L
last operation. (1) An operation in an occurrence that has no internal successor. (2) The terminating node in a network. latest out time. See latest start. latest start. The latest day and time (calculated by Tivoli OPC) that an operation can start and still meet the deadline specified for the operation and any successor operations. The latest out time for an operation is identical to the latest start time. layout. In the Graphical User Interface for Application Description, a user-created file that determines which information about each application is displayed when you view a list of application descriptions. An application description contains many details about the application, such as application ID, valid to date, application status, and last user. A layout specifies which details the user wishes to view. layout ID. A unique name that identifies a specific ready or error list layout. limit for feedback. See feedback limit. list, application. In the Graphical User Interface for Application Description, a list of application definitions from which the user can select one to work with. It consists of application definitions selected according to user-specified criteria. List view. The Workload Monitor/2 objects Workstations List and Operations List contain other objects. The List view shows a list of the contained object and displays data about each contained object. local. Synonym for channel-attached. local processor. (1) In a complex of processors under JES3, a processor that executes users' jobs and that can assume global functions if the global processor fails. (2) In Tivoli OPC, a processor in the same installation that communicates with the controlling Tivoli OPC processor through shared DASD or XCF communication links. logical unit (LU). In Systems Network Architecture (SNA), a port through which an end user accesses the SNA network in order to communicate with another end user and through which the end user accesses the functions provided by system services control points (SSCPs). logical unit 6.2 (LU 6.2). A type of Systems Network Architecture (SNA) logical unit (LU) for communication between peer systems. Synonymous with APPC protocol; see Advanced Program-to-Program Communications (APPC). long-term plan (LTP). A high-level plan of system activity that covers a period of at least 1 day, and not more than 4 years. It serves as the basis for a service level agreement with your users, and as input to daily planning. LU. See logical unit. LU-LU session type 6.2. See logical unit 6.2. LTP. See long-term plan.

M
manipulation button. One of the two mouse buttons. With default mouse settings, the manipulation button is mouse button 2, the button on the right. You press and hold this button to move an object, for example, to drag an object to a printer. Pressing the manipulation button once when the pointer is on an object, opens the object's pop-menu. manual reporting. A type of workstation reporting in which events, once they have taken place, are manually reported to Tivoli OPC. This type of reporting requires that some action be taken by a workstation operator. Manual reporting is usually performed from a list of ready operations. mass updating. A function of the Application Description dialog in which a large update to the application database can be requested. MCU. Multiple Console Support. Merged Graph view. A view of the Workload Monitor/2 Workstations List object. Shows the total number of operations with different statuses for all the workstations that are included in the object. The information is shown in a single graph. modify current plan (MCP). A Tivoli OPC dialog function used to dynamically change the contents of the current plan to respond to changes in the operation environment. Examples of special events that would cause alteration of the current plan are: a rerun, a deadline change, or the arrival of an unplanned application. most critical application occurrences. Those unfinished applications whose latest start time is less than or equal to the current time.

N
NCF. See Network Communication Function. NCP. Network Control Program. NetView operations. Operations that consist of an operator instruction that Tivoli OPC passes to NetView. These operations are run at a general workstation with the WTO option specified. Network Communication Function (NCF). A VTAM application that submits work to remote systems and passes events back to the Tivoli OPC tracker subsystem on the Tivoli OPC controlling system. noncyclic period. A period that does not represent a constant number of days or work days. Examples: quarter, academic semester. nonreporting. A reporting attribute of a workstation, which means that information is not fed back to Tivoli OPC.

O
occurrence. An instance of an application in the long-term plan or current plan. An application occurrence is one attempt to process that application. Occurrences are distinguished from one another by run date, input arrival time, and application ID. For example, an application that runs four times a day is said to have four occurrences per day. occurrence group. Consists of one or more application occurrences added to the long-term plan or current plan, where such occurrences are defined as belonging to a particular application group specified in the group definition field of the application description or job description. offset. Values, in the ranges 1 to 999 and -1 to -999, that indicate which days of a calendar period an application runs on. This is sometimes called displacement. OI. See operator instruction. OPC/ESA. Operations Planning and Control/ESA. OPC host. The processor where Tivoli OPC updates the current plan database. OPC local processor. A processor that connects to the Tivoli OPC host or remote processor through shared event datasets or XCF communication links. OPC remote processor. A processor connected to the Tivoli OPC host processor via an SNA network. A Tivoli OPC event writer and an event transmitter (Tivoli OPC Network Communication Function) are installed on the remote processor and transmit events to the Tivoli OPC host processor via VTAM. open interval. The time interval during which a workstation is active and can process work. operation. A unit of work that is part of an application and that is processed at a workstation. operation deadline. The latest time when the operation must be complete. operation latest out. For an operation that has predecessors, the latest out date and time are the latest start time for the first critical operation in the application occurrence. If the first critical operation has not started by this date and time, then the operation is flagged as late, because it will be impossible for it to start on time based on the sum of the planned durations of all the operations on its critical path. operation number. The number of the operation. This uniquely identifies each operation in an application. Operation object. An object contained in the Workload Monitor/2 Operations List object. It represents one operation in the current plan. operation status. The status of an operation at a workstation. operation waiting for arrival. The status of an operation that cannot begin processing because the necessary input has not arrived at a workstation. This status is applicable only for operations without predecessors. Operations List object. A Workload Monitor/2 object that can be used to display information about operations in the current plan. It contains Operation objects. operator instruction (OI). An instruction that an operator can view when the operator must manually intervene in Tivoli OPC operations. origin date. The date that a period (cyclic or noncyclic) starts on. owner ID. Owner ID is an identifier that represents the application owner.

P
parallel operations. Operations that are not dependent on one another and that can, therefore, run at the same time. parallel servers. These represent the number of operations that can be processed concurrently by that workstation. partner transaction program. An Advanced Program-to-Program Communications (APPC) transaction program located at the remote partner. PDF. Program Development Facility. pending application description. An application description that is incomplete and not ready for use in planning or scheduling. See active application description. pending occurrence. The dummy occurrence created by the daily planning process to honor a dependency that has been resolved in the long-term plan but cannot be resolved in the current plan because the predecessor's input arrival time is not within the current plan end time. pending predecessor. A predecessor dependency to an occurrence which is defined in the long-term plan but not yet included in the current plan. See also pending occurrence. period. A time period defined in the Tivoli OPC calendar. personal workstation. In Tivoli OPC documentation this term is used to refer to a computer that runs IBM Operating System/2. PIF. See program interface (PIF). plan. See current plan. Plan object. A Workload Monitor/2 object that can be used to get information about the status of the current plan. When the Details view of the Plan object is open, the object monitors for current plan alerts if alert conditions have been specified. predecessor. An operation in an internal or external dependency that must finish successfully before its successor operation can begin. print workstation. A workstation that prints output and usually reports status to Tivoli OPC automatically. printout routing. The ddname of the daily planning printout dataset. priority. The priority of an operation is a value from 1 to 9 (where 1=low, 8=high, and 9=urgent). It is one of the factors that determines how Tivoli OPC schedules applications. program interface (PIF). A Tivoli OPC interface that lets user-written programs issue various requests to Tivoli OPC.

Q
query current plan (QCP) dialog. An ISPF dialog that displays information taken directly from the current plan. The information includes information on operations, workstations, and application occurrences. QSAM. Queued Sequential Access Method.

136

Tivoli OPC Tracker Agents for AIX, UNIX, VMS, OS/390

R
R

RACF. Resource Access Control Facility.

read authority. Access authority that lets a user read the contents of a dataset, file, or storage area, but not change it.

ready (R). The status of an operation indicating that predecessor operations are complete and that the operation is ready for processing.

ready list. An ISPF display list of all the operations ready to be processed at a workstation. Ready lists are the means by which workstation operators manually report on the progress of work.

receive. (1) To obtain a message or file from another computer. Contrast with send. (2) In Communications Manager, the command used to transfer a file from a host.

record format. The definition of how data is structured in the records contained within a file. The definition includes record names, field names, and field attributes, such as length and data type.

recovery. See automatic job and started-task recovery.

remote job tracking. The function of tracking jobs on remote processors connected by VTAM links to a Tivoli OPC controlling processor. This function enables a central site to control the submitting, scheduling, and tracking of jobs at remote sites.

remote processor. A processor connected to the Tivoli OPC host processor via a VTAM network.

replan current period. A Tivoli OPC function that recalculates planned start times for all occurrences to reflect the actual situation.

reporting attribute. A code that specifies how a workstation will report events to Tivoli OPC. A workstation can have one of four reporting attributes:

A      Automatic
C      Completion only
N      Nonreporting
S      Manual start and completion

reroutable. Tivoli OPC can reroute operations if the workstation that they are scheduled to run on is inactive, for example, if communication links to the system where the workstation is located fail. This option applies to operations only when they have status R (ready) or W (waiting). When you define an operation, you can specify one of the following reroutable options:

Y      The operation is eligible to be rerouted if the workstation becomes inactive.
N      The operation will not be rerouted, even though the workstation has an alternate destination.
blank  The operation will be rerouted according to the WSFAILURE parameter on the JTOPTS initialization statement. This is the default.

rerun. A Tivoli OPC function that lets an application, or part of an application, that ended in error be run again.

Resource Object Data Manager. A licensed program that monitors resources and informs subscribing applications of their availability.

restartable. If an operation is defined as restartable, Tivoli OPC can automatically restart that operation if the workstation that it is using becomes inactive. This option applies only to the operation while it has status S (started). The operation is reset to status R (ready).

return code. An error code that is issued by Tivoli OPC for automatic-reporting workstations.

RODM. See Resource Object Data Manager.

row command. An ISPF dialog command used to manipulate data in a table.

rule. A named definition of a run cycle that determines when an application will run.

run cycle. A specification of when an application is to run. The specification may be in the form of a rule or as a combination of period and offset.

S

SAA. See Systems Application Architecture.

SAF. System Authorization Facility.

schedule. (1) The current or long-term plan. (2) To determine the input arrival date and time of an occurrence or operation.

selection button. One of the two mouse buttons. With default mouse settings, the selection button is mouse button 1, the button on the left. You use this button to select windows, menu choices, pages in a notebook, and buttons. Pressing the selection button twice when the pointer is on an object opens the object to the default view.
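The three reroutable option values (Y, N, and blank) described in this glossary lend themselves to a small lookup. The following Python sketch is purely a reading aid; the function name and the sample WSFAILURE value are assumptions for illustration, not Tivoli OPC code:

```python
# Illustrative sketch only: models the three reroutable option values
# described in the glossary. Not actual Tivoli OPC code.
def reroute_behavior(option, wsfailure_default="LEAVE"):
    """Return how an operation is treated when its workstation fails.

    option: 'Y', 'N', or '' (blank). wsfailure_default stands in for the
    JTOPTS WSFAILURE parameter; 'LEAVE' is a hypothetical sample value.
    """
    if option == "Y":
        return "reroute to alternate destination"
    if option == "N":
        return "do not reroute"
    # blank: defer to the WSFAILURE parameter on JTOPTS (the default)
    return f"follow WSFAILURE setting: {wsfailure_default}"
```

For example, `reroute_behavior("")` shows that a blank option defers to the JTOPTS WSFAILURE setting.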


send. (1) To send a message or file to another computer. Contrast with receive. (2) In Communications Manager, the command used to transfer a file to the host.

server. The optional Tivoli OPC component that runs on the controlling system and handles requests from remote ISPF dialogs, remote PIF applications, and the Graphical User Interface for Application Description.

service functions. Functions of Tivoli OPC that let the user deal with exceptional conditions, such as investigating problems, preparing APAR tapes, and testing Tivoli OPC during implementation.

service level agreement. An agreement made between the data processing center and its user groups indicating the service hours and levels, as well as the kind of service the DP center will provide.

Settings notebook. See Settings view.

Settings view. A view of an object that is used to specify properties of the object itself.

shared DASD. Direct access storage devices that can be accessed from more than one processor.

shared resource. A special resource or workstation resource that can be used simultaneously by more than one operation.

slack. 'Spare' time on the critical path, calculated as the deadline, less the input arrival, less the sum of operation durations.

SMF. System Management Facilities. An MVS component that collects and records system and job-related information.

smoothing factor. A value in the range 0-100 that controls the extent to which actual durations are fed back into the application description database.

SMP. System Modification Program.

SNA. See Systems Network Architecture.

special resource. A resource that is not associated with a particular workstation, such as a dataset.

splittable. Refers to a workstation where operations can be interrupted while being processed.

standard. User-specified open intervals for a typical day at a workstation.

started (S). A Tivoli OPC reporting status, for an operation or an application, indicating that an operation or an occurrence has started.

started-task computer workstation. A computer workstation that supports started tasks, specified by giving the workstation the STC option. Operations defined to this workstation are treated as started tasks, not as jobs.

started-task operations. Operations that start or stop started tasks. These operations are run at a computer workstation with the STC option specified.

status. The current state of an operation or occurrence.

status code. A code that represents the current state of an operation. The status code is often associated with an extended status code. The status of an operation can be one of the following:

A      The operation is waiting for input to arrive.
R      The operation is ready for processing (all predecessors have been reported as complete).
*      The operation is ready for processing. There is a predecessor at a nonreporting workstation, but all other predecessors are reported as complete.
S      Operation processing has started.
I      Operation processing has been interrupted.
C      Operation processing has completed.
E      The operation has ended in error.
W      The operation is waiting for a predecessor to complete.
D      The operation has been deleted from the current plan.
U      The operation status is not known.

submit/release dataset. A dataset shared between the Tivoli OPC host and a local Tivoli OPC processor that is used to send job-stream data and job-release commands from the host to the local processor.

subresources. A set of resource names and rules for the construction of resource names. Tivoli OPC uses these names when checking a user's authority to access individual Tivoli OPC data records.

subsystem. A secondary or subordinate system, usually capable of operating independently of, or asynchronously with, a controlling system.

successor. An operation in an internal or external dependency that cannot begin until its predecessor completes processing.

SYSOUT. A system output stream; also, an indicator used in data definition statements to signify that a dataset is to be written on a system output unit.
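The operation status codes defined in this glossary can be summarized as a simple mapping. This Python table is only a reading aid for the glossary text, not part of any Tivoli OPC interface; the `describe` helper is a hypothetical name:

```python
# Reading aid only: operation status codes as described in the glossary.
# Not a Tivoli OPC API.
STATUS_CODES = {
    "A": "waiting for input to arrive",
    "R": "ready for processing",
    "*": "ready; predecessor at a nonreporting workstation",
    "S": "started",
    "I": "interrupted",
    "C": "completed",
    "E": "ended in error",
    "W": "waiting for a predecessor to complete",
    "D": "deleted from the current plan",
    "U": "status not known",
}

def describe(code):
    """Look up a one-line description for a status code letter."""
    return STATUS_CODES.get(code, "unrecognized status code")
```

For example, `describe("S")` yields "started", while an unknown letter falls through to "unrecognized status code".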


SYSOUT class. An indicator used in data definition statements to signify that a dataset is to be written on a system output unit. It applies only to print workstations.

SYSPLEX. An MVS/ESA systems complex that provides systems management enhancements for coordinating and controlling the data processing facility across multiple systems, while minimizing complexity. It is implemented using the 9037 Sysplex Timer and the cross-system coupling facility (XCF) component of MVS/ESA.

Systems Application Architecture (SAA). A formal set of rules that enable applications to be run, without modification, in different computer environments.

Systems Network Architecture (SNA). The description of the logical structure, formats, protocols, and operational sequences for transmitting information units through networks, and also the operational sequences for controlling the configuration and operation of networks.

T

tail plan. A plan created during the daily planning process that includes only tail work; that is, work that started during or before the current planning period and that extends beyond its end.

TCP/IP. Transmission Control Protocol/Internet Protocol. A set of communication protocols that support peer-to-peer connectivity functions for both local and wide-area networks.

temporary operator instructions. Operator instructions that have a specific time limit during which they are valid. They are displayed to the workstation operator only during that time period.

time dependent. Tivoli OPC attempts to start operations as soon as possible, when all dependencies have been resolved and processing resources are available. However, you can specify that an operation is time-dependent, so that Tivoli OPC will not start it until a specific time.

time zone support. A feature of Tivoli OPC that lets applications be planned and run with respect to the local time of the processor that runs the application. Some networks might have processors in different time zones. The controlling processor makes allowances for differences in time during planning activities to ensure that interacting activities are correctly coordinated.

TP. See application transaction program.

tracker. The Tivoli OPC component that runs on every system in your complex. It acts as the communication link between the MVS system that it runs on and the controller.

tracking event log. A log of job-tracking events and updates to the current schedule.

transport time. The time allotted for transporting materials from the workstation where the preceding operation took place to the workstation where the current operation is to occur. The transport time is used only for planning purposes. Operations are started irrespective of the transport time specified.

TSO. Time Sharing Option.

turnover. A subfunction of Tivoli OPC that is activated when Tivoli OPC creates an updated version of the current plan.

U

undecided (U). A Tivoli OPC reporting status, for an operation or an application, indicating that the status is not known.

update authority. (1) Access authority to use the ISPF/PDF edit functions of the Tivoli OPC dialog. The authority is given to the user via RACF. (2) Access authority to modify a master file or dataset with the current information.

V

validity period. The time interval defined by an origin date and an end date within which a run cycle or an application description is valid.

variable table. A group of related JCL variables. Tivoli OPC can check these variable tables for substitution values for variables that occur in JCL. This substitution can occur during job setup or at job submit.

versions. Applications with the same ID but different validity dates.

VSAM. Virtual Storage Access Method.

VTAM. Virtual Telecommunications Access Method.

W
waiting (W). A status indicating that an application is waiting for a predecessor operation to complete.

waiting list. A list of jobs that have been submitted but still have uncompleted predecessors. Operations are included in the waiting list if the JCL is not submitted by the Tivoli OPC controller and the Tivoli OPC tracker has been started with HOLDJOB(YES).

work day. A day on which applications can normally be scheduled to start.

work-days-only cyclic period. A cyclic period where only work days are counted when calculating the interval.

work-day end time. The time when one Tivoli OPC work day ends and the next day begins. By default, this time is midnight. For example, if the work-day end time is 02:00, work for Friday can continue until 02:00 on Saturday morning, even if Saturday is a free day. If Saturday and Sunday are free days, no new work will be started until 02:00 on Monday.

Workload Monitor/2. A part of Tivoli OPC. It runs on OS/2 Version 2 (or later) and communicates with a Tivoli OPC controller subsystem. It carries data about the subsystem's current plan from the host to a workstation, and can update operation status.

workstation. (1) A unit, place, or group that performs a specific data processing function. (2) A logical place where work occurs in an operations department. Tivoli OPC requires that you define the following characteristics for each workstation: the type of work it does, the quantity of work it can handle at any particular time, and the times it is active. The activity that occurs at each workstation is called an operation. (3) See also personal workstation.

workstation description database. A Tivoli OPC database containing descriptions of the Tivoli OPC workstations in the operations department.

workstation resource. A physical resource, such as a tape drive, that must be allocated among jobs. When you define a workstation, you can specify the quantity of each of two resources (R1 and R2) that are available to operations. When defining operations to that workstation, you can specify the number of these resources that must be available for the operation to start on that workstation.

workstation type. Each workstation can be one of three types: computer, printer, or general.

write-to-operator workstation. A general workstation that lets you use Tivoli OPC scheduling facilities to issue a write-to-operator (WTO) message at a specific operator console defined by the workstation destination. NetView can intercept the WTO message and take necessary action.

WTO message. Write-to-operator message.

WTO operations. Operations that consist of an operator instruction that Tivoli OPC passes to NetView. These operations are run at a general workstation with the WTO option specified.

X
XCF. MVS/ESA cross-system coupling facility.

XRF. Extended recovery facility.


Index

A
adding users to a group 12
administrator user ID 12
AIX requirements, hardware and software 111
automatic startup 62

B
boot startup 62

C
checking the configuration parameter file 73
checkpoint file 54, 72, 96
CODEPAGE keyword of ROUTOPTS 9
codepage tables 109
communications, checking 19
configuration file, creating 48
configuration parameters, controller 9
controller
   description 1
   parameters 9
controller IP address parameter 52
controller port number parameter 52
controller_ipaddr 52
controller_portnr 52
controller_type 52
creating
   administrator user ID 12
   directory 15
   group ID (GID) 12
   user ID 12

D
diagnosing problems 69
Digital OpenVMS requirements, hardware and software 119
Digital UNIX requirements, hardware and software 123
directories 15
directory structure 15, 55
distribution media 3

E
enabler support, loading for the Tracker Agent 7
environment variable
   EQQHOME 48
   EQQINSTANCE 48
   PATH 48
eqqclean script 66
eqqdelete script 65
eqqdr_retry 52
eqqfilespace 52
EQQHOME variable 48
eqqinit 97
EQQINSTANCE variable 48
eqqmsgq 52
eqqshell 52
eqqshmkey 52
eqqtr_retry 52
eqqtw_retry 52
eqqverify 73
errors from jobs 61
errors, debugging 69
event logfile 52, 53, 72, 96
event_logsize 52
ew_check_file 52
exit codes 70

F
failures and problems 69
files, planning 15
fixing problems 69

G
group ID (GID) 12

H
hardware requirements
   for AIX 111
   for Digital OpenVMS 119
   for Digital UNIX 123
   for HP-UX 113
   for OS/390 Open Edition 125
   for Silicon Graphics IRIX 121
   for Solaris 115
   for SunOS 117
home directory 15
   setting 48
host names 17
how to install fixes on a non-AIX tracker machine 127
HP-UX requirements, hardware and software 113

I
initialization statements
   JTOPTS, example of 10
   OPCOPTS, example of 10
   ROUTOPTS
      CODEPAGE keyword 9
      example of 10
      TCP keyword 9
      TCPIPID keyword 9
      TCPIPPORT keyword 9
      TCPTIMEOUT keyword 9
installation 11—57
installation tasks 11
installation tasks for the Tivoli OPC controller 5
installing
   loading Tracker Agent enabler software 7
   loading Tracker Agent software 5
   overview 2
installing required features 36, 38, 40, 42, 44
IP address parameter 52
ipc_base 52

J
job output log actions parameter 52
Job Scheduler, introduction 2
job_log 52
JTOPTS initialization statement, example of 10

K
KEEPALIVE parameters
   on a Digital UNIX system 105
   on a MIPS ABI system 104
   on a Sun Solaris system 104
   on a SunOS system 103
      recompile the kernel 103
   on an AIX system 101
   on an HP system 102
   on the controller machine 101
   unavailable on a Digital OpenVMS system 104
kernel considerations 15
key generator for IPC queues parameter 52

L
links to package directory, creating 97
LoadLeveler 107
LoadLeveler, specifying 54
local code page parameter 52
local directory, initializing 97
local_codepage 52
local_ipaddr 52
local_portnr 52
log directories, creating 48
log files
   cleaning 65
   event logfile 53, 72, 96
   submit checkpoint file 54, 72, 96

M
machine requirements
   for AIX 111
   for Digital OpenVMS 119
   for Digital UNIX 123
   for HP-UX 113
   for OS/390 Open Edition 125
   for Silicon Graphics IRIX 121
   for Solaris 115
   for SunOS 117
messages 77

N
naming convention for non-AIX fixes 127
national language support 109
NFS considerations 57
NIS 17
NIS considerations 17, 57
num_submittors 52
number of submittors parameter 52

O
opc group 12
OPCOPTS initialization statement, example of 10
operation 59
OS/390 Open Edition requirements, hardware and software 125
overview 1

P
PATH variable 48
port number 52
port numbers 17
prerequisites
   for AIX 111
   for Digital OpenVMS 119
   for Digital UNIX 123
   for HP-UX 113
   for OS/390 Open Edition 125
   for Silicon Graphics IRIX 121
   for Solaris 115
   for SunOS 117
problems, solving 69
program requirements
   for AIX 111
   for Digital OpenVMS 119
   for Digital UNIX 123
   for HP-UX 113
   for OS/390 Open Edition 125
   for Silicon Graphics IRIX 121
   for Solaris 115
   for SunOS 117
pulse functions
   enabling 101
   on a Digital UNIX system 105
   on a MIPS ABI system 104
   on a Sun Solaris system 104
   on a SunOS system 103
   on an AIX system 101
   on an HP system 102
   on the controller machine 101
   unavailable on a Digital OpenVMS system 104

R
README file 3
restarting the Tracker Agent 66
return codes 71
return codes from jobs 61
ROUTOPTS initialization statement
   CODEPAGE keyword 9
   example of 10
   TCP keyword 9
   TCPIPID keyword 9
   TCPIPPORT keyword 9
   TCPTIMEOUT keyword 9

S
SAM
   creating a user group 12
   creating a user ID 13
scripts 98
   storing 59
   utility 93
service names 17
shell parameter 52
Silicon Graphics IRIX requirements, hardware and software 121
size of event log history parameter 52
SMIT
   creating a user group 12
   installing required features 30
   reading product installation media 31
software requirements
   for AIX 111
   for Digital OpenVMS 119
   for HP-UX 113
   for OS/390 Open Edition 125
   for Silicon Graphics IRIX 121
   for Solaris 115
   for SunOS 117
Solaris requirements, hardware and software 115
solving problems 69
starting the Tracker Agent 48
storing scripts 59
submit checkpoint file 54, 72, 96
submittor parameters 52
subnn_check_file 52
subnn_retry 52
subnn_subtype 52
subnn_workstation_id 52
SunOS requirements, hardware and software 117
swinstall tool, installing required features 34
symbolic links, creating 97
symptoms of problems 69

T
TCP keyword of ROUTOPTS 9
TCP/IP
   checking 19
   port number 9, 52
TCP/IP environment, verifying
   OS/390 27
   Sun Solaris and SunOS 25
TCP/IP KEEPALIVE parameters 101
TCP/IP SO_KEEPALIVE option 101
TCPIPID keyword of ROUTOPTS 9
TCPIPPORT keyword of ROUTOPTS 9
TCPTIMEOUT keyword of ROUTOPTS 9
temporary files, cleaning 65
Tivoli OPC, introduction 2
Tivoli OPC controller 5
tools 93
trace level parameter 52
trace_level 52
Tracker Agent
   description 2
   loading enabler software 7
   loading software 5
tracker fixes for OPC (non-AIX) 127
tracker user ID 12
translation, ASCII to EBCDIC 109
troubleshooting 71

U
user group, creating 12
user ID (UID) 12
utility programs 93

V
variable
   EQQHOME 48
   EQQINSTANCE 48
   PATH 48
verifying the configuration parameter file 73

W
workstation ID 54

Y
ypwhich command 17

Communicating Your Comments to IBM
Tivoli Operations Planning and Control Tracker Agents for AIX, UNIX**, VMS**, and OS/390 Open Edition Installation and Operation
Version 2 Release 3
Publication No. SH19-4484-02

If you especially like or dislike anything about this book, please use one of the methods listed below to send your comments to IBM. Whichever method you choose, make sure you send your name, address, and telephone number if you would like a reply.

Feel free to comment on specific errors or omissions, accuracy, organization, subject matter, or completeness of this book. However, the comments you send should pertain only to the information in this manual and the way in which the information is presented. To request additional publications, or to ask questions or make comments about the functions of IBM products or systems, talk to your IBM representative or to your IBM authorized remarketer.

When you send comments to IBM, you grant IBM a nonexclusive right to use or distribute your comments in any way it believes appropriate without incurring any obligation to you.

If you prefer to send comments by mail, use the reader's comment form (RCF) at the back of this book. If you wish, you can give the RCF to the local branch office or IBM representative for postage-paid mailing.

If you prefer to send comments by fax, use this number, which is in Italy: +39 06 5966 2077

If you prefer to send comments electronically, use this network ID: ROMERCF at VNET.IBM.COM

Make sure to include the following in your note:
   Title and publication number of this book
   Page number or topic to which your comment applies

Help us help you!
Tivoli Operations Planning and Control Tracker Agents for AIX, UNIX**, VMS**, and OS/390 Open Edition Installation and Operation
Version 2 Release 3
Publication No. SH19-4484-02

We hope you find this publication useful, readable, and technically accurate, but only you can tell us! Your comments and suggestions will help us improve our technical publications. Please take a few minutes to let us know what you think by completing this form.

Overall, how satisfied are you with the information in this book? (Satisfied / Dissatisfied)

How satisfied are you that the information in this book is (Satisfied / Dissatisfied):
   Accurate
   Complete
   Easy to find
   Easy to understand
   Well organized
   Applicable to your task

Specific Comments or Problems:

Please tell us how we can improve this book:

Thank you for your response. When you send information to IBM, you grant IBM the right to use or distribute the information without incurring any obligation to you. You of course retain the right to use the information in any way you choose.

Name

Address

Company or Organization

Phone No.


IBM®

Program Number: 5697-OPC

Printed in Denmark by IBM Danmark A/S

SH19-4484-02
