AS/400-iSeries Starter Kit

by Wayne Madden, iSeries NEWS Editor in Chief
Updated chapters by Gary Guthrie, iSeries NEWS Technical Editor

Table of Contents
Please note that Starter Kit for the AS/400, Second Edition is copyright 1994. Although much of its content is still valid, much is also out-of-date. The good news is that iSeries NEWS technical editor Gary Guthrie has been working on an updated edition, Starter Kit for the IBM iSeries and AS/400. We've posted sample chapters of the new book here in place of the old ones. (Updated chapters are clearly labeled as such in the Table of Contents.)

New Edition Now Available: The new Starter Kit for the IBM iSeries and AS/400, co-authored by Gary Guthrie and Wayne Madden, is now available from 29th Street Press (April 2001). Completely updated for the iSeries and expanded to cover new topics such as TCP/IP and Operations Navigator, the new book includes a CD containing all the sample code and utilities presented in the book. For more information or to order, visit the iSeries Network Store.

Acknowledgments

Introduction

SETUP
Chapter 1: Before the Power is On
Before You Install Your System • Develop an Installation Plan • Plan Education • Prepare Users for Visual and Operational Differences • Develop a Migration Plan • Develop a Security Plan • System Security Level • Password Format Rules • Identifying System Users • Develop a Backup and Recovery Plan • Establish Naming Conventions • What Next?

Chapter 2: That Important First Session

Signing On for the First Time • Establishing Your Work Environment • Now What?

Chapter 3: Access Made Easy
What Is a User Profile? • Creating User Profiles • USRPRF (User Profile) • PASSWORD (User Password) • PWDEXP (Set Password to Expired) • STATUS (Profile Status) • USRCLS (User Class) and SPCAUT (Special Authority) • Initial Sign-On Options • System Value Overrides • Group Profiles • JOBD (Job Description) • SPCENV (Special Environment) • Message Handling • Printed Output Handling • Documenting User Profiles • Maintaining User Profiles • Flexibility: The CRTUSR Command • Making User Profiles Work for You

Chapter 4: Public Authorities
What Are Public Authorities? • Creating Public Authority by Default • Limiting Public Authority • Public Authority by Design • Object-Level Public Authority

Chapter 5: Installing a New Release
Planning is Preventive Medicine • The Planning Checklist • Step 1: Is Your Order Complete? • Step 2: Manual or Automatic? • Step 3: Permanently Apply PTFs • Step 4: Clean Up Your System • Step 5: Is There Enough Room? • Step 6: Document System Changes • Step 7: Get the Latest Fixes • Step 8: Save Your System • Installation-Day Tasks • Step 9: Resolve Pending Operations • Step 10: Shut Down the INS • Step 11: Verify System Integrity • Step 12: Check System Values • Ready, Set, Go! • Final Advice

Chapter 6: Introduction to PTFs
When Do You Need a PTF? • How Do You Order a PTF? • SNDPTFORD Basics • Ordering PTFs on the Internet • How Do You Install and Apply a PTF? • Installing Licensed Internal Code PTFs • Installing Licensed Program Product PTFs • Verifying Your PTF Installation • How Current Are You?

Developing a Proactive PTF Management Strategy • Preventive Service Planning • Preventive Service • Corrective Service

AS/400 OPERATIONS
Chapter 7: Getting Your Message Across: User to User
Sending Messages 101 • I Break for Messages • Casting Network Messages • Sending Messages into History

Chapter 8: Secrets of a Message Shortstop by Bryan Meyers
Return Reply Requested • A Table of Matches • Give Me a Break Message • Take a Break • It's Your Own Default

Chapter 9: Print Files and Job Logs
How Do You Make It Print Like This? • Where Have All the Job Logs Gone?

Chapter 10: Understanding Output Queues
What Is an Output Queue? • How To Create Output Queues • Who Should Create Output Queues? • How Spooled Files Get on the Queue • How Spooled Files Are Printed from the Queue • A Different View of Spooled Files • How Output Queues Should Be Organized

Chapter 11: The V2R2 Output Queue Monitor
The Old Solution • A Better Solution • The STRTFROUTQ Utility • To Compile These Utilities • A Data Queue Interface Facelift • RCVDTAQE • CLRDTAQ

Chapter 12: AS/400 Disk Storage Cleanup
Automatic Cleanup Procedures • Manual Cleanup Procedures • Enhancing Your Manual Procedures

Chapter 13: All Aboard the OS/400 Job Scheduler! by Bryan Meyers
Arriving on Time • Running on a Strict Schedule • Two Trains on the Same Track • Derailment Dangers

Chapter 14: Keeping Up With the Past
System Message Show and Tell • History Log Housekeeping • Inside Information

SYSTEM MANAGEMENT
Chapter 15: AS/400 Save and Restore Basics by Debbie Saugen

Designing and Implementing a Backup Strategy • Implementing a Simple Backup Strategy • Implementing a Medium Backup Strategy • Implementing a Complex Backup Strategy • An Alternative Backup Strategy • The Inner Workings of Menu SAVE • Entire System (Option 21) • System Data Only (Option 22) • All User Data (Option 23) • Setting Save Option Defaults • Printing System Information • Saving Data Concurrently Using Multiple Tape Devices • Concurrent Saves of Libraries and Objects • Concurrent Saves of DLOs (Folders) • Concurrent Saves of Objects in Directories • Save-While-Active • How Does Save-While-Active Work? • Save Commands That Support the Save-While-Active Option • Backing Up Spooled Files • Recovering Your System • Availability Options [sidebar] • Preparing and Managing Your Backup Media [sidebar]

Chapter 16: Backup Without Downtime by Debbie Saugen
An Introduction to BRMS • Getting Started with BRMS • Saving Data in Parallel with BRMS • Online Backup of Lotus Notes Servers with BRMS • Restricted-State Saves Using BRMS • Backing Up Spooled Files with BRMS • Including Spooled File Entries in a Backup List • Restoring Spooled Files Saved Using BRMS • The BRMS Operations Navigator Interface • Terminology Differences • Functional Differences • Backup and Recovery with BRMS OpsNav • Backup Policies • Creating a BRMS Backup Policy • Backing Up Individual Items • Restoring Individual Items • Scheduling Unattended Backup and Restore Operations • System Recovery Report • BRMS Security Functions • Security Options for BRMS Functions, Components, and Items • Media Management • BRMS Housekeeping • Check It Out

WORK MANAGEMENT
Chapter 17: Defining a Subsystem
Getting Oriented • Defining a Subsystem • Main Storage and Subsystem Pool Definitions • Starting a Subsystem

Chapter 18: Where Jobs Come From
Types of Work Entries • Conflicting Workstation Entries • Job Queue Entries • Communications Entries • Prestart Job Entries

Autostart Job Entry • Where Jobs Go

Chapter 19: Demystifying Routing
Routing Data for Interactive Jobs • Routing Data for Batch Jobs • Routing Data for Autostart, Communications, and Prestart Jobs • The Importance of Routing Data • Runtime Attributes • Is There More Than One Way to Get There? • Do-It-Yourself Routing

FILE BASICS
Chapter 20: File Structures
Structural Fundamentals • Data Members: A Challenge • Database Files • Source Files • Device Files • DDM Files • Save Files

Chapter 21: So You Think You Understand File Overrides
Anatomy of Jobs • Override Rules • Scoping an Override • Overriding the Same File Multiple Times • The Order of Applying Overrides • Protecting an Override • Explicitly Removing an Override • Miscellanea • Important Additional Override Information • Overriding the Scope of Open File • Non-File Overrides • Overrides and Multi-Threaded Jobs • File Redirection • Surprised?

Chapter 22: Logical Files
Record Format Definition/Physical File Selection • Key Fields • Select/Omit Logic • Multiple Logical File Members • Keys to the AS/400 Database

Chapter 23: File Sharing
Sharing Fundamentals • Sharing Examples

BASIC CL PROGRAMMING
Chapter 24: CL Programming: You're Stylin' Now!
CL Coding Suggestions

Chapter 25: CL Programming: The Classics
Classic Program #1: Changing Ownership • The Technique • Classic Program #2: Delete Database Relationships • The Technique • Classic Program #3: List Program-File References • The Technique

Chapter 26: CL Programs and Database Files
Why Use CL to Process Database Files? • I DCLare! • Extracting Field Definitions • Reading the Database File • File Positioning • What About Record Output? • A Useful Example

Chapter 27: CL Programs and Display Files
CL Display File Basics • CL Display File Examples • Considerations

Chapter 28: OPNQRYF Fundamentals
The Command • Start with a File and a Format • Record Selection • Key Fields • Mapping Virtual Fields • OPNQRYF Command Performance • SQL Special Features • OPNQRYF Special Features

Chapter 29: Teaching Programs to Talk
Basic Training • Putting the Command to Work • Knowing When To Speak

Chapter 30: Just Between Us Programs
Job Message Queues • The SNDPGMMSG Command • ILE-Induced Changes • Message Types • The Receiving End • Program Message Uses • Understanding Job Logs

Chapter 31: Hello, Any Messages?
Receiving the Right Message • Note on the V2R3 RCVMSG Command Parameter Changes • Receiving the Right Values • Monitoring for a Message • Working with Examples • What Else Can You Do with Messages? • RCVMSG and the MSGTYPE and MSGKEY

OTHER CONCEPTS
Chapter 32: OS/400 Commands
Commands: The Heart of the System • Tips for Entering Commands • Customizing Commands • Modifying Default Values

Chapter 33: OS/400 Data Areas
Creating a Data Area • Local Data Areas • Group Data Areas

Chapter 1 - Before the Power Is On

A successful AS/400 installation begins long before your system rolls in the door. With the AS/400, IBM has tried to graft the S/36's ease of use onto the S/38's integrated database and productivity features. In many respects, Big Blue has succeeded: the AS/400 provides extensive help text, highly developed menu functions, on-line education, and electronic customer support. But the machine's friendliness stops short of "plug and go" installation, especially for shops converting from a non-IBM system or migrating from an IBM system other than a S/38. Even S/38 migration is not completely plug and go, although the AS/400 has inherited many S/38 characteristics, including a complex structure of system objects used to support security, backup, recovery, work environment, performance tuning, and other functions. These objects let you configure a finely tuned and productive machine, but they do not readily lend themselves to education on the fly.

I have experienced the AS/400 planning and installation process as both a customer and a vendor, and I'd like to share what I've learned by suggesting a step-by-step approach for planning, installing, securing, configuring, and using your new system. First I discuss the steps you can and should take before your system arrives. In subsequent chapters, I take you through your first session on the machine, address how to establish your work environment, and show you how to customize your system.

Before You Install Your System

The first step in implementing anything complex -- especially a computer system -- is thorough planning. Believe me, the AS/400 requires thought, foresight, planning, and preparation for a successful installation. I have outlined the installation process in the AS/400 setup checklist in Figure 1.1. You might want to use this checklist as the cover page to a notebook you could put together to keep track of your AS/400 installation. The first section of the setup checklist in Figure 1.1 lists tasks you should complete before you install your system -- preferably even before it arrives. These items may seem like a great deal of work before you ever see your system, but this work will save you and your company time and trouble when you finally begin installing, configuring, and using your AS/400, and it can greatly reduce the time needed for the installation process. Let's look at each item in this section of the checklist individually.

Develop an Installation Plan

A good installation plan serves as a road map. It guides you and your staff and keeps you focused on the work ahead. An overall installation plan helps you put the necessary steps for a successful AS/400 setup into writing and tailor them to your organization's specific needs. The plan also helps you identify and involve the right people and gives you a schedule to work with. (Although the installation plan includes important considerations about the physical installation -- space, electrical, and cooling requirements -- these requirements are well documented in IBM manuals, and I do not discuss them here. For details about physical installation, refer to the AS/400 Physical Planning Guide -- Version 2 (GA41-9571) and the AS/400 Physical Planning Guide and Reference -- Version 2 (GA41-0001); for migration planning, see the AS/400 Migrating from S/36 Planning Guide -- Version 2 (GC41-9623) or the AS/400 Migrating from S/38 Planning Guide -- Version 2 (GC41-9624).)

Figure 1.2 shows a sample installation plan that lists installation details and lets you track the schedule and identify the responsible person for each task. The time frame outlined in your installation plan will probably change as the delivery date nears. But even as the schedule changes and is refined, it provides a frame of reference for the total time you need to install, configure, and migrate to the new system. Management and the departments you serve must understand and agree on these scheduling changes.

Identifying and involving the right people is critical to creating an atmosphere that assures a smooth transition to your new system. Management must commit itself to the installation process and must understand and agree to the project's priority. On the MIS side, your staff must commit to learning about the AS/400 in preparation for installation and migration. Your staff must also commit itself to completing all assigned tasks, many of which (e.g., time spent verifying the migration or conversion of programs and data) may require extra hours. Other pending MIS projects should be examined and assigned a priority based on staff availability in light of the AS/400's installation schedule.

You must also answer an important question as part of your plan: Can you run the old and new systems parallel for a period of time?

the "current library. You'll also have to learn about the new program products and operations on the AS/400. If you move to an AS/400 from a S/36. the work environment objects. Command key differences. but you probably can't. but telling your users about these . and learning something about the AS/400's fast-path commands will help you feel more at home and productive in the native environment. You may encounter another potential problem in the panel interface differences between your former system and the SAA-compliant AS/400. Where can you get such education? Begin by asking your vendor for educational offerings. you will recognize the fast-path commands (with some minor changes). print-control screen differences. To supplement vendor training support. The Operation Assistant (OA) interface provided for end-user interaction with the AS/400 is friendly. the friendly menu format.3 lists the various audience paths available by using TSS lessons). the command entry display (once you find it). and others may cause some initial concern and confusion among your users. Matching ensures productive use of the time employees spend away from their daily duties. how to modify your work environment to improve performance. executives. then you're getting the point: You need system-specific education for a smooth transition to the AS/400. each AS/400 comes with Tutorial System Support (TSS) installed. Prepare Users for Visual and Operational Differences It would be nice if you could assure all your users that they will not find anything different when they sign on to the AS/400 for the first time.Running parallel also reduces the risk factor involved in your migration and conversion process. The key to successful education is matching education to the user. I'm also sure that those statements are absolutely true. for one. But you and your operations and programming staff will also need some training. you will need additional knowledge about how to implement new security options. But the AS/400 also has some unfamiliar territory: You must learn new security concepts. available menus. automated courses. S/38 users used to a single-level sign-on (just entering a password) may be surprised (and unhappy) to find they must sign on to the AS/400 with both a user profile name and a password. What key groups of personnel need training? The end users. Training in relational database design and implementation will improve the applications you migrate or write. and classroom training courses. For example. training support will vary from vendor to vendor. you could find yourself waist deep in phone calls and complaints on your first day of operation unless you tell your users what to expect. If you are moving from a S/38. If you buy from a third party. If all this sounds complicated. You would be wise to give some thought to the visual and operational differences and explain them to your users in advance." I'm sure this will be your response to the suggestion that you plan for training now. A communication describing the user profile and password and their roles on the system would go a long way toward smoothing the transition for such users. systems analysts." the Programming Development Manager (PDM). Another place to get AS/400 education is on the AS/400 itself. then. But education is a vital part of a successful AS/400 installation. You can also find a variety of educational offerings in seminars. Consequently. and others (Figure 1. you must schedule key personnel for education. 
Plan Education I can hear you now: "We don't have time for classes! We're too busy to commit our people to any education. and the extensive help text associated with the S/36. study guides. the relational database. Realistically. clerical workers. You may be able to begin TSS training before your AS/400 arrives by working through your hardware or software vendor. You can also arrange to attend courses at an IBM Guided Learning Center. help screen differences. However. Their education should focus on PC Support and on the AS/400 Office products they will work with. and how to control printer output. This on-line tutorial help provides self-paced lessons for programmers. one-on-one training sessions. and other new concepts. and the security concepts. you will see the familiar sign-on screen.

Develop a Migration Plan

The next step in pre-installation planning is to develop a migration or conversion plan. Whether you migrate from a S/36 or a S/38, your ultimate goal should be to "go native," but most shops choose to migrate first: you can successfully operate in the AS/400's S/36 or S/38 environment for as long as you need to. Migration eases the transition considerably, particularly for S/36 shops, and allows conversion to proceed at a more leisurely pace. I recommend most shops migrate first and then convert as time permits. You can -- and should -- gradually convert from the S/36 environment as you find applications that conversion will improve, because converted applications almost always make better use of system resources than migrated applications. Depending on your current system, conversion could involve one week to six months of work for your staff, and your staff must work through a complete education plan before even beginning to tackle the conversion process. You can migrate your applications in stages, testing and verifying each program as you go. Again, running parallel for a while greatly reduces the risk involved; if you can't run parallel, you must complete your migration process on the first try. True, you could simply pay a consultant to convert your database and programs for you, but that approach doesn't educate your staff about the new system. A migration plan organizes this process and, as you carry out the plan, helps you become familiar with the AS/400 and the new features it offers. Figure 1.4 shows a sample migration plan.

The key to a successful S/36 migration is knowing what will migrate and what won't. The S/36 Migration Aid software identifies objects that will not migrate to the S/36 environment and keeps audit trails of what has and has not been migrated. The sooner you know what will not migrate, the sooner you can start developing AS/400 solutions for those objects. One common problem in S/36 migration is expecting all applications to run better in the AS/400's S/36 environment. Unfortunately, the AS/400 cannot cure bad software: badly written software that runs poorly on your S/36 will still run poorly in the AS/400's S/36 environment, and in fact the AS/400 may accentuate poor performance. Nevertheless, IBM has made a commitment to maintain the S/36 environment on the AS/400.

Successful S/38 migration also begins with the Migration Aid software. As with the S/36, the Migration Aid identifies the objects and products that will not migrate and helps keep track of the migration process. The key to understanding the S/38 migration process is knowing that all S/38 objects are "object compatible" with the AS/400. Migration is thus a relatively simple process in which you save the objects from the S/38 and restore them onto the AS/400. When a S/38 object is restored onto the AS/400, the system attaches the suffix "38" to the object attribute, as shown in Figure 1.5. The AS/400 uses the suffix to identify the proper environment for the object. For example, when the AS/400 executes a CL program (e.g., SAMPLECL in Figure 1.5), the system uses S/38 environment commands in response to the suffix on the object attribute. If you were to remove the suffix and attempt to recompile the CL program, you would get errors on any S/38 commands that do not exist in the same form on the AS/400 (e.g., DSPOUTQ, DSPACTJOB).
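If you want to see which environment an object will run in, the object attribute is visible with the DSPOBJD command; a migrated S/38 CL program shows an attribute of CLP38 rather than CLP. This is just a quick sketch -- the library and object names are examples:

   DSPOBJD OBJ(PRODLIB/SAMPLECL) OBJTYPE(*PGM)   /* Attribute column shows CLP38 */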
Even S/38 migration, however, is not completely plug and go: you must still migrate user profiles, your system configuration, and any custom software or utilities on your system. And even if you buy software written for the AS/400 and use your software vendor's expertise to migrate the data, be sure you know what you're getting into. For this reason, I recommend that you seek an experienced outside source for assistance in the migration and conversion process. With S/36 conversions -- a much trickier proposition -- a good outside consultant, used in a way that provides educational benefits for your staff, could be an immense help and could protect your position.

Hire a consultant who has successfully migrated systems to the AS/400. Also, let me offer you a warning: If you plan to replace your existing system and completely remove it before installing your AS/400, you are absolutely asking for trouble! If you find yourself forced into such a scenario, make an experienced migration consultant part of your plan.

Develop a Security Plan

With your migration plan in writing, you are ready to tackle a security plan. Imagine for a moment that you have your AS/400 fully installed and smoothly running -- and that you haven't altered the security settings yet. Anyone who turns on a workstation, receives a sign-on screen, and presses Enter has full access to all system objects and functions. Obviously, you need a security plan, and you need to implement it as soon as possible after your system is installed. The AS/400 provides five levels of security: 10, 20, 30, 40, and 50. The first and most significant step in planning your security is deciding what level you need.

Security Level 10 -- Level 10 provides no security, or "physical security only": at level 10, the physical security measures you take, such as locking the door to the computer room, are all you have. User profiles are not required at level 10. If a user has access to a workstation with a sign-on screen, (s)he can simply press Enter, and the system will create a user profile for the session and allow the user to proceed. The profile the system creates in this case has *ALLOBJ (all object) special authority, which is sufficient for the user to modify or delete any object on the system. Although user profiles are not required at level 10, you could still create and assign them and ask each user to type in her assigned user profile at sign-on; you could even grant or revoke authorities to objects. But there is no way to enforce the use of those assigned profiles, and thus no way to enforce restricted special authorities or actual resource security. Resource security is, by default, bypassed. In other words, system security level 10 might more aptly be called security level zero.

Security Level 20 -- Security level 20 adds password security. At level 20, a user must have a user profile and a valid password to gain access to the system. Level 20 thus institutes minimum security by requiring that users know a user profile and password, thus deterring unauthorized access. However, at level 20 the default special authorities for each user class include *ALLOBJ special authority, and the *ALLOBJ special authority assigned by default to every user profile bypasses any form of resource security. To implement resource security at level 20, you must remove the *ALLOBJ special authority from any profiles that do not absolutely require it (only the security officer and security administrator need *ALLOBJ special authority), and you must then remember to remove this special authority every time you create a new user profile. This method of systematically removing *ALLOBJ authority is pointless, since level 30 security does this for you; level 20 requires you to make significant changes to mimic what level 30 provides automatically, and even then the inherent weakness of level 20 remains: the fact that, as with level 10, resource security is by default not implemented. Level 20 security is therefore inadequate in the initial configuration.

Security Level 30 -- Level 30 by default supports resource security (users do not receive *ALLOBJ authority by default). The authority to work with, create, modify, or delete objects must be either specifically granted or received as a result of existing default public authority. Resource security allows objects to be accessed only by users who have authority to them: you must be able to explicitly authorize or deny user authority to specific objects. On a production system, this is essential. Production machines require resource security to effectively safeguard corporate data, programs, and other production objects and to prevent unintentional data loss or modification. All production systems should be set at security level 30 or higher (levels 40 or 50).

Security Level 40 -- The need for level 40 security centers on a security gap on the S/38 that the AS/400 inherited. This gap allowed languages that could manipulate Machine Interface (MI) objects (i.e., MI itself, C/400, and Pascal) to access objects to which the user was not authorized by stealing an authorized pointer from an unsecured object. In other words, an MI program could access an unsecured object and use its authorized pointer as a passkey to an unauthorized object.

To level 30's resource security, level 40 adds operating system integrity security. System integrity security strengthens level 30 security in four ways:

• By providing program states and object domains
• By preventing use of restricted MI instructions
• By validating job initiation authority
• By preventing restoration of invalid or modified programs

You might wonder what level 40 buys you. Level 40 provides the security necessary to prevent a vendor or individual from creating or restoring programs on your system that might threaten system integrity at the MI level, thus ensuring an additional level of confidence when you work with products created by outside sources. In truth, most systems today could run at level 30 and face no significant problems. But in the future, as you purchase more third-party software and as more systems participate in networks, operating-system integrity will become more important.

Some packaged software (e.g., some system tools) will require access to restricted MI instructions and will fail at level 40. In these cases, or if the need arises to create a program that infringes upon system integrity security, set your system level to 30 during installation and monitor the security audit journal for violations that level 40 guards against. If violations are logged, review them to determine their source. If you find none, go to level 40 security. The advantage of using level 40 is that you control that decision: you can ask the vendor when his product will be compatible with level 40 and decide what to do based on his response.

Security Level 50 -- IBM introduced security level 50 in OS/400 Version 2, Release 3. The primary purpose of security level 50 is to enable OS/400 to comply with the Department of Defense C2 security requirements; IBM added specific features into OS/400 to comply with DOD C2 security as well as to further enhance the system integrity security introduced in level 40. In addition to all the security features/functions found at all prior OS/400 security levels (e.g., 30, 40), level 50 adds

• Restricting user domain object types (*USRSPC, *USRIDX, and *USRQ)
• Validating parameters
• Restricting message handling between user and system state programs
• Preventing modification of internal control blocks
• Making the QTEMP library a temporary object

If your shop requires DOD C2 compliance, you can get more information concerning security level 50 and other OS/400 security features (e.g., auditing capabilities) in two AS/400 publications: Guide to Enabling C2 Security (SC41-0103) and A Complete Guide to AS/400 Security and Auditing: Including C2, Cryptography, Communications, and PC Implementation (GG24-4200).
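The commands involved in that monitoring approach are standard OS/400. The sketch below assumes security auditing has already been started and the QAUDJRN journal exists on your system; the AF (authority failure) entry type is where domain, restricted-instruction, and program-validation violations are recorded:

   DSPSYSVAL SYSVAL(QSECURITY)              /* What level is the system at now?   */
   CHGSYSVAL SYSVAL(QSECURITY) VALUE('40')  /* Takes effect at the next IPL       */
   DSPJRN    JRN(QSYS/QAUDJRN) ENTTYP(AF)   /* Review authority-failure entries   */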

Password Format Rules

Your next task in security planning is to determine rules for passwords. Passwords are a part of user profiles, which you will create to define the users to the system after the AS/400 is installed. In other words, what format restrictions should you have for passwords? Without format requirements, you are likely to end up with passwords such as "joe," "sue," "xxx," "word," and "12345." But are these passwords secret? Will they safeguard your system? You can strengthen your security plan's foundation by instituting some rules that encourage users to create passwords that are secret, hard to guess, and regularly changed. (You must also remember that sometimes "hard to guess" translates into "hard to remember" -- and then users simply write down their passwords so they won't forget.) The following password rules will help establish a good starting point for controlling password formats.

Rule 1 is that passwords must be a minimum of seven characters and a maximum of 10 characters. This rule deters users who lack the energy to think past three characters when conjuring up that secret, unguessable password.

Rule 2 builds on Rule 1: Passwords must have at least one digit. This rule makes passwords become more than just a familiar name.

Rule 3 simply states that passwords cannot use the same character more than once. Rule 3 can deter those who think they can remember only one or two characters and thus make their password something like "XXXXX6" or "X1X."

Rule 4 states that passwords cannot use adjacent digits. This prevents users from creating passwords such as "1111" or "1234," or even using their social security number.

With these four rules in place, you can feel confident that only sound passwords will be used on the system. But you can enhance your password security still further with one additional rule. Rule 5 says that passwords should be assigned a time frame for expiration, thus ensuring that users change their passwords regularly. You can set this time frame to allow a password to remain effective for from one to 366 days.

Identifying System Users

Before you install the new machine, you should identify the people who will use the system. Obtain each user's full name and department and the basic applications the user will require on the system. Laying the groundwork for user profiles is the next concern of your security plan. When you create user profiles, you can assign each user to one of the following classes (the authorities discussed with each class are granted when the system security level is set to 30 or 40):

• SECOFR (security officer) grants the user all authorities: all object, security administrator, save system, job control, spool control, service, and audit authorities (each of these special authorities is explained below).
• SECADM (security administrator) grants security administrator, save system, and job control authorities.
• PGMR (programmer) grants save system and job control authorities.
• SYSOPR (system operator) grants save system and job control authorities.
• USER (user) grants no special authorities.

The USER class carries no special authorities, which is appropriate for most users; your end users should all reside in the USER user class. Some users, such as accounts receivable personnel, only need to manipulate spooled files and execute applications from menus. They can work within their own job and work with their own spooled files, without special authority. Other users, such as operators and programmers, will need to control jobs and execute save/restore functions on the system. Your MIS staff members normally will have either the SYSOPR or the PGMR user class.

One rule of thumb when assigning classes is that you should never set up your system such that a user performs regular work with SECOFR authority. Using security officer authority to perform normal work is like playing with a loaded gun. The AS/400 has a special QSECOFR profile; when the security officer must perform a duty, the person responsible should sign on using the QSECOFR profile to perform the needed task.

As you plan user profiles, you also need to consider the special authorities you want to grant to the user profiles and user classes. Special authorities allow users to perform certain system functions; when a special authority is not granted, the corresponding functions are unavailable to the user. The AS/400 provides these special authorities:

• ALLOBJ (all object authority) lets users access any system object. This authority alone, however, does not allow the users to create, modify, or delete user profiles.
• SECADM (security administrator authority) allows users to create and change user profiles.
• SAVSYS (save system authority) lets users save, restore, and free storage for all objects.
• JOBCTL (job control authority) allows users to change, display, hold, release, cancel, and clear all jobs on the system. The user can also control spooled files in output queues where OPRCTL(*YES) is specified.
• SPLCTL (spool control authority) allows users to delete, display, hold, and release their own spooled files and spooled files owned by other users.
• SERVICE (service authority) means users can perform functions from the System Service Tools, a group of executable programs used for various service functions (e.g., line traces and run diagnostics).
• AUDIT (audit authority) allows users to start and stop security auditing as well as control security auditing characteristics.

When you use security level 30, 40, or 50, the AS/400 automatically assigns special authorities based on user class as shown in Figure 1.7. When you create user profiles, however, you can use the special authorities parameter to override the authorities granted by the user class.
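To make the relationship between user class and special authority concrete, here is a sketch of creating a profile for an operator. The profile name, password, and text are invented for illustration, and the command shows only a few of CRTUSRPRF's parameters:

   /* The *SYSOPR class supplies save system and job control authority */
   CRTUSRPRF  USRPRF(OPERJONES) PASSWORD(RAIN4BOW) +
              USRCLS(*SYSOPR) TEXT('J. Jones - computer room operator')

Note that the example password follows the five rules above: it is eight characters, contains a digit, repeats no character, and uses no adjacent digits.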

For example, a user profile might have a user class of SYSOPR, which grants the user special authorities for job control and save/restore functions. By entering only *SAVSYS for the special authorities parameter, you can instruct the system to grant only this special authority, ignoring the normal defaults for the *SYSOPR user class. You must also plan specific authorities, which control the objects a user can work with (e.g., job descriptions, data files, programs, menus). Going through the remainder of the pre-installation security planning process -- checking your applications for security provisions -- will also help you decide which users need which specific authorities and help you finish laying the groundwork for user profiles on your new system.

Develop a Backup and Recovery Plan

Although it may seem premature to plan for backup and recovery on your as-yet-undelivered AS/400, you should not assume that the backup and recovery plan for your existing system will still work with the AS/400. The AS/400 has a variety of powerful backup and recovery options that you may not be familiar with, and some of these options are difficult and time-consuming to install if you wait until you've migrated your applications and data to the new system. Checksum is a case in point.

The AS/400's single-level storage minimizes disk head contention and eliminates the need to track and manage the Volume Table of Contents. But single-level storage can also create recovery problems: because single-level storage fragments objects randomly among all the system's disks, the loss of any one disk can result in damage to every object on the system. Checksum is your best protection against this weakness in single-level storage. With checksum, you configure disk units (i.e., one disk actuator arm and its associated storage) into checksum sets, with no more than one unit from each disk device in a single checksum set. Then, if a disk fails, the system can compare the data in the failed unit in each checksum set with the data in the other (intact) units and can reconstruct the data on the failed unit. This description of how checksum works is (obviously) not complete, but it should give you an idea of how valuable it can be. Because checksum installation on an installed system requires that you save your entire system and reload everything, don't pass up this opportunity to consider installing checksum when you install your new AS/400.

An auxiliary storage pool (ASP) is another of those features that are much easier to implement when you install your system rather than later. An ASP is a group of disk units. Your AS/400 will be delivered with only the system ASP (ASP 1) installed; Figure 1.8a shows auxiliary storage configured only as the system ASP. The system ASP holds all system programs and most user data. You can customize your disk storage configuration by partitioning some auxiliary storage into one or more user ASPs (Figure 1.8b). You can use user ASPs for journaling and to hold save files. Because you can segregate specific user data or backup data onto user ASPs, if you lose a disk unit in the system ASP, your restore time is reduced to a minimum time of restoring the operating system and the objects in the system ASP, while data residing in the user ASPs will be available without any restore. If you lose a disk unit in a user ASP, your restore time will include only the time it takes to restore the user data in that user ASP.

Journaling automatically creates a separate copy of file changes as they occur. If you have on-line data entry -- such as orders taken over the phone -- that lacks backup files for the data entered, you should strongly consider journaling as a part of your backup and recovery plan. After complete backup, you should save your journal receivers (i.e., the objects that hold all file changes recorded by journaling) to media regularly and frequently, thus letting you recover every change made to journaled files up to the instant of the failure. Although you do not need user ASPs to implement journaling, they do make recovery (which is difficult under the best of circumstances) easier: if you do not journal to a user ASP, your journal receivers share the system ASP with the files they protect, and a single disk failure there can damage both.

User ASPs also protect save files from disk failures. A save file is a special type of physical file to which you can target your backup operation. Save files have two major advantages over backing up to media. The first is that you can back up unattended, since you don't have to change diskette magazines or tapes. The second advantage is that backing up to disk is much faster than backing up to tape or diskette.
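As a sketch of how save files and user ASPs combine, the commands below create a save file in a library housed in a user ASP and direct a library save to it. All the library, file, and device names are invented, and the ASP number depends on your configuration:

   /* Keep backup data off the system ASP (here, user ASP 2) */
   CRTLIB  LIB(BACKUPLIB) ASP(2) TEXT('Save files in user ASP 2')
   CRTSAVF FILE(BACKUPLIB/DAILYSAVF)

   /* Unattended save to disk; copy the save file to tape later */
   SAVLIB     LIB(ORDERLIB) DEV(*SAVF) SAVF(BACKUPLIB/DAILYSAVF)
   SAVSAVFDTA SAVF(BACKUPLIB/DAILYSAVF) DEV(TAP01)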

The major (and probably obvious) disadvantage is that save files require additional disk storage. Nevertheless, save files are worthwhile in many cases, and isolating save files in a user ASP provides that extra measure of protection.

User ASPs are also required as part of the disk-mirroring feature the AS/400 offers. Under this strategy, each ASP uses a set of mirrored disk drives: you duplicate each disk drive to protect against a single disk drive failure, and if one disk fails, the system still has access to the mirrored disk. The mirroring protects the user data in the ASP, and the fact that ASPs are used protects the larger system from a complete loss due to any one single disk failure. While disk mirroring has a substantial initial investment for the additional disk drives, the protection offered is significant for companies that rely on providing 24-hour service.

One last option to consider is RAID protection. IBM and other AS/400 DASD vendors currently offer either RAID 1 or RAID 5 disk protection. RAID 1 is similar to OS/400's system mirroring option, except that the disk subsystem handles all the necessary read/write operations instead of OS/400. RAID 5 protection is similar to OS/400's checksum: RAID 5 stores parity information on additional disk space and uses that parity information to reconstruct the data in the event that one of the disks in a RAID 5 set fails. Here, too, the disk subsystem handles all the read/write operations.

The point of this discussion is that you need to plan ahead and decide which type of disk protection you will employ so you can be ready to implement your plan when the system is first delivered -- when the disk drives are not yet full of information you would have to save before making any storage configuration changes. For more information about save/restore and an introduction to a working save/restore plan, see Chapters 15 and 16, "AS/400 Save and Restore Basics" and "Backup Without Downtime."

Establish Naming Conventions

Naming conventions vary greatly from one MIS department to the next. The conventions you choose should result in names that are syntactically correct and consistent, yet easily remembered and understood by end users and programmers alike. A good standard does more than simply help you name files, programs, and other objects; it also helps you efficiently locate and identify objects and devices on your system. If your naming conventions are in place before you install your system, they will help installation and migration go smoothly and quickly.

You could let the AS/400 automatically configure all your workstations and printers, which would result in names such as W1, W2, and P1, or DSP01, DSP02, DSP03, and PRT02 (display station 01, and so on). But by configuring the devices yourself and assigning meaningful names, you get names that are more meaningful and useful than the names the AS/400 would assign automatically. Let's look at an example:

• You have three locations for order entry: Orlando, Florida; Atlanta, Georgia; and Montgomery, Alabama.
• You have five order entry clerks at each location.
• You have one printer at each location.

Your devices can have names such as GADSP01, GADSP02, ALDSP01, ALDSP02, and FLPRT01. Because these names contain a two-letter abbreviation for the state, such a naming convention would help your operations personnel locate and control devices in multiple locations. But this convention would pose a problem if you had two offices in the same state. So instead, to allow for growth of the enterprise, you might incorporate the branch office number into the names, resulting in names such as C01DSP01 to identify display station 01 on the control unit for branch office 01. The naming convention you choose should be meaningful and should allow for growth of your enterprise.

You will also need a standard for naming user profiles. There are those who believe that a user profile name should be as similar as possible to the name of the person to whom it belongs (e.g., WMADDEN, MJONES, MARYM, JOHNZ). Under such a strategy, only one profile is needed per user, which simplifies design and administration of the security system and lets operations personnel identify employees by their user profiles. This method can work well when there are only a few end users. The drawback to this method is that it results in profiles that are easily guessed, leaving only the password to guess, and thus provides a door for unauthorized sign-ons.

A friend of mine was bragging about his new LAN one evening and wanted to show me how it worked. We were sitting at his secretary's desk, but he did not know his user profile or password, so I asked him what her name was. Within one minute we were signed on using her first name as the profile and her initials as the password. Good guess? No. Bad profile and password.

Another opinion holds that user-profile names should be completely meaningless (e.g., Q83S@06Y7B, SYS23431, 2LR50M3ZT4) and should be maintained in some type of user information file. The use of meaningless names makes profiles difficult to guess and does not link the name to a department or location that might change as the employee moves in the company. For the most secure environment, a "meaningless" profile name is best. This method is the most secure, but it often meets with resistance from the users, who find their profiles difficult to remember.

A third approach is to use a naming standard that aids system administration. Under this strategy, each user profile name identifies the user's location and perhaps function in order to sharpen the ability to audit the system security plan. A convention that incorporates the user's location and function is a compromise between security and system management and implementation that suits many shops. To implement this strategy, your naming convention should incorporate the user's location or department and a unique identifier for the user's name. For instance, if John Smith works as one of the order entry clerks at the Georgia location, you might assign one of the following profiles:

GAJSMITH -- In this profile, the first two letters represent the location (GA for Georgia), and the remainder consists of the first letter of the user's first name followed by as much of the last name as will fit in the remaining seven characters.

GAOEJES -- This example is similar, but the location is followed by the department (OE) and the user's initials. This method provides more departmental information while reducing the unique name identifier to initials.

B12OEJES -- This example is identical to the second, but the Georgia branch is numbered (B12).

When profile names provide this type of information, this approach enables you to quickly identify users and the jobs they're doing. In addition, auditing is more effective because you can easily spot departmental trends if you monitor the history log or use the security journal for auditing. However, such profiles are less secure than meaningless profiles because they are easy to guess once someone understands the naming scheme, thus rendering the system less secure. When a user transfers to another location or moves to a new department, you should deactivate the old profile and assign a new one to maintain a security history. User profile organization and maintenance are also enhanced by having a naming standard to follow.

Whichever convention you choose, I also believe in maintaining user profiles in a user information file. The user information file documents security-related information such as the individual to whom the profile belongs and the department in which the user works. Such a file makes it easy to maintain up-to-date user-profile information such as initial menus, library lists, scheduled classes, initial values for programs (e.g., initial branch number, department number), and the user's full name formatted for use in outgoing invoices or order confirmations. A user information file helps you keep what amounts to a user profile audit trail. As you will discover in Chapter 3, programs in your system that supply user menus or functions can resolve them at run time based on location, department, or group: your applications can retrieve information from the file and use it to establish the work environment. With this approach, both your security plan and your initial program drivers can be dynamic, flexible, and easily maintained.
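Chapter 3 develops this idea fully; as a preview, the fragment below sketches one way an initial CL program could use such a file. The file USRINFP and the fields UIUSER and UICURLIB are invented stand-ins for your user information file, its user-profile key, and a stored current-library value:

   PGM  /* Hypothetical initial program driven by a user information file */
      DCLF      FILE(USRINFP)         /* user information file            */
      DCL       VAR(&USER) TYPE(*CHAR) LEN(10)
      RTVJOBA   USER(&USER)           /* profile that just signed on      */
   READ: RCVF                         /* read the next user record        */
      MONMSG    MSGID(CPF0864) EXEC(GOTO CMDLBL(DONE))  /* end of file    */
      IF        COND(&UIUSER *NE &USER) THEN(GOTO CMDLBL(READ))
      CHGCURLIB CURLIB(&UICURLIB)     /* tailor the user's environment    */
   DONE: RETURN
   ENDPGM

A real driver would go on to use the department or location fields to present the appropriate menu or set initial program values.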
A final consideration in choosing a naming convention for user profiles is whether or not your users will have access to multiple systems. If they will, you can simplify Display Station Passthrough functions by using the same name for each user's profile on all systems. To do this, you must consider any limitations the other systems in the network place on user profile names and apply those limitations in creating the user profiles for your system. For example, another platform in your network may limit the number of characters allowed for user profile names; to allow your user profiles to be valid across the network, you will have to abide by that limitation. You need to determine what user profile naming convention will work best for your environment.

What Next?

Okay, you have made it this far. You have planned and prepared, and then planned some more. You have planned education. You have planned for migration, security, and backup and recovery, and you know how you will name the objects on your system. You have started to prepare your users for the differences they will encounter with the AS/400. You feel ready to begin the installation.

But after your vendor helps you install the hardware, how do you go about implementing all those carefully made plans? In the next chapter, I'll go into what happens once the power is on.

Chapter 2 - That Important First Session

Your shiny new AS/400 is out of the box. The vendor has finished installation and is packing up the tools. Up to this point (if you have done your homework) you have committed, planned, and planned some more for your new AS/400. Obviously, planning is a significant portion of the total installation process, but it isn't nearly as much fun as that moment when you turn on the power, watch the little lights start blinking, hear the low hum of the disk drives, and bring the magical screen to life -- giving you access to your new toy (I mean business machine). That's the moment you live for as a midrange MIS professional!

Signing On for the First Time

Once the power is on, you might think your previous S/3X experience would let you just feel your way around the system menus and functions. But that's not the case. But don't start playing with your new system yet! You have some important chores to do during your first session. My experiences with AS/400 installation have taught me that you should take some immediate steps (Figure 2.1) to put your carefully made plans into action.

Signing on. To sign on, use the user profile QSECOFR and enter QSECOFR -- the preset password for that profile.

Verify software installation and PTF levels. First, verify that the program products you ordered are installed on the system. With the advent of preloaded software, the software media may not have been shipped to you with the system: the microcode is all there, the operating system is installed, and all your program products are loaded on the system. The vendor should assist you in loading these program products if they are not already preloaded on the system. (Make sure you have all the software product tapes you need, and if you don't have your program products and manuals, make sure you follow up on their delivery.) Then determine whether or not the latest available cumulative Program Temporary Fix (PTF) release is installed on your system. The vendor should know which is the latest PTF level available and can help you determine whether or not that level exists on your system. If you don't have the latest release, order the tape now so you can apply the PTFs before you move your AS/400 into the production phase of installation. For more information about PTFs and installing PTFs, see Chapter 6, "Introduction to PTFs."
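One way to make that check yourself is with the DSPPTF command. The sketch below assumes a V2-era system, where the base operating system is product 5738SS1; the cumulative package level appears among the listed PTFs, and the product ID will differ on other releases:

   DSPPTF LICPGM(5738SS1)   /* list PTFs applied to the base OS/400 product */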
Work with the installation team to create user ASP(s). Next, examine your backup and recovery plan to see whether you have decided to use Auxiliary Storage Pools (ASPs) or checksum. If so, grab your vendor installation team before they leave, because the preloaded software on your system is about to be destroyed! As I discussed in Chapter 1, the AS/400 has a S/38-like single-level storage architecture that spreads objects (i.e., programs and data) in auxiliary storage equally over the disk to increase performance during retrieval. When you create a user ASP, you remove a segment of a disk or one or more disk units from the single-level storage area. This same situation exists when you reserve storage on your disk unit for checksum operations. Thus, after creating a user ASP or checksum, you lose a portion of your objects, and the system must re-initialize the system ASP and start from scratch: you must reload the microcode, the operating system, and each program product. Reconfiguring your storage and reloading your software may be a pain, but it is much easier during installation than when your machine is working in its production environment. With ASPs and checksum configured and the latest PTFs installed, you can begin breathing easier knowing you are already prepared for disasters.

Set the security level. Your AS/400 is shipped with the security level set at 10. With level 10, anyone who turns on a workstation, receives a sign-on screen, and presses the Enter key has full access to all system objects and functions. In the previous chapter, I strongly suggested that you operate your machine at a minimum of security level 30, and you need to reset the security level as the first step in implementing your security plan. Don't wait until you move into a production environment; by then, switching levels will be too much trouble for you and a pain for your users. Change the security level now by keying in the command

CHGSYSVAL SYSVAL(QSECURITY) VALUE(XX)

where XX is either 30, 40, or 50. The change will take effect when you IPL the system. Because you must perform IPLs to implement a number of settings on your AS/400, you might as well practice one now to put level 30 into action. Make sure the key is in the AUTO position and then power down the system with an automatic restart by keying in

PWRDWNSYS OPTION(*IMMED) RESTART(*YES)

When the system is re-IPLed, the new security level will be in effect.

Enforce password format rules. The next important step in implementing your security plan is setting the system values that control password generation. In Chapter 1, I recommended five rules to guarantee the use of secure passwords on your system. You should already have decided on the password rules, and changing the system values to enforce those rules is relatively easy.

To implement Rule 1 (passwords must be a minimum of seven characters and a maximum of 10 characters), enter the code

CHGSYSVAL SYSVAL(QPWDMINLEN) VALUE(7)
CHGSYSVAL SYSVAL(QPWDMAXLEN) VALUE(10)

The system value QPWDMINLEN (Password Minimum Length) sets the minimum length of passwords used on the system, and system value QPWDMAXLEN (Password Maximum Length) specifies the maximum length of passwords used on the system.

To implement Rule 2 (passwords must have at least one digit), enter

CHGSYSVAL SYSVAL(QPWDRQDDGT) VALUE('1')

Setting the system value QPWDRQDDGT to 1 requires all passwords to include at least one digit.

For Rule 3 (passwords cannot use the same character more than once), enter

CHGSYSVAL SYSVAL(QPWDLMTREP) VALUE('1')

Setting the system value QPWDLMTREP (Limit Character Repetition) to 1 prevents characters from being used more than once within a password.

For Rule 4 (passwords cannot use adjacent digits), enter

CHGSYSVAL SYSVAL(QPWDLMTAJC) VALUE('1')

This prevents users from creating passwords with adjacent numbers, such as their social security number or phone number.

Implement Rule 5 (passwords should be assigned a time frame for expiration) by entering the command

CHGSYSVAL SYSVAL(QPWDEXPITV) VALUE(60)

System value QPWDEXPITV (Password Expiration Interval) specifies the length of time in days that a user's password remains valid before the system instructs the user to change passwords, thus ensuring that users change their passwords regularly. The value can range from 1 to 366. The password expiration interval can also be set individually for user profiles using the PWDEXPITV parameter of the user profile. This is helpful because certain profiles, such as the QSECOFR profile, are particularly sensitive and should require a password change more often for additional security. With these system values in place, you can feel confident that only sound passwords will be used and your AS/400 will operate in a secure environment.
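If you prefer to apply all five rules in one step, you could collect the commands above in a small CL program and run it once. The program name is invented; the commands are exactly those just shown:

   PGM  /* SETPWDRLS: apply the five password rules from Chapter 1 */
      CHGSYSVAL SYSVAL(QPWDMINLEN) VALUE(7)    /* Rule 1: 7..10 characters  */
      CHGSYSVAL SYSVAL(QPWDMAXLEN) VALUE(10)
      CHGSYSVAL SYSVAL(QPWDRQDDGT) VALUE('1')  /* Rule 2: at least one digit */
      CHGSYSVAL SYSVAL(QPWDLMTREP) VALUE('1')  /* Rule 3: no repeated chars  */
      CHGSYSVAL SYSVAL(QPWDLMTAJC) VALUE('1')  /* Rule 4: no adjacent digits */
      CHGSYSVAL SYSVAL(QPWDEXPITV) VALUE(60)   /* Rule 5: expire in 60 days  */
   ENDPGM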

Change system-supplied passwords. OS/400 provides several user profiles that serve various system functions. Some of these profiles do not have passwords, which means you cannot sign on as that user profile. For example, the default-owner user profile QDFTOWN doesn't have a password because the profile receives ownership of objects when no other owner is available. However, every AS/400 is shipped with passwords for the system-supplied profiles listed below, and these passwords are preset to the profile name (e.g., the preset password for the QSECOFR profile is QSECOFR). Therefore, you must change the passwords for these profiles:

• QSECOFR (security officer)
• QPGMR (programmer)
• QUSER (user)
• QSYSOPR (system operator)
• QSRVBAS (basic service representative)
• QSRV (service representative)

To enter new passwords, sign on as the QSECOFR profile and execute the following command for each of the above user profiles:

   CHGUSRPRF USRPRF(user_profile) PASSWORD(new_password)

This can also be accomplished using the SETUP menu provided in OS/400. Type GO SETUP and then select the "Change Passwords for IBM-supplied Users" option (option 11) to work with the panel shown in Figure 2.2. You can assign a password of *NONE (although you cannot change the QSECOFR password to *NONE), or you can assign new passwords that conform to the password rules you have just implemented. After changing the passwords for the system-supplied profiles, it would be wise to write the new passwords down and store them in a safe place for future reference.

Set auto-configuration control. After you have taken steps to secure your system, the next important action concerns the system value QAUTOCFG, which controls device auto-configuration and helps you establish your naming convention. When your system is delivered, the system value QAUTOCFG is preset to 1, which allows the system to configure devices (e.g., terminals) automatically when the power is turned on. Having QAUTOCFG set to 1 is necessary at first because the AS/400 then configures itself for your initial sign-on session. When the QAUTOCFG system value is set at its default value of 1, the system identifies the device type, creates a description for that device, and assigns a name to the device.

Auto-configured devices are named according to the standard specified in the system value QDEVNAMING. The possible values for QDEVNAMING are *STD or *S36. If the system value is left at the default value of *STD, the AS/400 assigns device names according to its own standard (e.g., DSP01 and DSP02 for workstations, PRT01 and PRT02 for printers). If the option *S36 is specified, the AS/400 automatically names devices according to S/36 naming conventions (e.g., W1 and W2 for workstations, P1 and P2 for printers).

Although automatic configuration gives you an easy way to configure new devices (you can plug in a new terminal, attach the cable, and -- "Poof!" -- the system configures it), it can frustrate your efforts to establish a helpful naming convention for your new machine. Therefore, after the system has been IPLed and the initial configuration is complete, you should reset the value of QAUTOCFG to 0, which instructs the system not to auto-configure devices. You can reset auto-configuration by executing the command

   CHGSYSVAL SYSVAL(QAUTOCFG) VALUE('0')

This change takes effect when you re-IPL the system. (You must now configure devices yourself when needed. Configuring devices is beyond the scope of this chapter, but the subject is well documented in IBM's AS/400 Device Configuration Guide (SC41-8106).) Admittedly, configuring devices is much more of a pain than letting the system configure for you. But I recommend this approach because it usually results in more planning, a better naming convention, better structure, better logic, and better documentation.

Setting general system values. Several times now, you have set AS/400 system values. A system value is an object type found in library QSYS, and the AS/400 has many of these useful objects to control basic system functions. (If you haven't done so already, you should re-IPL the system now to put into effect the changes you have made for security level, password rules, and auto-configuration.) To further familiarize you with your new system, let's take a look at a few of the most significant system values.

QMAXSIGN specifies the number of invalid sign-on attempts to allow before varying that device off. The initial value is 15, but I recommend a value of 3 for tighter security. Setting QMAXSIGN to 3 means that after three unsuccessful attempts at signing on to the system (because of using an invalid user profile or password), the system will disable either the device or the user profile being used (the action performed depends upon the value of the QMAXSGNACN system value). You will have to enable the device or user profile again to make it available.

QPRTDEV specifies which printer device is the default system printer. The initial value is PRT01. When a user profile is created, the output will default to this printer (unless a particular output queue or printer device is specified). If you have a printer device named SYSPRINT, for example, you can change the value of QPRTDEV to SYSPRINT.

QCMNRCYLMT controls the limits for automatic communications recovery. This system value is composed of two numbers: the first number controls how many attempts will be made at error recovery, and the second number indicates how many seconds will expire between attempts at recovery. The initial values are '0' '0', which instruct the system to perform no error recovery when a communications line or control unit fails; instead, the operator will be prompted with a system message asking whether error recovery should be attempted. The values '5' '5' would instruct the system to attempt recovery five times and wait five seconds between those attempts. Only at the end of those attempts would the operator be prompted with a system message if recovery has not been established. A word about the use of QCMNRCYLMT: If you decide to use the system error recovery by setting this system value, remember that error recovery has a high priority on the AS/400. If a communications line or control unit fails and error recovery kicks in, you will add some work overhead to the system and will see a spike in your response time. If you experience severe communications difficulties, reset this system value to the initial value of '0' '0' and respond manually to the failure messages.

QABNORMSW is not a value that you modify; the system itself maintains the proper value. When your system IPLs, this system value contains a 0 if the previous end of system was NORMAL (meaning you powered the system down and there was no error). If the previous end of system was ABNORMAL (meaning there was a power outage that caused system failure, some hardware error that stopped the system, or any other abnormal termination of the system), this system value will be 1. The benefit of this system value is that during IPL, your initial start-up program can check this value. If the value is 1, meaning the previous end of system was ABNORMAL, you might want to handle the IPL and the start-up of the user subsystems differently.
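Your start-up program (discussed later in this chapter) is the natural place for such a check. Here is a minimal sketch; the message text and the choice of action are illustrative assumptions, not IBM-supplied behavior:

   PGM
   DCL        VAR(&ABNORM) TYPE(*CHAR) LEN(1)
   RTVSYSVAL  SYSVAL(QABNORMSW) RTNVAR(&ABNORM)
   IF         COND(&ABNORM *EQ '1') THEN(DO)
      /* Previous end of system was abnormal: */
      /* alert the operator before the user subsystems start */
      SNDPGMMSG  MSG('Previous end of system was abnormal') +
                   TOMSGQ(*SYSOPR)
   ENDDO
   ENDPGM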

These are just a few of the system values available on the AS/400. It is worth your time to read about each of these values and determine which ones need to be modified for your particular installation. For a list of system values and their initial values, consult IBM's AS/400 Programming: System Reference Summary (SX41-0028) or its AS/400 Programming: Work Management Guide (SC41-8078). (You can use the WRKSYSVAL (Work with System Values) command to examine and modify system values.)

Establishing Your Work Environment

Okay, you have covered a lot of ground so far. You've made the system secure, reset the auto-configuration value, and looked at some general system values. But it's not time for fun and games yet: now you should establish your work environment. When your system is shipped, your work environment is simple. Memory is divided into the machine pool, subsystem QBASE, and subsystem QSPL. The system uses the machine pool to interface with the hardware. Subsystem QBASE is a memory pool used to execute all the interactive, batch, and communications jobs. QSPL is the spooling subsystem that provides the operating environment (memory and processing priorities and parameters) for programs that read jobs onto job queues to wait for processing and write files from an output queue to an output device. While this simple arrangement is functional, it may not be effective or efficient, so you need to customize your work environment for your organization. Let's look at the most important work management objects.

QMCHPOOL is the system value that specifies the amount of memory allocated to the machine pool. If the system value setting the machine pool size is too low, performance is slow; if the value is too high, you waste memory. Figure 2.3 shows the formula for calculating the machine pool size, and Figure 2.4 shows a sample calculation that assumes you have an AS/400 with a main storage size of 32 MB, four SDLC communications lines, two controllers on each line, an estimated 150 active jobs, and one Token-Ring adapter. The resulting machine pool size is 4,918 KB, which you might round off to 5,000 KB. Examine this value and compare it with the calculated value you arrive at based on the configuration you are operating. Fudging a little on the calculations won't hurt if you monitor the performance of this pool under normal work loads and adjust either way.

QBASPOOL specifies the minimum size of the base storage pool. Memory not allocated to any other storage pool stays in the base storage pool. This pool supports system jobs (e.g., SCPF, QSYSARB, QSPLMAINT, QSYSWRK, and subsystem monitors) and system transients (such as file OPEN/CLOSE operations). Enter the WRKSYSSTS (Work with System Status) command to see the amount of storage the machine has reserved for these functions (the reserved value will appear on the display as RESERVED). You can use this value as a minimum value for QBASPOOL; for example, if the reserved size is 1,600 KB, use that figure as your starting point. However, if you elect to run batch jobs in this pool (instead of creating a separate private pool for batch processing), you should set the QBASPOOL value higher (a good rule of thumb is to add 400 KB for each activity level QBASPOOL supports) because many more jobs will be active under normal working conditions. Monitor the value of QBASPOOL to make sure it remains adequate.

QBASACTLVL sets the maximum activity level of the base storage pool. The initial value for QBASACTLVL depends on your AS/400 model, and this default value should be adequate. However, if you elect to run batch jobs in this pool, you should make sure that you adjust this value to allow for one activity level for each batch job that you will allow to process simultaneously. Monitor the performance of the base pool to determine whether additional memory or another activity level is required. (For basic pool performance tuning information, see Chapter 13 of the Work Management Guide.)

QMAXACTLVL sets the maximum activity level of the system by specifying the number of jobs that can compete at the same time for main storage and processor resources. If the number of subsystem activity levels exceeds the value in QMAXACTLVL, the system executes only the number of levels specified in QMAXACTLVL. Therefore, this value must at least equal the total number of activity levels allowed in all subsystems or be set higher, and you must increase QMAXACTLVL if you increase the total number of activity levels in your subsystems or if you add subsystems. By examining each subsystem, you can establish the total number of activity levels. I suggest you set the QMAXACTLVL value to five above the total number of activity levels allowed in all subsystems, which will let you increase activity levels for individual subsystems for tuning purposes without having to increase QMAXACTLVL.

QACTJOB is the system value that specifies the initial number of active jobs for which the system should allocate storage during IPL. The amount of storage allocated for each active job is approximately 110 KB (this is in addition to the auxiliary storage allocated due to the QTOTJOB system value, discussed below). I suggest you set this number to approximately 10 percent above the average number of active jobs (i.e., any user or system job that has started executing but has not ended) that you expect to have on the system. For example, if you have an average of 50 active jobs, set the QACTJOB value at 55.

QADLACTJ specifies the additional number of active jobs for which the system should allocate storage when the number of active jobs in the QACTJOB system value is exceeded. Setting this value too low may result in delays if your system needs additional jobs, and setting it too high increases the time needed to add the additional jobs.

QTOTJOB specifies the initial number of jobs for which the system should allocate auxiliary storage during IPL. The number of jobs is the total possible jobs on the system at any one time (e.g., active jobs, jobs in the job queue, and jobs having spooled output in an output queue). The default value should be adequate, but I recommend being a little more generous. Setting QACTJOB and QTOTJOB to values that closely match your requirements helps the AS/400 correctly allocate resources for your users at system start-up time instead of continually having to allocate more work space (e.g., for jobs or workstations), which results in unnecessary waiting for your users; matching the values closely provides more efficient performance.
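Translating those guidelines into commands for the 50-active-job example, the settings might look like the following. The QTOTJOB and QMAXACTLVL numbers are purely illustrative assumptions; yours should come from your own job counts and subsystem activity levels:

   CHGSYSVAL  SYSVAL(QACTJOB) VALUE(55)      /* 10 percent above 50      */
   CHGSYSVAL  SYSVAL(QTOTJOB) VALUE(200)     /* active + queued +        */
                                             /* spooled jobs (assumed)   */
   CHGSYSVAL  SYSVAL(QMAXACTLVL) VALUE(40)   /* subsystem levels + 5     */
                                             /* (assumed)                */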

QADLTOTJ specifies the additional number of jobs for which the system should allocate auxiliary storage when the initial value in QTOTJOB is exceeded. As with QADLACTJ, setting this value too low may result in delays and interruptions when your system needs additional jobs, and setting it too high slows the system when new jobs are added.

Selecting your controlling subsystem is the next task in establishing your work environment. When your system is shipped, the controlling subsystem for operations is QBASE. It supports interactive, batch, and communications jobs and can be the basis for various customized subsystems. When the system IPLs, QBASE is started, and an auto-start job also starts the spool subsystem QSPL. This default configuration is simple to manage because these two subsystems are used apart from the machine pool and the base pool: one memory pool can support all activities. But when long-running batch jobs run with interactive workstations that compete for the same memory, system performance is poor and the fight for activity levels and priority becomes hard to manage.

My experience with AS/400s has taught me that establishing separate subsystems for batch, interactive, and communications jobs gives you much more control. I recommend implementing separate subsystems for each type of job to provide separate memory pools for each activity. You can thus allocate memory to each subsystem based on the need for each type of job and set appropriate activity levels for each subsystem. In addition, you can establish appropriate execution priorities, time slices, and memory allocations for each type of job and greatly improve performance consistency.

Using QCTL as the controlling subsystem establishes separate subsystems for batch, interactive, and communications jobs. The QINTER subsystem supports interactive jobs, QBATCH supports batch jobs, QCMN supports communications jobs, and QSPL still supports its normal functions as the spooling subsystem (the descriptions for these subsystems are in the QGPL library). Although the QCTL subsystem only supports sign-on at the console, QCTL also begins an auto-start job at IPL. The auto-start job then starts the four system-supplied subsystems: QINTER, QBATCH, QCMN, and QSPL.

Use the following command to change the controlling subsystem from QBASE to QCTL:

   CHGSYSVAL SYSVAL(QCTLSBSD) VALUE('QCTL QGPL')

or you can use the WRKSYSVAL command to modify the system value. (The above CHGSYSVAL command changes the value of QCTLSBSD, which is the system value that specifies what the controlling subsystem will be.) This will be effective after the next IPL.

Making QCTL the controlling subsystem will also help if you decide to create your own subsystems. For instance, if your system supports large numbers of remote and local users, you may want to further divide the QINTER subsystem into one subsystem for remote interactive jobs and another for local interactive jobs.

Establishing your subsystems. No system values control the memory pools and activity levels for individual subsystems; the subsystem description contains the parameters that control these functions. When you create a subsystem description with the CRTSBSD (Create Subsystem Description) command, you must specify the memory allocation and the number of activity levels. You can find more information about subsystem descriptions in Chapters 17, 18, and 19 of the Work Management Guide, and more information about the CRTSBSD command in the Control Language Reference (SC41-0030).

I suggest you record any commands that change the work management system values (or any other IBM-supplied objects) by keying the same commands into a CL program that can be run each time a new release of the operating system is loaded. This ensures that your system's configuration remains consistent, and it documents your changes to these objects.

Retrieving and modifying the start-up program QSTRUP. When the system IPLs, the controlling subsystem (QBASE or QCTL, whichever you decide to use) submits an auto-start job that runs the program specified in the system value QSTRUPPGM. The initial value for that system value is QSTRUP QSYS. This program starts the appropriate subsystems and starts the print writers on your system. However, you may want to modify QSTRUP to perform custom functions. For instance, you may have created additional subsystems that need to be started at IPL, or you may want to run a job that checks the QABNORMSW system value each time the system is started. Retrieve the CL source code for QSTRUP (Figure 2.5) by executing the command

   RTVCLSRC PGM(QSYS/QSTRUP) SRCFIL(QGPL/QCLSRC)
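Figure 2.5 (the retrieved IBM source) is not reproduced here, but the following minimal sketch shows the general shape of a customized start-up program. The extra QPGMR subsystem and the printer PRT01 are illustrative, and the program-level MONMSG keeps one failure from stopping the whole IPL:

   PGM
   MONMSG     MSGID(CPF0000)  /* continue past any start-up error */
   STRSBS     SBSD(QSPL)
   STRSBS     SBSD(QINTER)
   STRSBS     SBSD(QBATCH)
   STRSBS     SBSD(QCMN)
   STRSBS     SBSD(QPGMR)     /* your additional subsystems */
   STRPRTWTR  DEV(PRT01)      /* start the print writer */
   ENDPGM

The QABNORMSW check shown earlier in this chapter could be added ahead of the STRSBS commands. After compiling the modified source under a new name, say MYSTRUP in QSYS (an invented name), you would point the system at it with

   CHGSYSVAL SYSVAL(QSTRUPPGM) VALUE('MYSTRUP QSYS')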

After retrieving the source, use the SEU editor to change QSTRUP to perform other start-up functions for you. Figure 2.6 shows a sample user-modified start-up program that uses QCTL as the controlling subsystem for the additional subsystems of QPGMR, QREMOTE, and QLOCAL. The sample program also checks the status of the QABNORMSW system value. Once you have modified QSTRUP, recompile the program into library QSYS under a different name or to a different library. (I suggest you leave the program in library QSYS, just in case someone deletes the library that contains your new start-up program.) Then change QSTRUPPGM to use your new program. Make sure you test your new start-up program before replacing the original program.

Now What?

You've established consistent and meaningful naming conventions for system objects and have established your work environment. Now that you have powered on the AS/400, it's time to start thinking about putting it to work. The next step is to set up user profiles.

Chapter 3: Access Made Easy

If you have followed my recommendations about AS/400 setup to this point, you've carefully planned for installation, education, security, migration, backup, and recovery before you ever received your system. Next, you need profiles for your users so they can sign on to the system and access their programs and data. But before you create any user profiles, you first need to understand user profiles and their attributes. With that knowledge you can, if you wish, turn over to a program the job of creating profiles for your users.

What Is a User Profile?

To the AS/400, a user profile is an object. While the object's name (e.g., WDAVIS or PGMR0234) is what you normally think of as the user profile, a user profile is much more than a name. The attributes of a user profile object define the user to the system. To make the best use of user profiles, you must understand those attributes and how they can help you control access to your system.

IBM supplies a few user profiles with which to maintain the AS/400, such as QSECOFR (Security Officer), QDFTOWN (Default Owner), and QSRV (Service Profile used by the Customer Engineer). In addition to these profiles, you need profiles for your own users. Only the security officer profile (QSECOFR) or another profile that has *SECADM (security administrator) special authority can create, change, or delete user profiles. You should restrict authority to the CRTUSRPRF (Create User Profile), CHGUSRPRF (Change User Profile), and DLTUSRPRF (Delete User Profile) commands to those responsible for the creation and maintenance of user profiles on your system.

Creating User Profiles

In Chapter 1, I stressed the importance of developing a strategic naming convention for user profiles; before you create user profiles, you should first decide how to name them. Once you have performed this task, you are ready to create a user profile for each person who needs access to your system.

You create a user profile using the CRTUSRPRF (Create User Profile) command. Figure 3.1 represents all the available parameters for creating a user profile. If you prompt the CRTUSRPRF command and then press F10, the system will display the command's parameters (Figure 3.1). The CRTUSRPRF and CHGUSRPRF commands have a parameter for each user profile attribute. Except for the user profile name (USRPRF) parameter, each parameter has a default value that will be accepted unless you supply a specific value to override that default. Following are the key user profile parameters that you will frequently change to customize a user profile.

USRPRF (User Profile)

The first parameter is USRPRF, which contains the user profile name you decided on. This is a required parameter, and you will enter the name of the user profile you are creating.

PASSWORD (User Password)

The AS/400 does not allow even the security officer to view existing passwords. Anything else would violate the first rule of passwords: that they be secret! As I said in Chapter 1, passwords should be secret, hard to guess, and regularly changed. This discussion assumes you allow users to select and maintain their own passwords.

The PASSWORD parameter lets you specify a value of *NONE, *USRPRF, or the password itself. A value of *NONE is recommended for group profiles, which I'll discuss later in this chapter. The default value, *USRPRF, dictates that the password be the same as the user profile name. You should not use PASSWORD(*USRPRF); otherwise, you will forfeit the layer of security provided by having a password that differs from the user profile name.

You cannot ensure that users keep their passwords secret (no one in MIS needs to know user passwords), but you can help make them hard to guess by controlling password format. You can control the format of passwords by using one or more of the password-related system values discussed in Chapter 2 or by creating your own password validation program (see the discussion of QPWDVLDPGM in IBM's Security Reference manual (SC41-8083)). The format you impose should encourage users to create hard-to-guess passwords but should not result in passwords that are so cryptic users can't remember them without writing them down within arm's reach of the keyboard. I suggest the following guidelines:

• Enforce a minimum length of at least seven characters (use the QPWDMINLEN system value).
• Do not allow an alphabetic character to be repeated in a password (use the QPWDLMTREP system value).
• Do not allow adjacent numbers in a password (use the QPWDLMTAJC system value).
• Require at least one digit (use the QPWDRQDDGT system value).

To ensure that users change their passwords regularly, use system value QPWDEXPITV to specify the maximum number of days a password will remain valid before requiring a change. A good value for QPWDEXPITV is 60 or 90 days, which would require all users system-wide to change passwords every two or three months. You can specify a different password expiration interval for selected individual profiles using CRTUSRPRF's PWDEXPITV parameter.

PWDEXP (Set Password to Expired)

The PWDEXP parameter lets you set the password for a specific user profile to the expired state. When you create new user profiles, you may want to specify PWDEXP(*YES) to prompt new users to choose a secret password the first time they sign on. The same is true when you reset passwords for users who forget theirs.

STATUS (Profile Status)

This parameter specifies whether a user profile is enabled or disabled for sign-on. When the value of STATUS is *ENABLED, the system allows the user to sign on to the system. If the value is *DISABLED, the system does not allow the user to sign on until an authorized user re-enables it (changes the value to *ENABLED).

The primary use of this parameter is in conjunction with the QMAXSGNACN system value. If QMAXSGNACN is set to 2 or 3, the system will disable a profile that exceeds the maximum number of invalid sign-on attempts (the QMAXSIGN system value determines the maximum number of sign-on attempts allowed). When a user is disabled, the system changes the value of STATUS to *DISABLED, which means you cannot sign on as that user profile, and an authorized user must reset the value to *ENABLED before the user profile can be used again. A value of *DISABLED is also useful for profiles of users who are on vacation and do not need access for a period of time, for users who have been terminated but cannot be deleted at the time of termination, and for other situations in which you want to ensure that a profile is not used.
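For example, to re-enable a profile that the system disabled after too many invalid sign-on attempts (the profile name is illustrative):

   CHGUSRPRF USRPRF(WDAVIS) STATUS(*ENABLED)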

USRCLS (User Class) and SPCAUT (Special Authority)

These two parameters work together to specify the special authorities granted to the user. Special authorities allow users to perform certain system functions, such as save/restore functions, job manipulation, spool file manipulation, and user profile administration (see the discussion of user classes and special authorities in Chapter 1). Figure 3.2 shows the five classes of user recognized on the AS/400: *SECOFR (security officer), *SECADM (security administrator), *PGMR (programmer), *SYSOPR (system operator), and *USER (user).

By specifying a user class for each user profile, you can classify users based upon their role on the system. These classes represent the groups of users that are typical for an installation. When you assign user profiles to classes, the profiles inherit the special authorities associated with their class, and often the default authorities are sufficient. Figure 3.2 also shows the default special authorities associated with each user class under security levels 30, 40, and 50.

The default for the SPCAUT parameter is *USRCLS, which instructs the system to refer to the user class parameter and assign the predetermined set of special authorities that appear in Figure 3.2. You can override this default by typing from one to five individual special authorities you want to assign to the user profile; Figure 3.3 lists the values allowed for the SPCAUT parameter and what each means. Here are two examples:

   CRTUSRPRF USRPRF(B12ICJES) PASSWORD(password) USRCLS(*PGMR)

User profile B12ICJES will have *SAVSYS and *JOBCTL special authorities.

   CRTUSRPRF USRPRF(B12ICJES) PASSWORD(password) USRCLS(*PGMR) +
             SPCAUT(*NONE)

In this case, user profile B12ICJES will be in the *PGMR class but will have no special authorities. After sending a message that the special authorities assigned do not match the user class, the system will create the user profile as you requested.

Special authorities should be given to only a limited number of user profiles because some of the functions provided are powerful and exceed normal object authority. *ALLOBJ special authority gives the user unlimited access to and control over any object on the system -- a user with *ALLOBJ special authority can perform any function on any object on your system. The danger in letting that power get into the wrong hands is clear. Generally speaking, no profile other than QSECOFR should have *ALLOBJ authority, and your security implementation should be designed so it does not require *ALLOBJ authority to administer most functions. Reserve this special authority for QSECOFR, and use that profile to make any changes that require that level of authority. This is also why the security level of any development or production machine should be at least 30, where resource security and *ALLOBJ special authority can be controlled with confidence.

*SECADM special authority enables the user profile to create and maintain the system user profiles and to perform various administrative functions in OfficeVision/400. The *SECADM special authority is helpful in designing a security system that gives users no more authority than they need to do their job. Using *SECADM, you can assign an individual to perform these functions without having to assign that profile to the *SECOFR user class.

The *SAVSYS special authority lets a user profile perform save/restore operations on any object on the system without having the authority to access or manipulate those objects. What would it do to your system security if your operations staff needed *ALLOBJ special authority to perform save/restore operations? If that were the case, system operators could access such sensitive information as payroll and master files. *SAVSYS avoids that authorization problem while providing operators with the functional authority to perform save/restore operations. *SAVSYS shows clearly how the AS/400 lets you grant only the authority a user needs to do a job.

*SERVICE is another special authority that should be guarded. Having *SERVICE special authority enables a user profile to use the System Service Tools. These tools provide the capability to trace data on communications lines and actually view user profiles and passwords being transferred down the line when someone signs on to the system. These tools also provide the capability to display or alter any object on your system. The QSRV, QBASSRV, and QSECOFR profiles provided with OS/400 have *SERVICE authority. You should check whether or not your systems still have the default passwords for the system profiles QSRV and QBASSRV. If they do, change the passwords to *NONE, and assign a password only when a Customer Engineer needs to use one of these profiles. So be stingy with *SERVICE special authority.
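As an illustration of granting only what a job requires (the profile name OPR01 is invented), an operator who runs the nightly saves needs *SAVSYS, and perhaps *JOBCTL, but nothing more:

   CHGUSRPRF USRPRF(OPR01) SPCAUT(*SAVSYS *JOBCTL)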

Initial Sign-On Options

CURLIB (Current Library)
INLPGM (Initial Program)
INLMNU (Initial Menu)
LMTCPB (Limit Capabilities)

The CURLIB, INLPGM, and INLMNU parameters determine the user profile's current library, initial program, and initial menu, respectively. When a user signs on, OS/400 executes the program, if any, specified in the INLPGM parameter. If the user or user program has not actually executed the SIGNOFF command when the initial program ends, the system executes the menu, if any, specified in parameter INLMNU. Let's look at a couple of examples:

Example 1

Consider the user profile USER, which has the following values:

   Current library . . . . . . . . .   ICLIB
   Initial program to call . . . . .   *NONE
     Library . . . . . . . . . . . .
   Initial menu  . . . . . . . . . .   ICMENU
     Library . . . . . . . . . . . .   ICLIB

When USER signs on to the system, the current library is set to ICLIB, and the user receives menu ICMENU in library ICLIB. Any other menus or programs that can be accessed through ICMENU and to which USER is authorized are also available.

Example 2

   Current library . . . . . . . . .   ICLIB
   Initial program to call . . . . .   ICUSERON
     Library . . . . . . . . . . . .   SYSLIB
   Initial menu  . . . . . . . . . .   *SIGNOFF

When USER signs on to the system, ICLIB is the current library in the library list, and program ICUSERON in library SYSLIB is executed. Again, any other menus or programs accessible through ICUSERON and to which the user is authorized are also available.

The value of *SIGNOFF for the INLMNU parameter is worth some discussion. When *SIGNOFF is the value for INLMNU, OS/400 signs the user off the system when the initial program ends. Thus, if the default value MAIN were given for INLMNU and program SYSLIB/ICUSERON were to end without signing the user off, the system would present the main menu.

Why are these parameters significant to security? They establish how the user interacts with the system initially, and the menu or program executed at sign-on determines the menus and programs available to that user. The CURLIB, INLPGM, and INLMNU parameters are also significant because users can modify their values at sign-on. Users can also execute OS/400 commands from the command line provided on AS/400 menus. Obviously, allowing all users these capabilities is not a good idea from a security point of view, and this is where the LMTCPB parameter enters the picture. LMTCPB controls the user's ability to

• define (using the CHGUSRPRF command) or change (at sign-on) his own initial program
• define (using the CHGUSRPRF command) or change (at sign-on) his own initial menu
• define (using the CHGUSRPRF command) or change (at sign-on) his own current library
• define (using the CHGUSRPRF command) or change (at sign-on) his own attention key program
• execute OS/400 or user-defined commands from the command line on AS/400 native menus

Figure 3.4 shows the effect of the possible values for the LMTCPB parameter. You will notice that LMTCPB(*YES) prevents changing any of these values or executing any commands. Production systems usually enforce LMTCPB(*YES) for most user profiles; these user profiles can still be secured from sensitive data using resource security. The profiles that typically need LMTCPB(*NO) are MIS personnel who frequently use the command line from OS/400 menus. Although you could specify LMTCPB(*PARTIAL) for those MIS personnel and thus ensure that they cannot change their initial program, they could still change their initial menu, which would be executed at the conclusion of the initial program.

System Value Overrides

DSPSGNINF (Display Sign-on Information)
PWDEXPITV (Password Expiration Interval)
LMTDEVSSN (Limit Device Sessions)

The system values QDSPSGNINF, QPWDEXPITV, and QLMTDEVSSN can be overridden through user profile parameters that control these functions. The available choices are the same as those for the system values themselves. You'll notice that each of these parameters has a default value of *SYSVAL; the default lets the system value control these functions. To override the system values, specify the desired values in the user profile parameters.

Group Profiles

GRPPRF (Group Profile)
OWNER (Owner)
GRPAUT (Group Authority)

All the parameters discussed to this point are used to define profiles for individual users. The GRPPRF, OWNER, and GRPAUT parameters let you associate an individual with a group of user profiles via a group profile. For instance, you might create a profile called DEVPGMR to be the group profile for your programming staff. When you authorize a group profile to objects on the system, the authorization applies to all profiles in the group.

How is this accomplished? You create a user profile for the group. The group profile should specify PASSWORD(*NONE) to prevent it from actually being used to sign on to the system -- all members of the group should sign on using their own individual profiles. The GRPPRF parameter names the group profile with which a user profile will be associated. If you create the group profile DEVPGMR, you would specify DEVPGMR as the GRPPRF value for the user profiles you put into that group. Then for each user profile belonging to a member of the staff, use the CHGUSRPRF command and the GRPPRF, OWNER, and GRPAUT parameters to place them in the DEVPGMR group.
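A hedged sketch of that setup, using the chapter's DEVPGMR example (the member profile WDAVIS is illustrative):

   CRTUSRPRF  USRPRF(DEVPGMR) PASSWORD(*NONE) +
                TEXT('Group profile - development programmers')
   CHGUSRPRF  USRPRF(WDAVIS) GRPPRF(DEVPGMR) OWNER(*GRPPRF)

With OWNER(*GRPPRF), the objects WDAVIS creates are owned by DEVPGMR, so every member of the group has *ALL authority to them, as described below.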

The OWNER parameter specifies who will own the objects created by the user profile; the parameter value determines whether the user profile or the group profile will own the objects created by profiles that belong to the group. If you specify OWNER(*USRPRF), individual user profiles own the objects they create. There is an advantage to having the group profile own all objects created by its constituent user profiles: When the group profile owns the objects, every member of the group, including the group profile and the other members of the group, has *ALL authority to the objects. This is helpful, for instance, in a programming environment where more than one programmer works on the same projects. However, there is a way to provide authority to group members without giving them *ALL authority. If a user profile owns an object, the group profile and the other members of the group have only the authority specified in the GRPAUT parameter to the object.

The GRPAUT parameter specifies the authority to be granted to the group profile and to members of the group when *USRPRF is specified as the owner of the objects created. Valid values are *ALL, *CHANGE, *USE, *EXCLUDE, and *NONE. The first four of these values are authority classes, each of which represents a set of specific object and data authorities that will be granted; these values are discussed in detail in Chapter 4 as part of the discussion of specific authorities. If you specify one of the authority class values for the GRPAUT parameter, the group profile and the other members of the group have the specified set of authorities to the object. *NONE is the value required when *GRPPRF is specified as the owner of objects created by the user.

JOBD (Job Description)

The JOBD parameter on the CRTUSRPRF command determines the job description associated with the user profile. The job description specifies a set of attributes that determine how the system will process the job. Not only is the job description you specify used when the user profile submits a batch job to the system, but values in the job description determine the attributes of the user profile's workstation session. For instance, the initial library list that you specify for the job description becomes the user portion of the library list for the workstation session. (The JOBD parameter does not affect any other portion of the library list.)

If you don't specify a particular job description for the user profile on the JOBD parameter, the system defaults to JOBD(QDFTJOBD), an IBM-supplied job description that uses the QUSRLIBL system value to determine the user portion of the library list. One way to manage the user portion of the library list is to use QUSRLIBL to establish all user libraries. Then when someone signs on to the system, QUSRLIBL supplies all possible libraries, and users can always find the programs and data they need. However, this approach disregards security because it lets all users access all libraries, even those they don't need.

The approach I recommend is to specify only general-purpose user libraries in QUSRLIBL. These libraries should contain only general utility programs (e.g., date routines, extended math functions, a random number generator). Each profile's initial program should then add only the application libraries needed by that particular user profile; you can use department name or some other trigger kept in a database file to determine library need. After the user profile signs on, the initial program can manipulate the library list.

Another approach to setting up user libraries is to create a job description for each user type on the system, as shown in the sketch below. Then when you create the user profile, you can specify the appropriate job description for the JOBD parameter, and that job description's library list becomes the user library list when that profile signs on to the system.
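A hedged sketch of that approach, with invented names (OEJOBD for an order-entry job description, OELIB for the application library):

   CRTJOBD    JOBD(QGPL/OEJOBD) INLLIBL(OELIB QGPL) +
                TEXT('Order entry users')
   CRTUSRPRF  USRPRF(OECLERK01) PASSWORD(password) +
                JOBD(QGPL/OEJOBD)

When OECLERK01 signs on, the user portion of the library list comes from OEJOBD's initial library list rather than from QUSRLIBL.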

SPCENV (Special Environment)

CRTUSRPRF's SPCENV parameter determines which operating environment the user profile is in after signing on. The values for SPCENV are *SYSVAL, *S36, and *NONE. The value *SYSVAL indicates that the system value QSPCENV will be referenced to retrieve the operating environment. If you specify *S36, the user profile will enter the S/36 environment at sign-on. If you specify *NONE, the user profile will be in the native environment at sign-on, and the user will have to enter either a STRS36E or CALL QCL command to enter the S/36 or S/38 environment.

Message Handling

MSGQ (Message Queue)
DLVRY (Delivery)
SEV (Severity Code Filter)

Three CRTUSRPRF parameters relate to handling user messages. The MSGQ parameter specifies the message queue for the user. When you create a user profile, the system automatically creates a message queue by the same name in library QUSRSYS. The user receives job completion messages, system messages, and messages from other system users via this message queue. Except in very unusual circumstances, you should use the default value (*USRPRF) for this parameter. If you keep the message queue name the same as the user profile name, system operators and other users can more easily remember the message queue name when sending messages.

The DLVRY parameter specifies how the system should deliver messages to the user. The value *BREAK specifies that the message will interrupt the user's job upon arrival. This interruption may annoy users, but it does help to ensure that they notice messages. The value *NOTIFY specifies that the system will notify the job of a message by sounding the alarm and displaying the message-waiting light; users can then view messages at their convenience. The value *HOLD causes the queue to hold messages until a user or program requests them. The value *DFT specifies that the system will answer with the default reply any message that requires a response; information messages are ignored.

The last parameter of the message group, SEV, specifies the lowest severity code of a message that the system will deliver when the message queue is in *BREAK or *NOTIFY mode. The default severity code is 00, meaning that the user will receive all messages. But if you do not want certain users to be bothered by a lot of low-severity messages (because of their operational responsibilities, for example), you can assign another value (up to 99). Messages of lower severity are delivered to the user profile's message queue but do not sound the alarm or turn on the message-waiting light. You should usually leave the SEV value at 00.

Printed Output Handling

PRTDEV (Print Device)
OUTQ (Output Queue)

The PRTDEV and OUTQ parameters are important to a basic understanding of directing printed output on the AS/400. If the user does not specifically direct a particular spooled output file to an output queue or device via an override statement (i.e., with an OVRPRTF (Override with Printer File) command, the OS/400 CHGJOB (Change Job) command, the S/36 environment procedure statements PRINT or SET, or by naming a specific output queue in a job description or print file), the system directs printed output according to the values of these two parameters.

PRTDEV specifies the name of the printer to which output is directed. This might be an actual printer name or the default value of *WRKSTN, which instructs the system to get the name of the printer from the workstation device description. Although PRTDEV refers to a specific device, an output queue with the same name as the device specified for PRTDEV must exist on the system; the printed output file is placed on the output queue named in the DEV attribute of the current printer file (this attribute is determined by the DEV parameter of the CRTPRTF (Create Printer File), CHGPRTF (Change Printer File), or OVRPRTF (Override Printer File) command). If the device specified does not exist (and thus no output queue exists for that device), and if no output queue is specified in the OUTQ parameter of the user profile, then spooled output is sent to the default system printer specified in the system value QPRTDEV. If the value of PRTDEV is *SYSVAL, output also goes to the default system printer.

The OUTQ parameter specifies the qualified name of the output queue the profile will use. Here again, the default value of *WRKSTN instructs the system to get the name of the output queue from the workstation device description. The OUTQ parameter takes precedence over the PRTDEV parameter: if the OUTQ parameter contains the name of a valid output queue (or *WRKSTN refers to an actual output queue), the system ignores the parameter PRTDEV for this user profile and places into the specified output queue any printed output not specifically directed (via an OVRPRTF or CHGJOB command during job execution) to another output queue or printer.

I follow two basic rules to determine who is on the system and to direct printed output. First, I use the user profile to determine who is on the system and the resources (e.g., libraries, menus, programs, authority) that user needs; those resources relate directly to the user's function. Regardless of where users sign on to the system, they need to see their own menus, work with their usual objects, and have the same authority they always do. Second, I don't direct spooled output by user profile but by the workstation being used. In other words, spooled output should print according to the user's location. If a user signs on to a terminal in another department because his or her workstation is broken, output should print nearby. These two rules are good standards for setting up your system, yet they give you the flexibility to handle special cases, such as sending output to a printer that can handle a special form.
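For the special-form case just mentioned, a temporary override within the job is usually enough. The file and output queue names here are invented for illustration:

   OVRPRTF FILE(INVOICE) OUTQ(FORMSQ)

Until the override is deleted or the job ends, spooled output for printer file INVOICE goes to output queue FORMSQ instead of the queue derived from the user's profile or workstation.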

Documenting User Profiles

TEXT (Text Description)

The last parameter we will look at on the CRTUSRPRF command is TEXT. TEXT gives you 50 characters in which to meaningfully describe the user profile. You can retrieve, print, or display this text to identify who requests a report or uses a program. The information you include and its format should be consistent for each user profile to ensure readability and usability.

Figure 3.5 creates a sample user profile for an order-entry clerk at branch location 01. Notice that I specified an output queue for the user profile in spite of my rule that the user's location at sign-on should control spooled output. I specified the output queue in this example so the directory will know where to send output when the user is using directory functions such as E-mail, network spooled files, or network messages. With minor changes in the user profile name, I could use the same code to create user profiles for all order-entry clerks.

Before you actually create any user profiles, consider each parameter and develop a plan to best use it. Once you determine your company's needs, devise standards for creating your user profiles. It helps to chart the various profile types and the parameter values you will use to create them. Figure 3.6 is a sample table that lists values you could use if your company had order entry, accounting, inventory control, purchasing, MIS operations, and MIS programming departments. A table such as this serves as part of your security strategy and as a reference document for creating user profiles. Keep in mind there will be exceptions you will have to handle individually.

Maintaining User Profiles

After you set up your user profiles, you will need to maintain them as users come and go or as their responsibilities change. You can change a user profile with the CHGUSRPRF (Change User Profile) command. The CHGUSRPRF command is the same as the CRTUSRPRF command, except that the CHGUSRPRF command does not have an AUT (authority) parameter, and the parameter default values for CHGUSRPRF are the parameter values you assigned when you executed the CRTUSRPRF command. As with CRTUSRPRF, you must have *SECADM special authority to use CHGUSRPRF.

Typically, you might employ CHGUSRPRF when a user forgets a password. Because the system won't display a password, you would need to use CHGUSRPRF to change the forgetful user's password temporarily and then require the user to choose a new password at the next sign-on. To accomplish this, execute the command

   CHGUSRPRF USRPRF(profile_name) PASSWORD(password) PWDEXP(*YES)

This command resets the password to a known value and sets the password expiration to *YES, so that the system prompts the user to choose a new secret password at the next sign-on.

When an employee leaves, the security administrator should promptly remove the employee's user profile from the system, or at least set the password to *NONE. To delete a user profile, use the DLTUSRPRF (Delete User Profile) command. This command has been much improved since its introduction in S/38 CPF. Many S/38 shops share a mutual problem when a user leaves, especially an MIS staff member: The user profile cannot be deleted if it owns any objects. If there are no automated methods for deleting or transferring objects owned by the former user profile, this cleanup process can take several hours.

The OS/400 version of the DLTUSRPRF command has a parameter, OWNOBJOPT, that tells the system how to handle any objects owned by the user profile you asked to delete. The system will not delete a profile that owns objects if you specify the default *NODLT for OWNOBJOPT. You can specify *DLT to delete those objects, but avoid the option *DLT unless you have used the DSPUSRPRF (Display User Profile) command to identify the owned objects and are sure you want to delete them. Remember: A backup of these objects is an easy way to cover yourself in case of error. The remaining option for OWNOBJOPT is *CHGOWN, which instructs the system to transfer ownership of any objects owned by the profile you want to delete; you must specify the new owner of these objects in the second part of this parameter. For instance, if a programmer owns some objects privately and you want to delete that programmer's profile, you might specify

   DLTUSRPRF USRPRF(profile_name) OWNOBJOPT(*CHGOWN MIS)

to transfer ownership of the objects to your MIS group profile.

If you write a program to help you maintain user profiles, you may find the RTVUSRPRF (Retrieve User Profile) command helpful. You can use RTVUSRPRF to retrieve into a CL variable one or more of the parameter values associated with a user profile. This command is valid only within a CL program because the parameters actually return variables to the program, and return variables cannot be accepted when you enter a command from an interactive command line. Figure 3.7 shows the prompt screen for RTVUSRPRF; the prompt lists the length of each variable next to the parameter whose value is retrieved in that variable. You can also prompt this command on your screen and then use the help text to learn more about each variable you can retrieve. (See IBM's AS/400 Programming: Control Language Reference (SC41-0030) for details about this command's parameters.)

You might use this command to retrieve the user's actual user profile name for testing. For example, the code segment in Figure 3.8 retrieves the current user profile name into the variable &USRPRF and tests the first character to see whether or not it is the letter B. When this condition is met, the code might display a certain menu, based on user location or department. Or you could use the test to determine what application libraries to put in the user's library list.

Flexibility: The CRTUSR Command

One option for retaining certain information about past and current user profiles is to use a database file for that purpose. Figure 3.9 shows sample Data Description Specifications for file USRINF. The USRPRF and AUTEDT fields together serve as the primary key. By modifying this database file, you can maintain one or more records for every system user profile. You can use the information in this file not only for audit purposes, but to track authorized users and to establish initial values for programs (such as providing the correct branch location number in an inquiry program) or to identify the user requesting printed output (the name can then be placed on the report). If you maintain a database file like USRINF with the appropriate user information, you can also use it to automate user profile creation and establish a session when a user signs on to the system.

Figure 3.10 shows the source for a user-written CRTUSR command. The command processing program (CPP), CRTUSRCL, actually creates the user profile on the AS/400 and calls RPG program CRTUSRR to write a record to file USRINF.

The CPP (Figure 3.11) begins by deriving the user's initials from the user's name and putting them into variable &INITS. Then the CPP uses variable &INITS and the variables &LEVEL (user's company level) and &LOCATION to create the user profile name. For example, if Jane P. Doe is a branch employee at the Kalamazoo branch office (office number 12), her user profile name becomes B12JPD. For regional employee Jack J. Jones at the Sacramento (20) office, the user profile name is R20JJJ.

The CPP sets up the TEXT parameter by combining the values from variables &LEVEL (branch, regional, or corporate) and &LOCATION with the user profile name, which is stored in variable &USRPRF. After concatenating the user profile name, the program concatenates the user's first name, middle initial, and last name and stores the value in variable &NAME, which will be used to create the TEXT parameter for the CRTUSRPRF command, thus providing consistent text for every user profile and making it easy to identify one particular user profile from a list.

The next three variables (&GRPPRF, &LMTCPB, and &USRCLS) are all determined from the user's department. If the user works in the MIS department, the group profile becomes MIS and variable &LMTCPB is assigned the value *NO. The program further determines an MIS user's class by testing whether the user works in operations (OP) or on the programming staff (PG) and then assigning the appropriate value (*SYSOPR or *PGMR) to variable &USRCLS.
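Figure 3.8 itself is not reproduced here, but a minimal sketch of that kind of test might look like the following (the menu name BRMENU is an invented placeholder):

   PGM
   DCL        VAR(&USRPRF) TYPE(*CHAR) LEN(10)
   RTVUSRPRF  USRPRF(*CURRENT) RTNUSRPRF(&USRPRF)
   IF         COND(%SST(&USRPRF 1 1) *EQ 'B') +
                THEN(GO MENU(BRMENU)) /* branch-office users */
   ENDPGM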

The program then creates the user profile by executing the CRTUSRPRF command. You will have a consistent method for creating and maintaining user profiles. There are three categories of authority to an object: • • • Object authority defines the operations that can be performed on an object. Next. and the program ends. the program sets field OFLAG (output flag) to 1. it is important to plan. Resource security defines users’ authority to objects. the global message monitor passes control to label DIAG. you will be able to retrieve information from file USRINF via a high-level language program. When you set up your AS/400. Figure 1A – Object authorities Authority Description Allowed operations . The CPP assigns non-MIS personnel to the USERS group profile and to the *USER user class. the program calls the RPG program CRTUSRR (Figure 3. When the command is successful. who does have the proper authorities. the CPP checks whether or not a personal library and output queue already exist for the user profile being created. the program must be compiled with the attribute USRPRF(*OWNER) to adopt the authority of the owner. Field authority defines the operations that can be performed on data fields. Keep in mind there will be exceptions you will have to handle individually. You can create your own CHGUSR and DLTUSR commands and programs to maintain the records in USRINF and to change or delete the user profile on the system.The Facts About Public Authorities by Gary Guthrie and Wayne Madden High among the many strengths of the AS/400 and iSeries 400 is a robust resource security mechanism.operations (OP) or on the programming staff (PG) and then assigning the appropriate value (*SYSOPR or *PGMR) to variable &USRCLS.12). The CRTUSR command ensures that you create each user profile similarly. library list. Careful planning saves literally hundreds of hours during the system's lifetime. If an error occurs on the WRITE statement. and make your user profiles work for you! Chapter 4 . and initial menu for a user profile. Then the diagnostic routine reads any messages from the program queue and sends them to the requester. substituting the variables established in the program. and you can use that information in applications to establish the work environment. Only in an exceptional case should you directly use the OS/400-supplied commands. The CPP requires the user to have *SECADM special authority and authority to the CRTUSRPRF command. If you maintain a database file like USRINF with the appropriate user information. If an error occurs during the execution of the CRTUSRPRF command. The system sends the appropriate message to the requester based on the value of the field &FLAG. it provides essential historical data for auditing and a way to extract significant information about the user profile during a workstation session. take the time to examine your current standards for establishing user profiles. Data authority defines the operations that can be performed on the object’s contents. according to shop standards. Moreover. and control returns to the CPP. Figure 1C describes field authorities. the CPP creates them and transfers their ownership to the group profile. if &DEPT is equal to OP or PG. You should usually use these commands to create and maintain user profiles. If the user does not have these authorities. and you can easily train others to create and maintain user profiles for their departments. If these objects do not exist. Figure 1A describes object authorities. 
Making User Profiles Work for You

Whether you create user profiles with CL commands or employ user-written commands, it is important to plan. Careful planning saves literally hundreds of hours during the system's lifetime. When you set up your AS/400, take the time to examine your current standards for establishing user profiles, and make your user profiles work for you!

Chapter 4 - The Facts About Public Authorities

by Gary Guthrie and Wayne Madden

High among the many strengths of the AS/400 and iSeries 400 is a robust resource security mechanism. Resource security defines users' authority to objects. Because of the number of options available, resource security is reasonably complex. It's important to examine the potential risks — as well as the benefits — of resource security's default public authority to ensure you maintain a secure production environment.

There are three categories of authority to an object:

• Object authority defines the operations that can be performed on an object.
• Data authority defines the operations that can be performed on the object's contents.
• Field authority defines the operations that can be performed on data fields.

Figure 1A describes object authorities, Figure 1B describes data authorities, and Figure 1C describes field authorities.

Figure 1A – Object authorities

Authority   Description             Allowed operations
*ObjOpr     Object operational      Examine object description
                                    Use object as determined by data authorities
*ObjMgt     Object management       Specify security for object
                                    Move or rename object
                                    All operations allowed by *ObjAlter and *ObjRef
*ObjExist   Object existence        Delete object
                                    Free storage for object
                                    Save and restore object
                                    Transfer object ownership
*ObjAlter   Object alter            Add, initialize, and reorganize database file members
                                    Alter and add database file attributes
                                    Add and remove triggers
                                    Change SQL package attributes
*ObjRef     Object reference        Specify referential constraint parent
*AutLMgt    Authorization list      Add and remove users and their authorities
            management              from authorization lists

Figure 1B – Data authorities

Authority   Description   Allowed operations
*Read       Read          Display object's contents
*Add        Add           Add entries to object
*Upd        Update        Modify object's entries
*Dlt        Delete        Remove object's entries
*Execute    Execute       Run a program, service program, or SQL package
                          Locate object in library or directory

Figure 1C – Field authorities

Authority   Description   Allowed operations
*Mgt        Management    Specify field's security
*Alter      Alter         Change field's attributes
*Ref        Reference     Specify field as part of parent key in referential constraint
*Read       Read          Access field's contents
*Add        Add           Add entries to data
*Update     Update        Modify field's existing entries

What Are Public Authorities?

Public authority to an object is that default authority given to users who have no specific, or private, authority to the object, are not on an authorization list that supplies specific authority, and are not part of a group profile with specific authority. That is, the users have no specific authority granted for their user profiles. When you create an object, either by restoring an object to the system or by using one of the many CrtXxx (Create) commands, public authorities are established. If an object is restored to the system, the public authorities stored with that object are the ones granted to the object. If a CrtXxx command is used to create an object, the Aut (Authority) parameter of that command establishes the public authorities that will be granted to the object.

Public authority is granted to users in one of several standard authority sets described by the special values *All, *Change, *Use, and *Exclude. Following is a description of each of these values:

• *All — The user can perform all operations on the object except those limited to the owner or controlled by authorization list management authority. This authority provides object operational authority, object existence authority, object alter authority, and object reference authority, as well as all the data authorities. The user can control the object's existence, grant and revoke authorities for the object, change the object, and use the object. However, he or she can't transfer ownership of the object unless the user is also the owner of the object.

• *Change — The user can perform all operations on the object except those limited to the owner or controlled by object management authority. Change authority provides object operational authority and all data authority when the object has associated data. However, he or she cannot change the attributes of the object.

• *Use — The user can perform basic operations on the object (e.g., open a file, read the records, and execute a program). This authority provides object operational authority, read data authority, add data authority, and execute data authority. The user can perform basic functions on the object; however, he or she will be prevented from updating or deleting data records or entries, although the user can read and add associated data records or entries.

• *Exclude — The user is specifically denied any access to the object.

Figure 2A shows the individual object authorities defined by the above authority sets. Figure 2B shows the individual data authorities.

Figure 2A – Individual object authorities

Authority set   *ObjOpr   *ObjMgt   *ObjExist   *ObjAlter   *ObjRef
*All               X         X          X           X          X
*Change            X
*Use               X
*Exclude

Figure 2B – Individual data authorities

Authority set   *Read   *Add   *Upd   *Dlt   *Execute
*All              X       X      X      X       X
*Change           X       X      X      X       X
*Use              X       X                     X
*Exclude

Creating Public Authority by Default

When your system arrives, OS/400 already offers a means of creating public authorities by default. This default implementation uses the QCrtAut (Create default public authority) system value, the CrtAut (Create authority) attribute of each library, and the Aut (Public authority) parameter on each of the CrtXxx commands that exist in OS/400.

System value QCrtAut provides a vehicle for systemwide default public authority. It can have the value *All, *Change, *Use, or *Exclude; *Change is the default for system value QCrtAut when OS/400 is loaded onto your system. QCrtAut alone, though, doesn't control the public authority of objects created on the system.
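You can inspect or change the systemwide default directly. Here's a minimal sketch using the standard system-value commands (the *USE value is only an example; note the cautions under "Limiting Public Authority" below before changing QCrtAut):

DSPSYSVAL SYSVAL(QCRTAUT)                 /* display the current default   */
CHGSYSVAL SYSVAL(QCRTAUT) VALUE('*USE')   /* change the systemwide default */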

The library attribute CrtAut, found on the CrtLib (Create Library) and ChgLib (Change Library) commands, defines the default public authority for all objects created in that library. Although the possible values for CrtAut include *All, *Change, *Use, *Exclude, and an authorization list name, when you create a library and don't specify a value for parameter CrtAut, the system uses the default value *SysVal, instructing the system to use the value found in system value QCrtAut (in effect, controlling new object default public authority at the system level). The value found in system value QCrtAut is then used to set the default public authority for objects created in the library.

The Aut parameter of the CrtXxx commands accepts the values *All, *Change, *Use, *Exclude, and an authorization list name, as well as the special value *LibCrtAut, which is the default value for most of the CrtXxx commands. *LibCrtAut instructs OS/400 to use the default public authority defined by the CrtAut attribute of the library in which the object will exist. In turn, the CrtAut attribute might have a specific value defined at the library level, or it might simply reference system value QCrtAut to get the value. You can elect to override the *LibCrtAut value on the CrtXxx command and still define the public authority using *All, *Change, *Use, *Exclude, or an authorization list name.

Figure 3 shows the effect of the new default values provided for the CrtAut library attribute and the Aut object attribute. The values specified in Figure 3 for the QCrtAut system value, the CrtAut library attribute, and the Aut parameter are the shipped default values. The lines and arrows on the left show how each CrtAut attribute references, by default, the QCrtAut system value. The lines and arrows on the right show how each object's Aut attribute references, by default, the CrtAut attribute of the library in which the object exists. Unless you change those defaults, every object you create on the system with the default value of Aut(*LibCrtAut) will automatically grant *Change authority to the public; users will have the default public authority defined originally in the QCrtAut system value.

If you look closely at Figure 3, you'll see that although this method may seem to make object authority easier to manage, it's a little tricky to grasp. First of all, consider that all libraries are defined by a library description that resides in library QSys (even the description of library QSys itself must reside in library QSys). Executing the command DspLibD QSys displays the library description of QSys, which reveals that *SysVal is the value for CrtAut. Therefore, the QSys definition of the CrtAut attribute controls the default public authority for every library on the system (not the objects in the libraries, just the library objects themselves) as long as each library uses the default value Aut(*LibCrtAut). Therefore, if you create a new library using the CrtLib command and specify Aut(*LibCrtAut), the public authority for the new library object comes from QSys's CrtAut attribute, which references the value specified in system value QCrtAut.

When implementing default public authorities, consider these facts (a short CL sketch follows this list):

• You can use the CrtAut library attribute to determine the default public authority for any object created in a given library, provided the object being created specifies *LibCrtAut as the value for the Aut parameter of the CrtXxx command.
• You can choose to replace the default value *SysVal with a specific default public authority value for that library (i.e., *All, *Change, *Use, *Exclude, or an authorization list name).
• Remember, at this point the Aut parameter on the CrtLib command is defining only the public authority to the library object; the CrtAut attribute defines the default public authority for each new object created in the library.
• You should note that the CrtAut value of the library isn't used when you create a duplicate object or move or restore an object in the library. Instead, the authority of the existing object is used. (Likewise, if you use the Replace(*Yes) parameter on the CrtXxx command, the public authority of the existing object is used rather than the CrtAut value of the library.)
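Here is a short, hedged CL sketch of these relationships. The library name APPLIB and data area MYDATA are hypothetical; the commands and parameters are the standard ones just described:

CRTLIB    LIB(APPLIB) AUT(*USE) CRTAUT(*SYSVAL)    /* CrtAut defers to QCrtAut        */
DSPLIBD   LIB(APPLIB)                              /* displays the library's CRTAUT   */
CRTDTAARA DTAARA(APPLIB/MYDATA) TYPE(*CHAR) +
          LEN(10) AUT(*LIBCRTAUT)                  /* public authority comes from the */
                                                   /* library's CrtAut attribute      */
CHGLIB    LIB(APPLIB) CRTAUT(*EXCLUDE)             /* future objects now default to   */
                                                   /* *EXCLUDE public authority       */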

Limiting Public Authority

The fact that public authority can be created by certain default values brings us to an interesting point. The existence of default values indicates that they are the "suggested" or "normal" values for parameters. In terms of security, however, you may want to look at default values differently: default values that define the public authority for objects created on your system are effective only if planned as part of your overall security implementation.

Your first inclination may be to change QCrtAut to *Use or even *Exclude to reduce the amount of public authority given to new libraries and objects. However, let us warn you that doing so could cause problems with some IBM-supplied functions. As a general rule, leave this system value as *Change. Another tendency might be to change this system value to *All, hoping that every system object can then be "easily" accessed. There is no doubt that changing this system value to *All would simplify security, but doing so would simply eliminate security for new libraries and objects — an unacceptable situation for any production machine. In terms of security, this would be like opening Pandora's box!

The point is this: Public authority at the library level and public authority for objects created in that library must be specifically planned and implemented — not just implemented by default via the QCrtAut system value.

Public Authority by Design

The most significant threat of OS/400's default public authority implementation is the possible misuse of the QCrtAut system value. Here are a few suggestions for effectively planning and implementing object security for your libraries and the objects in those libraries.

The first step in effectively implementing public authorities is to examine your user-defined libraries and objects to determine which, if any, changes to public authority are needed — that is, whether the current public authorities are appropriate for the libraries and the objects within those libraries. Then, modify the CrtAut attribute of your libraries to reflect the default public authority that should be used for objects created in each library. As a general rule, use the level of public authority given to the library object (the Aut library attribute) as the default value for the CrtAut library attribute. By doing so, you're providing the public authority at the library level instead of using the CrtAut(*SysVal) default.
For instance, perhaps a library contains only utility program objects that are used by various applications on your system (e.g., date-conversion programs, a binary-to-decimal conversion program, a check object or check authority program). Because all the programs should be available for execution, it's logical that the CrtAut attribute of this library be set to *Use so that any new objects created in the library will have *Use default public authority. This is a good starting point for that library.

Now consider this example. Suppose the library you're working with contains all the payroll and employee data files. You probably want to restrict access to this library and secure it by user profile, group profile, or an authorization list. In this case, you would change the CrtAut attribute to *Exclude. Any new objects created in this library should probably also have *Exclude public authority unless the program or person creating the object specifically selects a public authority by using the object's Aut attribute. The short sketch that follows shows both setups in CL.
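In this sketch, the names UTILLIB, PAYLIB, and PAYGRP are hypothetical; the commands are the standard ones discussed in this chapter:

CHGLIB    LIB(UTILLIB) CRTAUT(*USE)                    /* utility library: new objects   */
                                                       /* default to *USE                */
CRTLIB    LIB(PAYLIB) AUT(*EXCLUDE) CRTAUT(*EXCLUDE)   /* payroll library: public locked */
                                                       /* out of library and new objects */
GRTOBJAUT OBJ(PAYLIB) OBJTYPE(*LIB) +
          USER(PAYGRP) AUT(*USE)                       /* let the payroll group profile  */
                                                       /* use the library                */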

Object-Level Public Authority

If you follow the suggestions above concerning the QCrtAut system value and the CrtAut library attribute, the level of public authority at the object level coincides with the public authorities established at the library level. In many cases, Aut(*LibCrtAut) will work well as the default for each object you create. However, it's important to plan this rather than simply use the default value to save time.

We hope you now recognize the significance of public authorities and understand the process of establishing them.

Chapter 5 - Installing a New Release

One task you'll perform at some time on your AS/400 is installing a new release of OS/400 and your IBM licensed program products. If you've already installed OS/400, the good news is that this process is "a piece of cake" today compared with the effort it required back when IBM first announced and delivered the AS/400 product family. No longer must you IPL the system more than a dozen times to complete the installation. When you load a new operating-system release today, you can have the system perform an automatic installation or you can perform a manual installation — and either method normally requires only one machine IPL.

To prepare you for today's approach, here's a step-by-step guide to planning for and installing a new release of OS/400 and new IBM licensed program products. I cover the essential planning tasks you should accomplish before the installation, as well as the installation process itself.

Planning Is Preventive Medicine

Just as planning is important when you install your AS/400 system the first time, planning for the installation of a new release offers the benefits of any preventive medicine — and it's painless! You'll no doubt be on a tight upgrade schedule, with little time for unexpected problems. By planning ahead and following the suggestions in this chapter, you can avoid having to tell your manager that the AS/400 will be down longer than expected while you recover the operating system because something was missing or damaged and prevented completing the installation.

Before I describe the specific steps that will ensure a successful system upgrade, there's one other important preventive measure to note: Unless it's impossible, you should avoid mixing a hardware upgrade and a software upgrade — don't perform both tasks at the same time. If a new AS/400 model requires a particular release of OS/400 and that release is compatible with your older hardware, first install the new release on your older hardware, and then upgrade your hardware at another time to avoid compounding any problems you might encounter.

The Planning Checklist

Every good plan needs a checklist, and the list of steps in Figure 1, below, is your guide in this case. You can find a similar list in IBM's AS/400 Software Installation (SC41-5120). Read the chapter entirely to get a complete overview of the process before performing the items on the checklist. (Note: If IBM's instructions conflict with those given here, follow IBM's instructions.)

Figure 1 Installation planning checklist

Pre-installation-day tasks

Step 1: When you receive the new release, verify your order (make sure you have the correct release, the right products on the media, and software keys for any locked licensed programs), and review the appropriate installation documents shipped with the release. If these documents weren't shipped with the release, you should order them.
Step 2: Determine whether you'll perform the automatic or manual installation.
Step 3: Permanently apply any temporarily applied PTFs.
Step 4: A few days before installing the new release, remove unused objects from the system.
Step 5: Verify disk storage requirements.
Step 6: Document or save changes to IBM-supplied objects.
Step 7: A few days before installation, order the latest cumulative PTF package if you don't have the latest. You should also order the latest appropriate group packages, particularly the HIPER PTF group package.
Step 8: A day before or on the same day as the installation, save the system.

Installation-day tasks

Step 9: If your system participates in a network, resolve any pending database resynchronizations. If your system uses a 3995 optical library, check for and resolve any held optical files.
Step 10: If your system has an active Integrated Netfinity Server for AS/400, deactivate the server.
Step 11: Verify the integrity of system objects (user profiles QSECOFR and QLPINSTALL, as well as the database cross-reference files).
Step 12: Verify and set appropriate system values.

Step 1: Is Your Order Complete?

One of the first things you'll do is check the materials IBM shipped to you to make sure you have all the pieces you need for the installation. For V4R5, you should receive these items:

• distribution media (normally CD-ROM)
• Media Distribution Report
• Read This First
• Memo to Users for OS/400
• AS/400 PTF Shipping Information Letter
• individual product documentation
• AS/400 Software Installation

Don't underestimate the importance of each of these items.

For each item on the CD-ROMs, the Media Distribution Report identifies the version, release, and modification level, the licensed program name, the feature number (e.g., 5769SS1, 5769RG1), and the language feature code. You'll find the version number listed as V4 (Version 4) in the product name, and the release number and modification level are represented as R05M00 (Release 5, Modification Level 0) on the report. Note that the Media Distribution Report lists only priced features. Some features, such as licensed internal code and base OS/400, are shipped with no additional charge; the report contains no entries for these items, nor does it contain entries for locked products.

The Read This First document is just what it sounds like: a document IBM wants you to read before you install the release. This document contains any last-minute information that may not have been available for publication in the Memo to Users for OS/400 or in any manual.

The Memo to Users for OS/400 describes any significant changes in the new release that could affect your programs or system operations. You can use this memo to prepare for changes in the release. You'll find a specific section pertaining to licensed programs that you have installed or plan to install on your system.

You'll want to read the AS/400 PTF Shipping Information Letter for instructions on applying the cumulative program temporary fix (PTF) package. You also may receive additional documentation for some individual products; you should review any such documents because they may contain information unique to a product that could affect its installation.

Because IBM makes minor changes and improvements to the installation process for each release of the operating system, each new release means a new edition of the Software Installation manual. You should read this chapter along with the manual.

In addition to reviewing the deliverables listed above, and to ensure you have the latest information about installing a new release, you may want to review pertinent information found in the AS/400 Preventive Service Planning Information document. This document lists additional preventive service planning documents you may want to order. To obtain it, order PTF SF98vrm, where v = version, r = release, and m = modification level for the new release. (For information about PTF ordering options, see Chapter 6, "Introduction to PTFs.")
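Assuming ECS is configured, the PSP document for V4R5M0, for example, would follow that pattern as SF98450 (a hedged sketch; substitute the digits for your target release):

SNDPTFORD PTFID((SF98450))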
After reviewing this information, you should verify not only that you can read the CD-ROMs but also that they contain all necessary features. Examine the CD-ROMs to make sure they're not physically damaged, and then use the Media Distribution Report to determine whether all listed volumes are actually present.

An automated procedure, Prepare for Install (available through an option on the Work with Licensed Programs panel), greatly simplifies this verification process compared with earlier releases, which involved considerable manual effort. The panel in Figure 2, below, shows the installation-preparation procedures supported by Prepare for Install. One of the panel's options compares the programs installed on your system with those on the CD-ROMs, generating a list of preselected programs that will be replaced during installation.

Figure 2 Prepare for Install screen

                              Prepare for Install
                                                                 System: AS400
Type option, press Enter.
  1=Select

Opt   Description
 _    Work with user profiles
 _    Work with licensed programs for target release
 _    Display licensed programs for target release
 _    Work with licensed programs to delete
 _    List licensed programs not found on media
 _    Verify system objects
 _    Estimated storage requirements for system ASP
                                                                        Bottom
F3=Exit   F9=Command line   F10=Display job log   F12=Cancel

Before you begin, arrange the CD-ROMs in the proper order. Chapter 3 of AS/400 Software Installation contains a table specifying the correct order; you should refer to this table not only for sequencing information but also for any potential special instructions. To perform this verification, take these steps:

1. From the command line, execute the following CHGMSGQ (Change Message Queue) command to put your message queue in break mode:

   CHGMSGQ QSYSOPR *BREAK SEV(95)

2. From the command line, execute

   GO LICPGM

   You'll see the Work with Licensed Programs panel.

3. Select option 5 (Prepare for install), and press Enter.

4. Select the option "Work with licensed programs for target release," and press Enter. You should
   a. load the first CD-ROM
   b. specify 1 (Distribution media) for the Generate list from prompt
   c. specify the appropriate value for the Optical device prompt
   d. specify the appropriate value for the Target release prompt
   e. press Enter

5. When the system has read the CD-ROM, you'll receive a message asking you to load the next volume. If you have more CD-ROMs, load the next volume and reply G to the message to continue processing; otherwise, reply X to indicate that all CD-ROMs have been processed.

6. Once you've processed all the CD-ROMs, press Enter. The Work with Licensed Programs for Target Release panel will display a list of the licensed programs that are on the distribution media and installed on your system. You can inspect this list to determine whether you have all the necessary features.

7. You can use F11 to display alternate views that provide more detail and use option 5 (Display release-to-release mapping) to see what installed products can be replaced.

Preselected licensed programs (those with a 1 in the option column) indicate that a product on the distribution media can replace a product installed on your system.

8. Press Enter until the Prepare for Install panel appears.

9. Select the option "List licensed programs not found on media," and press Enter. You'll see the Licensed Programs Not Found on Media panel. If no products appear in the panel's list, you have all the media necessary to replace your existing products.

10. If products do appear in the list, you must determine whether they're necessary. If they're not, you can delete them (I describe this procedure later when I talk about cleaning up your system). If the products are necessary, you must obtain them before installation. Make sure you didn't omit any CD-ROMs during the verification process. If you didn't omit any CD-ROMs, compare your media labels with the product tables in AS/400 Software Installation and check the Media Distribution Report to determine whether the products were shipped (or should have been shipped) with your order.

11. Exit the procedure.

Step 2: Manual or Automatic?

Before installing the new release, you need to determine whether you'll perform an automated or a manual installation. The automatic installation process is the recommended method and the one that minimizes the time required for installation. The automatic installation will install the new release of the operating system and any currently installed licensed program products. However, if you're performing any of the following tasks, you should use the manual installation process instead:

• adding a disk device using device parity protection, mirrored protection, or user auxiliary storage pools (ASPs)
• changing the primary language that the operating system and programs support (e.g., changing from English to French)
• creating logical partitions during the installation
• using tapes created with the SAVSYS (Save System) command
• changing the environment (AS/400 or System/36), system values, or configuration values

The changes in the last item differ from the others listed here because you can make them either during or after the new-release installation. To simplify the installation, it's best to automatically install the release and then manually make these changes.

Step 3: Permanently Apply PTFs

One step that will save you time later is to permanently apply any PTFs that remain temporarily applied on your system. Doing so cleans up the disk space occupied by the temporarily applied PTFs. For more specific information about applying PTFs, see Chapter 6.

Step 4: Clean Up Your System

In addition to permanently applying PTFs, you should complete several other cleanup procedures. These tasks not only promote overall tidiness but also help ensure you have enough disk space for the installation. That disk space may not be much, but now is an opportune time to perform cleanup tasks. Consider these tasks (a short command sketch follows this list):

• Delete PTF save files and cover letters, and reclaim associated storage. To delete these items, you'll use command DLTPTF. Typically, you'll issue this command only for products 5769999 (licensed internal code) and 5769SS1 (OS/400).
• Delete unnecessary spooled files. Check all output queues for unnecessary spooled files. A prime candidate for housing unnecessary spooled files is output queue QEZJOBLOG. After deleting these spooled files, reclaim spool storage using command RCLSPLSTG.
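A hedged sketch of this PTF housekeeping follows. The APYPTF commands assume you want to permanently apply everything currently applied temporarily, and the DAYS value is only an example:

APYPTF    LICPGM(5769999) SELECT(*ALL) APY(*PERM)   /* licensed internal code       */
APYPTF    LICPGM(5769SS1) SELECT(*ALL) APY(*PERM)   /* OS/400                       */
DLTPTF    PTF(*ALL) LICPGM(5769999)                 /* delete PTF save files and    */
DLTPTF    PTF(*ALL) LICPGM(5769SS1)                 /* cover letters                */
RCLSPLSTG DAYS(8)                                   /* reclaim unused spool storage */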

• Have each user delete any unnecessary objects he or she owns. If at all possible, have users perform a bit of personal housekeeping by deleting spooled files and owned objects they no longer need. You'd be surprised just how much storage some users can unnecessarily consume.

• Delete unnecessary user profiles. It's rarely necessary to delete user profiles as part of installation cleanup, but if this action is appropriate in your environment, consider taking care of it now. The Prepare for install option on menu LICPGM also offers procedures for cleaning up user profiles.

• Delete unnecessary licensed programs or optional parts. I rarely see a system that doesn't contain unused licensed programs or licensed program parts. To review candidates for deletion, you can use the Prepare for Install panel's "Work with licensed programs to delete" option, which preselects licensed programs to delete. Some licensed programs may be unnecessary for reasons such as lack of support at the target release; you can use F11 (Display reasons) to determine why licensed programs are selected for deletion. For instance, it's not uncommon to see systems with many unused language dictionaries or unnecessary double-byte character set options. Prepare for Install's "Work with licensed programs to delete" option won't preselect such options because they are valid options, so you must review them yourself. If for any reason you're unable to use this procedure to delete licensed programs, you can use option 12 (Delete licensed programs) from menu LICPGM.

• Use the automatic cleanup options in Operational Assistant. These options provide a general method for tidying your system on a periodic basis.

Step 5: Is There Enough Room?

Once you've cleaned up your system, you should verify that you have enough storage to complete the installation. Like most installation-related tasks today, this one is much easier than in earlier releases. To determine whether you have adequate storage, perform these steps:

1. From the command line, execute

   GO LICPGM

   You'll see the Work with Licensed Programs panel.

2. Select option 5 (Prepare for install), and press Enter.

3. Select the option "Estimated storage requirements for system ASP," and press Enter. You'll see the Estimated Storage Requirements for System ASP panel.

4. At the Additional storage required prompt, enter storage requirements for any additional software (e.g., third-party vendor software) that you'll be installing. Include storage requirements only for software that will be stored in the system ASP. Press Enter to continue. You'll see the second Estimated Storage Requirements for System ASP panel, which displays information you can use to determine whether enough storage is available.

5. Compare the value shown for "Storage required to install target release" with the value shown for "Current supported system capacity." If the value for "Current supported system capacity" is greater than the value for "Storage required to install target release," you can continue with the installation. Otherwise, you must make additional storage available by removing items from your system or by adding DASD to your system.

6. Exit the procedure.

If you make changes to your system that affect the available storage, you should repeat these steps.

Step 6: Document System Changes

When you load a new release of the operating system, all IBM-supplied objects are replaced on the system. During this process, any changes you make to objects in library QSYS are lost because all those objects are replaced. The installation procedure saves any changes you've made in libraries QUSRSYS (e.g., message queues, output queues) and QGPL (e.g., subsystem descriptions, job queue descriptions, other work management-related objects).

To minimize the possible loss of modified system objects, you should document any changes you make to IBM-supplied objects, such as command defaults, so that you can reimplement them after installing the new release. When possible, implement these customizations in a user-created library rather than in QSYS. Although the installation won't replace the user-created library's contents, you should regenerate the custom objects it contains to avoid potential problems. Such problems might occur, for example, if IBM adds a parameter to a command: unless you duplicate the new command and then apply your customization, you'll be operating with an outdated command structure. In some cases, this difference could be critical.

I strongly suggest maintaining a CL program that contains code to reinstate customized changes; you can then execute this program with each release update. The CL program that customizes IBM-shipped objects should therefore first duplicate each object (when appropriate) and then change the newly created copy.

Step 7: Get the Latest Fixes

Normally, some time passes between the time you order and receive a new release and the date when you actually install it. During this elapsed time, PTFs to the operating system and licensed program products usually become available. To ensure you have the latest of these PTFs during installation, order PTFs for the new release the week before you install the release. Obtain the latest cumulative PTF package and appropriate group packages. Of the group packages, you should at least order the HIPER group package. (IBM releases HIPER, or High-Impact PERvasive, PTFs regularly — often daily — as necessary to correct high-risk problems.) For more information about ordering PTFs, see Chapter 6.

Step 8: Save Your System

Just before installing the new release (either on installation day or the day before), you should save your system. When possible, I recommend performing a complete system save (option 21 from the SAVE menu), but this isn't a requirement. To be safe, I advise performing at least these two types of saves:

• SAVSYS — saves OS/400 and configuration and security information
• SAVLIB LIB(*IBM) — saves all IBM product libraries

It's also wise to schedule the installation so that it immediately follows your normally scheduled backup of data and programs. This approach guarantees that you have a current copy of all your most critical information in case any problems with the new installation require you to reinstall the old data and programs. (If you'll be using a tape drive on installation day, see "Installing from Tape?" (below) for some additional tips.)
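A minimal sketch of the two saves, assuming a tape device named TAP01 and a system already in restricted state (ENDSBS SBS(*ALL) OPTION(*IMMED)), which SAVSYS requires:

SAVSYS DEV(TAP01)               /* OS/400, configuration, and security data */
SAVLIB LIB(*IBM) DEV(TAP01)     /* all IBM product libraries                */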
Installation-Day Tasks

Once you've completed step 8, you're nearly ready to start installing the new AS/400 release. The remaining steps (9 through 12) are best performed on the day of the installation (if they apply in your environment). They, together with the installation process itself, are the focus of the remainder of this chapter.

Step 9: Resolve Pending Operations

First, if your system participates in a network and runs applications that use two-phase commit support, you should resolve any pending database resynchronizations before starting the installation. Two-phase commit support, used when an application updates database files on more than one system, ensures that the databases remain synchronized. To determine whether your system uses two-phase commit support, issue the following WRKCMTDFN (Work with Commitment Definitions) command:

WRKCMTDFN JOB(*ALL) STATUS(*RESYNC)

If the system responds with a message indicating that no commitment definitions are active, you need do nothing further. Because the typical AS/400 environment isn't concerned with two-phase commit support, I don't provide details about database resynchronization here. For this information, refer to AS/400 Software Installation (SC41-5120).

Next, if your system has a 3995 optical library, check for and resolve any held optical files — that is, files that haven't yet been successfully written to media. Use the WRKHLDOPTF (Work with Held Optical Files) command to check for such files and either save or release the files.

Step 10: Shut Down the INS

If your system has an active Integrated Netfinity Server for AS/400 (INS), the installation may fail. You should therefore deactivate this server before starting the installation. To do so, access the Network Server Administration menu (GO NWSADM) and select option 3.

Step 11: Verify System Integrity

You should also verify the integrity of system objects required by the installation process. Among the requirements for the installation process are

• System distribution directory entries must exist for user profiles QSECOFR and QLPINSTALL.
• Database cross-reference files can't be in error.
• User profile QSECOFR can't contain secondary language libraries or alternate initial menus.

To verify the integrity of these objects, you can use the Prepare for install option on menu LICPGM. This option adds user profiles QSECOFR and QLPINSTALL to the system distribution directory if necessary and checks for errors in the database cross-reference files. To use the option, follow these steps:

1. From the command line, execute command GO LICPGM. The Work with Licensed Programs panel will appear.
2. Select option 5 (Prepare for install), and press Enter.
3. From the resulting panel (Figure 3, below), select the Verify system objects option, and press Enter.
4. If errors exist in the database cross-reference files, the system will issue message "CPI3DA3 Database cross-reference files are in error." Follow the instructions provided by this message to resolve the errors before continuing.
5. Exit the procedure.

Figure 3 Prepare for Install screen

                              Prepare for Install
                                                                 System: AS400
Type option, press Enter.
  1=Select

Opt   Description
 _    Work with user profiles
 _    Work with licensed programs for target release
 _    Display licensed programs for target release
 _    Work with licensed programs to delete
 _    List licensed programs not found on media
 _    Verify system objects
 _    Estimated storage requirements for system ASP
                                                                        Bottom
F3=Exit   F9=Command line   F10=Display job log   F12=Cancel

A couple of items remain to check before you're finished with this step. First, check to see whether user profile QSECOFR has a menu or program specified. User profile QSECOFR can't have a secondary language library (named QSYS29xx) at a previous release in its library list when you install a new release, so if QSECOFR has an initial program, ensure that the program doesn't add a secondary language library to the system library list.

Step 12: Check System Values

Your next step is to check and set certain system values. Remove from system values QSYSLIBL (System Library List) and QUSRLIBL (User Library List) any licensed program libraries and any secondary language libraries (QSYS29xx). Do not remove library QSYS, QGPL, QUSRSYS, or QTEMP from either of these system values. Also, set system value QALWOBJRST (Allow Object Restore) to *ALL. Once the installation is complete, reset the QALWOBJRST value to ensure system security.
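A short sketch of the Step 12 checks; the values shown are the ones discussed above:

DSPSYSVAL SYSVAL(QSYSLIBL)                    /* check for licensed program and  */
DSPSYSVAL SYSVAL(QUSRLIBL)                    /* QSYS29xx libraries in the lists */
CHGSYSVAL SYSVAL(QALWOBJRST) VALUE('*ALL')    /* permit all object restores;     */
                                              /* reset this after the install    */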

. If you receive the message “Automatic installation not complete. and scan the messages found on the History Log Contents display. . . choose option 50 (Display log for messages). “Recovery Procedures. The process is hands-free until the Sign On panel appears. When installation is complete. . One of these panels. Next. . verify the status and check the compatibility of the installed licensed programs. .This command will start an IPL process. you may see panels with status information. Step 5. the display may be blank for approximately five minutes and the IPL in Progress panel may appear. 1980. loading additional products. *PRINT F3=Exit F12=Cancel (C) COPYRIGHT IBM CORP. If any messages indicate a failure or a partially installed product. On the Work with Licensed Programs display. Start date . You needn’t respond to any of these panels. You’ll receive this prompt several times during the installation process. refer to “Recovery Procedures” in AS/400 Software Installation. Step 8. Licensed Internal Code IPL in Progress. you must respond to the prompt to continue processing. loading PTFs. Upon 100 percent completion of the install. After loading the volume. . . be prepared to wait for quite some time while the installation process continues. Step 7. execute the GO LICPGM command. refer to the “Installed Status Values” section of Appendix E in AS/400 Software Installation. If you see a different status value for any licensed program. The response value you specify depends on whether you have more volumes to process: A response of G instructs the installation process to continue with the next volume. . lists several IPL steps. To verify the installation. You’ll see the Licensed Internal Code – Status panel. . and a response of X indicates that no more volumes exist. you’ll see the Sign On panel. . A status of *COMPATIBLE indicates a licensed program is ready to use. . . 07/17/00 09:32:35 *______ MM/DD/YY HH:MM:SS *. The amount of time needed depends on the amount of recovery your system requires. Start time . Press Enter on this panel. . press Enter. As the installation process proceeds. . . 1998. some of which can take a long time (two hours or more). below) will appear. . . the installation process loads the operating system followed by licensed programs. Output . you needn’t respond to the status information panels you see. Figure 4 Display Install History screen Display Install History Type choices. sign on using user profile QSECOFR and continue by verifying the installation. Load the next volume when prompted to do so. Once all your CD-ROMs have been read.” you should sign on using the QSECOFR user profile and refer to Appendix A. use option 10 (Display licensed programs) from menu LICPGM to display the release and installed status values of the licensed programs.” in AS/400 Software Installation for instructions about how to proceed. If the automatic installation process was completed normally. Step 6. During this process. Note that SRC codes will continue to appear in the display area of the control panel. and updating software license keys. To do so. Next. The Display Install History panel (Figure 4. Verify the installation.

Load PTFs. Next, install the cumulative PTF package (either the one that arrived with the new release or a new one you ordered, as suggested in the planning steps above). The shipping letter that accompanies the PTF tape will have specific instructions about how to install the PTF package. In addition to installing a cumulative PTF package, you should install any group PTFs you have — particularly the HIPER, or High-Impact PERvasive, PTFs group package. (For information about installing PTFs, see Chapter 6.)

Note: To complete the installation process, you must install a cumulative PTF package or perform an IPL. An IPL is required to start the Initialize System (INZSYS) process. The INZSYS process can take two hours or more on some systems, but for most systems it's completed in a few minutes. After the IPL is completed, sign on as QSECOFR and check the install history (using option 50 on menu LICPGM) for status messages relating to the INZSYS process. You should look for a message indicating that INZSYS has started or a message indicating its completion. If the message doesn't appear in a reasonable amount of time, wait a few minutes and try option 50 again. Continue checking the install history until you see the message indicating INZSYS completion. If you see neither message, refer to the "INZSYS Recovery Information" section of Appendix A in AS/400 Software Installation.

Update software license keys. For each product, update the license key and the usage limit to match the usage limit you ordered. To install software license keys, use the WRKLICINF (Work with License Information) command. The license information is part of the upgrade media. You must install license keys within 70 days of your release installation.

Load additional products. You're now ready to load any additional licensed programs and secondary languages. The installation steps for loading additional products are similar to the steps you've already taken. Return to the Work with Licensed Programs menu, and select option 11 (Install licensed programs). You'll see the Install Licensed Programs display that appears in Figure 5, below. Select a licensed program to install, and press Enter. If you don't see a desired product in the list, follow the specific instructions delivered with the distribution media containing the new product.

Figure 5 Install Licensed Programs screen

                          Install Licensed Programs
                                                                 System: AS400
Type options, press Enter.
  1=Install

      Licensed    Installed
Opt   Program     Status        Description
 _    5769SS1     *COMPATIBLE   OS/400 - Library QGPL
 _    5769SS1     *COMPATIBLE   OS/400 - Library QUSRSYS
 _    5769SS1     *COMPATIBLE   OS/400 - Extended Base Support
 _    5769SS1     *COMPATIBLE   OS/400 - Extended Base Directory Support
 _    5769SS1     *COMPATIBLE   OS/400 - S/36 and S/38 Migration
 _    5769SS1     *COMPATIBLE   OS/400 - System/36 Environment
 _    5769SS1     *COMPATIBLE   OS/400 - System/38 Environment
 _    5769SS1     *COMPATIBLE   OS/400 - Example Tools Library
 _    5769SS1     *COMPATIBLE   OS/400 - AFP Compatibility Fonts
 _    5769SS1     *PRV          OS/400 - *PRV CL Compiler Support
 _    5769SS1                   OS/400 - S/36 Migration Assistant
 _    5769SS1                   OS/400 - Online Information
 _    5769SS1                   OS/400 - Host Servers
                                                                        More...
F3=Exit   F11=Display release   F12=Cancel   F19=Display trademarks
(C) COPYRIGHT IBM CORP. 1980, 1998.

Step 9. Save your system. The installation of your new release is now complete! The only thing left to do before restarting production activities is to perform another SAVSYS to save the new release and the new IBM program products: perform the SAVSYS and the SAVLIB LIB(*IBM) operations now. Before starting the save, determine whether system jobs that decompress objects are running. To make this determination, use the WRKACTJOB (Work with Active Jobs) command and check the status of QDCPOBJx jobs (more than one may exist). You should start your save only if these jobs are in an inactive state. You can ensure these jobs are inactive by placing the system in restricted state. Don't worry — the QDCPOBJx jobs will become active again when the system is no longer in restricted state.

Final Advice

The only risk you take when installing a new release is not being prepared for failure. It's rare that a new-release installation must be aborted midway through, but it does happen. Just think how much trouble it would be if you had a disk crash soon after loading the new release and, with no current SAVSYS, were forced to restore the old release and repeat the installation process. If you take the precautions mentioned in the planning suggestions and turn to "Recovery Procedures" in AS/400 Software Installation in the event of trouble, you won't find yourself losing anything but time should you encounter an unrecoverable error. For the most part, installing new releases is only an inconvenience in time.

Chapter 6 - Introduction to PTFs

Updated June 2000

Here's a step-by-step guide to ordering and installing PTFs — and to knowing when you need them.

Much as we'd like to think the AS/400 is invincible, from time to time even the best of systems needs a little repair. IBM provides such assistance for the AS/400 in the form of PTFs. A PTF, or program temporary fix, is one or more objects (most often program code) that IBM creates to correct a problem in the IBM licensed internal code, in the OS/400 operating system, or in an IBM licensed program product. The fixes are called "temporary" because a PTF fixes a problem or adds an enhancement only until the next release of that code or product becomes available; at that time, the fix becomes part of the base product itself, or "permanent." In addition to issuing PTFs to correct problems, IBM uses PTFs to add function or enhance existing function in these products.

Hardware and software service providers distribute PTFs. Your hardware maintenance vendor is typically responsible for providing microcode PTFs, while your software service provider furnishes system software PTFs. Because IBM is both the hardware and the software provider for most shops, the focus here is on IBM distribution of PTFs.

In this introduction to PTFs, you'll learn the necessary information to determine when PTFs are required on your system, what PTFs you need, how to order PTFs, and how to install and apply those PTFs.

When Do You Need a PTF?

Perhaps the most difficult hurdle to get over in understanding PTFs is knowing when you need one. Basically, there are three ways to determine when you need one or more PTFs.

The first way is simple: You should regularly order and install the latest cumulative PTF package, group PTFs, Client Access service packs, and necessary individual HIPER PTFs, loading each package fairly soon after it becomes available. A cumulative PTF package is an ever-growing collection of significant PTFs. IBM releases cumulative packages on a regular basis, and you should stay up-to-date with them. You should also load the latest cumulative package any time you load a new release of OS/400. You might wonder what criteria IBM uses to determine whether a PTF is significant. In general, a PTF is deemed significant, and therefore included in a cumulative package, when it has a large audience or is critical to operations. To order the latest cumulative PTF package, you use the special PTF identifier SF99vrm, where v = OS/400 version, r = release, and m = modification.

A group PTF is a logical grouping of PTFs related to a specific function, such as database or Java. Each group has a single PTF identifier assigned to it so that you can download all PTFs for the group by specifying only one identifier. Like a group PTF, a service pack is a logical grouping of multiple PTFs available under a single PTF identifier for easy download. Client Access service packs are important if you access your system using Client Access.
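For example, to order the cumulative package for V4R5M0 through ECS, the identifier would follow that pattern as SF99450 (a hedged sketch; substitute the digits for your release):

SNDPTFORD PTFID((SF99450))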

A second way you may discover you need a PTF is by encountering a problem. To identify and analyze the problem, you might use the ANZPRB (Analyze Problem) command, or you might investigate error messages issued by the system. If you report a system problem to IBM based on your analysis, you may receive a PTF immediately if someone else has already reported the problem and IBM has issued a PTF to resolve it.

The third way to discover you might need particular PTFs is by regularly examining the latest Preventive Service Planning (PSP) information. You can download PSP information by ordering special PTFs. PTFs flagged HIPER are released regularly (often daily) as necessary to correct high-risk problems, such as data loss or a system outage. Ignore these important PTFs, and you chance catastrophic consequences. (To learn more about PSP documents and for helpful guidelines for managing PTFs, see the section "Developing a Proactive PTF Management Strategy" near the end of this article.)

How Do You Order a PTF?

You can order individual PTFs, a set of PTFs (e.g., a group PTF), or a cumulative PTF package. You have two choices when ordering PTFs electronically: You can use Electronic Customer Support (ECS) and the CL SNDPTFORD (Send PTF Order) command, or you can order PTFs on the Internet. When electronic means are not practical, IBM sends the PTFs via mail on selected media. Electronically ordered PTFs are delivered electronically only when they're small enough that they can be transmitted within a reasonable connect time; when a PTF is too large, IBM sends it by mail, as it does for PTFs ordered by non-electronic means.

SNDPTFORD Basics

The SNDPTFORD command is a simple command to use; however, a brief introduction here may point out a couple of the command's finer points to simplify its use. Figure 1 shows the prompted SNDPTFORD command.

For parameter PTFID, a required entry, you enter one, or up to 20, PTF identifiers (e.g., SF98440, MF98440). The parameter actually has three elements or parts. First is the actual PTF identifier.

The second element is the Product identifier, which determines whether the PTF order is for a specific product or for all products installed on your system. The default value you see in Figure 1, *ONLYPRD, indicates that the order is for all products installed or supported on your system. Instead of this value, you can enter a specific product ID (e.g., 5769PW1) to limit your order to PTFs specific to that product.

The third PTFID element, Release, determines whether the PTF order is for the current release levels of products on your system or a specific release level. A Release value of *ONLYRLS indicates that the order is for the release levels of the products installed or supported on your system. Instead of this value, you can enter a specific release identifier (e.g., V4R3M0) to limit the PTF order to that release, which may or may not be the current release level installed for your products. For example, you might order a different release-level PTF for products you support on remote AS/400s.

Two restrictions apply to the Product and Release elements of the PTFID parameter. First, if you specify *ONLYPRD for the product element, you also must specify *ONLYRLS for the release element. Second, if you specify a particular product, you also must specify a particular release level.

Each PTF you receive has two parts: a cover letter that describes both the PTF and any prerequisites for loading the PTF, and the actual fix. From time to time, you may want to download only a cover letter to determine whether a particular PTF is necessary for your system. The next SNDPTFORD parameter, PTFPART (PTF parts), makes this possible. Use value *ALL to request both PTF(s) and cover letter(s) or value *CVRLTR to request cover letter(s) only.
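A hedged sketch of the cover-letter-first workflow, using a hypothetical PTF identifier SF12345:

SNDPTFORD PTFID((SF12345)) PTFPART(*CVRLTR)              /* cover letter only          */
SNDPTFORD PTFID((SF12345)) PTFPART(*ALL) REORDER(*YES)   /* later: order the fix;      */
                                                         /* REORDER(*YES) is required  */
                                                         /* because the PTF is already */
                                                         /* on order                   */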

The next two parameters, RMTCPNAME (Remote control point) and RMTNETID (Remote network identifier), identify the remote service provider and the remote service provider network. You should change parameter RMTCPNAME (default value *IBMSRV) only if you are using a service provider other than IBM or are temporarily accessing another service provider to obtain application-specific PTFs. Parameter RMTNETID must correctly identify the remote service provider network. The value *NETATR causes the system to refer to the network attributes to retrieve the local network identifier (you can view the network attributes using the DSPNETA, or Display Network Attributes, command). If you change the local network identifier in the network attributes, you may then have to override this default value when you order PTFs. Your network provider can give you the correct RMTNETID if the default does not work.

SNDPTFORD's DELIVERY parameter determines how PTFs are delivered to you. The value *ANY specifies that the PTFs should be delivered by any available method. A value of *LINKONLY tells ECS to deliver PTFs only via the electronic link. Most PTFs ordered using SNDPTFORD are downloaded immediately using ECS; PTFs that are too large are instead shipped via mail.

The next parameter, ORDER, specifies whether only the PTFs ordered are sent or also any requisite PTFs that you must apply before, or along with, the PTFs you're ordering. Value *REQUIRED requests the PTFs you're ordering as well as any other required PTFs that accompany the ordered PTFs. Value *PTFID specifies that only those PTFs you are ordering are to be sent.

The last parameter, REORDER, specifies whether you want to reorder a PTF that is currently installed or currently ordered. Valid values are *NO and *YES. Note that REORDER(*YES) is necessary if you've previously sent for the cover letter only and now want to order the PTF itself. If you permit REORDER to default to *NO, OS/400 won't order the PTF because it thinks it has already ordered it when, in fact, you've received only the cover letter.
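Pulling these parameters together, here's a hedged example that orders a hypothetical PTF SF12345 plus its requisites over the electronic link from IBM's service provider:

SNDPTFORD PTFID((SF12345 *ONLYPRD *ONLYRLS)) +
          RMTCPNAME(*IBMSRV) +
          DELIVERY(*LINKONLY) +
          ORDER(*REQUIRED) +
          REORDER(*NO)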
Then simply complete the following few steps. and begin the download. 5. or along with. you may then have to override this default value when you order PTFs. that your electrical power is protected with a UPS. or Display Network Attributes.as400service. ORDER. Log on. The process outlined here performs both the loading and the application of the PTF.

Installing Licensed Internal Code PTFs

The system maintains two copies of all the IBM licensed internal code on your system. The permanent copy is stored in system storage area A, and the copy considered temporary is stored in system storage area B. This lets your system maintain one permanent copy while you temporarily apply changes (PTFs) to the other area. Only when you're certain you want to keep the changes are those changes permanently applied to the control copy of the licensed internal code.

When the system is running, it uses the copy you selected on the control panel before the last IPL. Except for rare circumstances, such as when serious operating system problems occur, the system should always run using storage area B. If you currently see a B in the Data portion of the control-panel display, this means that the next system IPL will use storage area B for the licensed internal code. To apply PTFs to the B storage area, the system must actually IPL from the A storage area and then IPL again on the B storage area to begin using those applied PTFs. On older releases of OS/400, you had to manually IPL to the A side, apply PTFs, and then manually IPL to the B side again. The system now handles this IPL process automatically during the PTF install and apply process.

Step 1. Determine which storage area your machine is currently using. To do so, execute the command DSPPTF 5769999 and check the IPL source field to determine which storage area is current. You will see either ##MACH#A or ##MACH#B, which tells you whether you are running on storage area A or B, respectively.

Step 2. If you are not running on the B storage area, execute the following PWRDWNSYS (Power Down System) command before continuing with your PTF installation:

PWRDWNSYS OPTION(*IMMED) +
          RESTART(*YES) +
          IPLSRC(B)

Step 3. Print and review any cover letters that accompany the PTFs. Look especially for any specific preinstallation instructions. You can do this by entering the DSPPTF (Display Program Temporary Fix) command and specifying the parameters COVERONLY(*YES) and either OUTPUT(*) or OUTPUT(*PRINT), depending on whether you want to view the cover letter on your workstation or print the cover letter. For example, to print the cover letter for PTF MF12345, you would enter the following DSPPTF command:

DSPPTF LICPGM(5769999) +
       SELECT(MF12345) +
       COVERONLY(*YES) +
       OUTPUT(*PRINT)

Note: You can also access cover letters at the IBM Service Web site by selecting from the Tech Info & Databases branch.

Step 4. Enter GO PTF and press Enter to reach the Program Temporary Fix (PTF) panel. Select the "Install program temporary fix package" option. Supply the correct value for the Device parameter, depending on whether you received the PTF(s) on media or electronically. If you received the PTF(s) on media, enter the name of the device you're using. If you received the PTF(s) electronically, enter the value *SERVICE. Then press Enter.

The system then performs the necessary steps to temporarily apply the PTFs and re-IPL to the B storage area. Once the IPL is complete, verify the PTF installation (see "Verifying Your PTF Installation" below).

Installing Licensed Program Product PTFs

Installing PTFs for licensed program products is almost identical to installing licensed internal code PTFs except that you don't have to determine the storage area on which you're currently running -- the separate storage areas apply only to licensed internal code. The abbreviated process for licensed program products is as follows:

Step 1. Review any cover letters that accompany the PTFs. Look especially for any specific pre-installation instructions.

Step 2. Enter GO PTF and press Enter to reach the Program Temporary Fix (PTF) panel. Select the "Install program temporary fix package" option.

Step 3. Supply the correct value for the Device parameter, depending on whether you received the PTF(s) on media or electronically. If you received the PTF(s) on media, enter the name of the device you're using. If you received the PTF(s) electronically, enter the value *SERVICE. Then press Enter.

Step 4. After the IPL is complete, verify the PTF installation (see the section "Verifying Your PTF Installation").

Verifying Your PTF Installation

After installing one or more PTFs, you should verify the installation process before resuming either normal system operations or use of the affected product. Use the system-supplied history log to verify PTF installations by executing the DSPLOG (Display Log) command, specifying the time and date you want to start with in the log:

DSPLOG LOG(QHST) +
       PERIOD((start_time start_date))

Be sure to specify a starting time early enough to include your PTF installation information. On the Display Log panel, look for any messages regarding PTF installation. The message log will display messages that indicate whether the install was successful. If you have messages that describe problems, see AS/400 Basic System Operation, Administration, and Problem Handling (SC41-5206) for more information about what to do when your PTF installation fails. You can also use option 50, "Display log for messages," on the Work with Licensed Programs panel (to reach this panel, issue the command GO LICPGM).

How Current Are You?

One last thing that will help you stay current with your PTFs is knowing what cumulative PTF package you currently have installed. To determine your current cumulative PTF package level, execute the command

DSPPTF LICPGM(5769SS1)

The ensuing display panel shows the identifiers for PTFs on your system. The panel lists PTFs in decreasing sequence, showing cumulative package information first, before individual PTFs. Cumulative packages start with the letters TC or TA and end with five digits that represent the Julian date (in yyddd format) for the particular package. PTF identifiers that start with TC indicate that the entire cumulative package has been applied; those starting with TA indicate that HIPER PTFs and HIPER licensed internal code fixes have been applied. To determine the level of licensed internal code fixes on your system, execute the command

DSPPTF LICPGM(5769999)
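For the DSPLOG verification step above, a concrete invocation might look like the following sketch; the time and date values are, of course, hypothetical:

DSPLOG LOG(QHST) +
       PERIOD(('06:00:00' '04/02/01'))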

Identifiers beginning with the letters TL and ending with the five-digit Julian date indicate the cumulative level. When installing a cumulative PTF package, you want the levels for TC, TA, and TL packages to match. This circumstance indicates that you've applied the cumulative package to licensed program products as well as to licensed internal code.

Developing a Proactive PTF Management Strategy

The importance of developing sound PTF management processes cannot be overstated. A proactive PTF management strategy lessens the impact to your organization that can result from program failures by avoiding those failures. Remember the old adage "An ounce of prevention ..."? Suffice it to say I've seen situations where PTFs would have saved tens of thousands of dollars. Avoid problems, and you avoid their associated high costs.

Because environments vary, no single strategy applies to all scenarios. However, you should be aware of certain guidelines when evaluating your environment and establishing scheduled maintenance procedures. Your PTF maintenance strategy should include provisions for preventive service planning, preventive service, and corrective service.

Preventive Service Planning

Planning your preventive measures is the first step to effective PTF management. To help you with planning, IBM publishes several Preventive Service Planning documents in the form of informational PTFs. These documents contain service recommendations concerning critical PTFs or PTFs that are most likely to affect your system. (The easiest and fastest way to obtain these documents is from the IBM Service Web site.) Following are some minimum recommendations for PSP review.

You should start with the software and hardware PSP information documents by ordering SF98vrm (Current Cumulative PTF Package) and MF98vrm (Hardware Licensed Internal Code Information). These documents contain information about critical PTFs, defective PTFs, and the latest cumulative PTF package, as well as a list of the other PSP documents from which you can choose. You should order and review SF98vrm and MF98vrm at least monthly. If you review no other additional PSP documents, review the information for HIPER PTFs and Defective PTFs. Typically, you should review this information weekly.

In years past, PSP documents contained enough detail to let you determine the nature of the problems that PTFs fixed. Unfortunately, that's no longer the case. With problem descriptions such as "Data Integrity" and "Usability of a Product's MAJOR Function," you often must do a little more work to determine the nature of problems described in the PSP documents by referring to PTF cover letters.

In addition to reviewing PSP documents, consider subscribing to IBM's AS/400 Alert offering. This service notifies you weekly about HIPER problems. You can receive this information by fax or mail. To learn more about this service, go to http://www.ibm.com/services.

Preventive Service

Preventive measures are instrumental to your system's health, ensuring optimal performance and maximizing availability. Preventive maintenance includes regular application of cumulative and group PTF packages and Client Access service packs.

Cumulative PTF packages are your primary preventive maintenance aid. Because they are collections of PTFs, your work is actually quite easy: there's no need to wade through thousands of PTFs to determine those you need. Instead, simply order and apply the packages. Released on a periodic basis, cumulative packages should be applied soon after they become available -- usually every three to four months. This rule of thumb is especially true if you're using the latest hardware or software releases or making significant changes to your environment.

At a minimum, you should stay current with any group PTF packages applicable to your environment, as well as with Client Access service packs if appropriate. (A service pack is a logical grouping of multiple PTFs available under a single PTF identifier for easy download.) You can find Client Access service pack information and download service packs by following the links at http://www.as400.ibm.com/clientaccess.

Between releases of cumulative PTF packages, you may need to order individual PTFs critical to sound operations.
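The vrm portion of the SF98vrm and MF98vrm identifiers is your version, release, and modification level. As a sketch, on a V4R4M0 system (vrm = 440) the monthly review might begin with these two orders; substitute your own release level:

SNDPTFORD PTFID((SF98440))
SNDPTFORD PTFID((MF98440))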

Corrective Service

Even the most robust and aggressive scheduled maintenance efforts can't thwart all possible problems. When you experience problems, you need to find the corrective PTFs. Ferreting out PSP information about individual problems and fixes is without a doubt the most detailed of the tasks in managing PTFs. However, if you take the time to learn your way around PSP information and PTF cover letters, you'll be able to find timely resolution to your problems. Your goal should be to minimize the corrective measures required. With robust preventive service planning and preventive service measures, your environment will be dramatically more stable operationally, and your corrective service issues will be minimal.

PTFs and Logical Partitioning (LPAR)

Although the basic steps of installing PTFs are the same for a system with logical partitions, some important differences exist. Fail to account for these differences when you apply PTFs, and you could find yourself with an inoperable system requiring lengthy recovery procedures. In particular, heed the following warnings:

• When you load PTFs to a primary partition, shut down all secondary partitions before installing the PTFs.
• When using the GO PTF command on the primary partition, change the automatic IPL parameter from its default value of *YES to *NO unless the secondary partitions are powered down.

These warnings, however, are only the beginning with respect to the differences imposed by logical partitioning. There are also partition-sensitive PTFs that apply specifically to the lowest-level code that controls logical partitions. When you receive partition-sensitive PTFs, always refer to any accompanying special instructions before loading the PTFs onto your system. These PTFs have special instructions that you must follow exactly; do not use the GO PTF command. These instructions include the following steps:

1. Power down all secondary partitions.
2. Load the PTFs on all logical partitions using the LODPTF (Load PTF) command.
3. Apply the PTFs temporarily on all logical partitions using the APYPTF (Apply PTF) command.
4. Perform an IPL of all partitions from the A side.
5. Permanently apply any PTFs superseded by the new PTFs.
6. Apply all the PTFs permanently using command APYPTF.
7. Perform a power down and IPL of the primary partition from side B in normal mode. Then perform normal-mode IPLs of all secondary partitions from side B.

This article is excerpted from a new edition of Wayne Madden's Starter Kit for the AS/400, to be published in the spring of 2001 by NEWS/400 Books. -- G.G.

Gary Guthrie is a technical editor for NEWS/400. You can reach him by e-mail at gguthrie@as400network.com.

Chapter 7 - Getting Your Message Across: User to User

Sooner or later, you will want to use messages on the AS/400. For instance, your application might need to communicate with another program. Another time, you might need to have a program communicate to a user or workstation to request input, report a problem, or simply update the user or system operator on the status of the program (e.g., "Processing today's invoices"). Program-to-program messages can include informational, diagnostic, escape, status, notification, and request messages, each of which aids in developing program function, problem determination, or application auditing. "File YOURLIB/YOUROBJ not found" is an example of a diagnostic program-to-program message.

You or your users can also send messages to one or more users or workstations on the spur of the moment. Sometimes called impromptu messages, user-to-user messages are not predefined in a message file. They might simply convey information, or they might require a response (e.g., "Joe, aliens have just landed and taken the programming manager hostage. What should we do???"). User-to-user messages can serve as a good introduction to AS/400 messaging.

Sending Messages 101

To send user-to-user messages, you use one of three commands: SNDMSG (Send Message), SNDBRKMSG (Send Break Message), or SNDNETMSG (Send Network Message). SNDMSG is the most commonly used (you can use it even if LMTCPB(*YES) is specified on your user profile) and the easiest to learn. To access the SNDMSG command, you can

• key SNDMSG on a command line,
• select option 3 on the User Task menu,
• select option 5 on the System Request menu, or
• select option 4 on the Operational Assistant menu.

(This last option may be best for end users because Operational Assistant provides the most user-friendly interface to the SNDMSG command.) The SNDMSG prompt screen is shown in Figure 7.1.

The message string you enter in the MSG parameter can be up to 512 characters long. To specify the message destination, you can enter a user profile name in the TOUSR parameter. TOUSR can have any of the following values:

• User_profile_name -- to request that the message be sent to the user's message queue (which may or may not have the same name as the user profile).
• *SYSOPR -- to request that the message be sent to the system operator's message queue (QSYS/QSYSOPR).
• *REQUESTER -- to request that the message be sent to the interactive user's external message queue or to the system operator's message queue when the command is executed from within a program.
• *ALLACT -- to request that the message be sent to the message queue of every user currently signed on to the system. (*ALLACT is not valid when MSGTYPE(*INQ) is also specified.)

For example, if you simply want to inform John, a co-worker, of a meeting, you could enter

SNDMSG MSG('John -- Our meeting today will be at 4:00. Jim') +
       TOUSR(JSMITH)

Another way to specify the message destination is to enter up to 50 message queue names in the TOMSGQ parameter. The specified message queue can be any external message queue on your system, including a workstation, user profile, or system history log (QHST) message queue (for more about sending messages to QHST, see "Sending Messages into History"). Specifying more than one message queue is valid only for informational messages.

The MSGTYPE parameter lets you specify whether the message you are sending is an *INFO (informational, the default) or *INQ (inquiry) message. Like the informational message, an inquiry message appears on the destination message queue as text. However, an inquiry message supplies a response line and waits for a reply. If you want to schedule a meeting with John and be sure he receives your message, you could enter

SNDMSG MSG('John -- Will 4:00 be a good time for our meeting today? Jim') +
       TOUSR(JSMITH) +
       MSGTYPE(*INQ)

The RPYMSGQ parameter on the SNDMSG command specifies which message queue should receive the response to the inquiry message. Because the default for RPYMSGQ is *WRKSTN, John's reply will return to your (the sender's) workstation message queue.

As you can see, the SNDMSG command provides a simple way to send a message or inquiry to someone else on the local system. Although SNDMSG can send a message to a message queue, it is the message queue attributes that define how that message will be received. A delivery mode of *HOLD does not notify the user or workstation about a message received. A delivery mode of *NOTIFY causes a workstation alarm to sound and illuminates the "message wait" block on the screen. If the message queue delivery mode is *BREAK and no break-handling program is specified, the message is presented as soon as the message queue receives it.

I Break for Messages

The SNDBRKMSG command offers a solution for messages that must get through regardless of the message queue's delivery mode or break-handling program or the message's severity. Although SNDBRKMSG provides the same function as the SNDMSG command, it has one quirk: the message queue receiving the command handles messages in break mode, regardless of the message queue's delivery mode. Figure 7.2 shows the SNDBRKMSG prompt screen.

There are two other differences between the SNDBRKMSG command and the SNDMSG command. First, the SNDBRKMSG command has only the TOMSGQ parameter on which to specify a destination (i.e., only workstation message queues can be named as destinations). Second, the SNDBRKMSG command lets you specify the value *ALLWS (all workstations) in the TOMSGQ parameter to send a message to all workstation message queues.
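For example, to force a message through to two workstation message queues regardless of their delivery modes, you might enter something like the following sketch (the queue names DSP01 and DSP02 are hypothetical):

SNDBRKMSG MSG('Payroll printer is jammed -- your output is being held.') +
          TOMSGQ(DSP01 DSP02)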

The following is a sample message intended for all workstations on the system:

SNDBRKMSG MSG('Please sign off the system immediately. The system will be +
          unavailable for the next 30 minutes.') +
          TOMSGQ(*ALLWS)

This message will go immediately to all workstation message queues and be displayed on all active workstations. If a workstation is not active, the message simply will be added to the queue and displayed when the workstation becomes active and the message queue is allocated.

Casting Network Messages

The third command you can use to send a message to another user is SNDNETMSG (Figure 7.3). There are two situations for which the SNDNETMSG command is more appropriate than SNDMSG or SNDBRKMSG. First, you might need this command if your system is in a network because SNDMSG and SNDBRKMSG can't send messages to a remote system. Second, you can use SNDNETMSG to send messages to groups of users on a network -- including users on your local system -- using a distribution list.

As with SNDMSG and SNDBRKMSG, you can type an impromptu message up to 512 characters long in the MSG parameter. The distinguishing feature of the SNDNETMSG command is the destination parameter, TOUSRID. The value you specify must be either a valid network user ID or a valid distribution list name (i.e., a list of network user IDs). Each network user ID is associated with a user profile on a local or remote system in the network. If necessary, you can add network user IDs to the system network directory using the WRKDIR (Work with Directory) command. You can create a distribution list using the CRTDSTL (Create Distribution List) command and add the appropriate network user IDs to the list using the ADDDSTLE (Add Distribution List Entry) command.

When you specify a distribution list as the message destination, the message is distributed to the message queue of each network user on the list. For example, if distribution list PGMRS consists of network user IDs for Bob, Sue, Jim, and Linda, you could send the same message to each of them (and give them reason to remember you on Bosses' Day) by executing the following command:

SNDNETMSG MSG('Thanks for your hard work on the order entry project. +
          Go home early today and enjoy a little time off.') +
          TOUSRID(PGMRS)

The only requirements for this method are that user profiles have valid network user IDs on the network directory and that System Network Architecture Distribution Services (SNADS) be active. (You can start SNADS by starting the QSNADS subsystem.)
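As a sketch, the commands to build the PGMRS list used above might look like this; the SYSNAME qualifier and the user IDs are hypothetical and must match entries in your network directory:

CRTDSTL LSTID(PGMRS SYSNAME)
ADDDSTLE LSTID(PGMRS SYSNAME) +
         USRID((BOB SYSNAME) (SUE SYSNAME) (JIM SYSNAME) (LINDA SYSNAME))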

As you can see, you have more than one option when sending user-to-user messages on the AS/400. Users can send impromptu messages to and receive them from other workstation users on the system. And, of course, you can design screens and reports that use messages instead of constants, thus enabling multilingual support, and you can create programs to further define how to process messages -- but these are topics for another day. This introduction to messages should get you started and whet your appetite for learning more. Now you're ready to move on to program-to-user and program-to-program messages.

Chapter 8 - Secrets of a Message Shortstop

What makes the OS/400 operating system tick? You could argue that messages are really at the heart of the AS/400. The system uses messages to communicate between processes. Messages tell when a job needs some attention or intervention. The system sends messages noting the completion of jobs or updating the status of ongoing jobs. The computer dispatches messages to a problem log so the operator can analyze any problems the system may be experiencing. You send requests in the form of messages to the command processor when you execute AS/400 commands. OfficeVision, for example, uses a message to sound an alarm when a calendar event is imminent.

With hundreds of messages flying around your computer at any given moment, it's important to have some means of catching those that relate to you -- and that might require some action. IBM provides several facilities to organize and handle messages. In this chapter, I'll explore three methods of message processing: the system reply list, break handling programs, and default replies.

Return Reply Requested

The general concept of the system reply list is quite simple. The system reply list lets you specify that the operating system is to respond automatically to certain predefined inquiry messages without requiring that the user reply to them. The reply list primarily consists of message identifiers and reply values for each message. When a job using the reply list encounters a predefined inquiry message, OS/400 searches the reply list for an entry that matches the message ID (and the comparison data, which we'll cover later). When a matching entry exists, the system sends the listed reply without intervention from the user or the system operator. When the system finds no match, it sends the message to the user (for interactive jobs) or to the system operator (for batch jobs).

There is only one reply list on the system (hence the official name: system reply list). A job does not automatically use the system reply list -- you must specify that the reply list will handle inquiry messages. To do this, indicate INQMSGRPY(*SYSRPYL) within any of the following CL commands:

• BCHJOB (Batch Job)
• SBMJOB (Submit Job)
• CHGJOB (Change Job)
• CRTJOBD (Create Job Description)
• CHGJOBD (Change Job Description)

The reply list and the break handling program have similar functions and can, under some conditions, accomplish the same result. The reply list tends to be easier to implement, while a break handling program can be much more flexible in the way it handles different kinds of messages. The third message handling technique, the default reply, lets you predefine an action that the computer will take when it encounters a specific message; the reply becomes a built-in part of the message description.

IBM ships the AS/400 with the system reply list already defined as illustrated in Figure 8.1. This predefined reply list issues a "D" (job dump) reply for inquiry messages that indicate a program failure. You can modify the supplied reply list by adding your own entries using the following CL commands:

• WRKRPYLE (Work with Reply List Entries)
• ADDRPYLE (Add Reply List Entry)
• CHGRPYLE (Change Reply List Entry)
• RMVRPYLE (Remove Reply List Entry)

Note that the reply list uses the same convention as the MONMSG (Monitor Message) CL command for indicating generic ranges of messages. "RPG0000," for example, matches all messages that begin with the letters "RPG," from RPG0001 through RPG9999.
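To see how the pieces fit together, here is a hedged sketch: one added entry that answers any RPG inquiry message with a dump reply, and a batch job submitted to use the reply list. The sequence number and the library, program, and job names are hypothetical; choose sequence numbers that don't collide with the IBM-supplied entries:

ADDRPYLE SEQNBR(4100) MSGID(RPG0000) RPY(D)

SBMJOB CMD(CALL PGM(MYLIB/NIGHTLY)) +
       JOB(NIGHTLY) +
       INQMSGRPY(*SYSRPYL)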

Let's look at each component individually. Each entry consists of a unique sequence number (SEQNBR), a message identifier (MSGID), optional comparison data (CMPDTA) and starting position (START), a reply value (RPY), and a dump attribute (DUMP).

The message identifier can indicate a specific message (e.g., RPG1241) or a range of messages (e.g., RPG1200 for any RPG messages from RPG1201 through RPG1299), or you can use *ANY as the message identifier for an entry that will match any inquiry message, regardless of its identifier. The reply list message identifiers are independent of the message files: if you have two message files with a message ID USR9876, the system reply list treats both messages the same.

The comparison data is an optional component of the reply list. When a reply list entry contains comparison values, the system compares the values with the message data from the inquiry message. If the message data comparison value matches the list entry comparison value, the system uses the list entry to reply to the message; otherwise, it continues to search the list. If you indicate a starting position in the system reply list, the comparison begins at that position in the message data. The format of the message data is defined when you or IBM creates the message. To look at the format, use the DSPMSGD (Display Message Descriptions) command.

You use comparison values when you want to send different replies for the same message, according to the contents of the message data. For example, Figure 8.2 shows three list entries for the CPA4002 (Align forms) message. When the system encounters this message, it checks the message data for the name of the printer device. If the device name matches either the "PRT3816" or "PRTHPLASER" comparison data, the system automatically replies with the "I" (Ignore) response; otherwise, it requires the user or the system operator to respond to the message.

You use the reply value portion of the list entry to indicate how the system should handle the message in this entry. Your three choices are:

• Indicate a specific reply (up to 32 characters) that the system automatically sends back to the job in response to the message (e.g., I, D, R, and G in Figure 8.2).
• Use *DFT (Default) to have the system send the message default reply from the message description.
• Use *RQD (Required) to require the user or system operator to respond to the message.

Figure 8.2 lists some possibilities to consider for your own reply list.

The dump attribute in the system reply list tells the system whether or not to perform a job dump when it encounters the message matching this entry. Specify DUMP(*YES) or DUMP(*NO) for the list entry. You may request a job dump no matter what you specified for a reply value. The system dumps the job before it replies to the message and returns control to the program that originated the message. The dump then serves as a snapshot of the conditions that caused a particular inquiry message to appear.

A Table of Matches

The system searches the reply list in ascending sequence number order. Therefore, if you have two list entries that would satisfy a match condition, the system uses the one with the lowest sequence number. Use the *ANY message identifier with great care, with a specific reply, for example (usually not a good idea). It is a catch-all entry that ensures the system reply list handles all messages, regardless of their message identifier. If you use it, it should be at the end of your reply list, with sequence number 9999, and you should be confident that the reply in the entry will be appropriate for any error condition that might occur. With such an entry specifying *DFT and DUMP(*YES), if the system reply list gets control of any message other than the listed ones, it performs a dump and then replies to the message with the default reply from the message description. If you don't use *ANY, the system sends unmonitored messages to the operator, just as if the job were not using the reply list.

Although the reply list is a system-wide entity, you can use it with a narrower focus. Figure 8.3 shows portions of a CL program that temporarily changes the system reply list and then uses the changed list for message handling, checking for certain inquiry messages and issuing replies appropriate to the program. At the end, the program returns the system reply list to its original condition. Since the program temporarily changes the system reply list, any other jobs that use the reply list may use the changed reply list while this program is active. You should probably limit this approach to programs run on a dedicated or at least a fairly quiet system. However, this technique does work well for such tasks as software installation and nighttime unattended operations.
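Expressed as commands, the three CPA4002 entries described above might be added as in the following sketch. The sequence numbers are hypothetical, and you should check DSPMSGD CPA4002 to confirm where the device name actually appears in the message data before relying on the comparison values:

ADDRPYLE SEQNBR(110) MSGID(CPA4002) CMPDTA('PRT3816') RPY(I)
ADDRPYLE SEQNBR(120) MSGID(CPA4002) CMPDTA('PRTHPLASER') RPY(I)
ADDRPYLE SEQNBR(130) MSGID(CPA4002) RPY(*RQD)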

Give Me a Break Message

Another means of processing messages is to use a break handling program, which processes messages arriving at a message queue in *BREAK mode. Like the reply list, a break handler does not take control of break messages unless you first tell it to do so. IBM supplies a default break handling program; it's the same command processing program used by the DSPMSG (Display Messages) command. But you can write your own break handling program if you want break messages to do more than just interrupt your normal work with the Display Messages screen.

Both the system reply list and a break handling program customize your shop's method of handling messages that arrive on a message queue, but there are several differences. The system reply list has a specific purpose: to send a reply back to a job in response to a specific message. The break handler's function, on the other hand, is limited only by your programming ability -- it can perform any number of functions. The system reply list handles only inquiry messages, while a break handler can process any type of message, such as a completion message or an informational message. When a message arrives, the break handler interrupts the job in which the message occurs and processes the message; it then returns control to the job. The interruption can, however, be transparent to the user.

A break handler can send customized replies for inquiry messages. It can redirect messages to another message queue. It can convert messages to status messages. It can initiate a conversational mode of messaging between workstations, and it can process command request messages. If you use a break handler in a job that is already using the system reply list, the reply list will get control of the messages first, and it will pass to the break handler only those messages it cannot process.

To turn control over to a break handling program, use the following CL command:

CHGMSGQ MSGQ(library/msgq_name) +
        DLVRY(*BREAK) +
        PGM(program_name) +
        SEV(severity_code)

OS/400 calls the break handler if a message of high enough severity reaches the message queue. When it calls the break handler, OS/400 passes it three arguments:

• the name of the message queue
• the library containing the message queue
• the reference key of the received message

The only requirement of the break handler is that it must receive the referenced message with the RCVMSG (Receive Message) command. You can then do nearly anything you want with the message before you end the break handler and let the original program resume.

Take a Break

Figure 8.4 shows a sample break handling program. The example in Figure 8.4 displays any notify or inquiry messages, allowing you to send a reply, if appropriate. For any other messages, it simply resends the message as a status message, which appears quietly at the bottom of the user's display without interrupting work (unless display of status messages is suppressed in the user profile, the job, or the system value QSTSMSG).

Figure 8.5 shows a portion of an initial program that puts a break handler into action. The initial program first displays all messages that exist in a user's message queue, and then it clears all but unanswered messages from the queue and activates the break handling program. Note that the initial program also checks whether the user is the system operator; if so, it activates the break handler for the system operator message queue. In addition, it monitors for and displays messages that could indicate potentially severe conditions, such as running out of DASD space. It also checks for any calendar alarms sent by OfficeVision and displays them.
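The skeleton below is a minimal sketch of a break handler -- not the program in Figure 8.4 -- showing the required RCVMSG step and one simple handling choice:

PGM PARM(&MSGQ &MSGLIB &MSGKEY)
  DCL VAR(&MSGQ)   TYPE(*CHAR) LEN(10)   /* message queue name        */
  DCL VAR(&MSGLIB) TYPE(*CHAR) LEN(10)   /* message queue library     */
  DCL VAR(&MSGKEY) TYPE(*CHAR) LEN(4)    /* message reference key     */
  DCL VAR(&MSG)    TYPE(*CHAR) LEN(512)  /* retrieved message text    */
  /* Required step: receive the referenced message */
  RCVMSG MSGQ(&MSGLIB/&MSGQ) MSGKEY(&MSGKEY) MSG(&MSG) RMV(*NO)
  /* Sample handling: resend the text quietly as a status message */
  SNDPGMMSG MSG(&MSG) TOPGMQ(*EXT) MSGTYPE(*STATUS)
ENDPGM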

It's Your Own Default

One of the easiest methods of processing message replies automatically is also one of the most often overlooked. The message descriptions for inquiry or notify messages can contain default replies, which you can tell the system to use when the message occurs. You specify the message's default reply using either the ADDMSGD (Add Message Description) or CHGMSGD (Change Message Description) command. The default reply must be among the valid replies for the message. You can display a message's default reply using the DSPMSGD command. You can also use WRKMSGD (Work with Message Descriptions) to manage message descriptions.

The default reply is used under the following circumstances:

• when you use the system reply list and the list entry's reply for the message is *DFT
• when you have changed the delivery mode of the receiving message queue to *DFT, using the CHGMSGQ (Change Message Queue) command

No messages are put in a message queue when the queue is in *DFT delivery mode: inquiry messages receive their default replies, and informational messages are ignored. Messages will be logged, however, in the system history log (QHST).

You can easily set up an unattended environment for your computer to use every night by having your system operator execute the following command daily when signing off:

CHGMSGQ MSGQ(QSYSOPR) DLVRY(*DFT)

Your system will then use default replies instead of sending messages to an absent system operator. This technique may prevent your overnight batch processing from hanging up because of an unexpected error condition. You might also consider including the CHGMSGQ command within key CL programs, such as unattended backup procedures or program installation procedures.

Another good use for default replies is to have one message queue handle all printer messages, for which default replies may be appropriate. By defining default replies to these messages and placing that queue in *DFT delivery mode, you can have the system automatically respond to forms loading and alignment messages. You should be careful, however, to ensure the suitability of the default replies for any messages that might be sent to the queue.

Chapter 9 - Print Files and Job Logs

There is certainly nothing mysterious about printing on your AS/400. However, you must understand a few basic concepts about print files to make printing operations run more smoothly. In this chapter, I cover two items concerning print files: modifying attributes of print files and handling a specific type of print file -- the system-generated job log. These tips are especially helpful if you have migrated from the S/36 or equipment other than the S/38. This basic understanding of how to define print files and job logs and of the functions they provide will increase your power to customize your system by controlling output.

How Do You Make It Print Like This?

The AS/400 does support direct printing (i.e., output directly to the printer, which ties up a workstation or job while the printer device completes the task); however, almost 100 percent of the time you will use OS/400 print files to format and direct output. IBM ships the system with many print files, such as QSYSPRT, which the system uses when you compile a CL program; QSUPPRT, which the system uses when you print a listing from the source file; and QQRYPRT, which the system uses when you run a query. These print files have predefined attributes that control such features as lines per inch (LPI), characters per inch (CPI), overflow line number, form size, and output queue.

In addition to the print files IBM provides, however, you can create two types of print files within your applications. The first type uses the CRTPRTF (Create Print File) command to define a print file that has no external definition (i.e., the print file has a set of defined attributes from the CRTPRTF command but only one record format). Any program using this type of print file must contain output specifications that describe the fields, positions, and edit codes used for printing. The second type of print file is externally described: When you use the CRTPRTF command, you specify a source member that describes the various record formats your program will use for printing. (For specifications you can make in DDS, refer to IBM's Data Management Guide (SC41-9658).)
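As a sketch of the externally described approach (the library, file, and member names are hypothetical), you compile the DDS source into a print file like this:

CRTPRTF FILE(MYLIB/INVPRT) +
        SRCFILE(MYLIB/QDDSSRC) +
        SRCMBR(INVPRT)

A program can then open INVPRT and write its named record formats without coding internal output specifications.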

Whether you create an externally described print file or a print file that must be used with programs that internally describe the printing, you define certain print file attributes (e.g., those controlling LPI, CPI, and form size) as part of the print file object definition.

Let's examine a problem that often occurs when an AS/400 installation is complete. All the IBM-supplied print files are predefined for use with paper that is 11 inches long. On your previous system, the overflow worked just right, but you weren't around when someone set the system up. In this example, however, your paper is only 8 1/2 inches long. If you have been using paper that is shorter (e.g., the 14 1/2-by-8 1/2-inch size) and generate output (using DSPLIB OUTPUT(*PRINT) or a QUERY/400 report) with a system-supplied print file, the system will print the report through the page perforations, especially when you generate long reports. So how do you instruct the AS/400 to print correctly on the short, wide paper?

First, you need to find out what the default values for printing are. To do so, you type in the DSPFD (Display File Description) command for the print file QSYSPRT:

DSPFD QSYSPRT

When you execute that command, you see the display represented in Figure 9.1. Notice the page size parameter, the LPI parameter, and the overflow parameter -- for example, LPI(6), PAGESIZE(66 132), and OVRFLW(60). These default parameters combine to determine the number of inches (i.e., 11) the system considers to be a single page on the system-supplied objects. The page size can vary from one form type to the next, but you can easily compensate for differences by modifying the appropriate print files.

In this case, you need to modify the form size and overflow of each print file (including all system-supplied print files and those you create yourself) that generates reports on this short-stock paper. You can accomplish this task by identifying each print file that needs to be modified and executing the following command for each:

CHGPRTF FILE(library_name/file_name) +
        PAGESIZE(51 132) OVRFLW(45)

If you need to change all print files on the system, you can execute the same command, but place the value *ALL in the parameter FILE:

CHGPRTF FILE(*ALL/*ALL) PAGESIZE(51 132) OVRFLW(45)

Another approach is to change the LPI parameter to match a valid number of lines per inch for the configured printer and then calculate the new form size and overflow parameters based on the new LPI you specified. Remember that changing the LPI, the page length, and the overflow line number does not require programming changes for programs that let the system check for overflow status (i.e., you do not need to have program logic count lines to control page breaks). Such programs use the new attributes of the print file at the next execution.

Once you have set up the page size you want and determined how a given job will print, you can start thinking about controlling when that job will print. The two parameters you can use to ensure that spooled data is printed at the time you designate are SCHEDULE and HOLD. The SCHEDULE parameter specifies when to make the spooled output file available to a writer for printing. If the system finds the *IMMED value for SCHEDULE, the file is available for a writer to begin printing the data as soon as the records arrive in the spooled file. This approach can be advantageous for short print items, such as invoices, receipts, or other output that is printed quickly. However, when you generate long reports, allocating the writer as soon as data is available can tie up a single writer for a long time. Entering a *FILEEND value for SCHEDULE specifies that the spooled output file is available to the writer as soon as the print file is closed in the program. The *JOBEND value for SCHEDULE makes the spooled output file available only after the entire job (not just a program) is completed. Selecting this value can be useful for long reports you want available for printing only after the entire report is generated. One benefit of selecting this value is that you can ensure that all reports one job generates will be available at the same time and therefore will be printed in succession (unless the operator intervenes).
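If you want to test the short-paper values for a single job before changing print files globally, an override is a reasonable sketch of the idea (it lasts only for the job that issues it):

OVRPRTF FILE(QSYSPRT) PAGESIZE(51 132) OVRFLW(45)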

The HOLD parameter works the way the name sounds. Selecting a value of *YES specifies that when the system generates spooled output for a print file, the output file stays on the output queue with a *HLD status until an operator releases the file to a writer. Selecting the *NO value for HOLD specifies that the system should not hold the spooled print file on the output queue and should make the output available to a writer at the time the SCHEDULE parameter indicates. For example, when a program generates a spooled file with the attributes of SCHEDULE(*FILEEND) and HOLD(*NO), the spooled file is available to the writer as soon as the file is closed.

As with the PAGESIZE and OVRFLW parameters, you can modify the SCHEDULE and HOLD parameters for print files by using the CHGPRTF command. Remember that you can also override these parameters at execution time using the CL OVRPRTF (Override with Print File) command. You can also change some print file attributes at print time using the CHGSPLFA (Change Spool File Attributes) command or option 2 on the Work with Output Queue display. For further reading on these parameters, see the discussion of the CRTPRTF command in IBM's Programming: Control Language Reference (SC41-0030). You should examine the various attributes associated with the CRTPRTF (Create Print File), CHGPRTF, and OVRPRTF commands to see whether or not you need to make other changes to customize your printed output needs.

Where Have All the Job Logs Gone?

After you have your print files under control, the next step in customizing your system can prick a nasty thorn in the flesh of AS/400 newcomers: learning how to manage all those job logs the system generates as jobs are completed. A job log is a record of the job execution and contains informational, diagnostic, completion, and other messages. You can use the DSPJOB (Display Job) or the DSPJOBLOG (Display Job Log) command to view a job log as the system creates it during the job's execution.

The reason these potentially useful job logs can be a pain is that the AS/400 generates a job log for each completed job on the system. When your system is shipped, it is set up so that every job (interactive sessions as well as batch) generates a job log that records the job's activities and that can vary in content according to the particular job description. When a job is completed, the system spools the job log to the system printer unless you change print file QUSRSYS/QPJOBLOG (the print file the system uses to generate job logs) to redirect spool files to another output queue where they can stay for review or printing. But fortunately, you can manage job logs. The three methods for job-log management are controlling where the printed output for the job log is directed, deciding whether to generate a printed job log for jobs that are completed normally or only for jobs that are completed abnormally, and determining how much information to include in the job logs.

Controlling where the printed output is directed. You can elect to redirect the job log print file in one of two ways. The most popular method is to utilize the OS/400-supplied Operational Assistant, which will not only redirect your job logs to a single output queue, but also perform automatic cleanup of old job logs based on a number of retention days, which you supply. You can access the system cleanup option panel from the Operational Assistant main menu (type "GO ASSIST"), from the SETUP menu (type "GO SETUP"), or directly by typing "GO CLEANUP," which will present you with the Cleanup Menu panel that you see in Figure 9.2. Before starting cleanup, you need to define the appropriate cleanup options by selecting option 1, "Change cleanup options." Figure 9.3 presents the Change Cleanup Options panel, where you can enter the retention parameters for several automated cleanup functions as well as determine at what time you want the system to perform cleanup each day. You can find a complete discussion of this panel and automated cleanup in Chapter 12, "AS/400 Disk Storage Cleanup." For now, my only point is this: The first time you activate the automated cleanup function by typing a "Y" in the "Allow automatic cleanup" option on this panel (see Figure 9.3), OS/400 changes the job log print file so that all job logs are directed to the system-supplied output queue QEZJOBLOG. Even if you do not start the actual cleanup process, or if you elect to stop the cleanup function at a later date, the job logs will continue to accumulate in output queue QEZJOBLOG.

The second method for redirecting job logs is to manually create an output queue called QJOBLOGS or QPJOBLOG using the CRTOUTQ (Create Output Queue) command. After creating an output queue to hold the job logs, you can use the CHGPRTF command (with OUTQ identifying the output queue you created for this purpose) by typing

CHGPRTF FILE(QPJOBLOG) +
        OUTQ(QUSRSYS/output_queue_name)

Now the job logs will be redirected to the specified output queue. You might also want to specify HOLD(*YES) to place the spool files on hold in your new output queue. The job logs can then remain in that queue until you print or delete them; if no printer is assigned to that queue, those spool files will not be printed.

When you think about managing job logs, you should remember that if you let job logs accumulate, they can reduce the system's performance efficiency because of the overhead for each job on the system. If a job log exists, the system is maintaining information concerning that job. Therefore it is important either to utilize the automated cleanup options available in OS/400's Operational Assistant or to manually use the CLROUTQ (Clear Output Queue) command regularly to clear all the job logs from an output queue. Another concern related to the overhead involved with job logs is how to control their content (size) and reduce the number of them the system generates.

Determining how much information to include in the job logs. The job description you use for job initiation is the object that controls the creation and contents of the job log. This job description has a parameter with the keyword LOG, which has three elements -- the message level and the message severity, both of which control the number of messages the system writes to a job log, and the message text level, which controls the level (i.e., amount) of message text written to the job log when the first two values create an error message. (For a detailed description of severity codes, you can refer to IBM's Programming: Control Language Reference, Volume 1, Appendix A, "Expanded Parameter Descriptions.")

Before discussing all three elements, I should define the term "message severity." Every message generated on the AS/400 has an associated "severity," which you can think of as its priority. Messages that are informational (e.g., messages that tell you a function is in progress) have a severity of 00. Messages that are absolutely essential to the system's operation (e.g., inquiry messages that must be answered) have a severity of 99.

The first element, the message level, determines which messages will be logged and which will be ignored. It specifies one of the following five logging levels (note that a high-level message is one sent to the program message queue of the program that received the request or commands being logged from a CL program):

0 -- No data is logged.

1 -- The only information logged is any message sent to the job's external message queue with a severity greater than or equal to the message severity specified in this LOG parameter.

2 -- In addition to the information logged at level 1, the following is logged:
• Any requests or commands logged from a CL program that cause the system to issue a message with a severity level that exceeds or is equal to that specified in the LOG parameter.
• All messages associated with a request or commands being logged from a CL program and that result in a high-level message with a severity greater than or equal to the message severity specified in the LOG parameter.

3 -- The same as level 2, with the additional logging of any requests or commands being logged from a CL program:
• All requests or commands being logged from a CL program.
• All messages associated with a request or commands being logged from a CL program and that result in a high-level message with a severity greater than or equal to the message severity specified.

4 -- The following information is logged:
• All requests or commands logged from a CL program and all messages with a severity greater than or equal to the severity specified, including trace messages.

The second element of the LOG parameter, the message severity, works together with the message level: messages with a severity greater than or equal to the one specified in this element will be logged in the job log according to the logging level specified in the previous element.
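For example, the following sketch gives a hypothetical batch job description generous logging -- level 4, severity 00, full second-level text (*SECLVL, the third element, is discussed next) -- a common choice for unattended nightly work:

CHGJOBD JOBD(MYLIB/NIGHTLY) LOG(4 00 *SECLVL) LOGCLPGM(*YES)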

With the third element of the LOG parameter, the message text level, you control the amount of message text written to the job log. A value of *MSG specifies that the system write only first-level message text to the job log. A value of *SECLVL specifies that the system write both the message and help text of the error message to the job log. By setting the message text level value to *NOLIST, you ensure that any job initiated using that value in the job description does not generate a job log if the job is completed normally. Jobs that are completed abnormally will generate a job log with both message and help text present.

The job description also includes the parameter LOGCLPGM (Log CL Program Commands). This parameter affects the job log in that a value of *YES instructs the system to write to the job log any logable CL commands (which can happen only if you specify LOG(*JOB) or LOG(*YES) as an attribute of the CL program being executed). A value of *NO specifies that commands in a CL program are not logged to the job log.

Deciding whether or not to generate a printed job log for jobs that are completed normally. You can cause any interactive or batch job initiated with QDFTJOBD to withhold spooling of a job log if the job terminates normally. You simply create your user profiles with the default -- QDFTJOBD (Default Job Description) -- for the parameter JOBD (Job Description) and enter the command

CHGJOBD JOBD(QDFTJOBD) LOG(*SAME *SAME *NOLIST)

Is this approach wise? Interactive jobs almost always end normally. Do you need the information in those job logs? If you understand how your workstation sessions run (e.g., which menus are used and which programs called), you probably do not need the information from sessions that end normally. You might need the information when errors occur, but you can generally re-create the errors at a workstation. Therefore, changing the job description for such interactive sessions is effective. You can rest assured with this approach that jobs ending abnormally will still generate a job log and provide helpful diagnostic information.

For batch jobs, the question of eliminating job logs is more complex than it is for interactive jobs. When many types of batch jobs (e.g., nightly routines) run unattended, job log information can be useful, so someone can re-create events chronologically. And job logs are a valuable information source when a job fails to perform. It is often helpful to have job logs from batch jobs that end normally as well as those that end abnormally.

Note that for interactive jobs, the LOG parameter on the SIGNOFF command overrides the value you specify on the job description. For example, if on the job description you enter the value of *NOLIST in the LOG parameter and use the SIGNOFF LOG(*LIST) command to sign off from the interactive job, the system will generate a job log. Remember, the job description controls job log generation, so you can use particular job descriptions when you want the system to generate a job log regardless of how the job ends.

Eliminating job logs for jobs that are completed normally can greatly reduce the number of job logs written into the output queue. When you neglect to control the number of job logs on the system, the system is forced to maintain information for an excessive number of jobs, which can negatively affect system performance.

Handling job logs is a simple, but essential, part of managing system resources. A basic understanding of AS/400 print files will help you effectively and efficiently operate your system. Customize your system to handle job logs and other print files to optimize your operations.

Chapter 10 - Understanding Output Queues

Printing. It's one of the most common things any computer does, and it's relatively easy with the AS/400. What complicates this basic task is that the AS/400 provides many functions you can tailor for your printing needs. For instance, you can use multiple printers to handle various types of forms. You can use printers that exist anywhere in your configuration -- whether the printers are attached to local or remote machines or even to PCs on a LAN. You can let users view, hold, release, or cancel their own output, or you can design your system so their output simply prints on a printer in their area without any operator intervention except to change and align the forms. The cornerstone for all this capability is the AS/400 output queue. Understanding how to create and use output queues can help you master AS/400 print operations.

What Is an Output Queue?

An output queue is an object containing a list of spooled files that you can display on a workstation or write to a printer device. (You can also use output queues to write spooled output to a diskette device, but this chapter does not cover that function.) The AS/400 object type identifier for the output queue is *OUTQ.

Figure 10.1a shows the AS/400 display you get on a workstation when you enter the WRKOUTQ (Work with Output Queue) command for the output queue QPRINT:

WRKOUTQ QPRINT

As the figure shows, the Work with Output Queue display lists each spooled file that exists on the queue you specify. For each spooled file, the display also shows the spooled file name, the user of the job that created the spooled file, the user data identifier, the status of that spooled file on the queue, the number of pages in the spooled file, the number of copies requested, the form type, and that spooled file's output priority (which is defined in the job that generates the spooled file). You can use function key F11=View 2 to view additional information (e.g., job name and number) about each spooled file entry. Figure 10.1a shows all available options; Figure 10.1b explains each option.

The status of a spooled file can be any of the following:

OPN The spooled file is being written and cannot be printed at this time (i.e., the SCHEDULE parameter of the print file is *FILEEND or *JOBEND).
CLO The file is spooled but unavailable for printing (i.e., the SCHEDULE parameter's value for the print file is *JOBEND).
HLD The file is spooled and on hold in the output queue.
RDY The file is spooled and waiting to be printed when the writer is available.
WTR The spooled file is being printed.
SAV The spooled file has been printed and is now saved in the output queue. (The spooled file attribute SAVE has a value of *YES. In contrast, a spooled file with SAVE(*NO) will be removed from the queue after printing.)

I have mentioned two options for spooled files -- option 3, which holds spooled files, and option 6, which releases them. You can use option 3 to hold the spooled file, and the spooled file will appear on the display as HLD; even when a file has reached WTR status, you can still use option 3 to hold the spooled file and stop the printing. You can use option 6 to release the spooled file for printing.

How To Create Output Queues

Now that we've seen that output queues contain spooled files and let you perform actions on those spooled files, we can focus on creating output queues. The most common way output queues are created is through a printer device description. Yes, you read correctly! When you create a printer device description using the CRTDEVPRT (Create Device Description (Printer)) command or through autoconfiguration, the system automatically creates an output queue in library QUSRSYS by the same name as that assigned to that printer. This output queue is the default for that printer, and the system places "Default output queue for PRINTER_NAME" in the output queue's TEXT attribute.
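The same hold and release actions are available from commands, which is handy in CL programs. A hedged sketch, with a hypothetical file name and job identifier:

HLDSPLF FILE(QSYSPRT) JOB(123456/QPGMR/NIGHTLY) SPLNBR(*LAST)
RLSSPLF FILE(QSYSPRT) JOB(123456/QPGMR/NIGHTLY) SPLNBR(*LAST)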

How To Create Output Queues

Now that we've seen that output queues contain spooled files and let you perform actions on those spooled files, we can focus on creating output queues. The most common way output queues are created is through a printer device description. Yes, you read correctly! When you create a printer device description using the CRTDEVPRT (Create Device Description (Printer)) command or through autoconfiguration, the system automatically creates an output queue in library QUSRSYS by the same name as that assigned to that printer. This output queue is the default for that printer; in fact, the system places "Default output queue for PRINTER_NAME" in the output queue's TEXT attribute.

An alternative method is to use the CRTOUTQ (Create Output Queue) command. The parameter values for this command determine attributes for the output queue. For a look at some of the parameters you can use, see the CRTOUTQ panel in Figure 10.2. When you use the CRTOUTQ command, after entering the name of the output queue and of the library in which you want that queue to exist, you are presented with two categories of parameters -- the procedural ones (i.e., SEQ, JOBSEP, and TEXT) and those with security implications (i.e., DSPDTA, AUTCHK, OPRCTL, and AUT).

The first of the procedural parameters, SEQ, controls the order of the spooled files on the output queue. You can choose values of either *FIFO (first in, first out) or *JOBNBR. If you select *FIFO, the system places new spooled files on the queue following all other entries already on the queue that have the same output priority as the new spooled files (the job description you use during job execution determines the output priority). Using *FIFO can be tricky because the following changes to an output queue entry cause the system to reshuffle the queue's contents and place the spooled file behind all others of equal priority:

• A change of output priority when you use the CHGJOB (Change Job) or CHGSPLFA (Change Spooled File Attributes) command
• A change in status from RDY back to HLD, CLO, or OPN
• A change in status from HLD, CLO, or OPN to RDY

The other possible value for the SEQ parameter -- *JOBNBR -- specifies that the system sort queue entries according to their priorities, using the date and time the job that created the spooled file entered the system. I recommend using *JOBNBR instead of *FIFO, because with *JOBNBR you don't have to worry about changes to an output queue entry affecting the order of the queue's contents.

The next procedural parameter is JOBSEP (job separator). You can specify a value from 0 through 9 to indicate the number of job separators (i.e., pages) the system should place at the beginning of each job's output. The job separator contains the job name, the job number, the job user's name, and the date and time the job is run. This information can help in identifying jobs. Or you can enter *MSG for this value, and each time the end of a print job is reached, the system will send a message to the message queue for the writer.

Don't confuse the JOBSEP parameter with the FILESEP (file separator) parameter, which is an attribute of print files. When creating or changing print files, you can specify a value for the FILESEP parameter to control the number of file separators at the beginning of each spooled file. The information on the file separators is similar to that printed on the job separator but includes information about the particular spooled file.

When do you need the file separator, the job separator, or both? You need file separators to help operators separate the various printed reports within a single job. You need job separators to help separate the printed output of various jobs and to quickly identify the end of one report and the beginning of the next. In reality, though, a person looking for a printed report usually pays no attention to separator pages but looks at the first page of the report to identify the contents and destination of the report. If you program a header page for all your reports, you can lose the job separator by selecting a value of 0. Another concern is that for output queues that handle only a specific type of form, such as invoices, a separator wastes an expensive form, so separators there are probably wasteful. And as you can imagine, if you'd rather not use a lot of paper, a combination of file separators and job separators could quickly launch a major paper recycling campaign. I am not saying these separators have no function; I am saying you should think about how helpful the separators are and explicitly choose the number you need.
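Putting the procedural parameters together, here's a hedged example of creating a queue for preprinted invoice forms -- entries sorted by job number and no separators, since separators would waste forms (the library and queue names are hypothetical):

  CRTOUTQ OUTQ(ACCTLIB/INVOICES) +
          SEQ(*JOBNBR) +
          JOBSEP(0) +
          TEXT('Invoice forms - no separators')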

The security-related CRTOUTQ command parameters help control user access to particular output queues and particular spooled data. To appreciate the importance of controlling access, remember that you can use output queues not only for printing spooled files but also for displaying them. What good is it to prevent people from watching as payroll checks are printed, if they can simply display the spooled file in the output queue?

The DSPDTA (display data) parameter specifies what kind of access to the output queue is allowed for users who have *READ authority. The values are *YES or *NO. A value of *NO specifies that users with *READ authority to the output queue can display, copy, or send the output data only of their own spooled files unless they have some other special authority. A value of *YES says that any user with *READ access to the output queue can display, copy, or send the data of any file on the queue. (Special authorities that provide additional function are *SPLCTL and *JOBCTL.)

The AUTCHK (authority check) parameter specifies whether the commands that check the requester's authority to the output queue should check for ownership authority (*OWNER) or for just data authority (*DTAAUT); in other words, the AUTCHK parameter determines how ownership of the output queue figures into the authorization test when the output queue is accessed. When the value is *OWNER, the requester must have ownership authority to the output queue to pass the output queue authorization test. When the value is *DTAAUT, the requester must have *READ, *ADD, and *DELETE authority to the output queue.

The OPRCTL (operator control) parameter specifies whether or not a user who has *JOBCTL special authority can manage or control the files on an output queue. A value of *NO blocks this control for users with the *JOBCTL special authority.

Finally, the AUT parameter specifies the initial level of authority allowed for *PUBLIC users. You can modify this level of authority by using the EDTOBJAUT (Edit Object Authority), GRTOBJAUT (Grant Object Authority), or RVKOBJAUT (Revoke Object Authority) command.

Who Should Create Output Queues?

Who should create output queues? Although this seems like a simple question, it is important for two reasons. First, the owner can modify the output queue attributes as well as grant/revoke authorities to the output queue. Second, ownership allows control of the queue and provides the ability to change queue entries. So ownership is a key to your ability to secure output queues.

The system operator should be responsible for creating and controlling output queues that hold data considered public or nonsecure. For secure data (e.g., payroll, human resources, financial statements), the department supervisor profile (or a similar one) should own the output queue, which means the owner controls who can view or work with spooled files on that queue, and you would make sure that the owner has *JOBCTL special authority. The person who owns the output queue is responsible for maintaining the security of the output queue and can even explicitly deny access to DP personnel.

One problem you might face relating to security is how to allow users to start, change, and end writers without having to grant them *JOBCTL special authority, which also grants a user additional job-related authorities that might not be desirable (e.g., the ability to control any job on the system). If the user does not have *JOBCTL special authority or does not adopt this special authority, (s)he must have a minimum of *CHANGE authority to the output queue and *USE authority to the printer device. An alternative is to write a program to perform such writer functions. You can specify that the program adopt the authority of its owner. During program execution, the current user adopts the special and object-specific authorities of the owner. When the program ends, the user no longer holds the adopted *JOBCTL authority and thus cannot take advantage of a security hole.
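As a sketch of that adopted-authority idea, the small CL program below performs one writer function on behalf of users who lack *JOBCTL. The program, library, writer, and queue names are all assumptions; compile the program with USRPRF(*OWNER) under a profile that holds *JOBCTL, and grant users *USE authority to the program only.

  PGM PARM(&OUTQ)  /* WTRCTL: redirect writer PRT01 to a requested queue */
    DCL VAR(&OUTQ) TYPE(*CHAR) LEN(10)
    CHGWTR WTR(PRT01) OUTQ(QGPL/&OUTQ)  /* runs under the owner's adopted authority */
  ENDPGM

  /* Create it so the owner's authority is adopted at run time: */
  CRTCLPGM PGM(UTILLIB/WTRCTL) SRCFILE(UTILLIB/QCLSRC) USRPRF(*OWNER)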

As you can see, creating output queues requires more than just selecting a name and pressing Enter. Given some appropriate attention, output queues can provide a proper level of procedural (e.g., finding print files and establishing the order of print files) and security (e.g., who can see what data) support. With this ownership and the various authority parameters on the CRTOUTQ command, you can create an environment that lets users control their own print files and print on various printers in their area of work -- whether the printers are attached to local or remote machines or even to PCs on a LAN.

How Spooled Files Get on the Queue

It is very important to understand that all spooled output generated on the AS/400 uses a print file. Whether you enter the DSPLIB (Display Library) command using the OUTPUT(*PRINT) parameter to direct your output to a report, create and execute an AS/400 query, or write a report-generating program, you are going to use a print file to generate that output. As mentioned in the previous chapter, a print file determines the attributes printed output will have; a print file is the means to spool output to a file that can be stored on a queue and printed as needed. This means you can create a variety of print files on the system to accommodate various form requirements.

When a job generates a spooled file, that file is placed on an output queue. The output queue is determined by one of two methods: if the print file has a specifically defined output queue or is overridden to a specific output queue, the output from that print file is placed on that specific queue; if the print file does not specifically direct the spool file, it is placed on the output queue currently defined as the output queue for that particular job. Figure 10.3 illustrates how one job can place spooled files on different output queues. The job first spools the nightly corporate A/R report to an output queue at the corporate office. Then the program creates a separate A/R report for each branch office and places the report on the appropriate output queue. (The AS/400 is also capable of bypassing the spool process to perform direct printing, but this is normally avoided because of the performance and work management problems direct printing creates.)

It is important to understand that the output queue and the printer are independent objects, so output queues can exist with no printer assigned and can have entries. The Operational Assistant (OA) product illustrates some implications of this fact. OA lets you create two output queues (i.e., QUSRSYS/QEZJOBLOG and QUSRSYS/QEZDEBUG) to store job logs and problem-related output. These output queues are not default queues for any printers. Entries are stored in these queues, and the people who manage the system can decide to print, view, move, or delete them.

How Spooled Files Are Printed from the Queue

So how do the spooled files get printed from the queue? The answer is no secret. You must start (assign) a writer to an output queue. You do so with the STRPRTWTR (Start Printer Writer) command. For instance, to start printing the spooled files in output queue QPRINT, you can execute the STRPRTWTR command

  STRPRTWTR WRITER(writer_name) OUTQ(QPRINT)

(Messages for file control are sent to the message queue defined in the printer's device description unless you also specify the MSGQ parameter.) The OUTQ parameter on that command determines the output queue to be read by that printer. You can start a writer for any output queue (only one writer per output queue and only one output queue per writer), and you don't have to worry about the name of the writer matching the name of the queue. When the writer is started to a specific output queue and you use the WRKOUTQ command for that specific output queue, the letters WTR appear in the Status field at the top of the Work with Output Queue display to indicate that a writer is assigned to print available entries in that queue. You make spooled files available to the writer by releasing the spooled file, using option 6.

When you IPL your system, the program QSTRUP controls whether or not the writers on the system are started. When QSTRUP starts the writers, each printer's device description determines both its output queue and message queue. You can modify QSTRUP to start all writers, to start specific writers, or to control the output queues by using the STRPRTWTR command. After a writer is started, you can redirect the writer to another output queue by using the CHGWTR (Change Writer) command or by ending the writer and restarting it for a different output queue.

To list the output queues on your system, type the WRKOUTQ command and press Enter; you will see a display similar to the one in Figure 10.4. To list the writers on your system and the output queues they are started to, you can also use the WRKWTR (Work with Writer) command by typing WRKWTR and pressing Enter to get a display like the one in Figure 10.5.
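A few hedged examples of the writer commands just discussed (device and queue names are assumptions; note that on the actual STRPRTWTR command the printer is identified by the DEV parameter, and the writer name defaults to the device name):

  /* Start a writer for printer PRT01, reading queue QPRINT */
  STRPRTWTR DEV(PRT01) OUTQ(QGPL/QPRINT)

  /* Redirect the same writer to another queue without ending it */
  CHGWTR WTR(PRT01) OUTQ(QGPL/REPORTS)

  /* End the writer after the current file finishes printing */
  ENDWTR WTR(PRT01) OPTION(*CNTRLD)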

A Different View of Spooled Files

The WRKOUTQ command allows you to work with all spooled files on a particular output queue. Another helpful command is the WRKSPLF (Work with Spooled Files) command. This command allows you to work with all the spooled files your jobs have generated, even if those spooled files are on multiple output queues.

Figure 10.6 represents the WRKSPLF command output for someone who works at the "basic" OS/400 assistance level (one's assistance level is determined first at the user profile level by the ASTLVL parameter, then at the command level based on the last use of the command or what the user enters when prompting the ASTLVL parameter on the command). This basic assistance level hides some of the technical details of spooled files and output queues unless you request more information by selecting option 8, "attributes," to display the spooled file detail information. Notice that one spooled file is assigned to the printer "CONTES3" while the other spooled files are "unassigned." They are definitely on an output queue, but since no printer is currently started for any of those output queues, the files are listed as "unassigned."

Figure 10.7 represents the WRKSPLF command output for someone who works at the "intermediate" level (there is no "advanced" assistance level for this command, so those at the advanced assistance level will also see this same panel). Now you can clearly see which output queue each spool file is assigned to. You cannot see other spooled files on those same output queues, though, since this WRKSPLF command works only with the current user's spooled files.

You will find that you use both commands in your daily operations, but the WRKOUTQ command is the more useful of the two for system operations, since you can see more than one job's spooled files.

How Output Queues Should Be Organized

The organization of your output queues should be as simple as possible. How can you use output queues effectively? Each installation must discover its own answer, but I can give you a few ideas. To start, you can let the system create the default output queues for each printer you create. At this point, you may want to modify ownership and some output queue attributes. If your installation generates relatively few reports, having one output queue per available printer is the most efficient way to use output queues: you can send output to an output queue and there will be a printer assigned to print from that queue.

In contrast, installations that generate large volumes of printed output need to control when and where these reports might be printed. If you spool all compiled programs to the same queue and make them available to the writer, things could jam up fast, and important reports might get delayed behind compile listings being printed just because they were spooled to a queue with a writer. For example, a staff of programmers might share a single printer. A better solution is to create an output queue for each programmer. Each programmer can then use a job description to route printed output to his or her own queue. When a programmer decides to print a spooled file, he or she moves that file to the output queue with the shared writer active. This means that the only reports printed are those specifically wanted. Also, you can better schedule printing of a large number of reports.
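A sketch of that one-queue-per-programmer arrangement (the library, queue, and profile names are hypothetical):

  /* A private queue and a job description that routes output to it */
  CRTOUTQ OUTQ(DEVLIB/JSMITH) TEXT('J. Smith - program listings')
  CRTJOBD JOBD(DEVLIB/JSMITH) OUTQ(DEVLIB/JSMITH) +
          TEXT('Routes spooled output to JSMITH queue')

  /* Point the programmer's user profile at the job description */
  CHGUSRPRF USRPRF(JSMITH) JOBD(DEVLIB/JSMITH)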

What about the operations department? Is it wise to have one output queue (e.g., QPRINT or PRT01) to hold all the spooled files that nightly, daily, and monthly jobs generate? You should probably spend a few minutes planning for a better implementation. I recommend you do not assign any specific output to PRT01. Instead, you should create specific output queues to hold specific types of spooled files. For instance, if you have a nightly job that generates sales, billing, and posting reports, you might consider having either one or three output queues to hold those specific files. When the operations staff is ready to print the spooled files in an output queue, they can use the CHGWTR command to make the writer available to that output queue. Another method is to move the spooled files into an output queue with a printer already available. This method lets you browse the queue to determine whether or not the reports were generated and lets you print these files at your convenience.

For some end users, you may want to make the output queue invisible. You can direct requested printed output to an output queue with an available writer in the work area of the end user who made the request, or you can design your system so their output simply prints on a printer in their area without any operator intervention except to change and align the forms. Long reports should be generated and printed only at night. The only things the user should have to do are change or add paper and answer a few messages.

What a mountain of information! And I've only discussed a few concepts for managing output queues. But this information should be enough to get you started and on your way to mastering output queues.

Chapter 11 - The V2R2 Output Queue Monitor
In applications that must handle spooled files, you may need a way to determine when spooled files arrive on an output queue. For instance, your application may need to automatically transfer any spooled file that arrives on a particular local output queue to a user on a remote system. Or perhaps you want to automatically distribute copies of a particular spooled file to users in the network directory. You may even want to provide a simple function that transfers all spooled files from one output queue to another while one of your printers is being repaired. In any case, you must find a way to monitor an output queue for new entries.

The Old Solution
If you are running pre-V2R2 OS/400, you can write a program that uses the following tried-and-true approach:

• Wake up periodically and perform a WRKOUTQ (Work with Output Queue) command specifying OUTPUT(*PRINT)
• Copy the output to a database file
• Read the database file and look for spooled file entries
• Determine whether an entry is new on the queue (you must be creative here)
• Perform the appropriate action for any new spooled files

Another option is to use the CVTOUTQ tool from the QUSRTOOL library or the version offered in Chapter 24, "CL: You're Stylin' Now!" Both of these utilities convert the entries on an output queue to a database file, which you can then read and search for new spooled file entries. If you simply want to take a snapshot of all the entries on an output queue at any given time, you can do so easily with the approach outlined above or with either of the CVTOUTQ tools. Such a capability is useful when you want to perform a function against some or all of the spooled files on a queue and then delete those spooled files before taking the next snapshot. However, all these methods lack one fundamental ability that some applications require: the ability to easily identify new spooled file entries as they arrive on the output queue.
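A minimal sketch of the snapshot approach described above. The name of the spooled file that WRKOUTQ OUTPUT(*PRINT) produces is an assumption here; check the actual name on your system before relying on it.

  /* Take a snapshot of the queue as a printed listing */
  WRKOUTQ OUTQ(QGPL/QPRINT) OUTPUT(*PRINT)

  /* Copy that listing into a database file for a program to read */
  /* (QPRTSPLQ is an assumed spooled file name)                   */
  CPYSPLF FILE(QPRTSPLQ) TOFILE(MYLIB/OUTQSNAP) +
          JOB(*) SPLNBR(*LAST) MBROPT(*REPLACE)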

A Better Solution
With V2R2, you can easily determine when a new spooled file arrives on an output queue. The V2R2 versions of the CRTOUTQ (Create Output Queue) and CHGOUTQ (Change Output Queue) commands let you associate a data queue with an output queue. When you do, and a spooled file becomes ready (a "RDY" status) on the output queue, OS/400 will send an entry to the associated data queue. The entry identifies the new spooled file, so your program can monitor the data queue and take appropriate action whenever a new spooled file appears.

A spooled file is always in one of several statuses on an output queue (e.g., RDY = ready, HLD = held). We are interested in the "RDY" or "ready" status. The "ready" status signifies that a spooled file is ready to print. When a spooled file arrives on the output queue and is in the "RDY" status, OS/400 sends an entry to the attached data queue (if one is attached). If you then hold that spooled file entry and again release the entry, another data queue entry is sent to the data queue. Each time a spooled file becomes ready to print on the output queue, an entry is sent to the data queue. Figure 11.1 shows the prompt screen for the CHGOUTQ (Change Output Queue) command. For the DTAQ keyword, a value of *NONE indicates that no data queue is associated with the output queue. If you enter the name of a data queue, OS/400 will send an entry to that data queue when a spooled file arrives on the associated output queue. The only requirement for entering a data queue name is that the data queue exist. The value *SAME for the DTAQ parameter indicates no change to the existing parameter value.

Figure 11.2 shows the prompt screen for the CRTDTAQ (Create Data Queue) command. A data queue associated with an output queue must have a MAXLEN value of at least 128. You can specify a longer MAXLEN, but the entry that describes the spooled file will occupy only the first 128 positions; you cannot use a shorter one, because each spooled file entry uses 128 bytes. After you create the data queue and use the CHGOUTQ command to associate the data queue with an output queue, OS/400 will create a data queue entry for every spooled file that arrives on that output queue until you again execute the CHGOUTQ command and specify DTAQ(*NONE) to stop the function. Figure 11.3 represents the field layout of the spooled file data queue entry as documented in the Guide to Programming and Printing (SC41-8194).
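The whole association takes two commands. A hedged sketch (the output queue and data queue names are mine):

  /* Create a data queue big enough for the 128-byte spooled file entry */
  CRTDTAQ DTAQ(QGPL/OUTQMON) MAXLEN(128) +
          TEXT('Entries for spooled files arriving on PRT01')

  /* Attach it to the output queue; detach later with DTAQ(*NONE) */
  CHGOUTQ OUTQ(QUSRSYS/PRT01) DTAQ(QGPL/OUTQMON)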

The STRTFROUTQ Utility
One way you could use this new feature is to automatically transfer spooled files arriving on one output queue to another output queue. Such a utility is useful when a printer breaks and you want to reroute the broken printer's output to another printer. Because a printer can have only one output queue, you can't simply have another printer print the broken printer's output queue as well as its own. Formerly, an operator would have had to monitor the output queue and manually transfer the spooled files to another output queue. Users waiting for reports (especially if they have to walk to a remote printer to get them) don't like having to wait for the operator to transfer the spooled files or having to transfer the files themselves. Figure 11.4 shows the source for the STRTFROUTQ command, a utility that incorporates the V2R2 data queue feature to automatically transfer spooled files from one output queue to another. Besides transferring spooled files, this simple, useful utility illustrates the use of the data queue capability.

To use STRTFROUTQ, you enter both a source and a target output queue. The source output queue is the one the program will monitor for new arrivals. The target output queue is the one to which the spooled files will be transferred.

Figure 11.5 (41 KB) is the source for command processing program (CPP) STRTFROTQC. This program is the workhorse that actually identifies and transfers the spooled files. STRTFROTQC first checks that both the source and target output queues exist. If either does not, the program sends a message to the program queue and then ends, which causes the error message to be sent to the calling program. (Because you would normally be running this job in batch, the message would then be forwarded to the external queue -- the system operator.) When both the source and target output queues exist, the program associates a data queue with the source output queue. If a data queue with the same name as the source output queue already exists in library QGPL, the program uses it. If such a data queue does not exist, the program creates one. I chose to put the data queue in library QGPL because all AS/400s have a library named QGPL, but you can use any other available library instead. After making sure the data queue exists, the program uses the CHGOUTQ command to associate the data queue with the source output queue. At this point, the program enters "polling" mode. At B in Figure 11.5, the program executes the RCVDTAQE command, a front end I wrote for the QRCVDTAQ API (for the code for my front ends to the data queue APIs, see "A Data Queue Interface Facelift" — 151 KB). There is no equivalent OS/400 command. The four parameters listed at B are required; five optional parameters also exist for RCVDTAQE, but we don't need them here. The required parameters are

• DTAQ, the qualified data queue name (20 alphanumeric)
• DTALEN, the length of the data queue entry (5,0 decimal)
• DATA, the data queue entry (i.e., the data) (n alphanumeric; length as defined in the previous parameter)
• WAIT, how long the program should wait for an entry to arrive on the data queue (1,0 decimal; negative for a never-ending wait, n for the number of seconds to wait, or 0 for no wait at all)
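If you don't have the RCVDTAQE front end, you can call the underlying QRCVDTAQ API directly from CL. A minimal sketch with the five basic parameters (the API also accepts optional extensions we don't need here):

  DCL VAR(&DTAQ) TYPE(*CHAR) LEN(10) VALUE('OUTQMON')
  DCL VAR(&LIB)  TYPE(*CHAR) LEN(10) VALUE('QGPL')
  DCL VAR(&LEN)  TYPE(*DEC)  LEN(5 0)
  DCL VAR(&DATA) TYPE(*CHAR) LEN(128)
  DCL VAR(&WAIT) TYPE(*DEC)  LEN(5 0) VALUE(-1)  /* wait forever */

  CALL PGM(QRCVDTAQ) PARM(&DTAQ &LIB &LEN &DATA &WAIT)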

After receiving the data for an entry, STRTFROTQC extracts the needed fields. The first field it extracts is the &end_flag field. This field, which is used later to end the program, is not part of the OS/400-supplied spooled file data queue entry. I'll explain the use and significance of this field in a moment. The values for &job, &user, &jobnbr, and &splf are all extracted from the data queue contents and transferred to character variables using the CHGVAR (Change Variable) command. Because the spooled file number is stored in binary, the CHGVAR command that extracts it uses the V2R2 %BIN or %BINARY function (D) to extract the value into a decimal field. Once the field values are extracted, the program executes the CHGSPLFA (Change Spooled File Attributes) command to move the spooled file identified in the data queue entry to the target output queue. Now back to that &end_flag field. After you execute the STRTFROUTQ command, the job will wait indefinitely for new data queue entries because STRTFROTQC assigned variable &wait a negative value (A). You could use the ENDJOB (End Job) command to end the job, but this solution is messy: It doesn't clean up the data queue or the associated output queue. When you use data queues, you must be a careful housekeeper. Data queue storage accumulates constantly and is not freed until you delete the data queue. A more elegant ending to such an elegant solution is the ENDTFROUTQ command and its CPP, ENDTFROTQC, shown in Figure 11.6 and Figure 11.7 (20 KB), respectively. When you are ready to end the STRTFROUTQ job, just enter the ENDTFROUTQ command and specify the name of the source output queue for the SOUTQ parameter. The CPP then sends a special data queue entry to the associated data queue; this entry has the value *TFREND in the first seven positions. Program STRTFROTQC checks each received data queue entry for the value *TFREND (C in Figure 11.5). When it detects this value, the program ends gracefully after deleting the data queue and disassociating the output queue so that no more data queue entries are created (E).
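To make the field extraction concrete, here's a hedged fragment in the spirit of STRTFROTQC, continuing the variables from the sketch above. The offsets follow the documented entry layout (function, record type, qualified job name, spooled file name, then the binary spooled file number); verify them against Figure 11.3 before using, and note that TARGETQ is a placeholder.

  DCL VAR(&JOB)     TYPE(*CHAR) LEN(10)
  DCL VAR(&USER)    TYPE(*CHAR) LEN(10)
  DCL VAR(&JOBNBR)  TYPE(*CHAR) LEN(6)
  DCL VAR(&SPLF)    TYPE(*CHAR) LEN(10)
  DCL VAR(&SPLFNBR) TYPE(*DEC)  LEN(5 0)

  CHGVAR VAR(&JOB)     VALUE(%SST(&DATA 13 10))
  CHGVAR VAR(&USER)    VALUE(%SST(&DATA 23 10))
  CHGVAR VAR(&JOBNBR)  VALUE(%SST(&DATA 33 6))
  CHGVAR VAR(&SPLF)    VALUE(%SST(&DATA 39 10))
  CHGVAR VAR(&SPLFNBR) VALUE(%BIN(&DATA 49 4))  /* V2R2 %BIN function */

  /* Move the new arrival to the target queue */
  CHGSPLFA FILE(&SPLF) JOB(&JOBNBR/&USER/&JOB) SPLNBR(&SPLFNBR) +
           OUTQ(QGPL/TARGETQ)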

If you collect utility programs, you will want to have the STRTFROUTQ and ENDTFROUTQ utilities in your toolkit. By taking advantage of a little-known OS/400 function, these commands make spooled file management a little easier and more efficient.

Chapter 12 - AS/400 Disk Storage Cleanup

OS/400 is a sophisticated operating system that tracks almost everything that happens on the system. This tracking is good, but it results in a messy by-product of system-supplied database files, journal receivers, and message queues. Users add to the clutter with old messages, unused documents, out-of-date records, and unprinted spool files. If you do nothing about this disorder, it will eventually strangle your system. But you can implement a few simple automated and manual procedures to keep your disk storage free of unwanted debris.

Automatic Cleanup Procedures

In August 1990, IBM introduced Operational Assistant (OA) as part of the operating system. Today's OA functions include automatic cleanup of some of the daily messes the AS/400 makes. OA's automatic cleanup is a good place to start when you're trying to clean up your AS/400's act.

To access the OA Cleanup Tasks menu (Figure 12.1), you can type GO CLEANUP, or you can select option 11 ("Customize your system, users, and devices") and then option 2 ("Cleanup tasks"), both from the OA main menu. You can use this menu to start and stop automatic cleanup and to change cleanup parameters. (To bypass these menus, just prompt and execute the CHGCLNUP (Change Cleanup) command.) Note that you must have *ALLOBJ, *SECADM, and *JOBCTL authorities to change cleanup options; if option 1 does not appear on the Cleanup Tasks menu, you do not have the proper authorities.

Option 1, "Change cleanup options," gives you the Change Cleanup Options display (Figure 12.2). Using the Change Cleanup Options screen, you can enable the automatic cleanup function and specify that cleanup should be run either at a specific time each day or as part of any scheduled system power-off. Specify *YES for the ALWCLNUP parameter to tell the system that you want to enable automatic cleanup. For STRTIME, you can enter a specific time (e.g., 23:00) for the cleanup to start, or you can enter *SCDPWROFF to tell the system to run cleanup during a system power-off that you've scheduled using OA's power scheduling function (the cleanup will not be run if you power off using the PWRDWNSYS (Power Down System) command or force a power-off using the control panel). Although it is ideal to run cleanup procedures when the system is relatively free of other tasks, it is not a requirement, and OA's cleanup will not conflict with application programs other than competing for CPU cycles. Returning to the Cleanup Tasks menu, execute option 2, "Start cleanup at scheduled time," and your AS/400 will execute the cleanup each day at the specified time.
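For example, to enable cleanup and schedule it for 11 p.m. daily, you might run something like the following (the time format shown is an assumption; prompt the command with F4 to verify it on your release):

  CHGCLNUP ALWCLNUP(*YES) STRTIME(230000)

  /* or run cleanup during scheduled power-offs instead */
  CHGCLNUP ALWCLNUP(*YES) STRTIME(*SCDPWROFF)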

.. Each parameter allows a value of either *KEEP. Look closely at the list of objects cleaned up by the "Job logs and other system output" option. job accounting. The result is that more disk storage becomes available. In fact. Do an IPL regularly. compress work control blocks. "End cleanup. OS/400 uses a variety of database files and journals to manage operating system functions (e. one called AUDLIB) that you can save and maintain separately. QSYSWRK.g. The . By "manually. When you activate this option. the "OfficeVision/400 calendar items" option is an effective way to manage the size of several OfficeVision production objects. This option cleans up old calendar items and reorganizes key database files to help maintain peak performance." to stop all automatic cleanup until you restart it using option 2.g.." I mean you must manually execute commands that clear entries or reorganize files. you have to locate the files and journals and write your own procedures to clean them up. and performance improves. if you select all possible auditing values. If you ever want to stop the automatic daily cleanup. During an IPL.. system jobs require less time to write to the end of the job log. the problem log). but receivers can be in any library and in any auxiliary storage pool). which tells the system not to clean up the specified objects. The table in Figure 12. Each week. If you activate the security audit journaling process. this receiver will grow rapidly. Manual Cleanup Procedures OA's automatic cleanup won't do everything for you. without this automatic cleanup. performance adjustment. the receiver associated with QAUDJRN (the security audit journal) will grow continuously as long as it's attached to QAUDJRN." OA's automated cleanup operation deletes old security audit journal receivers that are no longer attached to the journal. After IPL. QSYSARB. do not place audit journal receivers into library QSYS (QAUDJRN itself must be in QSYS. The cleanup procedure removes from these output queues any spool files that remain on the system beyond the maximum number of days. just select option 4. SNADS. Make sure your regular backup procedure saves the security journal receivers (only detached receivers are fully saved). As with all journal receivers. For OfficeVision/400 users. If you specify "System journals and system logs.g.g.4 lists cleanup tasks you must handle manually. This. QSPLMAINT). you are responsible for receiver maintenance. Regular. whose job logs can grow quite large between IPLs. weekly or bimonthly). Perform an IPL regularly (e. Your backup strategy should include provisions for retaining several months of security journal receivers in case you need to track down a security problem. hands-off cleanup of these journals and logs is the single most beneficial function of the automatic cleanup procedures. the system places all job logs into output queue QUSRSYS/QEZJOBLOG and all dumps (e. First. makes this cleanup option the most helpful. use the CHGJRN (Change Journal) command to detach the old receiver from QAUDJRN and attach a new one. or you must write a set of automated cleanup tools that you can run periodically or along with OA's daily cleanup operations. This housekeeping especially benefits system-supplied jobs (e. and free up unused addresses. the system also closes job logs and opens new ones.g. or a number from 1 to 366 that indicates the number of days the objects or entries are allowed to stay on the system before the cleanup procedure removes them. Figure 12. 
along with the possibility that IBM could change or add to these objects in a future release of OS/400. giving performance a boost. system and program dumps) into output queue QUSRSYS/QEZDEBUG..3 lists the cleanup options and the objects that they automatically clean up.The other parameters on the Change Cleanup Options screen let you control which objects the procedure will attempt to clean up. An IPL causes the system to delete temporary libraries. Save security audit journal receivers. Here are my recommendations. Place them in a library (e.
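The weekly receiver swap is one command, and saving the detached receivers is straightforward. A sketch (AUDLIB and TAP01 are assumptions):

  /* Detach the current receiver and attach a new, system-generated one */
  CHGJRN JRN(QSYS/QAUDJRN) JRNRCV(*GEN)

  /* Save the detached receivers from your audit library to tape */
  SAVOBJ OBJ(*ALL) LIB(AUDLIB) DEV(TAP01) OBJTYPE(*JRNRCV)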

Remove unused licensed software. One way to reclaim disk storage is to remove unused licensed program products (e.g., product demos, old third-party products you no longer use, and IBM products such as the OS/400 migration aids). After saving libraries and objects you no longer need, delete the products you no longer need (you can use the GO LICPGM command to access the IBM licensed products menu).

Clean up user output queues. What about user-created spooled output? OA's cleanup addresses job logs and certain service and program dump output, but when users create spool files, these files stay on the system until the user prints or deletes them. You need to either monitor user-created output queues or have users monitor their own. One tool some AS/400 customers find helpful is DLTOLDSPLF, a utility in library QUSRTOOL that finds and moves or deletes all spool files older than a specified number of days.

Reclaim storage. Unexpected power failures, device failures, or other abnormal job endings can create unusual conditions in storage, such as damaged objects, objects with no owners, and objects without libraries. Also, normal operations use a portion of auxiliary storage for permanent and temporary addresses. The RCLSTG (Reclaim Storage) command recovers and recycles addresses that the system used but no longer needs. You should use the RCLSTG command periodically to find damaged or lost objects and to ensure that all auxiliary storage is either used properly or available for use. You should run RCLSTG every six months or whenever you encounter messages about damaged objects or authority problems with objects. You can also use OA's disk analysis reports, which list the space taken up by damaged objects, objects without owners, or even objects that exist in no library (i.e., the library name is absent), to determine when you need to do a RCLSTG. You also should monitor the permanent and temporary addresses the system uses by executing the WRKSYSSTS (Work with System Status) command; when WRKSYSSTS shows that permanent and temporary addresses exceed 20 percent of the available addresses, execute the RCLSTG command.

Keep in mind that you can execute RCLSTG only when the AS/400 is in restricted state (i.e., all subsystems must be ended, leaving only the console active). During a reclaim of storage, the system puts any damaged and lost objects it encounters into the recovery library, QRCL. After storage is reclaimed, you should look in QRCL, move any objects you want to keep to another library, and delete any remaining objects. For more information about the OS/400 RCLSTG function, see the Basic Backup and Recovery Guide (SC41-0036-01).

Reclaim spool file storage. Like the S/38, the AS/400 has an operating-system-managed database file that contains a member for every spool file (e.g., job log, user report, Print key output) on the system. When you or the system creates a spool file, OS/400 uses an empty member of the spool file database (which is maintained in library QSPL) if one is available; whenever a spool file is deleted or printed, the operating system clears that file's database member, readying it for reuse. If you create many spool files, this database can grow like Jack's beanstalk (I have seen QSPL grow to 150 MB), and even empty database members occupy a significant amount of space. Again like the S/38, the AS/400 checks all empty QSPL database members at every IPL and deletes those that have been on the system for seven or more IPLs. But since V1R3 of OS/400, the AS/400 has provided two additional methods of cleaning up these empty database members.

The first method is to use system value QRCLSPLSTG, which lets you limit the number of days an empty member remains on the system. Valid values include whole numbers from 1 to 366; the default is 8 days. When an empty member reaches the specified limit, the system deletes the member. A value of *NOMAX tells the system to ignore automatic spool storage cleanup. *NONE is also a valid value, but it is impractical because it causes the system to generate a new database member for each spool file you create, thus overburdening the system and hurting performance. The second housecleaning method is to execute the RCLSPLSTG (Reclaim Spool Storage) command. If you want to control spool file cleanup yourself rather than have the system do it, you can enter a value of *NOMAX for system value QRCLSPLSTG and then execute the RCLSPLSTG command whenever necessary.
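A hedged sketch of the reclaim routines described above, run from the console:

  /* Bring the system to restricted state, then reclaim storage */
  ENDSBS SBS(*ALL) OPTION(*IMMED)
  RCLSTG

  /* Take over spool cleanup yourself: disable the automatic limit... */
  CHGSYSVAL SYSVAL(QRCLSPLSTG) VALUE('*NOMAX')

  /* ...and reclaim empty spool database members on demand */
  RCLSPLSTG DAYS(*NONE)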

Reset message queue sizes. User-created messages can also add to the clutter on the AS/400. Although OA's automatic cleanup clears old messages from user and workstation message queues, it doesn't reset the message queue size. As messages accumulate, message queues grow to accommodate them, but queues don't become smaller as messages are removed, so they end up unnecessarily using up storage and degrading performance. To reset the queue size, you must use the CLRMSGQ (Clear Message Queue) command to completely clear the message queue. You can perform this task manually for specific message queues, or you can automate the process by writing a program.

Clear save files. If you frequently use save files for ad hoc or regular backups, you may want to define a manual or automated procedure to periodically clear those save files and reclaim that storage. After you save a save file's data to tape or diskette, clear the file by executing the CLRSAVF (Clear Save File) command.

Manage journal receivers. If you use journaling on your system, you need to manage the journals you create. As with the security audit journal receivers, detach and save receivers as part of your normal backup and recovery strategy. Then you can delete receivers you no longer need. For more information about journaling and managing journals and receivers, refer to the Programming: Backup and Recovery Guide (SC41-8079).

Clean up OfficeVision/400 objects. OfficeVision/400 can devour disk space unless you clean up after it religiously. Encourage OfficeVision/400 users to police their own documents and mail items and to delete items they no longer need. You can use the QRYDOCLIB (Query Document Library) command as a reporting tool to monitor document and folder maintenance.

Delete old and unused objects. Old and unused objects of various kinds can accumulate on your system. Since V1R3, the description of each object on the system includes a "last used" date and time stamp, as well as a "last used" days counter. The object description also contains the "last changed" date and time as well as the "last saved" date and time. You should evaluate objects that are not used regularly to determine whether or not they should remain on the system. Remember to check development and test libraries as well as production libraries. You might also want to limit the auxiliary storage available to each user by using the MAXSTG parameter on each user profile.

Beginning with V2R2, you can use the Disk Space Tasks menu (Figure 12.5) to collect information about and analyze disk space utilization. You can call this menu directly by typing GO DISKTASKS, or you can access it through the main OA menu. As you can see, the menu options let you collect and print disk space information as well as actually work with libraries, folders, and objects. You can collect disk space information at a specified date and time by selecting option 1; when you select option 1, you'll see the prompt in Figure 12.6. Selecting 2 or 3 tells the system to collect information at the specified interval. Whichever option you choose, the system collects information about objects (e.g., libraries, programs, database files, folders (including shared folders), commands) and stores it in file QUSRSYS/QAEZDISK. You can then select option 2 on the Disk Space Tasks menu to print reports that analyze disk space usage by library, folder, owner, or specific object, or you can print a disk information system summary report. Because the data is collected in a database file, you can also perform ad hoc interactive SQL queries, use Query/400, or write high-level language programs to get the information you need.

Purge and reorganize physical files. An active database environment can contribute to the AS/400's sloppy habits. One problem is files in which records accumulate forever. You should examine your database to determine whether any files fit this description and then design a procedure to handle the "death" of active records. In some situations, you can simply delete records that are no longer needed. In other situations, you might want to archive records before you delete them. In either case, you certainly won't want to delete or move records manually; instead, look for a public-domain or vendor-supplied file edit utility or tool.

Deleting records does not by itself increase your available disk space, though: deleted records continue to occupy disk space until you execute a RGZPFM (Reorganize Physical File Member) command. You could write a custom report to search for files with a high percentage of deleted records and then manually reorganize those files. Or you could go one step further and write a custom utility that would search for those files and automatically reorganize them using the RGZPFM OS/400 command.
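A starting point for that custom-utility idea: dump member statistics to an outfile, query it for members with many deleted records, and reorganize the offenders. The library and file names here are mine; the outfile layout follows the system model file for DSPFD TYPE(*MBR), which includes current and deleted record counts.

  /* Collect member statistics for a library into a database file */
  DSPFD FILE(APLIB/*ALL) TYPE(*MBR) OUTPUT(*OUTFILE) +
        OUTFILE(MYLIB/MBRSTATS)

  /* After querying MBRSTATS for high deleted-record counts: */
  RGZPFM FILE(APLIB/CUSTHIST)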

Figure 12.7 lists the OfficeVision/400 database files you should reorganize regularly (every little bit helps with the OfficeVision performance hog!). You will probably want to write a CL program that reorganizes these files and run that program when OfficeVision is not in use.

Enhancing Your Manual Procedures

You can handle many of the manual tasks I've mentioned by using the QEZUSRCLNP program in OA's automated cleanup function to incorporate your own cleanup programs and commands. Every time OA's automatic cleanup function is run, it calls QEZUSRCLNP and executes your code. QEZUSRCLNP is essentially an empty template that gives you a place to add your own cleanup code.

To add your enhancements to QEZUSRCLNP, first use the RTVCLSRC (Retrieve CL Source) command to retrieve the source statements for QEZUSRCLNP (Figure 12.8) from library QSYS. Then insert your cleanup commands or calls to your cleanup programs into the QEZUSRCLNP source. Be sure to add your statements after the SNDPGMMSG (Send Program Message) command for message CPI1E91 to ensure that, after your cleanup job has ended, the system sends a completion message to the system operator message queue. Finally, compile your copy of QEZUSRCLNP into a library that appears before QSYS on the system library list. By using a different library, you can preserve the original program and avoid losing your modified program the next time you load a new release of the operating system. (You can modify the system library list by editing the QSYSLIBL system value.) I caution you against replacing the system-supplied version of the program by compiling your copy of QEZUSRCLNP into QSYS.

As you can see, the AS/400 gives you the services of a maid to solve some simple cleanup issues. But your cleanup shouldn't stop there. You also need to develop and implement procedures to maintain system-supplied and user-defined objects, such as spool and save files.

Chapter 13 - All Aboard the OS/400 Job Scheduler!

The job scheduling function, new with V2R2, lets you schedule jobs to run at dates and times you choose without performing any add-on programming. There are two V2R2 additions that let you control job scheduling:

• new parameters on the SBMJOB command
• the new job schedule object

The job schedule function was made possible by enhancing the operating system with QJOBSCD, a new system job that is started automatically when you IPL the system. This job monitors scheduled job requirements, then submits and releases scheduled jobs at the appropriate date and time.

Arriving on Time

The SBMJOB command places a job on a job queue for batch processing, apart from an interactive workstation session. In V2R2, the new SCDDATE and SCDTIME parameters let you specify a date and time for the job to be run. The default value for the SCDDATE and SCDTIME parameters is *CURRENT, which indicates that you want to submit the job immediately, so if you don't specify a value for these parameters, the SBMJOB command works just as it always has.

When you use the new parameters to indicate a schedule date and/or time, the SBMJOB command places the job on a job queue in a scheduled state (SCD) until the date and time you specified; then the system releases the job on the job queue and processes it just like any other submitted job. If you specify HOLD(*YES) on the SBMJOB command, at the appointed time the job's status on the queue will change from scheduled/held (SCD HLD) to held (HLD). You can then release the job when you choose.

This scheduling method is a one-time shot; you use it for a job that you want to run only once, at a later date and/or time. If you want a job to run more than once, you'll have to remember to submit it each time (or use the job schedule object, as I discuss later). When you schedule a job this way, you'll usually specify an exact date (in the same format as the job's date) and time for the job to run.
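For instance, to submit a one-shot job now that will run late on New Year's Eve, you might use something like this sketch (the program name is hypothetical, and the date shown assumes an *MDY job date format -- use your job's format):

  SBMJOB CMD(CALL PGM(MYLIB/YEAREND)) JOB(YEAREND) +
         SCDDATE(123194) SCDTIME(233000)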

There are, however, other possible special values that you may find useful for the SCDDATE parameter. If you indicate SCDDATE(*MONTHSTR), the job will run at the scheduled time on the first day of the month, and SCDDATE(*MONTHEND) will run the job on the last day of the month. Or you can specify SCDDATE(*MON), *TUE, *WED, *THU, *FRI, *SAT, or *SUN to run the job on the specified day of the week. During which month, or on which Monday, will your job be run? That depends. For example, if today is the first day of the month and you specify SCDDATE(*MONTHSTR) and the current time is previous to the time in the SCDTIME parameter, it'll run today; otherwise, it'll wait until next month. Similar logic applies for the other SCDDATE and SCDTIME possibilities.

Holding a job queue that includes a scheduled job can delay execution of the job, but it will not prevent the job from running when you release the job queue, even if the scheduled time has passed. You can remove a job from the queue either by using the CLRJOBQ (Clear Job Queue) command or by using the WRKJOBQ (Work with Job Queue) command and ending the job. If you remove a scheduled job from a job queue, the job will not run, even when the scheduled time and date occur.

Running on a Strict Schedule

In addition to enhancing the SBMJOB command, V2R2 introduces a new type of AS/400 object, the job schedule, with a system identifier of *JOBSCD. (Sorry, Canadians and Brits -- IBM didn't pick *JOBSHD.) The job schedule is a timetable that contains descriptive entries for jobs to be executed at a specific date, time, and/or frequency. It is most useful for jobs that you want to run repeatedly according to a set schedule. If a job is on the job schedule, you need not remember to submit it for every execution; the operating system takes care of that chore. One job schedule exists on the system: object QDFTJOBSCD in library QUSRSYS. Although its name indicates that this object is the default job schedule, it is as yet the only one; the operating system offers no commands to create, change, or delete your own customized job schedules. The job schedule function is documented in the Work Management Guide (SC41-8078).

Each job schedule entry is made up of many components that define the job to be run and describe the environment in which it will run. Figure 13.3 describes those components and lists the parameter keywords the job-scheduling CL commands use. You can manipulate the entries in the job schedule using the following new commands:

• ADDJOBSCDE (Add Job Schedule Entry)
• CHGJOBSCDE (Change Job Schedule Entry)
• HLDJOBSCDE (Hold Job Schedule Entry)
• RLSJOBSCDE (Release Job Schedule Entry)
• RMVJOBSCDE (Remove Job Schedule Entry)
• WRKJOBSCDE (Work with Job Schedule Entries)

Figure 13.1 shows a sample list display that appears when you run the WRKJOBSCDE command. When you select option 5 (Display details) for an entry, you get a display such as that in Figure 13.2; this example shows the details of a job my system runs every weekday morning at 3:30. With V2R3, you can print a list of your job schedule entries by entering the WRKJOBSCDE command followed by a space and OUTPUT(*PRINT). For detailed information on each job schedule entry on the list, follow the WRKJOBSCDE command with PRTFMT(*FULL) and OUTPUT(*PRINT).
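An entry like the weekday 3:30 job in Figure 13.2 might be added with something like the following sketch (names are assumptions; the FRQ, SCDDAY, and RCYACN components are explained below):

  ADDJOBSCDE JOB(NIGHTSAVE) +
             CMD(CALL PGM(OPSLIB/NIGHTSAVE)) +
             FRQ(*WEEKLY) SCDDAY(*MON *TUE *WED *THU *FRI) +
             SCDTIME(033000) RCYACN(*SBMRLS)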

OS/400 gives each job schedule entry a sequence number to identify it uniquely. You usually refer to an entry by its job name, but if there are multiple entries with the same job name, you also have to specify the sequence number to correctly refer to the entry. In Figure 13.1, for example, there are three entries named VKEMBOSS; displaying the details for each would show that they each have a unique sequence number.

The scheduled date component (SCDDATE) of a schedule entry tells the system a specific date to run the job. The frequency component (FRQ) of a schedule entry may seem confusing at first. It's obvious that you can schedule a job to run *ONCE, *WEEKLY, or *MONTHLY, but what if you want to schedule a daily job? In that case, you need to use an additional schedule entry element, the scheduled day (SCDDAY). To run a job every day, specify FRQ(*WEEKLY) and SCDDAY(*ALL). Just Thursdays? That's easy: FRQ(*WEEKLY) and SCDDAY(*THU). You can also run the job only on weekdays, using FRQ(*WEEKLY) and SCDDAY(*MON *TUE *WED *THU *FRI). If you use the SCDDAY parameter, however, you cannot use the SCDDATE parameter; the two don't make sense together. The combination of FRQ(*MONTHLY) and SCDDATE(*MONTHEND) will run a job on the last day of each month, regardless of how many days each month has. (No more "30 days hath September," or counting on your fingers!)

The relative day of the month parameter (RELDAYMON) gives the job schedule even more flexibility. For instance, if you want to run a job only on the first Tuesday of each month, you indicate values for three parameters: FRQ(*MONTHLY) SCDDAY(*TUE) RELDAYMON(1).

Sometimes your computer can't run a job at the scheduled time; your AS/400 may be powered off or in the restricted state at the time the job is to be submitted, and the job schedule may not submit the job at all. In the recovery action component (RCYACN) of the schedule entry, you can tell the computer to take one of three actions. RCYACN(*SBMRLS) submits the job to be run as soon as possible. RCYACN(*SBMHLD) submits the job, but holds it until you explicitly release it for processing. RCYACN(*NOSBM) is the "Snooze, you lose" option: the job scheduler will not attempt to submit the job after the scheduled time passes. Notice that this feature applies only to jobs scheduled from the job schedule, not to those you submit with SBMJOB.

Two Trains on the Same Track

When I was setting up job schedule entries for my system, I discovered that many of the entries I made were similar. I found myself wanting to copy a job schedule entry to save myself from the drudgery of retyping long, error-prone command strings. Because the job schedule commands don't offer such a function, I decided to write a command that does. My command, CRTDUPSCDE (Create a Duplicate Job Schedule Entry), is easy to use. You simply supply the command with the job name of the existing job schedule entry you want to copy from and a name you want to give the copy:

  CRTDUPSCDE FROMJOB(job-name) NEWNAME(new-name)

The NEWNAME parameter defaults to *FROMJOB, indicating that the new entry should have the same name as the original. Even though the job schedule could then contain multiple entries of the same name, the system will give the entry a unique sequence number.

Figure 13.4 provides the code for the CRTDUPSCDE command; Figure 13.5 is the command processing program (CPP). CRTDUPSCDE uses the IBM-supplied program QWCLSCDE, a new API that lists job schedule entries in a user space. (See A in Figure 13.5.) The command also uses two user space APIs: QUSCRTUS and QUSRTVUS. (You can find documentation for QWCLSCDE and the user space layouts used in CRTDUPSCDE in the System Programmer's Interface Reference (SC21-8223).) After retrieving the "from" job schedule entry (which you specified in the FROMJOB parameter), the program breaks the output from the API down into the parameter values that describe the entry. It then uses the same values in the ADDJOBSCDE command to create a new entry based on the existing one. After that, it's a simple matter to use the CHGJOBSCDE command to make any minor changes the new entry needs.

In addition to being easy to use, CRTDUPSCDE is very basic. To conserve space, I didn't include some features that you might want to add. For example, my command doesn't duplicate the OMITDATE values from the original, which isn't worth the effort to me because I hardly ever use this parameter. Also, the command retrieves only the first instance of a schedule entry with the name you choose. If you have multiple same-name entries and you want to retrieve one other than the first, you'll need to add the code to loop through the data structure that returns the name; doing so would require adding array-handling techniques to the CL program. Finally, I used very basic error trapping instead of error-message-handling routines. I encourage you to experiment with enhancing this command to suit your own needs.

Derailment Dangers

A few cautionary comments are in order before we finish our exploration of the new OS/400 job schedule object. There are a few situations that I ran into when I was implementing the function and that the Work Management Guide doesn't adequately cover.

First, the job scheduling function may not always run on time. Although you can schedule a job to the second, the load on your system determines when the job actually runs. The system submits a job schedule entry to a job queue or releases a scheduled job already on a job queue approximately on time, usually within a few seconds. But if there are many jobs waiting on the job queue ahead of the scheduled job, it will simply have to wait its turn. If it's critical that a job run at a specific time, you can help by ensuring that the job's priority (parameter JOBPTY) puts it ahead of other jobs on the queue, but the job may still have to wait for an available activity slot before it can begin, no matter whether you use SBMJOB or the job schedule object.

SBMJOB has another benefit that a job scheduling entry does not offer. When you submit a job with the SBMJOB command, the system defaults to using an initial library list that is identical to the library list currently in use by the submitting job. The job schedule entry, on the other hand, depends upon the library list in its JOBD component. If you've gotten out of the old S/38 habit of creating unique job descriptions primarily to handle unique library lists, you'll need to resurrect this technique to describe the library list for job schedule entries.

It's also noteworthy that a job submitted by the job schedule will not retain the contents of the LDA from the job that originally added it to the job schedule. It's a common practice to store variable processing values in the LDA as a handy means of communicating between jobs or between programs within a job, and when you submit a job with the SBMJOB command, the system passes a copy of the LDA to the submitted job. In my tests of the new function, however, I was never able to run the scheduled job with anything other than a blank LDA. If your application depends upon specific values in the LDA, you may want to schedule jobs using the SBMJOB command instead of creating a job schedule entry.

I've discovered an alternate technique, however, that still lets me take advantage of a job schedule entry for recurring jobs that need the LDA. When I add the job schedule entry, I also create a unique data area that contains the proper values in the proper locations. This new data area should be a permanent object on the system as long as the dependent job schedule entry exists. It's then a simple matter to make a minor change to the submitted program so that the program either uses the new data area instead of the LDA or retrieves the new data area and copies it to the LDA using the RTVDTAARA (Retrieve Data Area) and/or CHGDTAARA (Change Data Area) commands.
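Here's a minimal sketch of that data-area workaround, assuming a hypothetical 50-byte data area SCDVALS in library MYLIB; the scheduled program copies its contents into the LDA before doing anything else:

CRTDTAARA DTAARA(MYLIB/SCDVALS) TYPE(*CHAR) LEN(50) +
          VALUE('values the scheduled job expects')

/* Inside the submitted CL program:                          */
PGM
DCL VAR(&VALS) TYPE(*CHAR) LEN(50)
/* Copy the permanent data area into the LDA, which arrives  */
/* blank when the job schedule submits the job               */
RTVDTAARA DTAARA(MYLIB/SCDVALS) RTNVAR(&VALS)
CHGDTAARA DTAARA(*LDA (1 50)) VALUE(&VALS)
/* ... remainder of the job's processing ...                 */
ENDPGM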

Changing your system's date or time can also affect your scheduled jobs. The system stores a "next submission" date and time for each entry, which it updates each time the job schedule submits a job. If you move the date or time system values backward, the effect is fairly straightforward: The system will not reschedule any job schedule entries that were run within the repeated time. For example, if at three o'clock you change your system's time back to one o'clock, the job you had scheduled to run at two o'clock won't repeat itself.

Changing the system's date or time forward, on the other hand, can be tricky. If the change causes the system to skip over a time when you had a job scheduled, the job schedule may not submit the job at all; the job schedule's action depends upon whether or not the system is in restricted state when you make the change. If the system is in restricted state when you change the date or time system values, the system refers to the RCYACN attributes of the missed job schedule entries to determine whether or not to submit the jobs when you bring the system out of its restricted state. If the system was not restricted, any missed job schedule entries are submitted immediately (only one occurrence of each missed entry is submitted even if, for example, you've scheduled a job to run daily and moved the system date ahead two days). And as I mentioned earlier, if your system is down or in a restricted state at the appointed time, the RCYACN value determines what happens to the missed job.

Finally, the job scheduling function in V2R2 does not offer job completion dependencies. For example, if you use the job schedule to run a daily transaction posting, then a daily closing, you cannot condition the closing job to be run only if the posting job goes through to a successful completion. Some third-party scheduling functions offer this capability. Without a third-party product, if you need to schedule jobs with such a completion requirement, your best bet is probably to incorporate the entire procedure into a single CL program with appropriate escape routes defined in case one or more functions fail.

Chapter 14: Keeping Up With the Past

For many of you, AS/400 job processing is new, or at least different. You can sign on to the system and submit several batch jobs for processing immediately, or you can submit jobs to be run at night. At the same time, the system operator can run jobs and monitor their progress, and users at various remote sites can sign on to the system. There can be multiple subsystems, job queues, output queues, and messages flying all over the place at once. With so much going on, you might wonder how you can possibly manage and audit such activity.

You can learn a lot from history -- even your system's history. One valuable AS/400 tool at your fingertips is the history log, which contains information about the operation of the system and system status. It records this information in the form of messages, which are stored in files created by the system. By maintaining an accurate history log, you can monitor specific system activities and reconstruct events to aid problem determination and debugging efforts.

Please note that history logs are different from job logs. Whereas job logs record the sequential events of a job, the history log records certain operational and status messages pertaining to all the jobs on a system. The history log tracks high-level activities such as the start and completion of jobs, device status changes, system operator messages and replies, attempted security violations, and other security-related events. You can review the history log to find a particular point of interest and then reference a job log to investigate further.

System Message Show and Tell

You can display the contents of the history log on the AS/400 by executing the DSPLOG (Display Log) command:

DSPLOG LOG(QHST)

The resulting display resembles the screen in Figure 14.1. Because system events such as job completions, invalid sign-on attempts, and line failures are listed as messages, you can place the cursor on a particular message and press the Help key to display second-level help text for the message.

The DSPLOG command has several parameters that provide flexibility when inquiring into the history log. To prompt for the parameters, type in DSPLOG and press F4; the system displays the screen shown in Figure 14.2. The parameters for the DSPLOG command are as follows:

LOG: The system refers to the history log as "QHST." QHST provides many of the functions the QSRV and QCHG logs provide on the S/36.

PERIOD: You can enter a specific time period or take the defaults for the beginning and ending period. Enter values as six-digit numbers (i.e., time as hhmmss and date as mmddyy). Notice that the default for "Beginning time" is the earliest available time and the default for "Beginning date" is the current date. To look at previous days, you must supply a value.

JOB: You use the JOB parameter to search for a specific job or set of jobs, which is helpful for tracking activities that occurred during a particular time period. You can enter just the job name, in which case the system might find several jobs with the same name that ran during a given period of time. Or you can enter the specific job name, user name, and job number to retrieve the history information for a particular job.

MSGID: Like the JOB parameter, this parameter helps narrow your search. You can specify one message or multiple messages. By specifying "00" as the last two digits of the message ID, you can retrieve related sets of messages. For example, if you enter the message ID CPF2200, the system retrieves all messages from CPF2200 to CPF2299 (these are all security-related messages).

OUTPUT: You are probably familiar with this parameter. The value * results in output to the screen, and *PRINT results in a printed spooled file.
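For example, this hedged sketch (the date is hypothetical) retrieves the security-related messages logged during a single day and produces a spooled listing:

DSPLOG LOG(QHST) +
       PERIOD((000000 101501) (235959 101501)) +
       MSGID(CPF2200) +
       OUTPUT(*PRINT)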

History Log Housekeeping

The history log consists of a message queue and system files that store history messages. The files belong to library QSYS and begin with the letters QHST, followed by a number derived as QHSTyydddn. The yyddd stands for the Julian date on which the log was created, and n represents a sequence character appended to the Julian date (0 through 9 or A through Z). The text description maintained by the system contains the beginning and ending date and time for the messages contained in that file. You can use the DSPOBJD (Display Object Description) command to display a list of history files. The command

DSPOBJD OBJ(QSYS/QHST*) OBJTYPE(*FILE)

results in a display similar to the one shown in Figure 14.3.

The system creates a new file each time the existing file reaches its maximum size limit, which the system value QHSTLOGSIZ controls. Because the system itself does not automatically delete files, it is important to develop a strategy for deleting the log files (to save disk space) and for using the data before you delete the files.

The best way to manage history logs on your system is to take advantage of the automatic cleanup capabilities of Operational Assistant (OA). The OA category "System Journals and System Logs" lets you specify the number of days of information to keep in the history log; OA then deletes log files older than the specified number of days. (For more information about Operational Assistant, see IBM's AS/400 System Operations: Operational Assistant Administrator's Guide (SC41-8082).) Keep in mind that OA does not provide a strategy for archiving the history logs to a media that you can easily retrieve. If you activate OA cleanup procedures, make sure that once each month you make a save copy of the QHST files. If you are remiss in performing this save, OA will still delete the log files.

If you elect not to use the automatic cleanup that OA offers, you can do the following:

• On the first day of each month, save all QHST files in library QSYS to tape. It's probably wise to use the same set of tapes and save to the next sequence number. For quick reference, record on the tape label the names of the beginning and ending log files.
• View the existing log files on the system and delete any that are more than 30 days old. (Hint: Remember that the text description contains the beginning and ending date and time to help you determine the age of the file.) You can use the DLTQHST utility (from the QUSRTOOL library) to delete old history files.

To determine how much history log information to keep, you should consider the disk space required to store the information and schedule your file saves accordingly. You should maintain enough recent history on disk to be able to easily inquire into the log to resolve problems. In most cases, it is a good idea to keep 30 days of on-line history, although large installations with heavy history log activity may need to save and delete objects every 15 days.
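As a hedged sketch of the monthly save step (the tape device name is hypothetical), you might save the history files generically and append them after the last sequence already on the tape:

SAVOBJ OBJ(QHST*) +
       LIB(QSYS) +
       DEV(TAP01) +
       OBJTYPE(*FILE) +
       SEQNBR(*END)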
Inside Information

Careful review of history logs can alert you to unusual system activity. In reviewing its history log, for example, one company discovered that a programmer had planted a system virus. If the message "Password from device DSP23 not correct for user QSECOFR" appears frequently in the log, you might be prompted to find out who uses DSP23 and why (s)he is trying to sign on with the system security officer profile. Or you might notice the message "Receiver ACG0239 in JRNLIB never fully saved (I C)." The second-level help text would tell you which program was attempting to delete the journal receiver. A history log can also alert you to less serious occurrences (e.g., a specific sequence of jobs was not performed exactly as planned). If these events are brought to your attention, you might be able to prevent the loss of important information.

The history log is a management tool that lets you quickly analyze system activities. It provides a certain amount of security auditing and lets you determine whether and when specific jobs were executed and how they terminated. You can also use it to review all completion messages to find out how many jobs are executed on your system each day or which job ended abnormally. Maintaining a history log lets you reconstruct events that have taken place on the system. As you monitor the history log (preferably every day), you will soon start to recognize the messages that are most beneficial to you. Using and maintaining a history log is not difficult and could prove to be time well spent.

Note: The security journaling capabilities that OS/400 offers using the audit journal QAUDJRN provide additional event-monitoring capabilities specifically related to security. This new journal is capable of monitoring for the security-related events recorded in the QHST as well as additional events that QHST does not record. For more information concerning QAUDJRN, see the AS/400 Security Reference (SC41-8083).

Chapter 15: Backup Basics

The most valuable component of any computer system isn't the hardware or software that runs the computer but, rather, the data that resides on the system. If a system failure or disaster occurs, you can replace the computer hardware and software that runs your business. Your company's data, however, is irreplaceable. Companies go out of business when their data can't be recovered. Your business may depend on your ability to recover your system. For this reason, it's critical to have a good backup and recovery strategy.

What should you be backing up? The simple answer to this question is that you should back up everything. A basic rule of backup and recovery is that if you don't save it, it doesn't get restored. Realistically, though, you may have some noncritical data (e.g., test data) on your system that doesn't need to be restored and can be omitted from your backup.

When and how often do you need to back up? Ideally, you need to back up when your system is at a known point and your data isn't changing. When you design a backup strategy, you need to balance the time it takes to save your data with the value of the data you might lose and the amount of time it may take to recover.

A simple way to ensure you have a good backup of your system is to use the options provided on menu SAVE (Figure 15.1). In the following discussion of backup strategies, the menu options I refer to are from menu SAVE, which you can reach by typing Go Save on a command line.
This command presents you with additional menus that make it easy either to back up your entire system or to split your entire system backup into two parts: system data and user data.

Designing and Implementing a Backup Strategy

You should design your backup strategy based on the size of your backup window — the amount of time your system can be unavailable to users while you perform a backup. When and how you run your backup, as well as what you back up, depend on the size of your backup window. Your backup strategy will typically be one of three types:

• Simple — You have a large backup window, such as an 8- to 12-hour block of time available daily with no system activity.
• Medium — You have a medium backup window, such as a 4- to 6-hour block of time available daily with no system activity.
• Complex — You have a short backup window, with little or no time of system inactivity.

If your system is so critical to your business that you don't have a manageable backup window, you probably can't afford an unscheduled outage either. If this is your situation, you should seriously evaluate the availability options of the iSeries, including dual systems; see "Availability Options."

Always keep your recovery strategy in mind as you design your backup strategy. At the same time you design your backup strategy, you should also design your recovery strategy to ensure that your backup strategy meets your system recovery needs. When designing your backup and recovery strategy, think of it as a puzzle: The fewer pieces you have in the puzzle, the more quickly you can put the pieces of the puzzle together. Likewise, the fewer pieces needed in your backup strategy, the more quickly you can recover the pieces. Saving your entire system every night is the simplest and safest backup strategy, and this approach also gives you the simplest and safest strategy for recovery.

The final step in designing a backup strategy is to test a full system recovery. This is the only way to verify that you've designed a good backup strategy that will meet your system recovery needs. You should test your recovery strategy at your recovery services provider's location.

Implementing a Simple Backup Strategy

The simplest backup strategy is to save everything daily whenever there is no system activity. You can use SAVE menu option 21 (Entire system) to completely back up your system (with the exception of queue entries such as spooled files). Option 21 offers the significant advantage that you can schedule the backup to run unattended (with no operator intervention). Keep in mind that unattended save operations require you to have a tape device capable of holding all your data. (For more information about backup media, see “Preparing and Managing Your Backup Media.”) You should also consider using this option to back up the entire system after installing a new release, applying PTFs, or installing a new licensed program product. As an alternative, you can use SAVE menu option 22 (System data only) to save just the system data after applying PTFs or installing a new licensed program product.

A simple backup strategy may also involve SAVE menu option 23 (All user data). This option saves user data that can change frequently. You can also schedule option 23 to run without operator intervention.

Even if you don't have enough time or enough tape-device capability to perform an unattended save using option 21, you can still implement a simple backup strategy:

Daily backup: Back up only user data that changes frequently.
Weekly backup: Back up the entire system.

If your system has a long period of inactivity on weekends, your backup strategy might look like this:

Friday night: Entire system (option 21)
Monday night: All user data (option 23)
Tuesday night: All user data (option 23)
Wednesday night: All user data (option 23)
Thursday night: All user data (option 23)
Friday night: Entire system (option 21)

Implementing a Medium Backup Strategy

You may not have a large enough backup window to implement a simple backup strategy. For example, you may have large batch jobs that take a long time to run at night or a considerable amount of data that takes a long time to back up. If this is your situation, you'll need to implement a backup and recovery strategy of medium complexity. When developing a medium backup strategy, keep in mind that the more often your data changes, the more often you need to back it up. You'll therefore need to evaluate in detail how often your data changes.

Several methods are available to you in developing a medium backup strategy:

• saving changed objects
• journaling objects and saving the journal receivers
• saving groups of user libraries, folders, or directories

You can use one or a combination of these methods.

Saving changed objects. Several commands let you save only the data that has changed since your last save operation or since a particular date and time. You can use the SavChgObj (Save Changed Objects) command to save only those objects that have changed since a library or group of libraries was last saved or since a particular date and time. This approach can be useful if you have a system environment in which program objects and data files exist in the same library. Typically, data files change very frequently, while program objects change infrequently. Using the SavChgObj command, you can save just the data files that have changed.

The SavDLO (Save Document Library Objects) command lets you save documents and folders that have changed since the last save or since a particular date and time. You can use SavDLO to save changed documents and folders in all your user auxiliary storage pools (ASPs) or in a specific user ASP.
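As a minimal sketch of these changed-data saves (the library and device names are hypothetical), the first command saves everything in PRODDATA changed since that library's last full save, and the second saves changed documents and folders:

SavChgObj Obj(*All) +
          Lib(PRODDATA) +
          Dev(Tap01) +
          RefDate(*SavLib)

SavDLO Dlo(*Chg) +
       Dev(Tap01)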

You can use the Sav (Save) command to save only those objects in directories that have changed since the last save or since a particular date or time.

If your save operations take too long because your files are large, saving changed objects may not help in your system environment. For example, if you have a file member with 100,000 records and one record changes, the SavChgObj command saves the entire file member. In this situation, journaling your database files and saving the journal receivers regularly may be a better solution. However, keep in mind that this approach will make your recovery more complex.

Journaling objects and saving the journal receivers. When you journal a database file, the system writes a copy of every changed record to a journal receiver. When you save a journal receiver, you're saving only the changed records in the file, not the entire file. If you journal your database files and have a batch workload that varies, your backup strategy might look like this:

Day/time          Batch workload   Save operation
Friday night      Light            Entire system (option 21)
Monday night      Heavy            Journal receivers only
Tuesday night     Light            All user data (option 23)
Wednesday night   Heavy            Journal receivers only
Thursday night    Heavy            Journal receivers only
Friday night      Light            Entire system (option 21)

To take full advantage of journaling protection, you should detach and save the journal receivers regularly. The frequency with which you save the journal receivers depends on the number of journaled changes that occur on your system; saving the journal receivers several times during the day may be appropriate for your system environment. The way in which you save journal receivers depends on whether they reside in a library with other objects; you'll use either the SavLib (Save Library) command or the SavObj (Save Object) command. It's best to keep your journal receivers isolated from other objects so that your save/restore functions are simpler. Be aware that you must save a new member of a database file before you can apply journal entries to the file. If your applications regularly add new file members, you should consider using the SavChgObj strategy either by itself or in combination with journaling.
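The following hedged sketch (library, file, and receiver names are hypothetical) sets up journaling for one physical file and shows the periodic detach-and-save step; CHGJRN with JrnRcv(*Gen) attaches a new receiver so the detached one can be saved:

CrtJrnRcv JrnRcv(JRNLIB/RCV0001)

CrtJrn Jrn(JRNLIB/APPJRN) +
       JrnRcv(JRNLIB/RCV0001)

StrJrnPF File(PRODDATA/CUSTMAST) +
         Jrn(JRNLIB/APPJRN)

/* Later, at each save point: detach the current receiver */
/* and save the detached one                              */
ChgJrn Jrn(JRNLIB/APPJRN) JrnRcv(*Gen)

SavObj Obj(RCV0001) +
       Lib(JRNLIB) +
       Dev(Tap01) +
       ObjType(*JrnRcv)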
Saving groups of user libraries, folders, or directories. Many applications are set up with data files and program objects in different libraries. Data files change frequently, and, on most systems, program objects change infrequently. If your system environment is set up like this, you may want to save only the libraries with data files on a daily basis. You can also save, on a daily basis, groups of folders and directories that change frequently. This design simplifies your backup and recovery procedures.

Implementing a Complex Backup Strategy

If you have a very short backup window that requires a complex strategy for backup and for recovery, you can use some of the same techniques described for a medium backup strategy, but with a greater level of detail. Depending on your environment, you may need to save specific critical files at specific times of the day or week. You can also choose to save only your changed data, using a combination of the SavChgObj, SavDLO, and Sav commands. For example, if the batch workload on your system is heavier on specific days of the week, your backup strategy might look like this:

Day/time          Batch workload   Save operation
Friday night      Light            Entire system (option 21)
Monday night      Heavy            Changed data only*
Tuesday night     Light            All user data (option 23)
Wednesday night   Heavy            Changed data only*
Thursday night    Heavy            Changed data only*
Friday night      Light            Entire system (option 21)

* Use a combination of the SavChgObj, SavDLO, and Sav commands.

Several other methods are available to you in developing a complex backup strategy. You can use one or a combination of these methods:

• save data concurrently using multiple tape devices
• save data in parallel using multiple tape devices
• use the save-while-active process

Before you use any of these methods, you must have a complete backup of your entire system.

Saving data concurrently using multiple tape devices. You can reduce the amount of time your system is unavailable by performing save operations on more than one tape device at a time. For example, you can save libraries to one tape device, folders to another tape device, and directories to a third tape device. Or you can save different sets of libraries, objects, folders, and directories to different tape devices. Later, I provide more information about saving data concurrently using multiple tape devices.

Saving data in parallel using multiple tape devices. Starting with V4R4, you can perform a parallel save using multiple tape devices. With this method, the system “spreads” the data in the object or library across multiple tape devices. A parallel save is intended for very large objects or libraries. (This function is implemented with IBM’s Backup, Recovery and Media Services licensed program product.)

Save-While-Active. The save-while-active process can significantly reduce the amount of time your system is unavailable during a backup. If you choose to use save-while-active, make sure you understand the process and monitor for any synchronization checkpoints before making your objects available for use. I provide more details about save-while-active later.

An Alternative Backup Strategy

Another option available to help implement your backup strategy is the Backup, Recovery and Media Services licensed program product. BRMS is IBM’s strategic OS/400 backup and recovery product on the iSeries and AS/400. BRMS is a comprehensive tool for managing the backup, archiving, and recovery environment for one or more servers in a site or across a network in which data exchange by tape is required. For more information about using BRMS to implement your backup strategy, see “Backup, Recovery and Media Services (BRMS) Overview” [Chapter 16].

The Inner Workings of Menu SAVE

Menu SAVE contains many options for saving your data, but four are primary:

• 20 — Define save system and user data defaults
• 21 — Entire system
• 22 — System data only
• 23 — All user data

You can use these menu options to back up your system, with the exception of backing up spooled files (I cover spooled file backup later). Figure 15.2 illustrates the save commands and the SAVE menu options you can use to save the parts of the system and the entire system. For detailed instructions and a checklist on using these options, refer to OS/400 Backup and Recovery (SC41-5304). To help you make your decision, this section provides a look at some of the inner workings of these primary save options, as well as skeleton code that you can use as a guideline for your own backup programs. Later, if your installation requires a more complex backup strategy, you can use OS/400’s save commands in a CL program to customize your backup.

Entire System (Option 21)

SAVE menu Option 21 lets you perform a complete backup of all the data on your system. This option puts the system into a restricted state, which means no users can access your system while the backup is running.

It’s best to run this option overnight for a small system or during the weekend for a larger system. Option 21 runs program QMNSave. The following CL program extract represents the significant processing that option 21 performs:

EndSbs Sbs(*All) Option(*Immed)
ChgMsgQ MsgQ(QSysOpr) Dlvry(*Break or *Notify)
SavSys
SavLib Lib(*NonSys) AccPth(*Yes)
SavDLO DLO(*All) Flr(*Any)
Sav Dev('/QSYS.LIB/TapeDeviceName.DEVD') +
    Obj(('/*') +
        ('/QSYS.LIB' *Omit) +
        ('/QDLS' *Omit)) +
    UpdHst(*Yes)
StrSbs SbsD(ControllingSubsystem)

Note: The Sav command omits the QSys.Lib file system because the SavSys (Save System) command and the SavLib Lib(*NonSys) command save QSys.Lib. The Sav command also omits the QDLS file system because the SavDLO command saves QDLS.

System Data Only (Option 22)

Option 22 saves only your system data. It does not save any user data. Like option 21, option 22 puts the system into a restricted state. You should run this option (or option 21) after applying PTFs or installing a new licensed program product.

Option 22 runs program QSRSavI. The following program extract represents the significant processing that option 22 performs:

EndSbs Sbs(*All) Option(*Immed)
ChgMsgQ MsgQ(QSysOpr) Dlvry(*Break or *Notify)
SavSys
SavLib Lib(*IBM) AccPth(*Yes)
Sav Dev('/QSYS.LIB/TapeDeviceName.DEVD') +
    Obj(('/QIBM/ProdData') +
        ('/QOpenSys/QIBM/ProdData')) +
    UpdHst(*Yes)
StrSbs SbsD(ControllingSubsystem)

All User Data (Option 23)

Option 23 saves all user data, including files, user-written programs, and all other user data on the system. This option also saves user profiles, security data, and configuration data. Like options 21 and 22, option 23 places the system in restricted state.

Option 23 runs program QSRSavU. The following program extract represents the significant processing that option 23 performs:

EndSbs Sbs(*All) Option(*Immed)
ChgMsgQ MsgQ(QSysOpr) Dlvry(*Break or *Notify)
SavSecDta
SavCfg
SavLib Lib(*AllUsr) AccPth(*Yes)
SavDLO DLO(*All) Flr(*Any)
Sav Dev('/QSYS.LIB/TapeDeviceName.DEVD') +
    Obj(('/*') +
        ('/QSYS.LIB' *Omit) +
        ('/QDLS' *Omit) +
        ('/QIBM/ProdData' *Omit) +
        ('/QOpenSys/QIBM/ProdData' *Omit)) +
    UpdHst(*Yes)
StrSbs SbsD(ControllingSubsystem)

Note: The Sav command omits the QSys.Lib file system because the SavSys command, the SavSecDta (Save Security Data) command, and the SavCfg (Save Configuration) command save QSys.Lib. The Sav command also omits the QDLS file system because the SavDLO command saves QDLS. In addition, the Sav command executed by option 23 omits the /QIBM and /QOpenSys/QIBM directories because these directories contain IBM-supplied objects.

Setting Save Option Defaults

When you save information using option 21, 22, or 23, you can specify default values for some of the commands used by the save process. For example, you can specify additional tape devices or change the message queue delivery default. Figure 15.3 shows the Specify Command Defaults panel values used by these options. You can use SAVE menu option 20 (Define save system and user data defaults) to change the default values displayed on this panel for menu options 21, 22, and 23. When you select option 20, the system displays the default parameter values for options 21, 22, and 23; the first time you use option 20, these are the IBM-supplied default parameter values. You can change any or all of the parameter values to meet your needs. To change the defaults, you must have *Change authority to both library QUsrSys and the QSRDflts data area in QUsrSys. The system saves the new default values in data area QSRDflts in library QUsrSys for future use (the system creates QSRDflts only after you change the IBM-supplied default values). Once you’ve defined new default values, you no longer need to worry about which, if any, options to change on subsequent backups. You can simply review the new default options and then press Enter to start the backup using the new default parameters. Changing the defaults simplifies the task of setting up your backups.

In addition, option 20 offers another benefit: If you have multiple, distributed systems with the same save parameters on each system, you can define your default parameters using option 20 on one system, save data area QSRDflts in library QUsrSys, distribute the saved data area to the other systems, and restore it.
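A hedged sketch of that distribution step (the tape device name is hypothetical): save the QSRDflts data area on the system where you defined the defaults, then restore it on each target system:

SavObj Obj(QSRDflts) +
       Lib(QUsrSys) +
       Dev(Tap01) +
       ObjType(*DtaAra)

/* On each target system: */
RstObj Obj(QSRDflts) +
       SavLib(QUsrSys) +
       Dev(Tap01) +
       ObjType(*DtaAra)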
Printing System Information

When you perform save operations using option 21, 22, or 23, you can optionally request a series of reports with system information that can be useful during system recovery. The Specify Command Defaults panel presented by these options provides a prompt for printing system information. You can also use command PrtSysInf (Print System Information) to print the system information. Printing the system information requires *AllObj, *IOSysCfg, and *JobCtl authority and produces many spooled file listings. You probably don’t need to print the information every time you perform a backup; however, you should print it whenever important information about your system changes. This information is especially useful if you can’t use your SavSys media to recover and must use your distribution media.

The following lists and reports are generated when you print the system information (the respective CL commands are noted in parentheses):

• a library backup list with information about each library in the system, including which backup schedules include the library and when the library was last backed up (DspBckupL *Lib)
• a folder backup list with the same information for all folders in the system (DspBckupL *Flr)
• a list of all system values (DspSysVal)
• a list of network attributes (DspNetA)
• a list of edit descriptions (DspEdtD)
• a list of PTF details (DspPTF)
• a list of reply list entries (WrkRpyLE)
• a report of access-path relationships (DspRcyAP)
• a list of service attributes (DspSvrA)
• a list of network server storage spaces (DspNwSStg)
• a report showing the power on/off schedule (DspPwrScd)
• a list of hardware features on your system (DspHdwRsc)
• a list of distribution queues (DspDstSrv)
• a list of all subsystems (DspSbsD)
• a list of the IBM software licenses installed on your machine (DspSfwRsc)
• a list of journal object descriptions for all journals (DspObjD)
• a report showing journal attributes for all journals (WrkJrnA)
• a report showing cleanup operations (ChgClnup)
• a list of all user profiles (DspUsrPrf)
• a report of all job descriptions (DspJobD)

Saving Data Concurrently Using Multiple Tape Devices

As I mentioned earlier, one way to reduce the amount of time required for a complex backup strategy is to perform save operations to multiple tape devices at once. When you run multiple save commands, the system processes the requests in several stages that overlap, improving save performance. You can save data concurrently using multiple tape devices by saving libraries to one tape device, folders to another tape device, and directories to a third tape device. Or you can save different sets of libraries, objects, folders, or directories to different tape devices.

Concurrent Saves of Libraries and Objects

You can run multiple save commands concurrently against multiple libraries. To perform concurrent save operations to different tape devices, you can use the OmitLib (Omit library) parameter with generic naming. For example:

SavLib Lib(*AllUsr) +
       Dev(FirstTapeDevice) +
       OmitLib(M* N* ... Z*)

SavLib Lib(*AllUsr) +
       Dev(SecondTapeDevice) +
       OmitLib(A* B* ... L* $* #* @*)

You can also save a single library concurrently to multiple tape devices by using the SavObj or SavChgObj command. For example, you can save generic objects from one large library to one tape device and concurrently issue another SavObj command against the same library to save a different set of generic objects to another tape device. Likewise, you can use generic naming on the Obj (Object) parameter while performing concurrent SavChgObj operations to multiple tape devices against a single library. This technique lets you issue multiple save operations using multiple tape devices to save objects from one large library. For example:

SavChgObj Obj(A* B* C* $* #* ... L*) +
          Dev(FirstTapeDevice) +
          Lib(LibraryName)

SavChgObj Obj(M* N* O* ... Z*) +
          Dev(SecondTapeDevice) +
          Lib(LibraryName)

Concurrent Saves of DLOs (Folders)

You can run multiple SavDLO commands concurrently for DLO objects that reside in the same ASP. This technique allows concurrent saves of DLOs to multiple tape devices.

You can use the command’s Flr (Folder) parameter with generic naming to perform concurrent save operations to different tape devices. For example:

SavDLO DLO(*All) +
       Flr(DEPT*) +
       Dev(FirstTapeDevice) +
       OmitFlr(DEPT2*)

SavDLO DLO(*All) +
       Flr(DEPT2*) +
       Dev(SecondTapeDevice)

In this example, the system saves to the first tape device all folders starting with DEPT except those that start with DEPT2. Folders that start with DEPT2 are saved to the second tape device. Note: Parameter OmitFlr is allowed only when you specify DLO(*All) or DLO(*Chg).

Concurrent Saves of Objects in Directories

You can also run multiple Sav commands concurrently against objects in directories. You can use the Sav command’s Obj (Object) parameter with generic naming to perform concurrent save operations to different tape devices. This technique allows concurrent saves of objects in directories to multiple tape devices. For example:

Sav Dev('/QSYS.LIB/FirstTapeDevice.DEVD') +
    Obj(('/DIRA*')) +
    UpdHst(*Yes)

Sav Dev('/QSYS.LIB/SecondTapeDevice.DEVD') +
    Obj(('/DIRB*')) +
    UpdHst(*Yes)

Save-While-Active

To either reduce or eliminate the amount of time your system is unavailable for use during a backup (your backup outage), you can use the save-while-active process on particular save operations along with your other backup and recovery procedures. Save-while-active lets you use the system during part or all of the backup process. In contrast, other save operations permit either no access or only read access to objects during the backup.

How Does Save-While-Active Work?

OS/400 objects consist of units of storage called pages. When you use save-while-active to save an object, the system creates two images of the pages of the object. The first image contains the updates to the object with which normal system activity works. The second image is a “snapshot” of the object as it exists at a single point in time called a checkpoint. The save-while-active job uses this image — called the checkpoint image — to save the object. The image on the tape is an image of the object as it existed when the system reached the checkpoint; it doesn’t include any changes made during the save-while-active job.

When an application makes changes to an object during a save-while-active job, the system uses one image of the object’s pages to make the changes and, at the same time, uses the other image to save the object to tape. Rather than maintain two complete images of the object being saved, the system maintains two images only for the pages of the object that are being changed as the save is performed. The system locks objects as it obtains the checkpoint images, and you can’t change objects during the checkpoint processing. After the system has obtained the checkpoint images, applications can once again change the objects.

How you use save-while-active in your backup strategy depends on whether you choose to reduce or eliminate the time your system is unavailable during a backup.

When you back up more than one object using the save-while-active process, you must choose when the objects will reach a checkpoint in relationship to each other — a concept called synchronization. There are three kinds of synchronization:

• With full synchronization, the checkpoints for all the objects occur at the same time, even when you’re saving objects in only one library. It’s strongly recommended that you use full synchronization.
• With library synchronization, the checkpoints for all the objects in a library occur at the same time.
• With system-defined synchronization, the system decides when the checkpoints for the objects occur. The checkpoints may occur at different times, resulting in a more complex recovery procedure.

Reducing the backup outage is much simpler and more common than eliminating it; it’s also the recommended way to use save-while-active. To use save-while-active to reduce your backup outage, you can end any applications that change objects or end the subsystems in which these applications are run, so that the system obtains its checkpoint images during a time period in which no changes can occur to the objects. After the system reaches a checkpoint for those objects, you can restart the applications. One save-while-active option lets you have the system send a message notification when it completes the checkpoint processing; once you know checkpoint processing is completed, it’s safe to start your applications or subsystems again. Typically, when you choose to reduce your backup outage with save-while-active, the time during which your system is unavailable for use ranges anywhere from 10 minutes to 60 minutes. Using save-while-active this way can significantly reduce your backup outage, and it doesn’t require you to implement journaling or commitment control. When you use save-while-active to reduce your backup outage, your system recovery process is exactly the same as if you performed a standard backup operation. It’s highly recommended that you use save-while-active to reduce your backup outage unless you absolutely cannot have your system unavailable for this time frame.

When you use save-while-active to eliminate your backup outage, you don’t end the applications that modify the objects or end the subsystems in which the applications are run. However, this method affects the performance and response time of your applications. You should use this approach only to back up objects that you’re protecting with journaling or commitment control. Keep in mind that eliminating your backup outage with save-while-active requires much more complex recovery procedures, which you’ll need to include in your disaster recovery plans. You should use save-while-active to eliminate your backup outage only if you have absolutely no tolerance for any backup outage.

Save Commands That Support the Save-While-Active Option

The following save commands support the save-while-active option:

Command     Function
SavLib      Save library
SavObj      Save object
SavChgObj   Save changed objects
SavDLO      Save document library objects
Sav         Save objects in directories

The following parameters are available on the save commands for the save-while-active process:

SavAct (Save-while-active): You must decide whether you’re going to use full synchronization, library synchronization, or system-defined synchronization. It’s highly recommended that you use full synchronization in most cases.

SavActWait (Save active wait time): You can specify the maximum number of seconds that the save-while-active operation will wait to allocate an object during checkpoint processing.

SavActMsgQ (Save active message queue): You can specify whether the system sends you a message when it reaches a checkpoint.

SavActOpt (Save-while-active options): This parameter has values that are specific to the Sav command.
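Putting these parameters together, here’s a hedged sketch (the library and device names are hypothetical) of a full-synchronization save-while-active operation that waits up to 120 seconds to allocate each object and notifies QSysOpr when the checkpoint is reached:

SavLib Lib(PRODDATA) +
       Dev(Tap01) +
       SavAct(*SyncLib) +
       SavActWait(120) +
       SavActMsgQ(QSysOpr)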

Backing Up Spooled Files When you save an output queue. BRMS maintains all the advanced function attributes associated with the spooled files. IBM does offer support for saving and restoring spooled files in its BRMS product. To help determine the best way to recover your system. The sheer number of possible failures precludes a one-size-fits-all answer to recovery. make sure you understand how the problem occurred so you can choose the correct recovery procedures. this record will be invaluable.boulder. visit IBM’s iSeries Information Center at http://publib. This command does let you copy information from a spooled file to a database file. Instead. Once you’ve copied spooled file information to user spaces. including their associated advanced function attributes (if any). If your problem requires hardware or software service. and list APIs. which categorizes failures and their associated recovery processes and provides checklists of recovery steps. Plan your recovery. you should refer to “Selecting the Right Recovery Strategy” in OS/400 Backup and Recovery. CpySplF copies only textual data and not advanced function attributes such as graphics and variable fonts. For complete details about using the save-while-active process to either reduce or eliminate your backup outage. there are times when some type of recovery may be necessary. you can save the user spaces. Recovery and Media Services (BRMS) Overview. you must examine the details of your failure and recover accordingly. Keep the checklist for future reference. . For more information about BRMS. but you shouldn’t rely on this method for spooled file backup. With a combination of spooled file APIs. These APIs include • • • • • • • QUSLSpl (List Spooled Files) QUSRSplA (Retrieve Spooled File Attributes) QSpOpnSp (Open Spooled File) QSpCrtSp (Create Spooled File) QSpGetSp (Get Spooled File Data) QSpPutSp (Put Spooled File Data) QSpCloSp (Close Spooled File) These APIs let you copy spooled file information to a user space for save purposes and copy the information back from the user space to a spooled file. user space APIs. be sure to do the following: • • • • If you have to back up and recover because of some system problem. see “Backup. CpySplF also does nothing to preserve print attributes such as spacing.com/pubs/html/as400/infocenter. Don’t be afraid to ask questions. you can back up spooled files. Before beginning your recovery. see System API Reference (SC41-5801). You can specify whether the system sends you a message when it reaches a checkpoint.” [Chapter 16] Recovering Your System Although the iSeries is very stable and disasters are rare. Make a copy of the OS/400 Backup and Recovery checklist you’re using. and check off each step as you complete it. If you need help later.wait time) SavActMsgQ (Save active message queue) SavActOpt (Save-whileactive options) operation will wait to allocate an object during checkpoint processing. This parameter has values that are specific to the Sav command. For more information about these APIs.htm. One common misconception is that you can use the CpySplF (Copy Spooled File) command to back up spooled files. The spooled file APIs perform the real work of backing up spooled files. make sure you understand exactly what the service representative does.ibm. its description is saved but not its contents (the spooled files). The extent of recovery required and the processes you follow will vary greatly depending on the nature of your failure.

Starting with V4R5, the OS/400 Backup and Recovery manual includes a new appendix called “Recovering your AS/400 system,” which provides step-by-step instructions for completely recovering your entire system to the same system (i.e., restoring to a system with the same serial number). You can use these steps only if you saved your entire system using either option 21 from menu SAVE or the equivalent SavSys, SavLib, SavDLO, and Sav commands.

Continue to use the checklist titled “Recovering your entire system after a complete system loss (Checklist 17)” in Chapter 3 of OS/400 Backup and Recovery to completely recover your system in any of the following situations:

• Your system has logical partitions.
• Your system has mounted user-defined file systems before the save.
• You’re recovering to a different system (a system with a different serial number).
• Your system uses the Alternate Installation Device Setup feature that you can define through Dedicated Service Tools (DST) for a manual IPL from tape.

One piece of advice warrants repeating: Test as many of the procedures in your recovery plan as you possibly can before disaster strikes. If any surprises await you, it’s far better to uncover them in a test situation than during a disaster.

This article is excerpted from the book Starter Kit for the IBM iSeries and AS/400 by Gary Guthrie and Wayne Madden (29th Street Press, 2001). For more information about the book, see http://www.iseriesnetwork.com/str/books/uniquebook2.cfm?NextBook=187. Debbie Saugen is the technical owner of iSeries 400 and AS/400 Backup and Recovery in IBM’s Rochester, Minnesota, Development Lab. She is also a senior recovery specialist with IBM Business Continuity and Recovery Services. Debbie enjoys sharing her knowledge by speaking at COMMON, iSeries 400 and AS/400e Technical Conferences, and Business Continuity and Recovery Services Conferences and writing for various iSeries and AS/400e magazines and Web sites.

Availability Options

Availability options are a complement to a backup strategy, not a replacement. These options can significantly reduce the time it takes you to recover after a failure. In some cases, availability options can prevent the need for recovery. To justify the cost of using availability options, you need to understand the following:

• the value of the data on your system
• the cost of a scheduled or unscheduled outage
• your availability requirements

The following availability options can complement your backup strategy:

• journal management
• access-path protection
• auxiliary storage pools
• device parity protection
• mirrored protection
• dual systems
• clustered systems

You should compare these options and decide which are best suited to your business needs, their benefits versus costs, and how to implement them. For details about availability options, refer to IBM’s iSeries Information Center at http://publib.boulder.ibm.com/pubs/html/as400/infocenter.htm.

We’ll look more closely at each availability option in a moment, but first, it’s helpful to be acquainted with the following terms, which are often used in discussing system availability:

• An outage is a period of time during which the system is unavailable to users. During a scheduled outage, you deliberately make your system unavailable to users. You might use a scheduled outage to run batch work, back up your system, or apply PTFs. An unscheduled outage is usually caused by a failure of some type.

• In continuous operations, the system has no scheduled outages.
• High availability means that the system has no unscheduled outages.
• Continuous availability means that the system has neither scheduled nor unscheduled outages.

Journal Management for Backup and Recovery

You can use journal management (often referred to as journaling a file or an access path) to recover the changes to database files (or other objects) that have occurred since your last complete backup. You use a journal to define which files and access paths you want to protect. A journal receiver contains the entries (called journal entries) that the system adds when events occur that are journaled, such as changes to database files, changes to other journaled objects, or security-related events.

You can use the remote journal function to set up journals and journal receivers on a remote iSeries system. These journals and journal receivers are associated with journals and journal receivers on the source system. The remote journal function lets you replicate journal entries from the source system to the remote system.

Access-Path Protection

An access path describes the order in which the records in a database file are processed. Because different programs may need to access the file’s records in different sequences, a file can have multiple access paths. Access paths in use at the time of a system failure are at risk of corruption. If access paths become corrupted, the system must rebuild them before you can use the files again. This can be a very time-consuming process, so you should consider an access-path protection plan to limit the time required to recover corrupted access paths.

The system offers two methods of access-path protection:

• system-managed access-path protection (SMAPP)
• explicit journaling of access paths

You can use these methods independently or together. By using journal management to record changes to access paths, the system can recover access paths without the need for a complete rebuild, which can greatly reduce the amount of time it takes to recover access paths should doing so become necessary. This can result in considerable time savings.

SMAPP provides a simple way to reduce recovery time after a system failure. With SMAPP, you can let the system determine which access paths to protect, managing the required environment for you. The system makes this determination based on access-path target recovery times that you specify. The system evaluates the protected and unprotected access paths to develop its strategy for meeting your access-path recovery targets. You can use explicit journaling, even when using SMAPP, to ensure that certain access paths critical to your business are protected.

Auxiliary Storage Pools

Your system may have many disk units attached to it for auxiliary storage of your data that, to your system, look like a single unit of storage. When the system writes data to disk, it spreads the data across all of these units. You can divide your disk units into logical subsets known as auxiliary storage pools (ASPs), which don’t necessarily correspond to the physical arrangement of disks. You can then assign objects to particular ASPs, isolating them on particular disk units. When the system now writes to these objects, it spreads the information across only the units within the ASP.

When you assign the disk units on your system to more than one ASP, each ASP can have different strategies for availability, backup and recovery, and performance. ASPs provide a recovery advantage if the system experiences a disk unit failure that results in data loss. In such a case, recovery is required only for the objects in the ASP containing the failed disk unit; system objects and user objects in other ASPs are protected from the disk failure. In addition to the protection that isolating objects to particular ASPs provides, the use of ASPs provides a certain level of flexibility. To protect your data fully, however, you should protect all the disk units on your system with either device parity protection or mirrored protection (covered next).

Device Parity Protection

Device parity protection is a hardware availability function that protects against data loss due to disk unit failure or damage to a disk. To protect data, the disk controller or input/output processor (IOP) calculates and saves a parity value for each bit of data. The disk controller or IOP computes the parity value from the data at the same location on each of the other disk units in the device parity set. When a disk failure occurs, the data can be reconstructed by using the parity value and the values of the bits in the same locations on the other disks. The system continues to run while the data is being reconstructed. The overall goal of device parity protection is to provide high availability and to protect data as inexpensively as possible.

Device parity protection is designed to prevent system failure and to speed the recovery process for certain types of failures, not to serve as a substitute for a good backup and recovery strategy. Device parity protection doesn’t protect you if you have a site disaster or user error. It also doesn’t protect against system outages caused by failures in other disk-related hardware (e.g., disk controllers, IOPs, and buses).

Mirrored Protection

Mirrored protection is a software availability function that protects against data loss due to failure or damage to a disk-related component. The system protects your data by maintaining two copies of the data on two separate disk units. When a disk-related component fails, the system continues to operate without interruption, using the mirrored copy of the data until repairs are complete on the failed component. Different levels of mirrored protection are possible, depending on the duplicated hardware. To provide maximum hardware redundancy and protection, you can duplicate

• disk units
• disk controllers
• disk IOPs
• a bus

If a duplicate exists for the failing component and attached hardware components, the system remains available during the failure; in many cases, your system remains operational during repairs. The goal is to protect as many disk-related components as possible. When you start mirrored protection or add disk units to an ASP that has mirrored protection, the system creates mirrored pairs using disk units that have identical capacities. If possible, the system tries to pair disk units from different controllers. For some systems, standard DASD mirroring will remain the best choice; for others, remote DASD mirroring provides important additional capabilities. Remote mirroring support lets you have one mirrored unit within a mirrored pair at the local site and the second mirrored unit at a remote site.

Dual Systems

System installations with very high availability requirements use a dual-systems approach, in which two systems maintain some or all data. If the primary system fails, the secondary system can take over critical application programs. The most common way to maintain data on the secondary system is through journaling. The primary system transmits journal entries to the secondary system, where a user-written program uses them to update files and other journaled objects in order to replicate the application environments of the primary system. Users sometimes implement this by transmitting journal entries at the application layer. The remote journal function improves on this technique by transmitting journal entries to a duplicate journal receiver on the secondary system at the licensed internal code layer. Several software packages are available from independent software vendors to support dual systems.

Clustered Systems

A cluster is a collection or group of one or more systems that work together as a single system. The cluster is identified by name and consists of one or more cluster nodes. Clustering lets you efficiently group your systems together to create an environment that approaches 100 percent availability.

Preparing and Managing Your Backup Media

OS/400’s save commands support different types of devices (including save file, diskette, tape, and optical). For a backup strategy, you should always back up to a tape device. Choose a tape device and tape media that has the performance capabilities and density capacity that will meet your backup window and any requirements you have for running an unattended backup. Preparing and managing your tape media is an important part of your backup operations; you need to be able to easily locate the correct media to perform a successful system recovery.

An important part of a good backup strategy is to have more than one set of backup media. When you perform a system recovery, you may need to go back to an older set of tape media if your most recent set is damaged or if you discover a programming error that has affected data on your most recent backup media. At a minimum, you should rotate three sets of media, as follows:

Backup     Media set
Backup 1   Set 1
Backup 2   Set 2
Backup 3   Set 3
Backup 4   Set 1
Backup 5   Set 2
Backup 6   Set 3
...

You may find that the easiest method is to have a different set of media for each day of the week. This strategy makes it easier for the operator to know which set to mount for backup.

Cleaning Your Tape Devices

It’s important to clean your tape devices regularly. The read-write heads can collect dust and other material that can cause errors when reading or writing to tape media. If you’re using new tapes, it’s especially important to clean the device because new tapes tend to collect more material on the read-write heads. For specific recommendations, refer to your tape drive’s manual.

The new-volume identifier identifies the tape as a standard-labeled tape that can be used by the system for backups. It’s a good idea to choose volume-identifier names that help identify tape-volume contents and the volume set to which each tape belongs. Here’s a sample command to initialize a new tape volume: InzTap Dev(Tap01) + NewVol(A23001) + Check(*No) + Density(*DevType) Tip: It’s important to initialize each tape only once in its lifetime and give each tape volume a different volume identifier so tape-volume error statistics can be tracked. B for Tuesday. Preparing Your Tapes for Use To prepare tape media for use. option 21-Media set 3 Volume names and labels for a medium backup strategy might look like this: Volume Naming — Part of a Medium Backup Strategy Volume name External label E21001 Friday-Menu SAVE.write heads. For specific recommendations.) When you initialize tapes. refer to your tape drive’s manual. You can use the special value *DevType to easily specify that the format be based on the type of tape device being used. option 21-Media set 1 E21002 Friday-Menu SAVE. otherwise. option 23-Media set 1 B23002 Tuesday-Menu SAVE. option 23-Media set 2 B23003 Tuesday-Menu SAVE. option 21-Media set 1 E21002 Friday-Menu SAVE. the backup operation (option number from menu SAVE). The density specifies the format in which to write the data on the tape based on the tape device you’re using. and the media set with which the tape volume is associated. you should also specify Check(*No). option 23-Media set 3 E21001 Friday-Menu SAVE. The following table illustrates how you might initialize your tape volumes and label them externally in a simple backup strategy. the system tries to read labels from the volume on the specified tape device until the tape completely rewinds. you’re required to give each tape a new-volume identifier (using the InzTap command’s NewVol parameter) and a density (Density parameter). When initializing new tapes. Volume Naming — Part of a Simple Backup Strategy Volume name External label B23001 Tuesday-Menu SAVE. option 21-Media set 2 E21003 Friday-Menu SAVE. option 21-Media set 2 AJR001 Monday-Save journal receivers-Media set 1 AJR002 Monday-Save journal receivers-Media set 2 ASC001 Monday-Save changed data-Media set 1 ASC002 Monday-Save changed data-Media set 2 BJR001 Tuesday-Save journal receivers-Media set 1 BJR002 Tuesday-Save journal receivers-Media set 2 . Each label has a prefix that indicates the day of the week (A for Monday. (Some tapes come pre-initialized. you’ll need to use the InzTap (Initialize Tape) command. Naming and Labeling Your Tapes Initializing each tape volume with a volume identifier helps ensure that your operators load the correct tape for the backup. and so on).

B23001 B23002

Tuesday-Menu SAVE, option 23-Media set 1 Tuesday-Menu SAVE, option 23-Media set 2

Tip: If your tapes don’t come prelabeled, you should put an external label on each tape volume. The label should show the volume-identifier name and the most recent date the tape was used for a backup. Color-coded labels can help you locate and store your media — for example, yellow for set 1, red for set 2, and so on.

Verifying Your Tapes
Good backup procedures dictate that you verify you’re using the correct tape volumes. Depending on your system environment, you can choose to manually verify your tapes or have the system verify your tapes:

• Manual verification — If you use the default value of *Mounted on the Vol (Volume) parameter of the save commands, telling the system to use the currently mounted volume, the operator must manually verify that the correct tape volumes are loaded in the correct order.

• System verification — By specifying a list of volume identifiers on the save commands, you can have the system verify that the correct tape volumes are loaded in the correct order. If the tape volumes aren't loaded correctly, the system will send a message telling the operator to load the correct volumes.
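For example, a save command that names its expected volumes might look like the following (the library name PayLib and the volume identifiers here are purely illustrative):

SavLib Lib(PayLib) +
       Dev(Tap01) +
       Vol(B23001 B23002)

If the mounted volume doesn't match the list, the save doesn't proceed until the operator loads the correct volume.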

Another way to verify that the correct tape volumes are used is to specify expiration dates on the media files. If you rely on your operators to verify tape volumes, you can use the ExpDate (Expiration date) parameter and specify the value *Perm (permanent) for your save operations. This will prevent someone from writing over a file on the tape volume by mistake. When you're ready to use the tape volume again, specify Clear(*All) on the save operations.

If you want the system to verify your tape volumes, specify an ExpDate value that ensures you don't use the media again too soon. For example, if you rotate five sets of media for daily saves, specify an expiration date of the current day plus four on the save operation. Specify Clear(*None) on save operations so the system doesn't write over unexpired files.

Caution: It's important to try to avoid situations in which an operator must regularly respond to (and ignore) messages such as "Unexpired files on the media." If operators get into the habit of ignoring routine messages, they may miss important messages.
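As an illustration, a save that relies on operator verification might mark its files permanent and then specify Clear(*All) when the media set comes up for reuse (again, PayLib is a hypothetical library name):

SavLib Lib(PayLib) +
       Dev(Tap01) +
       ExpDate(*Perm) +
       Clear(*All)

Because Clear(*All) writes over the volume without prompting about unexpired files, use it only when you're certain the correct volume is loaded.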

Storing Your Tapes
An important part of any recovery strategy is storing the tape volumes in a safe but accessible location. Ensure the tape volumes have external labels and are organized well so you can locate them easily. To enable disaster recovery, you should store a complete set of your backups at a safe, accessible location away from your site. Consider contracting with a vendor that will pick up and store your tapes. When choosing off-site storage, consider how quickly you can retrieve the tapes. Also, consider whether you’ll have access to your tapes on weekends and during holidays. A complete recovery strategy keeps one set of tapes close at hand for immediate data recovery and keeps a duplicate set of tapes in off-site storage for disaster recovery purposes. To duplicate your tape volumes for off-site storage, you can use the DupTap (Duplicate Tape) command.
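Duplicating a volume for off-site storage takes a single command once the source and target tapes are mounted; for example, with two tape devices named Tap01 and Tap02:

DupTap FromDev(Tap01) +
       ToDev(Tap02)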

Handling Tape Media Errors
When you’re saving or restoring to tape, it’s normal for some tape read/write errors to occur. Tape read/write errors fall into one of three categories:

Recoverable errors: Some tape devices support recovering from read/write errors. The system repositions the tape automatically and tries the save or restore operation again.

Unrecoverable errors — processing can continue: In some instances, the system can't continue to use the current tape but can continue processing on a new tape. The system will ask you to load another tape. You can still use the tape with the unrecoverable error for restore operations.

Unrecoverable errors — processing cannot continue: In some cases, an unrecoverable read/write error will cause the system to end the save operation.

Tapes physically wear out after extended use. You can determine whether a tape is wearing out by periodically printing the error log. Use the PrtErrLog (Print Error Log) command, and specify Type(*VolStat). The printed output provides statistics about each tape volume. If you’ve used unique volume-identifier names for your tapes and you’ve initialized each volume only once, you can determine which tapes have excessive read/write errors. Refer to your tape-volume documentation to determine the error threshold for the tape volumes you’re using. You should discard any bad tape volumes. If you think you have a bad tape volume, you can use the DspTap (Display Tape) or the DupTap command to check the tape’s integrity. Both of these commands read the entire tape volume and will detect any objects on the tape that the system can’t read.
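For example, to produce the volume-statistics report described above, you'd run

PrtErrLog Type(*VolStat)

and then review the printed output for any volumes whose read/write error counts exceed the threshold documented for your media.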

Chapter 16 - Backup, Recovery and Media Services (BRMS) Overview
Backup, Recovery and Media Services (BRMS) is IBM's strategic backup and recovery product for the iSeries and AS/400. Packaged as licensed program 5769-BR1 (V4R5) or 5722-BR1 (V5R1), BRMS is a comprehensive tool for managing the backup, archive, and recovery environment for one or more systems in a site or across a network in which data exchange by tape is required. BRMS lets you simplify and automate backups as well as manage your tape inventory. It keeps track of what you've saved, when you saved it, and where it is saved so that when recovery is necessary, BRMS restores the correct information from the correct tapes in the correct sequence. This conceptual introduction to BRMS and some of its many features will help you determine whether BRMS is right for your backup strategy.

An Introduction to BRMS
BRMS lets you easily define, change, and execute simple, medium, or complex backup procedures. It offers full-function backup facilities, including

• keywords to match the OS/400 save keywords (e.g., *AllUsr, *IBM)
• exit commands to allow processing of user-defined routines
• full, incremental, and noncumulative incremental saves
• saves to save files
• save-while-active processing

BRMS even provides support for backing up spooled files, a save/restore feature sorely missing from OS/400. You may be wondering, "What could be simpler than backing up the entire system every night?" Nothing is simpler, but not everyone can afford the outage that this type of save requires. BRMS is an effective solution in backing up only what's really required. BRMS also lets you easily schedule a backup that includes a SavSys (Save System) operation, which isn't so easy using just OS/400.

In addition to these capabilities, BRMS offers step-by-step recovery information, printed after backups are complete. Recovery no longer consists of operators clenching the desk with white knuckles at 4:00 a.m., trying desperately to recover the system in time for the users who'll arrive at 8:00 a.m., without any idea what's going on or how long the process will take. With native OS/400 commands, the only feedback that recovery personnel get is the occasional change to the message line on line 25 of the screen as the recovery takes place. BRMS changes this with full and detailed feedback during the recovery process — with an auto-refresh screen, updated as each library is restored. Following are some of the features that contribute to the robustness of BRMS:

• Data archive — Data archive is important for organizations that must keep large volumes of history data yet don't require rapid access to this information. BRMS can archive data from DASD to tape and track information about objects that have been archived. Locating data in the archives is easy, and the restore can be triggered from a work-with screen.

• Dynamic data retrieval — Dynamic retrieval for database files, document library objects, and stream files is possible with BRMS. Once archived with BRMS, these objects can be automatically restored upon access within user applications. No changes are required to user applications to initiate the restore.

• Media management — In a large single- or multisystem environment, control and management of tape media is critical. BRMS allows cataloging of an entire tape inventory and manages the media as they move from location to location. This comprehensive inventory-management system provides many reports that operators can use as instructions.

• Parallel save and restore — BRMS supports parallel save and restore, reducing the backup and recovery times of very large objects and libraries by "spreading" data across multiple tape drives. This method is in contrast to concurrent save and restore, in which the user must manage the splitting of data. With parallel save and restore, operations end at approximately the same time for all tape drives.

• Lotus Notes Servers backup — BRMS supports backup of online Lotus Notes Servers, including Domino and Quickplace Lotus Notes Servers.

• Flexible backup options — You can define different backup scenarios and execute the ones appropriate for particular circumstances.

• Spooled file backup — Unlike OS/400 save and restore functions, BRMS provides support for backing up spooled files. Spooled file backup is important to a complete backup, and BRMS lets you tailor spooled file backup to meet your needs.

• Storage alternatives — You can save to a tape device, a Media Library device, a save file, or a Tivoli Storage Manager server (previously known as an ADSM server).

It is these features, and more, that make BRMS a popular solution for many installations. Later, we'll take a closer look at some of these capabilities.

Getting Started with BRMS
BRMS brings with it a few new save/restore concepts as well as some new terminology. For instance, you'll find repeated references to the following terms when working with BRMS:

• media — a tape cartridge or save file that will hold the objects being backed up
• media identifier — a name given to a physical piece of media
• media class — a logical grouping of media with similar physical and/or logical characteristics (e.g., density)
• policy — a set of commonly used defaults (e.g., device, media class) that determine how BRMS performs its backup
• backup control group — a grouping of items (e.g., libraries, objects, stream files) to back up

You're probably thinking that "media" and "media identifier" aren't such new terms. True, but most people don't think of save files as media, and media identifier is typically thought to mean volume identifier. Policies and backup control groups are concepts central to BRMS in that they govern the backup process. IBM provides default values in several policies and control groups. You can use these defaults or define your own for use in your save/restore operations. Policies are templates for managing backups and media management operations. They act as a control point for defining operating characteristics. The standard BRMS package provides the following policies:

• System Policy — The System Policy is conceptually similar to system values. It contains general defaults for many BRMS operations.

• Backup Policy — The Backup Policy determines how the system performs backups. It contains defaults for backup operations.

• Recovery Policy — The Recovery Policy defines how the system typically performs recovery operations.

• Media Policies — Media Policies control media-related functionality. For instance, they determine where BRMS finds tapes needed for a backup.

• Move Policies — Move Policies define the way media moves through storage locations from creation time through expiration.

BRMS is shipped with two default backup control groups, *SysGrp (system group) and *BkuGrp (backup group). The *SysGrp control group backs up all system data, and the *BkuGrp control group backs up all user data. The *SysGrp control group contains the special value *SavSys, which saves the licensed internal code, OS/400, user profiles, security data, and configuration data. The *BkuGrp control group contains the special values *SavSecDta and *SavCfg, which also save user profiles, security data, and configuration data. If you use the two control groups *SysGrp and *BkuGrp, you therefore save the user profiles, security data, and configuration data twice. This redundancy in saved data contributes to the additional backup time when using control groups *SysGrp and *BkuGrp.

You can back up your entire system using these two control groups, but doing so requires two backup commands, one for each group. Starting with V5R1, BRMS includes a new, full-system default backup control group, *System, that combines the function of groups *SysGrp and *BkuGrp. To back up your entire system using a single control group, you can also create a new backup control group that includes the following BRMS special values as backup items:

Seq   Backup items
10    *SavSys
20    *IBM
30    *AllUsr
40    *AllDLO
50    *Link

The time required to back up the system using this full backup control group is less than that required to use a combination of the *SysGrp and *BkuGrp backup control groups. Note that none of the full backup control groups discussed so far saves spooled files. If spooled files are critical to your business, you'll need to create a backup list of your spooled files to be included in your full backup control group (more about how to do this later).

Saving Data in Parallel with BRMS
As I mentioned, starting with V5R1, BRMS supports parallel save/restore function, with the objects being "spread" at the library level. Its goal is to reduce backup and recovery times by evenly dividing data across multiple tape drives. This support is intended for use with large objects and libraries. The special values *AllProd, *AllTest, *AllUsr, *ASP01-*ASP99, *System, and *IBM are supported on BRMS parallel saves. However, restores for objects saved in parallel with these special values are still done in a serial mode.

You typically define parallel resources when you work with backup control groups. You specify both a maximum number of resources (devices) and a minimum number of resources to be used during the backup. For instance, you could specify 32 for maximum resources and 15 for minimum resources. When the backup is submitted, the system checks for available tape resources; if it can't find 32 available tape devices, the backup will be run with the minimum of 15. It's not a requirement that the number of devices used for the backup be used on the restore. However, to reduce the number of tape mounts, it's best to use the same number of tape devices on the restore.

Online Backup of Lotus Notes Servers with BRMS
In today's working environment, users demand 24x7 access to their mail and other Lotus Notes databases, yet it's also critical that user data be backed up frequently and in a timely way. BRMS Online Lotus Notes Servers Backup support meets these critical needs. Prior save-while-active support required ending applications to reach a checkpoint or the use of commitment control or journaling. In pre-V5R1 releases of OS/400, another alternative was to invest in an additional server, replicate the server data, and perform the backup from the second server. Online Lotus Notes Servers Backup with BRMS avoids these requirements. With this support, you can save Lotus Notes databases while they're in use, without requiring users to exit the system; Domino or Quickplace will back up the databases while they are online and in use. Installation of BRMS automatically configures control groups and policies that help you perform online backup of your Lotus Notes Servers.

The Online Lotus Notes Servers Backup process allows the collection of two backups into one entity. BRMS and Domino or Quickplace accomplish this using a BRMS concept called a package. When the backup is completed, a secondary file is backed up and associated with the first backup using the package concept. The secondary file contains all the changes, such as transaction logs or journaling information, that occurred during the online backup. The package is identified by the PkgID (Package identifier) parameter on the SavBRM (Save Object using BRM) command. When you need to recover a Lotus Notes Server database that was backed up using BRMS Online Backup, BRMS calls Domino or Quickplace through recovery exits that let Domino or Quickplace apply any changes from the secondary file backup to the database that was just restored. This recovery process maintains the integrity of the data.

Restricted-State Saves Using BRMS
You can use the console monitor function of BRMS to schedule unattended restricted-state saves. This support is meaningful because with OS/400 save functions, restricted-state saves must be run interactively from a display in the controlling subsystem. BRMS's support means you can run an unattended SavSys operation to save the OS/400 licensed internal code and operating system (or other functions you want to run in a restricted state, such as saves of user data). Console monitoring lets users submit the SavSys job to the job scheduler instead of running the save interactively, so you don't have to be nearby when the system save is processed.

You simply specify the special value *SavSys on the StrBkuBRM (Start Backup using BRM) command or within a BRMS control group to perform a SavSys. You can use the Submit to batch parameter on the StrBkuBRM command to enter *Console as a value, thereby performing your saves in batch mode. You must issue this command from the system console because BRMS runs the job in subsystem QCtl. If you try to start the console monitor from your own workstation, BRMS sends a message indicating that you're not in a correct environment to start the console monitor. You can temporarily interrupt the console-monitoring function to enter OS/400 commands and then return the console to a monitored state.
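For instance, a restricted-state system save submitted through the console monitor might look like the following sketch (this assumes the default *SysGrp control group; verify the StrBkuBRM parameter names and values on your release before relying on it):

StrBkuBRM CtlGrp(*SysGrp) +
          SbmJob(*Console)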
Backing Up Spooled Files with BRMS
With BRMS, you can create a backup list that specifies the output queues you want to save. You create a spooled file backup list using command WrkLBRM (Work with Lists using BRM). You simply add a list, specifying

• *Bku for the Use field
• a value for the List name (e.g., SaveSplF)
• *Spl for the Type field

When you press Enter, the Add Spooled File List panel (Figure 16.1) is displayed. (The figure shows the panel after backup information has been entered.)

Including Spooled File Entries in a Backup List
Now, you can update the backup list by adding the output queues you want to save. Within a spooled file list, you can save multiple output queues by selecting multiple sequence numbers. When you add an output queue to the list, you can filter the spooled files to save by specifying values for spooled file name, job name, or user name. Generic names are also allowed. For example, if you want to save only spooled files that belong to user A, you can specify user A's name in the User field.

The sample setup in Figure 16.1 saves output queue Prt01 in library QUsrSys. If you leave the Outq field at its default value *All, BRMS saves all spooled files from all output queues in library QUsrSys. Once you set up your backup list, you can add it to your daily, weekly, or monthly backup control group as a backup item with a list type of *Spl. Note that BRMS doesn't support incremental saves of spooled files; if you specify an incremental save for a list type of *Spl, all spooled files in the list are saved.

After you've successfully saved your spooled files, you can use the WrkSplFBRM (Work with Spooled Files using BRM) command to display the status of your saves. The WrkSplFBRM panel displays your spooled files in the order in which they were created on the system. BRMS doesn't automatically clear the output queues after the spooled files are successfully saved.

Be aware that BRMS saves spooled files as a single folder, with multiple documents (spooled members) within the folder. During the restore, BRMS searches the tape label for the folder and restores all the documents. If your spooled file save happens to span multiple tape volumes, you'll be prompted to load the first tape to read the label information before restoring the documents on the subsequent tapes. If so, consider saving your spooled files on a separate tape using the *Load exit in a control group, or split your spooled file saves so you use only one tape at a time.

Restoring Spooled Files Saved Using BRMS
BRMS doesn't automatically restore spooled files when you restore your user data during a system recovery. To restore saved spooled files, use the WrkSplFBRM command and select option 7 (Restore spooled file) on the resulting screen. From the Select Recovery Items panel that appears, you can specify the spooled files you want to restore. If necessary, you can change any of the BRMS recovery defaults by pressing F9 on the Select Recovery Items screen. By default, BRMS restores spooled file data in the output queue from which the data was saved. To help with recovery, BRMS retains spooled file attributes, names, user data fields, job names, and user names during the save and restore operations. However, OS/400 assigns new job numbers, system dates, and times during the restore; in most cases, the original dates and times aren't restored.

The BRMS Operations Navigator Interface
With V5R1, BRMS has an Operations Navigator (OpsNav) interface that makes setting up and managing your backup and recovery strategy even easier. Using wizards, you can simplify the common operations you need to perform, such as creating a backup policy, adding items to a backup policy, adding tape media to BRMS, preparing the tapes for use, and restoring backed-up items. If you're currently using BRMS, you'll need to be aware of some differences between the green-screen and the OpsNav interfaces. You may still want to use the graphical interface to perform some of the basic operations; however, you may not find all the functionality in OpsNav that you have with the green-screen version. If so, watch for additional features in future releases of BRMS Operations Navigator.

Terminology Differences
The OpsNav version of BRMS uses some different terminology than the green-screen BRMS.
Here are some key terms:

New terminology — Definition
Backup history — Information about each of the objects backed up using BRMS. The backup history includes any items backed up using a backup policy. In the green-screen interface, the equivalent term is media information.
Backup policy — Defaults that control what data is backed up, how it is backed up, and where it is backed up. In the green-screen interface, a combination of a backup control group and a media policy would make up a backup policy. Note that there is no system policy in the OpsNav interface.
Media pool — A group of media with similar density and capacity characteristics. In the green-screen interface, this is known as a media class.

Functional Differences
As of this writing, the current version of BRMS Operations Navigator lets you

• run policies shipped with BRMS
• view the backup history
• view the backup and recovery log
• create and run a backup policy
• back up individual items
• restore individual items
• schedule items to be backed up and restored
• print a system recovery report
• customize user access to BRMS functions and components
• run BRMS maintenance activities
• add, display, and manage tape media

Some functions unavailable in the current release of BRMS Operations Navigator but included in the green-screen interface include

• move policies
• tape library support
• backup to save files
• backup of spooled files
• parallel backup
• networked systems support
• advanced functions, such as hierarchical storage management (HSM)
• BRMS Application Client for Tivoli Storage Manager

Backup and Recovery with BRMS OpsNav
BRMS Operations Navigator is actually a plug-in to OpsNav. A plug-in is a program that's created separately from OpsNav but, when installed, looks and behaves like the rest of the graphical user interface of OpsNav.

Backup Policies
One ease-of-use advantage offered by BRMS OpsNav is that you can create backup policies to control your backups. A backup policy is a group of defaults that controls what data is backed up, how it is backed up, and where it is backed up. All information needed to perform a backup is included in the backup policy. Three backup policies come with BRMS:

• *System — backs up the entire system
• *SysGrp — backs up all system data
• *BkuGrp — backs up all user data

If you have a simple backup strategy, you can implement your strategy using these three backup policies. If you have a medium or complex strategy, you create your own backup policies. Once you've defined your backup policies, you can run your backup at any time or schedule your backup to run whenever it fits into your backup window.

When you back up your data using a BRMS backup policy, information about each backed-up item is stored in the backup history. This information includes the item name, the type of backup, the date of the backup, and the volume on which the item is backed up. You can specify the level of detail you want to track for each item in the properties for the policy. You also use the backup history information for system recoveries; you can restore items by selecting them from the backup history.

Creating a BRMS Backup Policy
You can use the New Backup Policy wizard in OpsNav to create a new BRMS backup policy. To access the wizard:

1. Expand Backup, Recovery and Media Services.
2. Right-click Backup policies, and select New policy.

The wizard gives you the following options for creating your backup policies:

Option — Description
Back up all system and user data — Enables you to do a full system backup of IBM-supplied data and all user data (spooled files are not included in this backup)
Back up all user data — Enables you to back up the data that belongs to users on your system, such as user profiles, user libraries, folders, and objects in directories
Back up Lotus server data online — Enables you to perform an online backup of Lotus Domino and Quickplace servers
Back up a customized set of objects — Enables you to choose the items you want to back up

After creating a backup policy, you can choose to run the backup policy immediately or schedule it to run later. If you want to change the policy later, you can do so by editing the properties of the policy. Many customization options that aren't available in the New Backup Policy wizard are available in the properties of the policy. To access the policy properties, right-click the policy and select Properties.

Backing Up Individual Items
In addition to using backup policies to back up your data, you can choose to back up individual files, libraries, folders, or objects in directories using the OpsNav hierarchy. To do so, simply right-click the item you want to back up and select Backup. You can also choose to back up just security or configuration data.

Restoring Individual Items
If a file becomes corrupted or accidentally deleted, you may need to restore individual items on your system. Fortunately, you can use the OpsNav Restore wizard to restore individual items on your system, whether they were backed up with a backup policy or not. To access the wizard in OpsNav, right-click Backup, Recovery and Media Services and select Restore. If you use backup policies to back up items on your system, you can restore those items from the backup history. When you restore an item from the backup history, you can view details about the item, such as when it was backed up and how large it is. If there are several versions of the item in the backup history, you can select which version of the item you want to restore. Of course, you can also restore items that you backed up without using a backup policy; for these items, however, you don't have the benefit of using the backup history to make your selection.

Scheduling Unattended Backup and Restore Operations
Earlier, you saw how to schedule unattended restricted-state saves using the console monitor and the StrBkuBRM command. Using OpsNav, you can also schedule non-restricted-state save and restore operations. If you use backup policies to back up items on your system, you can use OpsNav to schedule your backup. To do so, you simply use the OpsNav New Policy wizard to create and schedule a backup. If you need to schedule an existing backup policy, you can do so by right-clicking its entry under Backup Policies in OpsNav and selecting Schedule. If the save operation requires a restricted-state system, you need only follow the console monitor instructions presented by OpsNav when you schedule the backup.

Tip: When you schedule a backup policy to be run, remember that only the items scheduled to be backed up on the day you run the policy will be backed up. For example, say you have a backup policy that includes the library MyLib. In the policy properties, you schedule MyLib for backup every Thursday. If you schedule the policy to run on Thursday, the system backs up MyLib; however, if you schedule the same policy to run on any other day, the system does not back up MyLib.

You can also schedule restore operations in much the same manner as backup operations using OpsNav. Restore operations, however, are scheduled less often than backup operations.

System Recovery Report
BRMS produces a complete system recovery report that guides you through an entire system recovery. The report lets you know exactly which tape volumes are needed to recover your system. When recovering your entire system, you should use the report in conjunction with OS/400 Backup and Recovery (SC41-5304). Keep the recovery report with your tape volumes in a secure and safe off-site location.

BRMS Security Functions
BRMS provides security functions via the Functional Usage Model, which lets you customize access to selected BRMS functions and functional components by user. You must use the OpsNav interface to access the Functional Usage Model feature. Each BRMS function, functional component, and specific backup and media management item (e.g., control group) has two levels of authority access:

• Access or No Access — At the first level of authority access using the Functional Usage Model, a user either has access to a BRMS function or component or has no access to it. If a user has access, he or she can use and view the function or component. With use access, a user can process a specific item (e.g., a control group) in a backup operation but can't change the item.

• Specific Change or No Change — The second level of authority access lets a user change a specific function, component, or item.

You can grant various types of functional usage to all users or to specified users only. You can let certain users use specific functions and components while letting others use and change specific functions and components. In addition, you can give a user both types of access (so the user can both use and change a particular function, component, or item) or only one type of access (e.g., access to use but not to change a particular function, component, or item). The Functional Usage Model provides lists of existing items (e.g., media and move policies) for which you can grant specific access.

Security Options for BRMS Functions, Components, and Items
In the backup area, the following usage levels are available:

• Basic Backup Activities — Users with Basic Backup Activities access can use and view the backup policy, control groups, and backup lists. With this basic level of access, these users can also process backups by using backup control groups (i.e., using the StrBkuBRM command) or by saving libraries, objects, or folders (SavLibBRM, SavObjBRM, or SavFlrLBRM). A user without Basic Backup Activities access can't see backup menu options or command parameter options.

• Backup Control Groups — Users with Backup Control Groups access can change specific backup control groups (in addition to using and viewing them). You can grant a user access to any number of specific control groups. A user can find a list of his or her existing backup control groups under the backup control groups heading in OpsNav. Users without access to the backup control groups cannot change them.

• Backup Policy — Users with Backup Policy access can change the backup policy (in addition to using and viewing it). Users without access to the backup policy cannot change it.

• Backup Lists — Users with Backup Lists access can change specific backup lists (in addition to using and viewing them). To change a backup list, a user must have access to the specific backup list. You can grant a user access to any number of specific backup lists. A user can find a list of his or her existing backup lists under the backup lists heading in OpsNav. Users without access to a backup list cannot change it.

In the recovery area, the following usage levels are available:

• Basic Recovery Activities — Users with Basic Recovery Activities access can use and view the recovery policy. They can also use the WrkMedIBRM (Work with Media Information using BRM) command, command RstObjBRM (Restore Object using BRM), and command RstLibBRM (Restore Library using BRM) to process basic recoveries. Users without Basic Recovery Activities access can't see recovery menu options or command parameter options.

• Recovery Policy — Users with Recovery Policy access can change the recovery policy (in addition to using and viewing it). Users without access to the recovery policy can't change it.

In the area of media management, the following usage levels are available:

• Basic Media Activities — Users with Basic Media Activities access can perform basic media-related tasks, such as using and adding media to BRMS. Users with this access can also use and view (but not change) media policies and media classes. Users without Basic Media Activities access can't see related menu options or command parameter options.

• Advanced Media Activities — Users with Advanced Media Activities access can perform media-related tasks such as expiring, removing, and initializing media.

• Basic Movement Activities — Users with Basic Movement Activities access can manually process or display MovMedBRM (Move Media using BRM) commands.

• Media Classes — Users with Media Classes access can change specific media classes (in addition to using and viewing them). You can grant a user access to any number of media classes. A user can find a list of his or her existing media classes under the media classes heading in OpsNav. Users without access to a media class cannot change it.

• Media Policies — Users with Media Policies access can change specific media policies (in addition to using and viewing them). To change a media policy, a user must have access to the specific media policy. You can grant a user access to any number of media policies. A user can find a list of his or her existing media policies under the media policies heading in OpsNav. Users without access to a media policy cannot change it.

• Media Information — Users with Media Information access can change media information with command WrkMedIBRM (Work with Media Information).

• Move Policies — Users with Move Policies access can change specific move policies (in addition to using and viewing them). You can grant a user access to any number of move policies. A user can find a list of his or her existing move policies under the move policies heading in OpsNav. Users without access to a move policy cannot change it.

• Move Verification — Users with Move Verification access can perform move verification tasks.

In the system area, the following usage options are available:

• Basic System-related Activities — Users with Basic System-related Activities access can use and view device panels and commands. They can also view and display auxiliary storage pool (ASP) panels and commands. Users with this access level can also use and view the system policy.

• Devices — Users with Devices access can change device-related information. Users without this access can't change device-related information.

• Auxiliary Storage Pools — Users with ASP access can change information about BRMS ASP management.

• System Policy — Users with System Policy access can change system policy parameters.

• Log — Users with Log access can remove log entries. Any user can display log information, but only those with Log access can remove log entries.

• Initialize — Users with Initialize access can use the InzBRM (Initialize BRM) command.

• Maintenance — Users with Maintenance access can schedule and run maintenance operations.

Media Management

BRMS makes media management simple by maintaining an inventory of your tape media. It keeps track of what data is backed up on which tape and which tapes have available space. When you run a backup, BRMS selects the tape to use from the available pool of tapes. BRMS prevents a user from accidentally writing over active files or using the wrong tape.

Before you can use any tape media with BRMS, you need to add it to the BRMS inventory and initialize it. You can do this using OpsNav's Add media wizard (under Media, right-click Tape Volumes and select Add). You can also use the green-screen BRMS command AddMedBRM (Add Media to BRM). Once you've added tape media to the BRMS inventory, you can view the media based on criteria you specify, such as the volume name, status, media pool, or expiration date. To filter which media you see in the list, right-click Tape Volumes and select Include. To view information about a particular tape volume or perform an action on that volume, right-click the volume and select the action you want to perform from the menu. This gives you the capability to manually expire a tape and make it available for use in the BRMS media inventory.

BRMS Housekeeping
You should perform a little BRMS housekeeping on a daily basis. The BRMS maintenance operation automatically performs BRMS cleanup on your system, updates backup information, and runs reports. You can run BRMS maintenance using OpsNav (right-click Backup, Recovery and Media Services and select Run Maintenance) or using BRMS command StrMntBRM (Start Maintenance for BRM). BRMS maintenance performs these functions:

• expires media
• removes media information
• removes migration information (180 days old)
• removes log entries (from beginning entry to within 90 days of current date)
• runs cleanup
• retrieves volume statistics
• audits system media
• changes journal receivers
• prints expired media report
• prints version report
• prints media information
• prints recovery reports

Check It Out
As you can see, BRMS provides some powerful features for simplifying and managing many aspects of iSeries backup and recovery. Keep in mind that BRMS isn't a replacement for your backup and recovery strategy; rather, it's a tool that can help you implement and carry out such a strategy. There's a lot more to BRMS than what's been covered here. For the complete details, see Backup, Recovery and Media Services (SC41-5345), as well as the BRMS home page (http://www.as400.ibm.com/service/brms.htm) and IBM's iSeries Information Center (http://publib.boulder.ibm.com/pubs/html/as400/infocenter.htm).

Chapter 17 - Defining a Subsystem
We've all found ourselves lost at some time or other. It's not that we're dumb; rather, we've simply gone to an unfamiliar place without having the proper orientation. You may have experienced a similar feeling of discomfort the first few times you signed on to your AS/400. Perhaps you submitted a job and then wondered, "How do I find that job?" or "Where did it go?" Although I'm sure you have progressed beyond these initial stages of bewilderment, you may still need a good introduction to the concepts of work management on the AS/400.

Work management on the AS/400 refers to the set of objects that define jobs and how the system processes those jobs. My goal for this and the next two chapters is to teach you the basic skills you need to effectively and creatively manage all the work processed on your AS/400. Learning work management skills means learning how to maximize system resources. With a good understanding of work management concepts, you can easily perform such tasks as finding a job on the system, solving problems, improving performance, or controlling job priorities. I can't imagine anyone operating an AS/400 in a production environment without having basic work management skills to facilitate problem solving and operations.

Let me illustrate two situations in which work management could enhance system operations. Perhaps you are plagued with end users who complain that the system takes too long to complete short jobs. You investigate and discover that, indeed, the system is processing short jobs slowly because they spend too much time in the job queue behind long-running end-user batch jobs, operator-submitted batch jobs, and even program compiles. You could tell your operators not to submit jobs, or you could have your programmers compile interactively, but those approaches would be impractical and unnecessary. The answer lies in understanding the work management concepts of multiple subsystems and multiple job queues.

Or perhaps when your "power users" and programmers share a subsystem, excessive peaks and valleys in performance occur due to the heavy interaction of these users. Perhaps you want to use separate storage pools (i.e., memory pools) based on user profiles so that you can place your power users in one pool, your programmers in another, and everyone else in a third pool, thereby creating consistent performance for each user group. You could do this if you knew the work management concepts of memory management.

Getting Oriented
Just as a road map gives you the information you need to find your way in an unfamiliar city, Figure 17.1 (401 KB - yes, 401! Might want to go have some coffee while you wait for it to download.) serves as a guide to understanding work management. It shows the basic work management objects and how they relate to one another. You will notice that all the paths in Figure 17.1 lead to one destination -- the subsystem. In the Roman Empire all roads led to Rome; on the AS/400, all jobs must process in a subsystem. The objects designated by a 1 represent jobs that enter the system, the objects designated by a 2 represent parts of the subsystem description, and the objects designated by a 3 represent additional job environment attributes (e.g., job description, class, and user profile) that affect the way a job interacts with the system. So what better place to start our study of work management than with the subsystem?

Defining a Subsystem
A subsystem, defined by a subsystem description, is where the system brings together the resources needed to process work. As shown in Figure 17.2, the subsystem description contains seven parts that fall into three categories. Let me briefly introduce you to these components of the subsystem description.

• Subsystem attributes provide the general definition of the subsystem and control its main storage allocations. The general definition includes the subsystem name, description, and the maximum number of jobs allowed in the subsystem. Storage pool definitions are the most significant subsystem attributes. A subsystem's storage pool definition determines how the subsystem uses main storage for processing work. The storage pool definition lets a subsystem either share an existing pool of main storage (e.g., *BASE and *INTERACT) with other subsystems or establish a private pool of main storage. The storage pool definition also lets you establish the activity level for a particular storage pool.

• Work entries define how jobs enter the subsystem and how the subsystem processes that work. They consist of autostart job entries, workstation entries, job queue entries, communications entries, and prestart job entries. Autostart job entries let you predefine any jobs you want the system to start automatically when it starts the subsystem. Workstation entries define which workstations the subsystem will use to receive work. You can use a workstation entry to initiate an interactive job when a user signs on to the system or when a user transfers an interactive job from another subsystem. You can create workstation entries for specific workstation names (e.g., DSP10 and OH0123), for generic names (e.g., DP*, DSP*, and OH*), or by the type of workstations (e.g., 5251, 3476, and 3477). Job queue entries define the specific job queues from which to receive work. A job queue, which submits jobs to the subsystem for processing, can be allocated by only one active subsystem. A single subsystem, however, can allocate multiple job queues, prioritize them, and specify for each a maximum number of active jobs. Communications entries define the communications device associated with a remote location name from which you can receive a communications evoke request. Prestart job entries define jobs that start on a local system before a remote system sends a communications request. When a communications evoke request requires the program running in the prestart job, the request attaches to that prestart job, thereby eliminating all overhead associated with initiating a job and program.

• Routing entries identify which programs to call to control routing steps that will execute in the subsystem for a given job. Routing entries also define in which storage pool the job will be processed and which basic execution attributes (defined in a job class object associated with a routing entry) the job will use for processing.

All these components of the subsystem description determine how the system uses resources to process jobs within a subsystem. I will expand upon my discussion of work entries in Chapter 18 and my discussion of routing entries in Chapter 19. Now that we've covered some basic terms, let's take a closer look at subsystem attributes and how subsystems can use main storage for processing work.

Main Storage and Subsystem Pool Definitions
When the AS/400 is shipped, all of main storage resides in two system pools: the machine pool (*MACHINE) and the base pool (*BASE). You must define the machine pool to support your system hardware; the amount of main storage you allocate to the machine pool is hardware-dependent and varies with each AS/400. For more information about calculating the required machine pool size, see Chapter 2 and IBM's AS/400 Programming: Work Management Guide (SC41-8078). The base pool is the main storage that remains after you reserve the machine pool. You can designate *BASE as a shared pool for all subsystems to use to process work, or you can divide it into smaller pools of shared and private main storage.

A shared pool is an allocation of main storage where multiple subsystems can process work. *MACHINE and *BASE are both examples of shared pools. Other shared storage pools that you can define include *INTERACT (for interactive jobs), *SPOOL (for printers), and *SHRPOOL1 to *SHRPOOL10 (for pools that you can define for your own purposes). You can control shared pool sizes by using the CHGSHRPOOL (Change Shared Storage Pool) or WRKSHRPOOL (Work with Shared Storage Pools) commands. Figure 17.3 shows a WRKSHRPOOL screen, on which you can modify the pool size or activity level simply by changing the entries.

The AS/400's default controlling subsystem (QBASE) and the default spooling subsystem (QSPL) are configured to take advantage of shared pools. QBASE uses the *BASE pool and the *INTERACT pool, while QSPL uses *BASE and *SPOOL. To see what pools a subsystem is using, you use the DSPSBSD (Display Subsystem Description) command. For instance, when you execute the command

DSPSBSD QBASE OUTPUT(*PRINT)

you will find the following pool definitions for QBASE listed (if the defaults have not been changed):

QBASE ((1 *BASE) (2 *INTERACT))

Parentheses group together two definitions, each of which can contain two distinct parts (the subsystem pool number and size). In this example of the QBASE pool definitions, the (1 *BASE) represents the subsystem pool number 1 and a size of *BASE, meaning that the system will use all of *BASE as a shared pool. The second pool definition for QBASE is (2 *INTERACT). A third part of the pool definition, the activity level, doesn't appear for *BASE because system value QBASACTLVL maintains the activity level. Because you can use the CHGSHRPOOL or WRKSHRPOOL commands to modify the activity level for shared pool *INTERACT, the activity level is not listed as part of the subsystem description, nor is it specified when you use the CRTSBSD or CHGSBSD commands; shared pool activity levels are maintained as part of the shared pool definitions (using the CHGSHRPOOL or WRKSHRPOOL commands).

Be careful not to confuse subsystem pool numbering with system pool numbering. Pool numbering within a subsystem is unique to that subsystem, and only the routing entries in that subsystem use it to determine which pool jobs will use, based on the routing data associated with each job. The AS/400's two predefined system pools, *MACHINE and *BASE, are defined as system pool number 1 and system pool number 2, respectively. As subsystems define new storage pools (shared or private) in addition to the two predefined system pools, the system simply assigns the next available system pool number to use as a reference on the WRKSYSSTS display. For example, with the above pools for QBASE and the following pools for QSPL

QSPL ((1 *BASE) (2 *SPOOL))

the system pool numbering might correspond to the subsystem pool numbering as shown in Figure 17.4.

A private pool is a specific allocation of main storage reserved for one subsystem. It's common to use a private pool when the system uses the controlling subsystem QCTL instead of QBASE. If you change your controlling subsystem to QCTL, the system startup program starts several subsystems (i.e., QINTER, QBATCH, QCMN, and QSPL) at IPL that are designed to support specific types of work. IBM ships the following pool definitions for the multiple subsystem approach:

QCTL ((1 *BASE))
QINTER ((1 *BASE) (2 *INTERACT))
QBATCH ((1 *BASE))
QCMN ((1 *BASE))
QSPL ((1 *BASE) (2 *SPOOL))

As you can see, the initial configuration of these subsystems is like the initial configuration of subsystem QBASE. Although using QBASE as the controlling subsystem lets you divide main storage into separate pools, using QCTL is inherently easier to manage and administer in terms of controlling the number of jobs and performance tuning.

Pool sharing, however, does not provide optimum performance in a diverse operations environment where various types of work process simultaneously. In such cases, subsystems with private pools may be necessary to improve performance. Look at the pool definitions in Figure 17.5, in which two interactive subsystems (QINTER and QPGMR) provide private pools for both end users and programmers. Both QINTER and QPGMR define specific amounts of main storage to be allocated to the subsystem instead of sharing the *INTERACT pool. The private pool configuration in this example, with private main storage and private activity levels, prevents unwanted contention for resources between end users and programmers.

Figure 17.5 also demonstrates how you can use multiple batch subsystems. Three batch subsystems (QBATCH, DAYQ, and QPGMRB, respectively) provide for daytime and nighttime processing of operator-submitted batch jobs, daytime end-user processing of short jobs, and program compiles. A separate communications subsystem, QCMN, is configured to handle any communications requests, and QSPL handles spooling.
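As mentioned above, shared pool sizes and activity levels are changed with CHGSHRPOOL. For example, the following command gives the shared interactive pool 8000 KB of main storage and an activity level of 20 (the values shown are purely illustrative; choose numbers appropriate to your system's size and workload):

CHGSHRPOOL POOL(*INTERACT) +
           SIZE(8000) +
           ACTLVL(20)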

The decision about whether to use shared pools or private pools should depend on the storage capacity of your system. On one hand, it's wise to use shared pools if you have a system with limited main storage, because shared pools ensure efficient use of main storage by letting more than one subsystem share a storage pool. They are also easy to manage when dealing with multiple subsystems. On the other hand, private pools are a wise choice for a system with ample main storage: private pools provide a reserved pool of main storage and activity levels that are constantly available to a subsystem without contention from any other subsystem.

Starting a Subsystem
A subsystem definition is only that -- a definition. To start a subsystem, you use the STRSBS (Start Subsystem) command. Figure 17.6 outlines the steps your system takes to activate a subsystem after you execute a STRSBS command. First, it uses the storage pool definition to allocate main storage for job processing. Next, it uses the workstation entries to allocate workstation devices and present the workstation sign-on displays. If the system finds communications entries, it uses them to allocate the named devices. Next, it starts any defined prestart or autostart jobs. The system then allocates job queues so that when the subsystem completes the start-up process, the subsystem can receive jobs from the job queues. When the system has completed all these steps, the subsystem is finally ready to begin processing work.

Now that I've introduced you to subsystems, look over IBM's AS/400 Programming: Work Management Guide and make a sketch of your system's main storage pool configuration to see how your subsystems work. Chapter 18 examines work entries and where jobs come from, and Chapter 19 discusses routing and where jobs go. When we're done with all that, you'll find yourself on Easy Street -- with the skills you need to implement a multiple subsystem work environment.

Chapter 18 - Where Jobs Come From
One of OS/400's most elegant features is the concept of a "job," a unit of work with a tidy package of attributes that lets you easily identify and track a job throughout your system. The AS/400 defines this unit of work with a job name, a user profile associated with the job, and a computer-assigned job number; it is these three attributes that make a job unique. For example, when a user signs on to a workstation, the resulting job might be known to the system as

Job name . . . . : DSP10      (Workstation ID)
User profile . . : WMADDEN
Job number . . . : 003459

Any transaction OS/400 completes is associated with an active job executing on the system. But where do these jobs come from? A job can be initiated when you sign on to the system from a workstation, when you submit a batch job, when your system receives a communications evoke request from another system, when you submit a prestart job, or when you create autostart job entries that the system automatically executes when it starts the associated subsystem. Understanding how jobs get started on the system is crucial to grasping AS/400 work management concepts. So let's continue Chapter 17's look at the subsystem description by focusing on work entries, the part of the description that defines how jobs gain access to the subsystem for processing.

Types of Work Entries
There are five types of work entries: workstation, job queue, communications, prestart job, and autostart job. The easiest to understand is the workstation entry, which describes how a user gains access to a particular subsystem (for interactive jobs) using a workstation. To define a workstation entry, you use the ADDWSE (Add Work Station Entry) command. A subsystem can have as many workstation entries as you need, all of which have the following attributes:

• WRKSTNTYPE (workstation type) or WRKSTN (workstation name)
• JOBD (job description name)
• MAXACT (maximum number of active workstations)
• AT (when to allocate workstation)

When defining a workstation entry, you can use either the WRKSTNTYPE or WRKSTN attribute to specify which workstations the system should allocate. You must specify a value for either the WRKSTNTYPE parameter or the WRKSTN parameter, but you cannot mix WRKSTNTYPE and WRKSTN entries in the same subsystem. If you do, the subsystem recognizes only the entries that define workstations by the WRKSTN attribute and ignores any entries using the WRKSTNTYPE attribute.

You can use the WRKSTNTYPE attribute in one or more workstation entries to tell the system to allocate a specific type of workstation (e.g., 3476, 3477, 5250, or 5291 -- as in WRKSTNTYPE(3477)). If you want to allocate all workstations, regardless of type, you specify WRKSTNTYPE(*ALL) in the workstation entry; this entry tells the system to allocate all workstations.

You can also define workstation entries using the WRKSTN attribute to specify that the system allocate workstations by name. You can enter either a specific name or a generic name. For example, an entry defining WRKSTN(DSP01) tells the subsystem to allocate device DSP01. The generic entry WRKSTN(OHIO*) tells the subsystem to let any workstation whose name begins with "OHIO" establish an interactive job.

The JOBD workstation entry attribute specifies the job description for the workstation entry. You can give this attribute a value of *USRPRF (the default) to tell the system to use the job description named in the user profile of the person who signs on to the workstation. Or you can specify a value of *SBSD to tell the system to use the job description of the subsystem. You can also use a qualified name of an existing job description. If you use the value *SBSD or a job description name and there is a valid user profile associated with the job description via the USER attribute, any user can simply press Enter and sign on to the subsystem; the user then assumes the user ID associated with the default job description named on the workstation entry. For security reasons, it's wise to use the default value *USRPRF for the JOBD attribute so that a user profile is required to sign on to the workstation. There may be times when you want to define a workstation entry so that one user profile is always used when someone accesses the system via a particular workstation (e.g., if you wanted to disseminate public information at a courthouse, mall, or school). In such cases, be sure to construct such configurations so that only certain workstation entries have a job description that provides this type of access.

The workstation entry's MAXACT attribute determines the maximum number of workstations allowed in the subsystem at one time. When this limit is reached, the subsystem must de-allocate one workstation before it can allocate another. The value that you should normally use for this attribute is the default, *NOMAX, because you typically control (i.e., physically limit) the number of devices. In fact, supplying a number for this attribute could cause confusion if one day the limit is reached and some poor soul has to figure out why certain workstations aren't functioning. It could take days to find this seldom-used attribute and change the value.

The AT attribute tells the system when to allocate the workstation. The default value, AT(*SIGNON), tells the system to allocate the workstation (i.e., initiate a sign-on screen at the workstation) when the subsystem is started. AT(*ENTER) tells the system to let jobs enter the subsystem only via the TFRJOB (Transfer Job) command. (To transfer a job into an interactive subsystem, a job queue and a subsystem description job queue entry must exist.)

Now you're acquainted with the workstation entry attributes; the sketch below shows what typical entries look like.
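As an illustration, here is a minimal sketch of adding workstation entries, assuming the hypothetical subsystem MYSBS from the previous chapter; ADDWSE and its parameters are as described above:

ADDWSE SBSD(QGPL/MYSBS) WRKSTNTYPE(*ALL) JOBD(*USRPRF) AT(*SIGNON)

or, allocating by generic name instead (remember that you cannot mix WRKSTNTYPE and WRKSTN entries in one subsystem):

ADDWSE SBSD(QGPL/MYSBS) WRKSTN(PGMR*) JOBD(*USRPRF) AT(*SIGNON)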

But how can you use workstation entries? Let's say you want to process all your interactive jobs in subsystem QINTER. In such a situation, you could create workstation entries using generic names, or simply specify WRKSTNTYPE(*ALL). When you look at the default workstation entries for QINTER, you see two sets of values. The first set of values tells the system to allocate all workstations to subsystem QINTER when the subsystem is started. The second set of values tells the system to let the console transfer into the subsystem, but not to allocate the device.

Now imagine that you want to establish an interactive environment for two subsystems: QINTER (for all end-user workstations) and QPGMR (for all programmer workstations). You supply WRKSTNTYPE(*ALL) for subsystem QINTER and WRKSTN(PGMR*) for subsystem QPGMR. How can you make sure the system allocates the workstations to the correct subsystem? To ensure that each workstation is allocated to the proper subsystem, you should start QINTER first; the system will allocate all workstations to QINTER. Next, start QPGMR, which will then allocate (from QINTER) only the workstations whose names begin with "PGMR". In other words, you can allocate programmers' workstations to the proper subsystem by first giving them names like PGMR01 or PGMR02 and then creating a workstation entry that specifies WRKSTN(PGMR*). Every workstation has its rightful place -- by simply using the system to do the work.

Conflicting Workstation Entries
Can workstation entries in different subsystems conflict with each other? You bet they can! Consider what happens when two different subsystems have workstation entries that allocate the same device. If AT(*SIGNON) is specified in both workstation entries, the first subsystem started will allocate the device. When the system starts another subsystem with a workstation entry that applies to that same device (with AT(*SIGNON) specified), the second subsystem will try to allocate it. If no user is signed on to the workstation, the second subsystem will allocate the device, and after a brief delay the device will show a sign-on display.

What about a multiple subsystem environment for interactive jobs? Let's say you want to configure three subsystems: one for programmers (PGMRS), one for local end-user workstations (LOCAL), and one for remote end-user workstations (REMOTE). Perhaps you're thinking you can create individual workstation entries for each device, but such a method would be a nightmare to maintain, and it would require you to end the subsystem each time you added a new device. Likewise, it would be impractical to use the WRKSTNTYPE attribute, because defining types does not necessarily define specific locations for certain workstations. So you have only two good options for ensuring that the correct subsystem allocates the devices. One is to name your various workstations so you can use generic WRKSTN values in the workstation entry. You might preface all local end-user workstation names with ADMN and LOC and then create workstation entries in the local subsystem using WRKSTN(ADMN*) and WRKSTN(LOC*). For the remote subsystem, you could continue to create workstation entries using generic names like the ones described above. This arrangement isn't all bad. Your second option is to use routing entries to reroute workstation jobs to the correct subsystem (I will explain how to do this in the next chapter). Either way, you will need to read on to learn how subsystems allocate workstations to ensure that the workstations in the programmer and local subsystems are allocated properly.

What about you? Can you see how your configuration is set up to let interactive jobs process? Take a few minutes to examine the workstation entries in your system's subsystems. You can use the DSPSBSD (Display Subsystem Description) command to display the work entries that are part of the subsystem description.

Job Queue Entries
Job queue entries control job initiation on your system and define how batch jobs enter the subsystem for processing. To submit jobs for processing, you must assign one or more job queues to a subsystem. A job queue entry associates a job queue with a subsystem; the subsystem searches this job queue to receive jobs for processing. The attributes of a job queue entry are as follows:

• JOBQ (job queue name)
• MAXACT (maximum number of active jobs from this job queue)
• SEQNBR (sequence number used to determine order of selection among all job queues)
• MAXPTYn (maximum number of active jobs with this priority)

The JOBQ attribute, which is required, defines the name of the job queue you are attaching to the subsystem. You can name only one job queue for a job queue entry, but you can define multiple job queue entries for a subsystem.

The MAXACT attribute defines the maximum number of jobs that can be active in the subsystem from the job queue named in this entry. The default for MAXACT is 1, which lets only one job at a time from this job queue process in the subsystem. This attribute controls only the maximum number of jobs allowed into the subsystem from the job queue; the MAXACT (yes, same name) attribute of the subsystem description controls the maximum number of jobs in the subsystem from all entries (e.g., job queue and communications entries).

You can use the SEQNBR attribute to sequence multiple job queue entries associated with the subsystem. The subsystem searches each job queue in the order specified by the SEQNBR attribute of each job queue entry. The default for this attribute is 10. When defining multiple job queue entries, you should determine the appropriate sequence numbers to prioritize the job queues.

The MAXPTYn attribute is similar to the MAXACT attribute except that MAXPTYn controls the number of active jobs from a job queue that have the same priority (e.g., MAXPTY1 defines the maximum number of active jobs with priority 1, MAXPTY2 defines the maximum number for jobs with priority 2). The default for MAXPTY1 through MAXPTY9 is *NOMAX.

To illustrate how job queue entries work together to create a proper batch environment, Figure 18.1 shows a scheme that includes three subsystems: DAYSBS, NIGHTSBS, and BATCHSBS. DAYSBS processes daytime, short-running end-user batch jobs. NIGHTSBS processes nighttime, long-running end-user batch jobs. BATCHSBS processes operator-submitted requests and program compiles. To create the batch work environment in Figure 18.1, you first create the subsystems using the following CRTSBSD (Create Subsystem Description) commands:

CRTSBSD SBSD(QGPL/DAYSBS) POOL((1 *BASE) (2 400 1)) MAXACT(1)
CRTSBSD SBSD(QGPL/NIGHTSBS) POOL((1 *BASE) (2 2000 2)) MAXACT(2)
CRTSBSD SBSD(QGPL/BATCHSBS) POOL((1 *BASE) (2 1500 3)) MAXACT(3)

Notice that each subsystem has an established maximum number of active jobs (MAXACT(n)). The maximum limit matches the activity level specified in the subsystem pool definition so that each active job is assigned an activity level without having to wait for one.

The next step is to create the appropriate job queues with the following CRTJOBQ (Create Job Queue) commands:

CRTJOBQ JOBQ(QGPL/DAYQ)
CRTJOBQ JOBQ(QGPL/NIGHTQ)
CRTJOBQ JOBQ(QGPL/PGMQ)
CRTJOBQ JOBQ(QGPL/BATCHQ)

Then, add the job queue entries to associate the job queues with the subsystems:

ADDJOBQE SBSD(DAYSBS) JOBQ(DAYQ) MAXACT(*NOMAX) SEQNBR(10)
ADDJOBQE SBSD(NIGHTSBS) JOBQ(NIGHTQ) MAXACT(*NOMAX) SEQNBR(10)
ADDJOBQE SBSD(BATCHSBS) JOBQ(PGMQ) MAXACT(1) SEQNBR(10)
ADDJOBQE SBSD(BATCHSBS) JOBQ(BATCHQ) MAXACT(2) SEQNBR(20)

Now let's walk through this batch work environment. Subsystem DAYSBS is a simple configuration that lets one job queue feed jobs into the subsystem. Because the MAXACT attribute value of DAYSBS is 1, only one job filters into the subsystem at a time -- despite the fact that you specified the attribute MAXACT(*NOMAX) for the DAYQ job queue entry. There's no real pizazz to this entry: first come, first served -- or, more accurately, first come, first queued!

The configuration of NIGHTSBS is similar to the configuration of DAYSBS, except that it lets two jobs process at the same time. This subsystem is inactive during the day and starts at night via the STRSBS (Start Subsystem) command. During the day, application programs can send batch jobs to the NIGHTQ job queue, where they wait to process at night. When NIGHTSBS starts, the system allocates job queue NIGHTQ and jobs can be processed. Later, to let more jobs from the job queue process without having to re-create the job queue entry to modify MAXACT, you can change the subsystem pool size and activity level.

To show you how job queues can work together to feed into one subsystem, I configured the BATCHSBS subsystem with two job queue entries. Job queue entry PGMQ lets one job from that queue be active (MAXACT(1)), while job queue entry BATCHQ lets two jobs be active (MAXACT(2)). Notice that BATCHSBS supports a maximum of three jobs (MAXACT(3)).

When a subsystem starts, the job queues defined in the job queue entries are allocated. When a subsystem is inactive, no job queues are allocated and no jobs are processed. As with workstation entries, job queue entries can conflict if you define the same job queue as an entry for more than one subsystem. When a job queue is allocated to an active subsystem, that job queue cannot be allocated to another subsystem until the first subsystem ends.
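If you do later want to let more jobs from a queue into a subsystem, you can also adjust the entry in place. A minimal sketch, using the NIGHTSBS/NIGHTQ pair from Figure 18.1 (CHGJOBQE is the standard Change Job Queue Entry command):

CHGJOBQE SBSD(QGPL/NIGHTSBS) JOBQ(QGPL/NIGHTQ) MAXACT(4)

Remember that the subsystem's own MAXACT value and the pool's activity level still cap how many of those jobs actually run concurrently.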
Communications Entries
After you establish a physical connection between remote sites, you need a communications entry to link the remote system with your subsystem; the communications entry enables the subsystem to process the program start request. If there are no communications entries, the system rejects any program start request. A communications entry has the following attributes:

• DEV (name or type of communications device)
• RMTLOCNAME (remote location name)
• JOBD (job description name)
• DFTUSR (default user profile name)
• MODE (mode description name)
• MAXACT (maximum number of jobs active with this entry)

The DEV attribute specifies the particular device (e.g., COMMDEV or REMSYS) or device type (e.g., *APPC) needed for communications. The RMTLOCNAME attribute specifies the remote location name you define when you use the CRTDEVxxxx command to create the communications device. There is no default for the DEV or the RMTLOCNAME attribute; as with the WRKSTNTYPE and WRKSTN attributes, you must specify one or the other, but not both.

The next two attributes, JOBD and DFTUSR, are crucial. JOBD specifies the job description to associate with this entry. As with workstation entries, you should use the default value *USRPRF to ensure that a user profile is used and that the system uses the job description associated with the user making the program start request; using a specific job description can cause a security problem if that job description names a default user. DFTUSR defines the default user for the communications entry. You should specify *NONE for this attribute to ensure that any program start request supplies a valid user profile and password.

The MODE attribute defines specific communications boundaries and variables. For more information about the MODE attribute, see the CRTMODD (Create Mode Description) command description in IBM's AS/400 Programming: Control Language Reference (SC41-0030).

The MAXACT attribute defines the maximum number of program start requests that can be active at any time in the subsystem for this communications entry.

You can add a communications entry by using the ADDCMNE (Add Communications Entry) command, as in the following example:

ADDCMNE SBSD(COMMSBS) RMTLOCNAME(NEWYORK) JOBD(*USRPRF) +
        DFTUSR(*NONE) MODE(*ANY) MAXACT(*NOMAX)

If you are communicating already and you want to know what entries are configured, use the DSPSBSD (Display Subsystem Description) command to find out.

Prestart Job Entries
The prestart job entry goes hand-in-hand with the communications entry, telling the subsystem which program to start when the subsystem itself is started. The program does not execute -- the system simply performs all the opens and initializes the job named in the prestart job entry and then waits for a program start request for that particular program. When the communications entry receives a program start request (an EVOKE) and processes the request, it compares the program evoke to the prestart job program defined. The two key attributes of the prestart job entry are PGM and JOBD: the PGM attribute specifies the program to use, and the JOBD attribute specifies the job description to be used. The prestart job entry is the only work entry that defines an actual program and job class to be used. (Other jobs get their initial routing program from the routing entries that are part of the subsystem description.) To add a prestart job entry, use an ADDPJE (Add Prestart Job Entry) command similar to the following:

ADDPJE SBSD(COMMSBS) PGM(OEPGM) JOBD(OEJOBD)

Then, if the program evoke is also OEPGM, the system has no need to start a job, because the prestart job is already started; the system starts the job by using the prestart program that is ready and waiting, thus saving valuable time in program initialization.

Autostart Job Entry
An autostart job entry specifies the job to be executed when the subsystem starts. The job, defined in the autostart job entry, starts each time the associated subsystem starts, ensuring that the job runs whether or not anyone remembers to submit it. For instance, if you want to print a particular history report each time the system is IPLed, you can add the following autostart job entry to the controlling subsystem description:

ADDAJE SBSD(sbs_name) JOB(HISTORY) JOBD(MYLIB/HISTJOBD)

The JOB and JOBD attributes are the only ones the autostart job entry defines, which means that the job description must use the request data or routing data to execute a command or a program. In this case, HISTJOBD would have the correct RQSDTA (Request Data) attribute to call the program that generates the history report (e.g., RQSDTA('call histpgm')). A sketch of such a job description follows below.
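Here is a minimal sketch of creating that job description; the library, user, and program names are hypothetical:

CRTJOBD JOBD(MYLIB/HISTJOBD) USER(QPGMR) +
        RQSDTA('CALL MYLIB/HISTPGM') +
        TEXT('Autostart job - history report')

The autostart job entry shown above would then name this job description, and the request data calls the history program each time the subsystem starts.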
OS/400 itself uses an autostart job entry to assist the IPL process. When you examine either the QBASE or QCTL subsystem description (using the DSPSBSD command), you will find that an autostart job entry exists to submit the QSTRUPJD job using the job description QSYS/QSTRUPJD. This job description uses the request data to call a program used in the IPL process.

One reminder: if you decide to create or modify the system-supplied work management objects such as subsystem descriptions and job queues, you should place the new objects in a user-defined library. By having your new objects in your own library, you can easily document any changes. When you are ready to start using your new objects, you can change the system startup program QSYS/QSTRUP to use your new objects for establishing your work environment (to change the system startup program, you modify the CL source and recompile the program).

Where Jobs Go
Now we've seen where jobs come from on the AS/400 -- but where do they go? I'll address that question in the next chapter, when we look at how routing entries provide the final gateway to subsystem processing.

Chapter 19 - Demystifying Routing
So far, I have explained how jobs are defined and started on the AS/400. We've seen that jobs are processed in a subsystem, and we've seen how work entries control how jobs gain access to the subsystem. Now we need to talk about routing, which determines how jobs are processed after they reach the subsystem. I am constantly surprised by the number of AS/400 programmers who have never fully examined routing; in fact, it's almost as though routing is some secret whose meaning is known by only a few. In this chapter, I concentrate on subsystem routing entries to prove to you, once and for all, that you have nothing to fear!

To understand routing, it might help to think of street signs, which control the flow of traffic from one place to another. The AS/400 uses routing to determine where jobs go, and it uses the following routing concepts to process each and every job:

• Routing data -- A character string, up to 80 characters long, that determines the routing entry the subsystem will use to establish the routing step.
• Routing entry -- A subsystem description entry, which you create, that determines the program and job class the subsystem will use to establish a routing step.
• Routing step -- The processing that starts when the routing program executes; this is where the system combines all the resources needed to process work.

To execute in a subsystem, AS/400 jobs must have routing data; routing data determines which routing entry the subsystem will use. For most jobs, routing data is defined by either the RTGDTA (Routing Data) parameter of the job description associated with the job or the RTGDTA parameter of the SBMJOB (Submit Job) command. Now let's look at each of these job types to see how routing data is defined for each.

Routing Data for Interactive Jobs
Users gain access to a given subsystem for interactive jobs via workstations, defined by workstation entries. The key to determining routing data for an interactive job is the JOBD (Job Description) parameter of the workstation entry that the subsystem uses to allocate the workstation being used. If the value for the JOBD parameter is *USRPRF, the routing data defined on the job description associated with the user profile is used as the routing data for the interactive job. If the value of the JOBD parameter of the workstation entry is *SBSD (which instructs the system to use the job description that has the same name as the subsystem description) or an actual job description name, the routing data of the specified job description will be used as the routing data for the interactive job.

For example, let's say you create a user profile using the CRTUSRPRF (Create User Profile) command and do not enter a specific job description. The system uses QDFTJOBD (the default job description) for that user profile. When the user signs on to a workstation that uses a subsystem workstation entry where *USRPRF is defined as the JOBD attribute, the routing data for that interactive job is the routing data defined on the job description associated with the user profile. Executing DSPJOBD QDFTJOBD reveals that the RTGDTA attribute has a value of QCMDI. In this case, then, the JOBD would be QDFTJOBD, and the routing data would be QCMDI.

Routing Data for Batch Jobs
Establishing routing data for a batch job is simple: you use the RTGDTA parameter of the SBMJOB (Submit Job) command. The RTGDTA parameter on this command has four possible values:

• JOBD -- the routing data of the job description. Instead of using the job description associated with the user profile, the system examines the JOBD parameter of the SBMJOB command and then uses either the user profile's job description or the actual job description named in the parameter.
• QCMDB -- the default routing data used by the IBM-supplied subsystems QBASE or QBATCH to route batch jobs to the CL processor QCMD (more on this later).
• RQSDTA -- the value specified in the RQSDTA (Request Data) parameter on the SBMJOB command.
• routing-data -- up to 80 characters of user-defined routing data.

Figure 19.2 provides an overview of how the routing data for a batch job is established. A user submits a job via the SBMJOB command, and the resulting job (012345/USER_XX/job_name) is submitted to process in a subsystem. We can pick any of the four possible values for the RTGDTA attribute on the SBMJOB command and follow the path to see how that value eventually determines the routing data for the submitted batch job.

If you submit a job using the SBMJOB command without modifying the default value for the RTGDTA parameter, the routing data is always QCMDB. How do I know that? Because, as I stated above, the value QCMDB is the default -- as long as this default has not been changed via the CHGCMDDFT (Change Command Default) command. To submit a batch job that sends the operator the message "hi," you would enter the command

SBMJOB JOB(MESSAGE) CMD(SNDMSG MSG('hi') TOMSGQ(QSYSOPR))

This batch job would use the routing data of QCMDB.

If you specify RTGDTA(*JOBD), the routing data of the job description will be used as the routing data for the job. If you define the RTGDTA parameter as *RQSDTA, the job uses the value specified in the RQSDTA (Request Data) parameter of the SBMJOB command as the routing data. (Because the request data represents the actual command or program to process, specifying *RQSDTA is practical only if specific routing entries have been established in a subsystem to start specific routing steps based on the command or program being executed by a job.) Finally, if you define the RTGDTA parameter as QCMDB or any user-defined routing data, that value becomes the routing data for the job. Now examine the following SBMJOB command:

SBMJOB JOB(PRIORITY) CMD(CALL USERPGM) RTGDTA('high-priority')

In this example, a routing data character string ('high-priority') is defined, and that string becomes the routing data for the submitted job. By now you are probably wondering just how modifying the routing data might change the way a job is processed. We'll get to that in a minute.

Figure 19.3 shows how the subsystem uses routing data for an interactive job. When USER_XX signs on to workstation DSP01, the interactive job is started, and the routing data (QCMDI) is established.

Routing Data for Autostart, Communications, and Prestart Jobs
As you may recall from Chapter 18, an autostart job entry in the subsystem description consists of just two attributes: the job name and the specific job description to be used for processing. The routing data of a particular job description is the only source for the routing data of an autostart job.

For communications jobs (communications evoke requests), the subsystem builds the routing data from the program start request, which always has the value PGMEVOKE starting in position 29. The routing data is not taken from a permanent object on the AS/400 but is instead derived from the program start request that the communications entry in the subsystem receives and processes.

Prestart jobs use no routing data. The prestart job entry attribute PGM, as I said in Chapter 18, specifies the program to start in the subsystem.

The Importance of Routing Data
When a job enters a subsystem, the subsystem looks for routing data that matches the compare value in one or more routing entries of the subsystem description -- similar to the way you would check your written directions to see which highway exit to take. The subsystem seeks a match to determine which program to use to establish the routing step for that job. (The search is based on the starting position specified in the routing entry and the literal specified as the compare value.) The system compares the routing data in the job to the compare value of each routing entry until it finds a match.

In Figure 19.3, the compare value for the first routing entry (SEQNBR(10)) and the routing data for job 012345/USER_XX/DSP01 are the same. Because the system has found a match, it executes the program defined in the routing entry (QCMD in library QSYS) to establish the routing step for the job in the subsystem; the processing of this program is the routing step for that job. In this case, the specified class is QINTER. (Notice that routing entry SEQNBR(9999) in Figure 19.3 has a compare value of *ANY, which always matches.) Jobs that require routing data (all but prestart jobs) follow this same procedure when being started in the subsystem.

Now that you have the feel of how this process works, let's talk about routing entries and associated job classes. In Chapter 18, I said that routing entries identify which programs to call. In addition to establishing the routing step, routing entries define which storage pool the job will be processed in and specify the execution attributes the job will use for processing. Routing entries, typically defined when you create a subsystem, are defined as part of the subsystem description via the ADDRTGE (Add Routing Entry) command. A routing entry consists of a number of attributes: sequence number, compare value, starting position, program, class, maximum active, and pool ID. Each attribute is defined when you use the ADDRTGE command to add a routing entry to a subsystem description; a sketch of typical commands follows below. It's important that you understand these attributes and how you can use them to create the routing entries you need for your subsystems.

The sequence number is simply a basic numbering device that determines the order in which routing entries will be compared against routing data to find a match. When assigning a sequence number, you need to remember two rules. First, when using similar compare values, use the sequence numbers to order the values from most to least specific. For example, you would arrange the values PGMR, PGMRS, and PGMRS1 this way:

Sequence Number   Compare Value
10                'PGMRS1'
20                'PGMRS'
30                'PGMR'

Placing the least specific value (PGMR) first would cause a match to occur even when the intended value (e.g., PGMRS1) is more specific. Second, always use the compare value *ANY with SEQNBR(9999) so it will be used only when no other match can be found.
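Here is a minimal sketch of routing entries that follow these two rules; the subsystem MYSBS and the class names are hypothetical, and CMPVAL's second element is the starting position:

ADDRTGE SBSD(QGPL/MYSBS) SEQNBR(10) CMPVAL('PGMRS1' 1) PGM(QSYS/QCMD) CLS(QGPL/PGMRCLS) POOLID(2)
ADDRTGE SBSD(QGPL/MYSBS) SEQNBR(20) CMPVAL('PGMRS' 1) PGM(QSYS/QCMD) CLS(QGPL/PGMRCLS) POOLID(2)
ADDRTGE SBSD(QGPL/MYSBS) SEQNBR(30) CMPVAL('PGMR' 1) PGM(QSYS/QCMD) CLS(QGPL/PGMRCLS) POOLID(2)
ADDRTGE SBSD(QGPL/MYSBS) SEQNBR(9999) CMPVAL(*ANY) PGM(QSYS/QCMD) CLS(QGPL/DFTCLS) POOLID(1)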

The compare value and starting position attributes work together to search a job's routing data for a match. The compare value can be any characters you want (up to 80). For example, if the value (ROUTE 5) is used, the system searches the job's routing data starting in position 5 for the value ROUTE. The important thing is to use a compare value that matches some routing data that identifies a particular job or job type.

The PGM attribute determines what program is called to establish the routing step for the job being processed. Normally, this program is QCMD (the IBM CL processor), but it can be any program. If the routing program is a user-defined program, the program simply executes, regardless of the initial program or specific request data for a job. When QCMD is the routing program, it waits for a request message to process. For an interactive job, the request message would be the initial program or menu request; for a batch job, it would be the request data (i.e., the command or program to execute). The routing entry can thus be used to make sure that a specific program is executed when certain routing data is found; later in this chapter, I explain how this might be beneficial to you. Why go to this trouble? Because you can use this matching routing entry to determine a lot about the way a job is processed on the system (e.g., class, subsystem storage pool, run priority, and time slice).

Runtime Attributes
The CLASS (job class) is an important performance-related object that defines the run priority of a job as well as its time slice. (The time slice is the length of time, in CPU milliseconds, a job will process before being bumped from the activity level to wait while another job executes a time slice.) A routing entry establishes a job's run priority and time slice much the way speed limit or yield signs control the flow of traffic. In Figure 19.3, all the routing entries use class QINTER, which is defined to represent the run priority and time slice typical for an interactive job. Because you would not want to process a batch job using these same values, the system also has an IBM-supplied class, called QBATCH, that defines attributes more typical for batch job processing. To route jobs with the correct routing program and job class, the system-supplied routing data for the default batch job description QBATCH is QCMDB. If you look at the subsystem description for QBASE or QBATCH, you will find the following routing entry:

Sequence Number   Compare Value   Program     Class
10                'QCMDB'         QSYS/QCMD   QBATCH

This entry uses program QCMD and directs the system to use class QBATCH to define the runtime attributes for jobs having routing data QCMDB. You can use different classes to create the right performance mix. For more information on these performance-related attributes of the CLASS object, see IBM's AS/400 Programming: Work Management Guide (SC41-8078).

MAXACT determines the maximum number of active jobs that can use a particular routing entry. You will rarely need to change this attribute's default (*NOMAX).

The last routing entry attribute is the POOLID (subsystem storage pool ID), which tells the system which subsystem storage pool to use for processing the job. As I explained in Chapter 17, the subsystem definition includes the specific storage pools the subsystem will use. These storage pools are numbered in the subsystem, and these numbers are used only within that particular subsystem description; they do not match the numbering scheme of the system pools. Beginners commonly make the mistake of leaving the default value in the routing entry definition when creating their own subsystems and defining their own routing entries. Look at the following pool definition and abbreviated routing entry:

Pool Definition: ((1 *BASE) (2 10000 20))

Sequence Number   Compare Value   Pool ID
10                'QCMDI'         1

This routing entry tells the system to use subsystem pool number 1 (*BASE). Considering that 10,000 KB of storage is set aside in pool number 2, this routing entry is probably incorrectly specifying pool number 1. Just remember to compare the pool definition with the routing entry definition to ensure that the correct subsystem pool is being used.
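To create a class of your own for such a routing entry, here is a minimal sketch with a hypothetical name and values; RUNPTY and TIMESLICE are the attributes just described:

CRTCLS CLS(QGPL/NIGHTCLS) RUNPTY(60) TIMESLICE(5000) +
       TEXT('Low-priority class for long batch jobs')

A routing entry would then point to this class via its CLS attribute.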

Is There More Than One Way to Get There?
So far, we've discussed how routing data is created, how routing entries are established to search for that routing data, and how routing entries establish a routing step for a job and control specific runtime attributes of a job. Now for one more hurdle: a job can have more than one routing step. After a job is started, you can reroute it using the RRTJOB (Reroute Job) command, which lets you modify the job's current routing data to establish a new routing step, or you can transfer it to another subsystem using the TFRJOB (Transfer Job) command. Both commands have the RTGDTA parameter. But why would you want to establish a new routing step? One reason might be to use a new class to change the runtime attributes of the job. Suppose you issue the following command during the execution of a job:

RRTJOB RTGDTA('FASTER') RQSDTA(*NONE)

Your job would be rerouted in the same subsystem but use the value FASTER as the value to be compared in the routing entries.

Do-It-Yourself Routing
To reinforce your understanding of routing and tie together some of the facts you've learned about work management, consider the following example. Let's say you want to place programmers, OfficeVision/400 (OV/400) users, and general end users in certain subsystems based on their locations or functions. Figures 19.4a through 19.4f describe the objects and attributes needed to define such an environment.

Figure 19.4a lists three job descriptions that have distinct routing data. OFFICEJOBD and PGMRJOBD have QOFFICE and QPGMR specified, respectively, as their routing data. User-defined INTERJOBD has QINTER as the routing data. (Note that the routing data need not match the job description name.)

You need to do more than just separate the workstations; you need to separate the users. To enable users to work in separate subsystems, you first need to create or modify their user profiles and supply the appropriate job description based on the subsystem in which each user should work. OV/400 users would have OFFICEJOBD, general end users would have INTERJOBD, and programmers would have the job description PGMRJOBD -- no matter what workstation they are using at the time.

Next, you must build subsystem descriptions that use the routing entries associated with the job descriptions. Figure 19.4b shows some sample subsystem definitions. All three subsystems use the WRKSTNTYPE (workstation type) entry with the value *ALL. However, only the workstation entry in QINTER uses the AT(*SIGNON) entry to tell the subsystem to allocate the workstations. This means that subsystem QINTER allocates all workstations, and QOFFICE and QPGMR (both with AT(*ENTER)) allocate workstations only as jobs are transferred into those subsystems. Also, notice that each workstation entry defines JOBD(*USRPRF), so that the routing data from the job descriptions of the user profiles will be the routing data for the job.

In Figure 19.4c, the routing entries do all the work. After a user signs on to a workstation in subsystem QINTER, the first routing entry looks for the compare value QOFFICE. When it finds QOFFICE, program QOFFICE in library SYSLIB is called to establish the routing step. In our example, program QOFFICE simply executes the TFRJOB command to transfer this particular job into subsystem QOFFICE (a sketch of such a transfer program follows below). If you look carefully at Figure 19.4c, you will see that the TFRJOB command also modifies the routing data to become QCMDI, so that when the job enters subsystem QOFFICE, routing data QCMDI matches the corresponding routing entry and uses program QCMD and class QOFFICE. Figure 19.4d shows how class QOFFICE might be created to provide the performance differences needed for OV/400 users.
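Here is a minimal sketch of what a transfer program such as SYSLIB/QOFFICE might contain. The job queue name OFFICEQ is hypothetical; the MONMSG/RRTJOB fallback is the one described in this example:

PGM
  TFRJOB JOBQ(QGPL/OFFICEQ) RTGDTA(QCMDI)
  MONMSG MSGID(CPF0000) EXEC(RRTJOB RTGDTA(QCMDI))
ENDPGM

When TFRJOB succeeds, the job enters QOFFICE with routing data QCMDI; if it fails, the RRTJOB command re-establishes a routing step in the current subsystem.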

The next routing entry in the QINTER subsystem looks for compare value QPGMR. When it finds QPGMR, it calls program QPGMR (Figure 19.4e) to transfer the job into subsystem QPGMR. Look again at Figure 19.4b: subsystems QOFFICE and QPGMR use similar routing entries to make sure each job enters the correct subsystem. Notice that each subsystem has a routing entry that searches for QINTER. If this compare value is found, program QINTER (Figure 19.4f) is called to transfer the job into subsystem QINTER. If an error occurs on the TFRJOB command, the MONMSG CPF0000 EXEC(RRTJOB RTGDTA(QCMDI)) command reroutes the job in the current subsystem. The same is true for routing data *ANY. Routing data QCMDI calls program QCMD and then processes the initial program or menu of the user profile.

As intimidating as they may at first appear, routing entries are really plain and simple. It's not that the concept is difficult to grasp; it's just that there are quite a few facts to digest. Once you understand routing entries, you will find that you can use them as solutions to numerous work management problems. For instance, you can use them to intercept jobs as they enter the subsystem and then control the jobs using various run-time variables. I strongly recommend that you take the time to learn how your system uses routing entries. Start by studying subsystem descriptions to learn what each routing entry controls.

Chapter 20 - File Structures
Getting a handle on AS/400 file types can be puzzling. If you count the various types of files the AS/400 supports, how many do you get? The answer is five: the AS/400 supports five types of files -- database files, source files, device files, DDM files, and save files. However, if you count the file subtypes -- all the objects designated as OBJTYPE(*FILE) -- you get 10. So if you count types, you get five; and if you count subtypes, you get 10. Still puzzled? Figure 20.1 lists the five file types that exist on the AS/400, as well as the 10 subtypes and the specific CRTxxxF (Create xxx File) commands used to create them. Each file type (and subtype) contains unique characteristics that provide unique functions on the AS/400. In this chapter, I look at the various types of files and describe the way each file type functions.

Structure Fundamentals
If there is any one AS/400 concept that is the key to unlocking a basic understanding of application development, it is the concept of AS/400 file structure. A file is an AS/400 object whose object type is *FILE. OS/400 uses the description of the file to maintain and enforce each file's object identity. So let's start by looking at how files are described. On the AS/400, all files are described at four levels (Figure 20.2).

First is the object-level description. The AS/400 maintains the same object description information for a file (e.g., its library and size) as it does for any other object on the system. You can look at the object-level information with the DSPOBJD (Display Object Description) command.

The second level of description the system maintains for *FILE objects is a file-level description. This level describes the attributes or characteristics of a particular file and is embedded within the file itself. The file description is created along with the file when you execute a CRTxxxF command. The file subtype is one of the attributes maintained as part of the file description. This allows OS/400 to present the correct format for the description when you use the DSPFD (Display File Description) command, which displays or prints a file description. It also provides OS/400 with the ability to determine which commands can operate on which types of files: the DLTF (Delete File) command, for example, works for any type of file on the system, but the ADDPFM (Add Physical File Member) command works only for physical files.

The third level of descriptive information the system maintains for files is the record-level description. This level describes the various record formats that exist in the file. A record format describes a set of fields that make a record. All files have at least one record format; logical files can have multiple record formats (we'll cover this topic in a future chapter). Applications perform I/O by using specific record formats. While there is a DSPOBJD command and a DSPFD command, there is no Display Record Description command; you use the DSPFD command and the DSPFFD (Display File Field Description) command to display or print the record-level information.

The final level of descriptive information the system maintains for files is the field-level description. Field descriptions do not exist for all types of files: tape files, diskette files, DDM files, and save files have no field descriptions because they have no fields. (In the case of DDM files, the field descriptions of the target system file are used.) For the remaining files -- physical, logical, source, display, printer, and ICF -- a description of each field and field attribute is maintained. If the fourth level of description -- field descriptions -- is not used when creating the file, the record format is described by a specific record length. An application can further break the record format into fields by either explicitly defining those fields within the application or by working with the external field definitions if they are defined for a record format. You can use the DSPFFD command to display or print the field-level descriptions for a file.

Data Members: A Challenge
Now that you know how files are described, you need a challenge! We now need to consider a particular organizational element that applies only to database and source files, the two types of files that actually contain records of data. Examine Figure 20.3, which introduces you to the concept of the file data member -- an additional element of file organization that has caused even the best application programmers to cry in anguish, until they discover the truth, just as Martin Luther did. Now that I have your attention (and you're trying to remember just who Martin Luther was -- look under Church History: The Reformation), I will impart the truth to you and save you any future anguish.

You traditionally think of a file as containing a set of records, and usually an AS/400 database file has a description and a data member that contains all the records that exist in that database file. You may be saying, "Wait, you don't have to tell us that. Each file is described (as discussed), and each file has records, right?" So far, so good -- but I wish it were that simple. Now comes the tricky part. Believe it or not, AS/400 database and source files can have no data members. If you create a physical file using the CRTPF (Create Physical File) command and take the defaults for member name and maximum number of members, which are MBR(*FILE) and MAXMBRS(1), you will create a file that contains only one data member, and the name of that member will be the same name as the file itself. If, however, you create a physical file and specify MBR(*NONE), the file will be created without any data member for records. If you try to add records to that file, the system will issue an error stating that no data member exists; you would have to use the ADDPFM command to create a data member in the file before you could add records to it.

At the other end of the scale is the fact that you can have multiple data members in a file. A source file offers a good example. Figure 20.4 represents the way a source file is organized. Each source member is a different data member in the file. All members have the same description in terms of record format and fields, but each member contains unique data, and each member has a varying number of records. Whether you are using PDM (Programming Development Manager) or SEU (Source Entry Utility), when you create a new source member, you are actually creating another data member in that physical source file, and by specifying the name of the source member you want to work with, you are instructing the software to override the file to use that particular member for record retrieval.

Consider another example -- a user application that views both current and historical data by year. Each year represents a unique set of records, so this type of application might use a database file to store each year's records in separate data members. Figure 20.5 represents how this application might use a single physical file to store these records: each year has a unique data member, using the year itself to construct the name of the data member. The applications that access this data must use the OVRDBF (Override with Database File) command to open the correct data member for record retrieval.
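Here is a minimal sketch of the member scenarios just described, with hypothetical file and member names:

CRTPF  FILE(MYLIB/HISTORY) RCDLEN(80) MBR(*NONE) MAXMBRS(*NOMAX)  /* no data member yet      */
ADDPFM FILE(MYLIB/HISTORY) MBR(YR2000)                            /* one member per year     */
ADDPFM FILE(MYLIB/HISTORY) MBR(YR2001)
OVRDBF FILE(HISTORY) MBR(YR2001)                                  /* open the 2001 member    */
DSPPFM FILE(MYLIB/HISTORY) MBR(YR2001)                            /* view that member's data */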

Wow! No database members... one database member... multiple database members... Why? That's a fair question. Using multiple data members provides a unique manner to handle data that uses the same record format and same field descriptions and yet must be maintained separately for business reasons. One set of software can be written to support the effort, but the data can be maintained, even saved, separately. Having sorted through the structure of AS/400 files and dealt with data members, let's look specifically at the types of files and how they are used.

Database Files
Database files are AS/400 objects that actually contain data or provide access to data. Two types of files are considered database files — physical files and logical files. A physical file, denoted as TYPE(*FILE) and ATTR(PF), has file-, record-, and field-level descriptions and can be created with or without using externally described source specifications. Physical files — so called because they contain your actual data (e.g., customer records) — can have only one record format. The data entered into the physical file is assigned a relative record number based on arrival sequence. As I indicated earlier, database files can have multiple data members, and special program considerations must be implemented to ensure that applications work with the correct data members. You can view the data that exists in a specific data member of a file using the DSPPFM (Display Physical File Member) command. A logical file, denoted as TYPE(*FILE) and ATTR(LF), is created in conjunction with physical files to determine how data will be presented to the requester. For those of you coming from a S/36, the nearest kin to a logical file is an index or alternate index. Logical files contain no data but instead are used to specify key fields, select/omit logic, field selection, or field manipulation. The key fields serve to specify the access paths to use for accessing the actual data records that reside in physical files. Logical files must be externally described using DDS and can be used only in conjunction with externally described physical files.
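As a sketch of how the two database file types are created -- assuming DDS source members already exist in a source file named QDDSSRC, with all names hypothetical:

CRTPF FILE(MYLIB/CUSTOMER) SRCFILE(MYLIB/QDDSSRC) SRCMBR(CUSTOMER)     /* physical file: holds the data   */
CRTLF FILE(MYLIB/CUSTBYNAM) SRCFILE(MYLIB/QDDSSRC) SRCMBR(CUSTBYNAM)   /* logical file: keyed access path */

The logical file contains no data of its own; it presents the CUSTOMER records in the sequence its key fields define.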

Source Files
A source file, like QRPGSRC where RPG source members are maintained, is simply a customized form of a physical file; and as such, source files are denoted as TYPE(*FILE) and ATTR(PF). (Note: If you work with objects using PDM, physical data files and physical source files are distinguished by using two specific attributes — PFDTA and PF-SRC.) All source files created using the CRTSRCPF (Create Source Physical File) command have the same record format and thus the same fields. When you use the CRTSRCPF command, the system creates a physical file that allows multiple data members. Each program source is one physical file member. When you edit a particular source member, you are simply editing a specific data member in the file.
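For example (names hypothetical), creating a source file and editing a member in it might look like this:

CRTSRCPF FILE(MYLIB/QRPGSRC) TEXT('RPG source members')
STRSEU SRCFILE(MYLIB/QRPGSRC) SRCMBR(PAYROLL) TYPE(RPG) OPTION(2)

The STRSEU command here creates or edits the PAYROLL member -- that is, one data member -- in the source physical file.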

Device Files
Device files contain no actual data. They are files whose descriptions provide information about how an application is to use particular devices. The device file must contain information valid for the device type the application is accessing. The types of device files are display, printer, tape, diskette, and ICF. Display files, denoted by the system as TYPE(*FILE) and ATTR(DSPF), provide specific information relating to how an application can interact with a workstation. While a display file contains no data, the display file does contain various record formats that represent the screens the application will present to the workstation. Each specific record format can be viewed and maintained using IBM's Screen Design Aid (SDA), which is part of the Application Development Tools licensed program product. Interactive high-level language (HLL) programs include the workstation display file as one of the files to be used in the application. The HLL program writes a display file record format to the screen to present the end user with formatted data and then reads that format from the screen when the end user presses Enter or another appropriate function key. Whereas I/O to a database file accesses disk storage, I/O to a display file accesses a workstation. Printer files, denoted by the system as TYPE(*FILE) and ATTR(PRTF), provide specific information relating to how an application can spool data for output to a writer. The print file can be created with a maximum record length specified and one format to be used with a HLL program and program-described printing, or the print file can be created from external source statements that define the formats to be used for printing. Like display files, the print files themselves contain no data and therefore have no data member associated with them. When an application

program performs output operations to a print file, the output becomes spooled data that can be printed on a writer device. Tape files, denoted by the system as TYPE(*FILE) and ATTR(TAPF), provide specific information relating to how an application can read or write data using tape media. The description of the tape file contains information such as the device name for tape read/write operations, the specific tape volume requested (if a specific volume is desired), the density of the tape to be processed, the record and block length to be used, and other essential information relating to tape processing. Without the use of a tape file, HLL programs cannot access the tape media devices. Diskette files, denoted by the system as TYPE(*FILE) and ATTR(DKTF), are identical to tape files except that these files support diskette devices. Diskette files have attributes that describe the volume to be used and the record and block length. ICF (Intersystem Communications Function) files, denoted by the system as TYPE(*FILE) and ATTR(ICFF), provide specific attributes to describe the physical communications device used for application peer-to-peer communications programming. When a local application wants to communicate with an application on a remote system, the local application turns to the ICF file for information regarding the physical device to use for those communications. The ICF file also contains record formats used to read and write data from and to the device and the peer program.
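Two brief sketches of device file creation (names hypothetical): an externally described display file built from DDS, and a program-described printer file:

CRTDSPF FILE(MYLIB/ORDDSP) SRCFILE(MYLIB/QDDSSRC) SRCMBR(ORDDSP)
CRTPRTF FILE(MYLIB/RPTPRTF) PAGESIZE(66 132) LPI(6) CPI(10)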

DDM Files
DDM (Distributed Data Management) files, denoted by the system as TYPE(*FILE) and ATTR(DDMF), are objects that represent files that exist on a remote system. For instance, if your customer file exists on a remote system, you can create a DDM file on the local system that specifically points to that customer file on the remote system. DDM files provide you with an interface that lets you access the remote file just as you would if it were on your local system. You can compile programs using the file, read records, write records, and update records while the system handles the communications. Figure 20.6 represents a typical DDM file implementation.
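For instance, the customer-file scenario just described might be set up like this (library and remote location names hypothetical):

CRTDDMF FILE(MYLIB/CUSTDDM) RMTFILE(PRODLIB/CUSTOMER) RMTLOCNAME(NEWYORK)

Local programs then open CUSTDDM as though it were a local file, and the system handles the communications with the remote system.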

Save Files
Save files, denoted by the system TYPE(*FILE) and ATTR(SAVF), are a special form of file designed specifically to handle save/restore data. You cannot determine the file-, record-, and field-level descriptions for a save file. The system creates a specific description used for all save files to make them compatible with save/restore operations. Save files can be used to receive the output from a save operation and then be used as input for a restore operation. This works just as performing save/restore operations with tape or diskette, except that the saved data is maintained on disk, which enhances the save/restore process because I/O to the disk file is faster than I/O to a tape or diskette device. Save file data also can be transmitted electronically or transported via a sneaker network or overnight courier network to another system and then restored. We have briefly looked at the various types of files that exist on the AS/400. Understanding these objects is critical to effective application development and maintenance on the AS/400. One excellent source for further reading is IBM's Programming: Data

Chapter 21 - So You Think You Understand File Overrides
"Try using OvrScope(*Job)."

How many times have you heard this advice when a file override wasn't working as intended? Changing your application to use a job-level override may produce the intended results, but doing so is a bit like replacing a car's engine because it has a fouled spark plug. Actually, with a fully functional new engine, the car will always run right again. A job-level override, on the other hand, may or may not produce the desired results, depending on your application's design. And even if the application works today, an ill-advised job-level override coupled with modifications may introduce application problems in the future. If you're considering skipping this article because you believe you already understand file overrides, think again! I know many programmers, some excellent, who sincerely believe they understand this powerful feature of OS/400 — after all, they've been using overrides in their applications for years. However, I've yet to find anyone who does fully understand overrides. So, read on, surprise yourself, and learn once and for all how the system processes file overrides. Then put this knowledge to work to get the most out of overrides in your applications.

Anatomy of Jobs
Before examining file overrides closely, you need to be familiar with the parts of a job's anatomy integral to the function of overrides. The call stack and activation groups both play a key role in determining the effect overrides have in your applications.

Jobs typically consist of a chain of active programs, with one program calling another. The call stack is simply an ordered list of these active programs. When a job starts, the system routes it to the beginning program to execute and designates that program as the first entry in the call stack. If the program then calls another program, the system assigns the newly called program to the second call stack entry. This process can continue, with the second program calling a third, the third calling a fourth, and so on, each time adding the new program to the end of the call stack. The call stack therefore reflects the depth of program calls. Consider the following call stack:

    ProgramA
    ProgramB
    ProgramC
    ProgramD

You can see four active programs in this call stack. In this example, the system called ProgramA as its first program when the job started. ProgramA then called ProgramB, which in turn called ProgramC. Last, ProgramC called ProgramD. Because these are nested program calls, each program is at a different layer in the call stack. These layers are known as call levels. In the example, ProgramA is at call level 1, indicating that it is the first program called when the job started. ProgramB, ProgramC, and ProgramD are at call levels 2, 3, and 4, respectively.

As programs end, the system removes them from the call stack, reducing the number of call levels. For instance, when ProgramD ends, the system removes it from the call stack, and the job then consists of only three call levels. If ProgramC then ends, the job consists of only two call levels, with ProgramA and ProgramB making up the call stack. This process continues until ProgramA ends, at which time the job ends.

So far, you've seen that when one program calls another, the system creates a new, higher call level at which the called program runs. The called program then begins execution, and when it ends, the system removes it from the call stack, returning control to the calling program at the previous call level. That's the simple version, but there's a little more to the picture. It's possible for one program to pass control to another program without the newly invoked program running at a higher call level. For instance, with CL's TfrCtl (Transfer Control) command, the system replaces (in the call stack) the program issuing the command with the program to which control is to be transferred. Not only does this action result in the invoked program running at the same call level as the invoking program, but the invoking program is also completely removed from the chain of programs making up the call stack. Hence, control can't be returned to the program that issued the TfrCtl command. Instead, when the newly invoked program ends, control returns to the program at the immediately preceding call level.
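To make the difference concrete, here's a minimal CL sketch contrasting the two mechanisms; NextPgm is a hypothetical program name, and this illustrates the mechanism rather than reproducing any of the book's figures:

    /* Call starts a new, higher call level; control returns     */
    /* here when NextPgm ends.                                   */
    Call   Pgm(NextPgm)

    /* TfrCtl replaces this program on the call stack; NextPgm   */
    /* runs at this program's call level, and control never      */
    /* returns here.                                             */
    TfrCtl Pgm(NextPgm)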

You may recall that earlier I said that as programs end, the system removes them from the call stack. In reality, when a program ends, the system removes from the call stack not only the ending program but also any program at a call level higher than that of the ending program. You might be thinking about our example and scratching your head, wondering, "How can ProgramB end before ProgramC?" Consider the fact that ProgramD can send an escape message to ProgramB's program message queue. This event results in the system returning control to ProgramB's error handler. This return of control to ProgramB results in the system removing from the call stack all programs at a call level higher than ProgramB — namely, ProgramC and ProgramD. ProgramB's design then determines whether it is removed from the call stack. If it handles the exception, ProgramB is not removed from the call stack; instead, processing continues in ProgramB.

You should also note that under normal circumstances, the call stack begins with several system programs before any user-written programs appear. In fact, system programs will likely appear throughout your call stack. This point is important only to demonstrate that the call stack isn't simply a representation of user-written programs as they are called.

In addition to an understanding of a job's call levels, you need a basic familiarity with activation groups to comprehend file overrides. You're probably familiar with the fact that a job is a structure with its own allocated system resources, such as open data paths (ODPs) and storage for program variables. These resources are available to programs executed within that job but are not available to other jobs. Activation groups, introduced with the Integrated Language Environment (ILE), are a further division of jobs into smaller substructures. As is the case with jobs, activation groups consist of private system resources, such as ODPs and storage for program variables. An activation group's allocated resources are available only to program objects that are assigned to, and running in, that particular activation group within the job. A job can consist of multiple activation groups, none of which can access the resources unique to the other activation groups within the job. For example, although multiple activation groups within a job may open the same file, each activation group can maintain its own private ODP; programs assigned to the same activation group can use the ODP, but programs assigned to a different activation group don't have access to the same ODP.

A complete discussion of activation groups could span volumes. For now, it's sufficient simply to note that activation groups exist, that they are substructures of a job, and that they can contain their own set of resources not available to other activation groups within the job. You assign ILE program objects to an activation group when you create the program objects. Then, when you execute these programs, the system creates the activation group (or groups) to which the programs are assigned.

Override Rules

The rules governing the effect overrides have on your applications fall into three primary areas: the override scope, overrides to the same file, and the order in which the system processes overrides. After examining the details of each of these areas, we'll look at a few miscellaneous rules.

Scoping an Override

An override's scope determines the range of influence the override will have on your applications. You can scope an override to the following levels:

• Call level — A call-level override exists at the level of the process that issues the override, unless the override is issued using a call to program QCmdExc. In such a case, the call level is that of the process that called QCmdExc. A call-level override remains in effect from the time it is issued until the system replaces or deletes it or until the call level in which the override was issued ends.

• Activation group level — An activation-group-level override applies to all programs running in the activation group associated with the issuing program, regardless of the call level in which the override is issued. An activation-group-level override remains in effect from the time the override is issued until the system replaces it, deletes it, or deletes the activation group. Only the most recently issued activation-group-level override is in effect. These rules apply only when the override is issued from an activation group other than the default activation group; activation-group-level overrides issued from the default activation group are scoped to call-level overrides.

• Job level — A job-level override applies to all programs running in the job, regardless of the activation group or call level in which the override is issued. A job-level override remains in effect from the time it is issued until the system replaces or deletes it or until the job in which the override was issued ends. Only the most recently issued job-level override is in effect.

You specify an override's scope when you issue the override, by using the override command's OvrScope (Override scope) parameter.
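As a quick illustration of the OvrScope parameter (the file, library, and output queue names here are hypothetical):

    OvrPrtF File(Report) Copies(2) OvrScope(*CallLvl)
    OvrDbF  File(CustMast) ToFile(TestLib/CustMast) OvrScope(*ActGrpDfn)
    OvrPrtF File(Report) OutQ(NightQ) OvrScope(*Job)

Note that *ActGrpDfn, the default, scopes the override to the issuing program's activation group; per the rules above, when it is issued from the default activation group, it behaves as a call-level override.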

All jobs have as part of their structure the default activation group and can optionally have one or more named activation groups. Original Program Model (OPM) programs can run only in the default activation group. ILE program objects, on the other hand, can run in either the default activation group or a named activation group, depending on how you assign the program objects to activation groups. Because OPM programs can't be assigned to a named activation group, jobs that run only OPM programs consist solely of the default activation group. If any of a job's program objects are assigned to a named activation group, the job will have as part of its structure that named activation group. And if the job's program objects are assigned to different named activation groups, the job will have each different named activation group as part of its structure.

Figure 1 portrays each of these possibilities, depicting an ILE application's view of a job's structure. First, notice that two activation groups, the default activation group and a named activation group, make up the job. Figure 1 shows two OPM programs, Program1 and Program2, both running in the default activation group. It also shows five ILE programs: Program3 and Program4 are both running in the default activation group, and Program5, Program6, and Program7 are running in a named activation group. The figure not only depicts the types of program objects that can run in the default activation group and in a named activation group, it also shows the valid levels to which you can scope overrides. Programs running in the default activation group, whether OPM or ILE, can issue overrides scoped to the job level or to the call level. ILE programs running in a named activation group can scope overrides not only to these two levels but to the activation group level as well.

Overriding the Same File Multiple Times

One feature of call-level overrides is the ability to combine multiple overrides for the same file so that each of the different overridden attributes applies. Consider the following program fragments, which issue the OvrPrtF (Override with Printer File) command:

ProgramA:
    OvrPrtF File(Report) OutQ(Sales01) +
            OvrScope(*CallLvl)
    Call Pgm(ProgramB)

ProgramB:
    OvrPrtF File(Report) Copies(3) +
            OvrScope(*CallLvl)
    Call Pgm(PrintPgm)

When program PrintPgm opens and spools printer file Report, the overrides from both programs are combined, resulting in the spooled file being placed in output queue Sales01 with three copies set to be printed.

Now, consider the following program fragment:

ProgramC:
    OvrPrtF File(Report) OutQ(Sales01) +
            OvrScope(*CallLvl)
    OvrPrtF File(Report) Copies(3) +
            OvrScope(*CallLvl)
    Call Pgm(PrintPgm)

What do you think happens? You might expect this program to be functionally equivalent to the two previous programs, but it isn't. In the case of ProgramC, the overrides aren't combined. Within a single call level, only the most recent override is in effect; the most recent override replaces the previous override in effect. In other words, the Copies(3) override is in effect, but the OutQ(Sales01) override is not. This feature provides a convenient way to replace an override within a single call level without the need to first delete the previous override.

It's also fun to show programmers ProgramA and ProgramB, explain that things worked flawlessly, and then ask them to help you figure out why things didn't work right after you changed the application to look like ProgramC! When they finally figure out that only the most recent override within a program is in effect, show them your latest modification —

ProgramA:
    OvrPrtF File(Report) OutQ(Sales01) +
            OvrScope(*CallLvl)
    TfrCtl Pgm(ProgramB)

ProgramB:
    OvrPrtF File(Report) Copies(3) +
            OvrScope(*CallLvl)
    Call Pgm(PrintPgm)

— and watch them go berserk again! This latest change is identical to the first iteration of ProgramA and ProgramB, except that rather than issue a Call to ProgramB from ProgramA, you use the TfrCtl command to invoke ProgramB. Remember, TfrCtl doesn't start a new call level; ProgramB will simply replace ProgramA on the call stack, thereby running at the same call level as ProgramA. Because the call level doesn't change, only the most recent override is in effect. You may need to point out to the programmers that they didn't really figure it out at all when they determined that only the most recent override within a program is in effect. The rule is: Only the most recent override within a call level is in effect.

The Order of Applying Overrides

You've seen the rules concerning the applicability of overrides. In the course of a job, many overrides may be issued; in fact, many may be issued for a single file. When many overrides are issued for a single file, the system constructs a single override from the overridden attributes in effect from all the overrides. This type of override is called a merged override. Merged overrides aren't simply the result of accumulating the different overridden file attributes, though. To determine the merged override, the system follows a distinct set of rules that govern the order in which overrides are processed. The system must also modify, or replace, applicable attributes that have been overridden multiple times and remove overrides when an applicable request to delete overrides is issued.

The system processes the overrides for a file when it opens the file and uses the following sequence to check and apply overrides:

1. call-level overrides up to and including the call level of the oldest procedure in the activation group containing the file open (beginning with the call level that opens the file and progressing in decreasing call-level sequence)
2. the most recent activation-group-level overrides for the activation group containing the file open
3. call-level overrides lower than the call level of the oldest procedure in the activation group containing the file open (beginning with the call level immediately preceding the call level of the oldest procedure in the activation group containing the file open and progressing in decreasing call-level sequence)
4. the most recent job-level overrides

This ordering of overrides can get tricky! It is without a doubt the least-understood aspect of file overrides and the source of considerable confusion and errors. To aid your understanding, let's look at an example. Figure 2A shows a job with 10 call levels, programs in the default activation group and in two named activation groups (AG1 and AG2), and overrides within each call level and each activation group. Before we look at how the system processes these overrides, see whether you can determine the file that ProgramJ at call level 10 will open, as well as the attribute values that will be in effect due to the job's overrides. In fact, try the exercise twice, the first time without referring to the ordering rules.

Figure 2B reveals the results of the job's overrides. Did you arrive at these results in either of your tries? Let's walk, step by step, through the process of determining the overrides in effect for this example.

Step 1 — call-level overrides up to and including the call level of the oldest procedure in the activation group containing the file open

Checking call level 10 shows that the system opens file Report1 in activation group AG1. Call level 2 contains the oldest procedure in activation group AG1 (the activation group containing the file open). Therefore, in step 1, the system processes call-level overrides beginning with call level 10 and working up the call stack through call level 2.

a. There is no call-level override for file Report1 at call level 10.
b. There is no call-level override for file Report1 at call level 9.
c. There is no call-level override for file Report1 at call level 8.
d. There is no call-level override for file Report1 at call level 7.
e. Call level 6 contains a call-level override for file Report1. The Copies attribute for file Report1 is overridden to 7. Active overrides at this point: Copies(7)
f. Call level 5 shows an activation-group-level override, but the program is running in the default activation group. Remember, activation-group-level overrides issued from the default activation group are scoped to call-level overrides. Therefore, the system processes this override as a call-level override. The CPI attribute for file Report1 is overridden to 13.3, and the previous Copies attribute value is replaced with this latest value of 4. Active overrides at this point: CPI(13.3) Copies(4)
g. There is no call-level override for file Report1 at call level 4.
h. Call level 3 contains a call-level override for file Report1. The LPI attribute for file Report1 is overridden to 9, and the previous Copies attribute value is replaced with this latest value of 6. Active overrides at this point: LPI(9) CPI(13.3) Copies(6)
i. There is no call-level override for file Report1 at call level 2.

Step 1 is now complete.

Step 2 — the most recent activation-group-level overrides for the activation group containing the file open

The system now checks for the most recently issued activation-group-level override within activation group AG1, where file Report1 was opened.

a. There is no activation-group-level override for file Report1 at call level 10.
b. There is no activation-group-level override for file Report1 in activation group AG1 at call level 9. The activation-group-level override in call level 9 is in activation group AG2 and is therefore not applicable.
c. Call level 8 contains an activation-group-level override in activation group AG1 for file Report1. The FormFeed attribute for file Report1 is overridden to *Cut, the previous LPI attribute value is replaced with this latest value of 12, and the previous Copies attribute value is replaced with this latest value of 9. The system discontinues searching for activation-group-level overrides because this is the most recently issued activation-group-level override in activation group AG1. Active overrides at this point: LPI(12) CPI(13.3) FormFeed(*Cut) Copies(9)

Step 2 is now complete.

Step 3 — call-level overrides lower than the call level of the oldest procedure in the activation group containing the file open

Remember, call level 2 is the call level of the oldest procedure in activation group AG1. The system begins processing call-level overrides at the call level preceding call level 2. In this case, there is only one call level lower than call level 2.

a. Call level 1 contains a call-level override for file Report1. The OutQ attribute for Report1 is overridden to Prt01, and the previous Copies attribute value is replaced with this latest value of 2. Active overrides at this point: LPI(12) CPI(13.3) FormFeed(*Cut) OutQ(Prt01) Copies(2)

The call stack has been processed through call level 1. Step 3 is now complete.

Step 4 — the most recent job-level overrides

The system finishes processing overrides by checking for the most recently issued job-level override for file Report1.

a. There is no job-level override for file Report1 at call level 10.
b. There is no job-level override for file Report1 at call level 9.
c. There is no job-level override for file Report1 at call level 8.
d. Call level 7 contains a job-level override for file Report1. Notice that the program runs in activation group AG2 rather than AG1; job-level overrides can come from any activation group. The previous Copies attribute value is replaced with this latest value of 8. The system discontinues searching for job-level overrides because this is the most recently issued job-level override. Active overrides at this point: LPI(12) CPI(13.3) FormFeed(*Cut) OutQ(Prt01) Copies(8)

Step 4 is now complete. This completes the application of overrides. The final merged override that will be applied in call level 10 is LPI(12) CPI(13.3) FormFeed(*Cut) OutQ(Prt01) Copies(8). All other attribute values come from the file description for printer file Report1. It's easy to see how this process could be confusing and lead to the introduction of errors in applications!

Now, let's make the process even more confusing! In the previous example, our HLL program (ProgramJ) opened file Report1, and no programs issued an override to the file name. What do you think happens when you override the file name to a different file using the ToFile parameter on the OvrPrtF command? Once the system issues an override that changes the file, it searches for overrides to the new file, not the original.

Let's look at a slightly modified version of our example. Figure 2C contains the new programs; only two of the original programs have been changed in this new example. In ProgramC at call level 3, the ToFile parameter has been added to the OvrPrtF command, changing the file to be opened from Report1 to Report2. And ProgramB at call level 2 now overrides printer file Report2 rather than Report1. Figure 2D shows the results of the overrides. Again, let's step through the process of determining the overrides in effect for this example.

Step 1 — call-level overrides up to and including the call level of the oldest procedure in the activation group containing the file open

Checking call level 10 shows that the system opens file Report1 in activation group AG1. The oldest procedure in activation group AG1 appears at call level 2. Therefore, in step 1, the system processes call-level overrides beginning with call level 10 and working up the call stack through call level 2.

a. There is no call-level override for file Report1 at call level 10.
b. There is no call-level override for file Report1 at call level 9.
c. There is no call-level override for file Report1 at call level 8.
d. There is no call-level override for file Report1 at call level 7.

e. Call level 6 contains a call-level override for file Report1. The Copies attribute for file Report1 is overridden to 7. Active overrides at this point: Copies(7)
f. Call level 5 shows an activation-group-level override, but the program is running in the default activation group. Again, activation-group-level overrides issued from the default activation group are scoped to call-level overrides, so the system processes this override as a call-level override. The CPI attribute for file Report1 is overridden to 13.3, and the previous Copies attribute value is replaced with this latest value of 4. Active overrides at this point: CPI(13.3) Copies(4)
g. There is no call-level override for file Report1 at call level 4.
h. Call level 3 contains a call-level override for file Report1. Notice that the printer file has also been overridden to Report2. This is especially noteworthy because the system will now begin searching for overrides to file Report2 rather than file Report1. The LPI attribute for file Report1 is overridden to 9, and the previous Copies attribute value is replaced with this latest value of 6. Active overrides at this point: ToFile(Report2) LPI(9) CPI(13.3) Copies(6)
i. When the system processes call level 2, it searches for overrides to the new file. There is no call-level override for file Report2 at call level 2.

Step 1 is now complete.

Step 2 — the most recent activation-group-level overrides for the activation group containing the file open

The system now checks for the most recently issued activation-group-level override within activation group AG1, where file Report1 (actually Report2 now) was opened.

a. There is no activation-group-level override for file Report2 at call level 10.
b. There is no activation-group-level override for file Report2 in activation group AG1 at call level 9. The activation-group-level override in call level 9 is in activation group AG2 and is therefore not applicable.
c. There is no activation-group-level override for file Report2 at call level 8.
d. There is no activation-group-level override for file Report2 at call level 7.
e. There is no activation-group-level override for file Report2 at call level 6.
f. There is no activation-group-level override for file Report2 at call level 5.
g. There is no activation-group-level override for file Report2 at call level 4.
h. There is no activation-group-level override for file Report2 at call level 3.
i. Call level 2 contains an activation-group-level override in activation group AG1 for file Report2. (Call level 2 also contains the oldest procedure in activation group AG1, the activation group containing the file open.) The FormType attribute for file Report2 is overridden to FormB, the previous LPI attribute value is replaced with this latest value of 7.5, and the previous Copies attribute value is replaced with this latest value of 3. The system discontinues searching for activation-group-level overrides because this is the most recently issued activation-group-level override in activation group AG1. Active overrides at this point: ToFile(Report2) LPI(7.5) CPI(13.3) FormType(FormB) Copies(3)

Step 2 is now complete.

Step 3 — call-level overrides lower than the call level of the oldest procedure in the activation group containing the file open

Again, call level 2 is the call level of the oldest procedure in activation group AG1, so the system begins processing call-level overrides at the call level preceding call level 2 (i.e., call level 1).

a. There is no call-level override for file Report2 at call level 1.

The call stack has been processed through call level 1. Step 3 is now complete.

Step 4 — the most recent job-level overrides

The system finishes processing overrides by checking for the most recently issued job-level override for file Report2.

a. There is no job-level override for file Report2 at call level 10.
b. There is no job-level override for file Report2 at call level 9.
c. There is no job-level override for file Report2 at call level 8.
d. There is no job-level override for file Report2 at call level 7.
e. There is no job-level override for file Report2 at call level 6.
f. There is no job-level override for file Report2 at call level 5.
g. There is no job-level override for file Report2 at call level 4.
h. There is no job-level override for file Report2 at call level 3.
i. There is no job-level override for file Report2 at call level 2.
j. There is no job-level override for file Report2 at call level 1.

There are no job-level overrides for file Report2. Step 4 is now complete. This completes the application of overrides. The final merged override that will be applied to printer file Report2 in call level 10 is LPI(7.5) CPI(13.3) FormType(FormB) Copies(3). All other attribute values come from the file description for printer file Report2.

Protecting an Override

In some cases, you want to ensure that an override issued in a program is the override that will be applied when you open the overridden file. In other words, you may want to protect an override from the effect of other overrides to the same file. You can protect an override from being changed by overrides from lower call levels, the activation group level, and the job level by specifying Secure(*Yes) on the override command.

Figure 3 shows excerpts from two programs, ProgramA and ProgramB, both of which function to print report Report1. ProgramA simply issues an override to set the output queue attribute value for printer file Report1 and then calls ProgramB. When you call ProgramA, the system first issues a call-level override that sets Report1's output queue attribute value to Prt01. ProgramA calls ProgramB, thereby creating a new call level.

ProgramB begins by issuing a call-level override, setting Report1's output queue attribute value to Prt02. Notice that this OvrPrtF command specifies the Secure parameter with a value of *Yes. ProgramB in turn calls two HLL programs, HLLPrtPgm1 and HLLPrtPgm2; before the call to each of these programs, ProgramB issues an override to file Report1 to change the output queue attribute value. ProgramB first calls HLL program HLLPrtPgm1 to open and print Report1. Because this call-level OvrPrtF command specifies Secure(*Yes), the system does not apply call-level overrides from lower call levels — namely, the override in ProgramA that sets the output queue attribute value to Prt01. HLLPrtPgm1 therefore places the report in output queue Prt02.

Next, ProgramB continues with yet another call-level override, setting Report1's output queue attribute value to Prt03. Because this override occurs at the same call level as the first override in ProgramB, the system replaces the call level's override. However, this new override doesn't specify Secure(*Yes). Therefore, the system uses the call-level override from call level 1. This override changes the output queue attribute value from Prt03 to Prt01, and ProgramB finally calls HLLPrtPgm2 to open and spool Report1 to output queue Prt01. These two overrides in ProgramB clearly demonstrate the behavioral difference between an unsecured and a secured override.
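Figure 3 isn't reproduced here, but a condensed sketch consistent with the description might look like this:

    /* ProgramA */
    OvrPrtF File(Report1) OutQ(Prt01)               /* call level 1          */
    Call    Pgm(ProgramB)

    /* ProgramB */
    OvrPrtF File(Report1) OutQ(Prt02) Secure(*Yes)  /* protected override    */
    Call    Pgm(HLLPrtPgm1)                         /* spools to Prt02       */
    OvrPrtF File(Report1) OutQ(Prt03)               /* replaces, not secured */
    Call    Pgm(HLLPrtPgm2)                         /* spools to Prt01       */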

Explicitly Removing an Override

The system automatically removes overrides at certain times, such as when a call level ends, when an activation group ends, and when the job ends. However, you may want to remove the effect of an override at some other time. The DltOvr (Delete Override) command makes this possible, letting you explicitly remove overrides. With this command, you can delete overrides at the call level, the activation group level, or the job level, as follows:

Call level:
    DltOvr File(File1) OvrScope(*)

Activation group level:
    DltOvr File(File2) OvrScope(*ActGrpDfn)

Job level:
    DltOvr File(File3) OvrScope(*Job)

Value *ActGrpDfn is the default value for the DltOvr command's OvrScope (Override scope) parameter; if you don't specify parameter OvrScope on the DltOvr command, this value is used. The command's File parameter also supports special value *All, letting you extend the reach of the DltOvr command. This option gives you a convenient way to remove overrides for several files with a single command.

Important Additional Override Information

With the major considerations of file overrides covered, let's now take a brief look at some additional override information of note.

Overriding the Scope of Open Files

At times, you'll want to share a file's ODP among programs in your application. To share the ODP, you specify Share(*Yes) on the OvrDbF (Override with Database File) command. For instance, when you use the OpnQryF (Open Query File) command, you must share the ODP created by OpnQryF or your application won't use it (a short sketch appears at the end of this section). You can also explicitly control the scope of open files (ODPs) using the OpnScope (Open scope) parameter on the OvrDbF command. You can override the open scope to the activation group level and the job level.

Miscellanea

I've covered quite a bit of ground with these rules of overriding files. Before closing, I'd like to introduce you to a few tidbits you might find useful.

First, you may recall an earlier reference to program QCmdExc and how its use affects the scope of an override. This program's primary purpose is to serve as a vehicle that lets HLL programs execute system commands. You can therefore use QCmdExc from within an HLL program to issue a file override. Remember that when you issue an override using this method, the call level is that of the process that invoked QCmdExc.

The second tidbit involves a unique override capability that exists with the OvrPrtF command. OvrPrtF's File parameter supports special value *PrtF, letting you extend the reach of an override to all printer files (within the override scoping rules, of course). Special value *PrtF simply gives you a way to include multiple files with a single override command; all rules concerning the application of overrides still apply.

Finally, you've probably grown accustomed to the way a CL program lets you know when you've coded something erroneously — the program crashes with an exception! However, it's possible to specify a valid, yet wrong, file name on an override, and the system gives you no warning you've done so. Consider the following code:

    OvrPrtF File(Report1) OutQ(Prt01)
    Call    Pgm(HLLPrtPgm)

However, HLLPrtPgm opens file Report2, not Report1. The system happily spools Report2 without any regard to the override. Although this is clearly a mistake in that you've specified the wrong file name in the OvrPrtF command, the system has no way of knowing this. The system can't know your intentions: this override could be used somewhere else in the job, perhaps even in a different call level.

You should also note that override commands may or may not affect system commands. For more information about overrides and system commands, see "Overrides and System Commands."
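To illustrate the ODP-sharing point from "Overriding the Scope of Open Files" above, here's a minimal sketch of the common OpnQryF pattern; the file name, selection expression, and called program are hypothetical:

    OvrDbF  File(CustMast) Share(*Yes)                /* share the query ODP    */
    OpnQryF File((CustMast)) QrySlt('CUSBAL *GT 1000')
    Call    Pgm(CustRpt)                              /* its open reuses the ODP */
    Clof    OpnId(CustMast)
    DltOvr  File(CustMast)

Without the Share(*Yes) override, CustRpt's open would create its own ODP and read the full file rather than the query's selected records.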

In addition to file overrides, the system provides support for overriding message files and program device entries used in communications applications.

During the course of normal operations, the system frequently sends various types of messages to various types of message queues. You can override the message file used by programs with the OvrMsgF (Override with Message File) command. OvrMsgF provides a way for you to specify that when sending a message for a particular message ID, the system should first check the message file specified in the OvrMsgF command for the identified message. If the message is found, the system sends the message using the information from this message file. If the message isn't found, the system sends the message using the information from the original message file. You can override only the name of the message file used, not the attributes. Note, too, that the rules for applying overrides with OvrMsgF are quite different from those with other override commands.
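For instance, a minimal sketch (the message file MyMsgF and library MyLib are hypothetical):

    OvrMsgF MsgF(QCPFMSG) ToMsgF(MyLib/MyMsgF)

While this override is in effect, a message ID found in MyMsgF is sent using MyMsgF's definition; any ID not found there is sent from QCPFMSG as usual.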
Using the OvrICFDevE (Override ICF Program Device Entry) command, you can issue overrides for program device entries. Overrides for program device entries let you override attributes of the Intersystem Communications Function (ICF) file that provides the link between your programs and the remote systems or devices with which your program communicates.

Overrides and Multithreaded Jobs

The system provides limited support for overrides in multithreaded jobs, and some restrictions apply to the provided support. The system supports the following override commands:

• OvrDbF — You can issue this command from the initial thread of a multithreaded job. Only overrides scoped to the job level or an activation group level affect open operations performed in a secondary thread.
• OvrPrtF — You can issue this command from the initial thread of a multithreaded job. As with OvrDbF, only overrides scoped to the job level or an activation group level affect open operations performed in a secondary thread.
• OvrMsgF — You can issue this command from the initial thread of a multithreaded job. This command affects only message file references in the initial thread; message file references performed in secondary threads are not affected.
• DltOvr — You can issue this command from the initial thread of a multithreaded job.

The system ignores any other override commands in multithreaded jobs.

File Redirection

You can use overrides to redirect input or output to a file of a different type. For instance, you may have an application that writes directly to tape using a tape file. If at some time you'd like to print the information that's written to tape, you can use an override to accomplish your task. When you redirect data to a different file type, you use the override appropriate for the new target file; in the case of our example, you would override from the tape file to a printer file using the OvrPrtF command (see the sketch below). Of course, many restrictions apply when using file redirection. I mention file redirection so that you know it's a possibility; if you decide you'd like to use the technique, refer to the documentation. IBM's File Management provides more information about file redirection. You can find this manual on the Internet at IBM's iSeries Information Center (http://publib.boulder.ibm.com/pubs/html/as400/infocenter.htm).
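As a hedged sketch of the tape-to-printer redirection just described (SalesTapF and TapeRpt are hypothetical; QSysPrt is the IBM-supplied printer file):

    OvrPrtF File(SalesTapF) ToFile(QSysPrt)
    Call    Pgm(TapeRpt)    /* the program's writes to SalesTapF now print */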

Chapter 22 - Logical Files

For many years, IBM sold the S/38 on the premise that it was the "logical choice." Yes, that play on words was corny, but one of the S/38's strongest selling points was the relational database implementation provided by logical files, and the AS/400 has inherited that feature. Logical files on the AS/400 provide the flexibility needed to build a database for an interactive multiuser environment.

As I said in the last chapter, there are two kinds of database files: physical files and logical files. Physical files contain data; logical files do not. Logical files control how data in physical files is presented, most commonly using key fields (whose counterpart on the S/36 is the alternate index) so that data can be retrieved in key-field sequence. However, the use of key fields is not the only function logical files provide. Let me introduce you to the following basic concepts about logical files:

• record format definition/physical file selection
• key fields
• select/omit logic
• multiple logical file members

Record Format Definition/Physical File Selection

To define a logical file, you must select the record formats to be used and the physical files to be referenced. You can use the record format found in the physical file, or you can define a new record format. If you use the physical file record format, every field in that record format is accessible through the logical file. If you create a new record format, you must specify which fields will exist in the logical file. A logical file can thus be a projection of the physical file — that is, contain only selected physical file fields. A logical file field must either reference a field in the physical file record format or be derived by using concatenation or substring functions.

Because the logical file does not contain any data, it must know which physical file to access for the requested data. You use the DDS PFILE keyword to select the physical file referenced by the logical file record format. You specify the physical file in the PFILE keyword as a qualified name (i.e., library_name/file_name) or as the file name alone.

Figure 22.1a lists the DDS for physical file HREMFP, and Figure 22.1b shows the DDS for logical file HREMFL1. Notice that the logical file references the physical file's record format (HREMFR); also notice that the PFILE keyword in Figure 22.1b references physical file HREMFP. Because the logical file uses the physical file's record format, every field in the physical file will be presented in logical file HREMFL1. In Figure 22.1c, logical file HREMFL2 defines a record format not found in PFILE-referenced HREMFP. Consequently, this logical file must define each physical file field it will use. Notice that fields EMEMP#, EMSSN#, and EMPAYR all appear in the physical file but are not included in file HREMFL2.

Key Fields

Let's look at Figures 22.1b and 22.1c again to see how key fields are used. File HREMFL1 identifies field EMEMP# as a key field (in DDS, key fields are identified by a K in position 17 and the name of the field in positions 19 through 28), thus establishing EMEMP# as the primary key to physical file HREMFP. The UNIQUE keyword in this source member tells the system to require a unique value for EMEMP# for each record in the file. When you access this logical file by key, the records will be presented in employee number sequence. The logical file simply defines an access path for the access sequence — it does not physically sort the records.

Placing the primary key in a logical file, as I did in Figure 22.1b, gives rise to a question that has been debated over the years: Is it better to use a keyed physical file or a keyed logical file to establish a file's primary key? You could specify EMEMP# as the key in the DDS for physical file HREMFP and enforce it as the primary key using the UNIQUE keyword. Making the primary key a part of the physical file has a distinct advantage: The primary key is always enforced because the physical file cannot be deleted without deleting the data. Even if all dependent logical files were deleted, the primary key would be enforced.

However, placing the key in the physical file also has a disadvantage. Should the access path for a physical file data member be damaged (a rare, but possible, occurrence), the damaged access path prevents access to the data. Your only recourse in that case would be to delete the member and restore it from a backup. Another minor inconvenience is that any time you want to process the file in arrival sequence (e.g., to maximize retrieval performance), you must use the OVRDBF (Override with Database File) command or specify arrival sequence in your high-level language program.

Placing the primary key in a logical file, as I did in Figure 22.1b, ensures that access path damage results only in the need to rebuild the logical file — the physical file remains intact. This method also means that you can access the physical file in arrival sequence. The negative effect is that deleting the logical file results in leaving the physical file without a primary key: should the logical file be deleted, records could be added to the physical file with a non-unique key.
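For reference, files like those in Figures 22.1a and 22.1b would be created from their DDS source with commands along these lines; the library HRLIB and source file QDDSSRC are hypothetical:

    CRTPF FILE(HRLIB/HREMFP) SRCFILE(HRLIB/QDDSSRC) SRCMBR(HREMFP)
    CRTLF FILE(HRLIB/HREMFL1) SRCFILE(HRLIB/QDDSSRC) SRCMBR(HREMFL1)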

Logical file HREMFL2 specifies three key fields — EMLNAM (employee last name), EMFNAM (employee first name), and EMMINT (employee middle initial). The UNIQUE keyword is not used here because the primary key is the employee number and there is no advantage in requiring unique names (even if you could ensure that no two employees had the same name). A primary key protects the integrity of the file, while alternative keys provide additional views of the same data.

Let me make a few comments concerning the issue of where to place the primary key. The UNIQUE keyword is expensive in terms of system overhead, so you should use it only to maintain the primary key. Access path maintenance is also costly: when records are updated, the system must determine whether any key fields have been modified, requiring the access path to be updated. The overhead for this operation is relatively small in an interactive environment where changes are made randomly based on business demands. However, for files where batch purges or updates result in many access path updates, the overhead can be quite detrimental to performance. With that in mind, here are some suggestions:

• For work files, which are frequently cleared and reloaded, create the physical file with no keys, and place the primary and alternate keys in logical files. Then delete the logical files (access paths) before you clear and reload the file, and rebuild or restore the logical files afterward.
• The same method works best for very large files. When you need to update the entire file, you can delete the logical files, perform the update, and then rebuild or restore the logical files (see the sketch below). The update will be much faster with no access path maintenance to perform.
• For files updated primarily through interactive maintenance programs, putting the key in the physical file poses no performance problems.

Select/Omit Logic

Another feature that logical files offer is the ability to select or omit records from the referenced physical file. When a program accesses the logical file, it reads only the selected records. Figure 22.2 shows logical file HREMFL3. Field EMTRMD (employee termination date) is used with keyword COMP to compare values, forming a SELECT statement (notice the S in position 17). This DDS line tells the system to select records from the physical file in which field EMTRMD is equal to 0 (i.e., no termination date has been entered for that employee), thus omitting terminated employees (EMTRMD NE 0). When you create logical file HREMFL3, OS/400 builds indexed entries in the logical file only for records in which employee termination date is equal to zero.

Before looking at some examples, I want to go over some of the basic rules for using select/omit statements:

1. You can use the keywords COMP, VALUES, and RANGE to provide select or omit statements when you build logical files.
2. You can use select/omit statements only if the logical file specifies key fields (the value *NONE in positions 19 through 23 satisfies the requirement for a key field) or if the logical file uses the DYNSLT keyword. (I'll go into more detail about this keyword later.)
3. Select/omit statements are specified by an S or an O in position 17. Multiple statements coded with an S or an O form an OR connective relationship, and OS/400 processes the statements only until one of the conditions is met. The first true statement is used for select/omit purposes; if a record satisfies the first statement or group of related statements, the record is processed without being tested against the subsequent select/omit statements.
4. You can follow a select/omit statement with other statements containing a blank in position 17. Such additional statements form an AND connective relationship with the initial select or omit statement, and all related statements must be true before the record is selected or omitted.
5. You can specify both select and omit statements in the same file, but the following rules apply:
a. If you specify both select and omit for a record format, the order of the statements is significant.
b. You can use the ALL keyword to specify whether records that do not meet any of the specified conditions should be selected or omitted.
c. If you do not use the ALL keyword, the action taken for records not satisfying any of the conditions is the converse of the last statement specified (e.g., if the last statement was an omit, the record is selected).

To locate the field definitions for fields named on a select/omit statement, OS/400 first checks the field name specified in positions 19 through 28 in the record format definition and then checks fields specified as parameters on CONCAT (concatenate) or RENAME keywords. If the field name is found in more than one place, OS/400 uses the first occurrence of the field name.

Now let's work through a few select/omit examples to see how some of these rules apply. In Figure 22.3, both statements have an S coded in position 17, representing an OR connective relationship. OS/400 selects any record in which employee termination date equals 0 or employee type equals H (i.e., hourly).

Contrast the statements in Figure 22.3 with the statements in Figure 22.4. Notice that the second statement in Figure 22.4 does not have an S or an O in position 17. According to rule 4, the second statement is related to the previous statement by an AND connective relationship. Therefore, both comparisons must be true for a record to be selected, so all current hourly employees will be selected.

To keep it interesting, let's change the statements to appear as they do in Figure 22.5. At first glance, you might think this combination of select and omit would provide the same result as the statements in Figure 22.4. However, it doesn't — for two reasons. First, based on rule 3, the first statement determines whether employee type equals H; if it does, the record is selected and the second test is not performed, thus allowing records for terminated hourly employees to be selected. The second reason the statements in Figures 22.4 and 22.5 produce different results is the absence of the ALL keyword, which specifies how to handle records that do not meet either condition. According to rule 5c, records that do not meet either comparison are selected because the system performs the converse of the last statement listed (e.g., the omit statement).

Figure 22.6 shows the correct way to select records for current hourly employees using both select and omit statements. As rule 5a explains, the order of the statements is significant, and the ALL keyword in the last statement tells the system to omit records that don't meet the conditions specified by the first two statements. In general, it is best to use only one type of statement (either select or omit) when you define a logical file. By limiting your definitions this way, you will avoid introducing errors that result when the rules governing the use of select and omit are violated.

Select/omit statements also give you dynamic selection capabilities via the DDS DYNSLT keyword. DYNSLT lets you defer the select/omit process until a program requests input from the logical file. Figure 22.7 shows how to code the DYNSLT keyword. In the absence of the DYNSLT keyword, OS/400 builds indexed entries only for those records that meet the stated select/omit criteria. Access to the correct records is faster — you retrieve only records that meet the select/omit criteria — but the overhead of maintaining the logical file is increased. When you use DYNSLT, all records in the physical file are indexed, and the select/omit logic is not performed until the file is accessed. When the program reads the file, OS/400 presents only the records that meet the select/omit criteria, but the process is dynamic.

So now I guess you are wondering just how this differs from an example without the DYNSLT keyword. It differs in one significant way: performance. Because DYNSLT decreases the overhead associated with access path maintenance, it can improve performance in cases where that overhead is considerable. As a guideline, if you have a select/omit logical file that uses more than 75 percent of the records in the physical file member, the DYNSLT keyword can reduce the overhead required to maintain that logical file without significantly affecting the retrieval performance of the file, because most records will be selected anyway. If the logical file uses less than 75 percent of the records in the physical file member, you can usually maximize performance by omitting the DYNSLT keyword and letting the select/omit process occur when the file is created.

Multiple Logical File Members

The last basic concept you should understand is the way logical file members work. The CRTLF (Create Logical File) command has several parameters related to establishing the member or members that will exist in the logical file. These parameters are MBR (the logical file member name), DTAMBRS (the physical file data members upon which the logical file member is based), and MAXMBRS (the maximum number of data members the logical file can contain). The default values for these parameters are *FILE, *ALL, and 1, respectively.

Typically, a physical file has one data member. When you create a logical file to reference such a physical file, these default values instruct the system to create a logical file member with the same name as the logical file itself, base this logical file member on the single physical file data member, and specify that a maximum of one logical file member can exist in this file.

When creating applications with multiple-data-member physical files, however, you often don't know precisely what physical and logical members you will eventually need. For instance, for each user you might add members to a temporary work file for each session when the user signs on. Obviously, you (or, more accurately, your program) don't know in advance what members to create. In such a case, you would normally

• Create the physical file with no members:

  CRTPF FILE(TESTPF) MBR(*NONE)

• Create the logical file with no members:

  CRTLF FILE(TESTLF) MBR(*NONE)

• For every user that signs on, add a physical file member to the physical file:

  ADDPFM FILE(TESTPF) MBR(TESTMBR) TEXT('Test PF Data Member')

• For every physical file member, add a member to the logical file and specify the physical file member on which to base the logical member:

  ADDLFM FILE(TESTLF) MBR(TESTMBR) DTAMBRS((TESTPF TESTMBR)) +
         TEXT('Test LF Data Member')

When a logical file member references more than one physical file member and your application finds duplicate records in the multiple members, the application processes those records in the order in which the members are specified on the DTAMBRS parameter. For example, if the CRTLF command specifies

  CRTLF FILE(TESTLIB/TESTLF) MBR(ALLYEARS) +
        DTAMBRS((YRPF DT1988) (YRPF DT1989) (YRPF DT1990))

a program that processes logical file member ALLYEARS first reads the records in member DT1988, then in member DT1989, and finally in member DT1990.

Keys to the AS/400 Database

Understanding logical files will take you a long way toward creating effective database implementations on the AS/400. As you master the methods presented, you will discover many ways in which logical files can enhance your applications. Since I have introduced the basic concepts only, I strongly recommend that you spend some time in the manuals to increase your knowledge about logical files. Start with the description of the CRTLF command in IBM's Programming: Control Language Reference (SC41-0030), and also refer to Chapter 3, "Setting Up Logical Files," in the AS/400 Database Guide (SC41-9659).

Chapter 23 - File Sharing

As the father of two young children (ages 4 and 9), I have learned that to maintain peace in the house, my wife and I must either teach our children to share or buy two of everything. Those of you who can identify with this predicament know that in reality peace occurs only when you do a little of both — sometimes you teach, and sometimes you buy.

The AS/400 inherited a performance-related virtue from the S/38 that lets you "teach" your programs to share file resources. I call it a performance-related virtue because the benefit of teaching your programs to share boosts performance for many applications. However, as is the case with children, there will be times when sharing doesn't provide any benefits and, in fact, is more trouble than it's worth. In this chapter, as we continue to examine files on the AS/400, we will focus on the SHARE (Share Open Data Path) attribute and how you can use it effectively in your applications.

The SHARE attribute is valid for database, device, distributed data management, source, and save files. The valid values are *YES and *NO. You can establish the SHARE attribute or modify it for a file using any of the CRTxxxF (Create File), CHGxxxF (Change File), or OVRxxxF (Override with File) commands.

You may already be familiar with the general concept of file sharing, a common feature for many operating systems that lets more than one program open the same file. This type of file sharing is automatic on the AS/400 unless you specifically prevent it by allocating a file for exclusive operations (using the ALCOBJ (Allocate Object) command). The SHARE attribute does not control this automatic function. It goes beyond normal file sharing to let programs within the same job share the open data path (ODP) established when the file was originally opened in the job — a capability that is both powerful and problematic.

One common misconception is that using SHARE(*YES) alters the way in which the database manager performs record locking — a conclusion you could easily reach if you confuse record locking with file locking. It is true that when you specify SHARE(*YES), file locking is handled differently than when you specify SHARE(*NO): every program using that file obtains a SHRUPD (Shared Update) lock on that file. This lock occurs even when a particular program normally opens the file with *INP open options.

Sharing Fundamentals

While sharing ODPs can be a window to enhancing performance, doing so can also generate programming errors if you try to share without understanding a few simple fundamentals. When a program opens a file, a unique set of resources is established to prevent conflict between programs. If SHARE(*NO) is specified for a file, each program operating on that file in the same job must establish a unique ODP. If you specify SHARE(*YES) for a file, programs in the same job share the ODP. This means that programs share the file status information (i.e., the general and file-dependent I/O feedback areas), as well as the file pointer (i.e., a program's current record position in a file).

The first fundamental pertains to open options that programs establish. The open options are *INP (input only), *OUT (output only), and *ALL (input, output, update, and delete operations); the options specified on the OPNDBF (Open Data Base File) command or by the high-level language definition of the file determine the open options. These options are significant when you use shared ODPs because, when you specify SHARE(*YES), the first open establishes the open options. Thus, the initial program's open of the file must use all the open options required for any subsequent programs in the same job. If the first open of a file with SHARE(*YES) uses option *ALL, for example, subsequent opens in the job can be satisfied. But if PGMA opens file TEST (specified with SHARE(*YES)) with the open option *INP (for input only), and then PGMB, which requires the open option *ALL (for an update or delete function), is called, PGMB will fail.

Besides sharing open options, programs also share the file pointer. As we further examine the SHARE attribute, you will see that this type of sharing enhances modular programming performance, but that you must manage it effectively to prevent conflicts between programs.

In Figures 23.2a and 23.2b are RPG programs TESTRPG1 and TESTRPG2, which alternately read a record in file TEST. Figure 23.1 displays the eight records that exist in file TEST. After TESTRPG1 reads a record, it calls TESTRPG2, which then reads a record in file TEST. TESTRPG2 calls TESTRPG1, which reads another record. Both programs use print device file QPRINT to generate a list of the records read.

When the SHARE attribute for both file TEST and file QPRINT is SHARE(*NO), the output appears as shown in Figure 23.3. Each program reads all eight records because each program uses a unique ODP. If you change file TEST or override it to specify SHARE(*YES), the output generated appears as displayed in Figure 23.4. Each program reads only four records because the programs share the same ODP and, with it, the file pointer. Finally, if you also change or override the attribute of file QPRINT to be SHARE(*YES), the programs generate the lists displayed in Figure 23.5. Both programs share print file QPRINT and, while each program reads only four records, the output is combined in a single output file.

For more information about SHARE.CL Programming: You're Stylin' Now! The key to creating readable. or output if you fail to write the programs so they recognize and manage the shared pointer. If users frequently switch between menu options. but by the RPG compiler. the wrong selection of records due to file pointer position) unless you understand it and use it carefully. the respective program need not open the file. SHARE(*YES) is a must. reducing the time needed to access the two inquiry programs. SHARE(*YES) will buy you nothing. you might accidentally cause the same problem in your application if you fit program modules together without considering the current value of the SHARE attribute on the files. The coding example in Figure 23.a style -. Sharing Examples The most popular use of the SHARE attribute is to open files at the menu level when users frequently enter and exit applications on that menu. the OVRDBF (Override with Database File) command specifies SHARE(*YES) for each file identified. each of which represents a program that uses one or more of the described files.6 illustrates a simple order-entry menu with five options. programs perform record locking on files with SHARE(*YES) the same way they perform record locking on files with SHARE(*NO). updated. in and of itself. they simply follow clearly defined coding standards. The moral of the story is this: When calling programs that use the same file. of doing your work to the best of your abilities and with pride. Figure 23. While you would never purposely code this badly. If PGMB ends before performing the update. SHARE(*YES) can shorten program initiation steps and can let programs share vital I/O feedback information. programs can easily become confused about which record is actually being retrieved. is not controlled by the open options. There is no doubt that SHARE is a powerful attribute.9 in a CL program that calls the order-entry program. keeping in mind the above-mentioned guidelines about placing the file pointer. the file pointer remains positioned at the record read by PGMB. The real hazard is that because SHARE(*YES) lets programs share the file pointer. Programmers with a consistent style don't think about how to arrange code. Either program uses a file already opened by the initial program. If SHARE(*NO) is defined for each file. First. you can ensure that the ODP for these files is shared. however. When users select an option on the menu.and create a comfortable environment for the person reading and maintaining the code. They also boost productivity. Then PGMA calls PGMB. But if you're considering highly modular programming designs. unless you are specifically coding to take advantage of file pointer positioning within those applications. always reposition the file pointer after the called program ends. see IBM's Programming: Data Base Guide (SC419659) and Programming: Control Language Reference (SC41-0030). Unfortunately. to plan carefully when using SHARE to open files. Then. they experience delays each time a file is opened. And programmers reading such code can directly interpret the program's actions without the distraction of bad style. If PGMA then performs an update. Good coding style transcends any one language. Thus. The program compiler determines which locks are needed for any input operations in the program and creates the object code to make them happen during program execution. and then remain idle until the next night. is not a serious concern. process the records. 
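Before leaving SHARE, here is a minimal sketch of the kind of menu-level CL program that Figures 23.7 and 23.9 describe. The file and program names (CUSTMAST, ITEMMAST, ORDENTRY) are hypothetical, and the open options shown are assumptions you would adjust to match what your application programs actually require.

  PGM /* Sketch: share ODPs for files used by the menu's programs */
    OVRDBF     FILE(CUSTMAST) SHARE(*YES)
    OVRDBF     FILE(ITEMMAST) SHARE(*YES)
    OPNDBF     FILE(CUSTMAST) OPTION(*ALL) /* max options needed later */
    OPNDBF     FILE(ITEMMAST) OPTION(*INP)

    CALL       PGM(ORDENTRY)  /* called programs reuse the open ODPs */

    CLOF       OPNID(CUSTMAST) /* clean up when leaving the menu */
    CLOF       OPNID(ITEMMAST)
    DLTOVR     FILE(*ALL)
  ENDPGM

Because the files are opened once, with SHARE(*YES) and the maximum options any option program needs, the programs the user calls from the menu start without paying the file-open cost each time.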
Chapter 24 - CL Programming: You're Stylin' Now!

The key to creating readable, maintainable code is establishing and adhering to a set of standards about how the code should look. Although most CL programs are short and to the point, a consistent programming style is as essential to CL as it is to any other language. Good coding style transcends any one language. Standards give your programs a consistent appearance -- a style -- and create a comfortable environment for the person reading and maintaining the code. Programmers with a consistent style don't think about how to arrange code; they simply follow clearly defined coding standards, which become like second nature through habit. Standards also boost productivity, and programmers reading such code can directly interpret the program's actions without the distraction of bad style. It's a matter of professionalism, of doing your work to the best of your abilities and with pride.

Let's look at CL program CVTOUTQCL (Figure 24.1), which converts the entries of an output queue listing into a database file. Without a program such as CVTOUTQCL, you would have to jot down the name of each output queue entry and enter each name into the CPYSPLF (Copy Spool File) command or any other command you use to process the entry. With it, another application can read the database file and individually process each spool file (e.g., copy the contents of the spool file to a database file for saving or downloading to a PC).

Figure 24.1's code is crowded and difficult to read, primarily because of the CL prompter's default layout. When I started writing CL, I used the prompter to enter values for command parameters. Source Entry Utility (SEU) automatically places the selected keywords and values into the code, and the editor wraps the parameters onto continuation lines like a word processor wraps words when you've reached the margin. While using the prompter is convenient -- and I still use it today for more complex commands or to prompt for valid values when I'm not sure what to specify -- the resulting alignment of commands, keywords, and values creates code that is, at best, difficult to read and maintain. You should use prompting when necessary to enter proper parameter values, but code generated this way can be extremely difficult to read and maintain. In addition, Figure 24.1's style lacks elements such as helpful spacing, indentation for nested DO-ENDDO groups, mnemonic variable names, and comments that help you break the code down into logical, readable chunks. Every command begins in column 14, and labels are to the left of the commands.

Now compare the code in Figure 24.1 to the version of CVTOUTQCL shown in Figure 24.2. The programs' styles are dramatically different: Figure 24.2's code is much more readable and comprehensible, featuring a descriptive program header, more attractive code alignment, spacing that divides the code into distinct sections, and helpful comments. Let's take a closer look at the elements responsible for Figure 24.2's clarity and some coding guidelines you can use to produce sharp CL code with a consistent appearance. Over the years, I've collected several guidelines about where to place code and comments within CL programs.

Write a descriptive program header. All programs, including CL programs, need an introduction. If the first source statement in your CL program is the PGM statement, something's missing. An informative program header relates the program's purpose and basic functions, and an accurate introduction helps programmers who come after you feel more comfortable as they debug or enhance your code.

To create a stylish CL program, first write a program header that describes the program's purpose and basic function. The program header begins with the program's name, followed by the author's name and the date created. An essential piece of the program header is the "program type," which identifies the type of code that follows. CL program types include the CPP (command processing program), the VCP (validity checking program), the MENU (menu program), the CPO (command prompt override program), and the PROMPT (prompter). You may use other categories or different names to describe the types of CL programs, but whatever you call it, you should identify the type of program you are writing and label it appropriately in the header.

Another important part of the introduction is a description of what the program does. State the program's purpose concisely, and, in the program summary, outline the basic program functions to familiarize the programmer with how the program works. You should detail the summary only in terms of what happens and what events occur (e.g., building a file or copying records). A good program header also includes a revision summary, featuring a list of revisions, the dates they were made, and the names of those who made them. While an up-to-date program header is valuable, an outdated one can be misleading and harmful, so remember to maintain the information as part of the quality control checks you perform on production code.

Figure 24.2's program header provides the basic information a programmer needs to become familiar with the program's purpose and function. If you don't have a standard CL program header, create a template of one in a source member called CLHEADER (or some other obvious name) and copy the member into each CL program, filling in the current information for each program.
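If you don't already have such a member, the following is a minimal sketch of what a CLHEADER template might contain. The exact layout and wording are illustrative, not a standard from this book -- adapt them to your shop's conventions.

  /*================================================================*/
  /*  Program . . . : (program name)                                */
  /*  Type  . . . . : (CPP, VCP, MENU, CPO, or PROMPT)              */
  /*  Author  . . . : (author name)                                 */
  /*  Date created  : (date)                                        */
  /*                                                                */
  /*  Purpose . . . : (concise statement of what the program does)  */
  /*                                                                */
  /*  Summary . . . : (what happens and what events occur --        */
  /*                   e.g., builds a file, copies records)         */
  /*                                                                */
  /*  Revisions . . : (date)  (name)  (description of change)       */
  /*================================================================*/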

Format your programs to aid understanding. Determining where to start each statement is one of the most basic coding decisions you can make. If you're used to prompting each CL statement, your first inclination would be to begin each one in column 14, the prompter's default. Instead, to offset command statements from comments and labels, start commands in column 3. Beginning commands in column 3 -- rather than in the prompter's default starting column (14) -- gives you much more room to enter keywords and values. It also gives you more room to arrange your code.

Next, begin all comments in column 1, and make comment lines a standard length. Beginning comments in column 1 gives you the maximum number of columns to type the comment, and establishing a consistent comment line length (i.e., a consistent number of spaces between the beginning /* and the closing */) makes the program look neat and orderly. Comments should also stand out in the source. Nobody wants to read code in which comments outnumber program statements, but descriptive (not cryptic) comments that define and describe the program's basic sections and functions are helpful road signs. In Figure 24.2, notice that comments describing a process are boxed in by lines of special characters (I use the = character), and a blank line precedes and follows each comment line to make it more visible.

A second guideline is to begin all label names in column 1 on a line with no other code (or at the appropriate nesting level, if a label is located within an IF-THEN or IF-THEN-ELSE construct). Labels in CL programs serve as targets of GOTO statements; the AS/400 implementation of CL requires you to use GOTO statements to perform certain tasks that other languages can accomplish through a subroutine or a DO WHILE construct. (CL/free, a precompiler for CL that supports subroutines and other language enhancements, lets you create more-structured CL programs.) Because labels provide such a basic function, they should clearly reveal entry points into specific statements. Starting a label name in column 1 and placing it alone on the line helps separate it from subsequent code. Notice in Figure 24.2 how you can quickly scan down column 1 and locate the labels (e.g., GLOBAL_ERR, CLEAN_UP, RSND_BGN). The exception to this guideline concerns using the DO command as part of an IF-THEN or IF-THEN-ELSE construct. In Figure 24.2, notice the placement of labels RSND_RPT and RSND_END (at B and C): instead of beginning these two labels in column 1, I indented them to the expected nesting level to promote comprehension of the overall process. The code following the indented labels remains indented to help the labels stand out and to make the IF-THEN construct more readable.

To help identify what code is executed in a DO group, I recommend that you indent the code in each DO group. A simple indented DO-ENDDO group might appear as follows:

  IF ("condition") DO
     CL statement
     CL statement
  ENDDO

A multilevel set of DO-ENDDO groups, including an ELSE statement, might appear like this:

  IF ("condition") DO
     IF ("condition") DO
        CL statement
        CL statement
        IF ("condition") DO
           CL statement
        ENDDO
     ENDDO
     ELSE DO
        CL statement
        CL statement
     ENDDO
  ENDDO

Notice that the IF and ENDDO statements -- and thus the logic -- are clearly visible.

Simplify and align command parameters. Several simple guidelines can greatly enhance the way commands, keywords, and values appear in your CL programs. One of the most common symptoms of poor CL style is a general overcrowding of code. Such code moves from one statement to the next without any thought to organization, spacing, or neatness; the result looks more like a blob of commands than a flowing stream of clear, orderly statements. When you use the prompter to enter values for command parameters, the keywords just clutter up your code, because the meanings of the parameter values are always obvious by position. To unclutter your code, omit the following common keywords when using the associated commands:

  Command   Keywords to omit
  DCL       VAR, TYPE, LEN
  CHGVAR    VAR, VALUE
  IF        COND, THEN
  ELSE      CMD
  GOTO      CMDLBL
  MONMSG    MSGID

The following statements omit unneeded keywords:

  DCL     &outq  *CHAR  10
  CHGVAR  &outq  (%SST(&i_ql_outq 1 10))
  IF      (&flag)  GOTO FINISH
  GOTO    RSND_RPT

Normally, you can type most commands on one line, so a second guideline is to place the entire command on one line when possible. When you must continue a command to another line, you have several alternatives. The first alternative is to use the + continuation symbol, indent a couple of spaces on the next line, and continue entering command keywords and values. This is the simplest way to continue a command but the most difficult to read. The second alternative is to place as many keywords and values as possible on the first line and arrange the continuation lines so the additional keywords and values appear as columns under those on the first line. This method is both simple to implement and easy to read. The third alternative is simply to place each keyword and associated value on a separate line: place the command and first keyword on the first line and each subsequent keyword on a separate line. Although this option may be the easiest to read, creating the alignment is a major headache.

A third guideline is to align the command and its parameters in columns when you repeat the same single-line command statement. This rule of thumb applies when you have a group of statements involving the same command. The DCL statement is a good example: one or more groups of DCL statements appear at the beginning of each CL program to define the variables the program uses. Figure 24.2 shows how placing the DCL statement and parameter values in columns creates more readable code. This alignment rule also applies to multiple CHGVAR (Change Variable) commands.

While you can apply the above rules to most commands, the IF command may require special alignment consideration. If the IF statement won't fit on a single line, use the DO-ENDDO construct, as Figure 24.3 shows. For example, the IF statement

  IF (&fl2exist) CRTDUPOBJ OBJ(QACVTOTQ)   +
                   FROMLIB(KWMLIB)         +
                   OBJTYPE(*FILE)          +
                   TOLIB(&outlib)          +
                   NEWOBJ(&outfile)

should be written

  IF (&fl2exist) DO
     CRTDUPOBJ OBJ(QACVTOTQ) FROMLIB(KWMLIB) OBJTYPE(*FILE) +
                 TOLIB(&outlib) NEWOBJ(&outfile)
  ENDDO

This construction implements guidelines discussed earlier and presents highly accessible code.

Align, shorten, and simplify for neatness. To save yourself and others the eyestrain of trying to read a jumble of code, follow these suggestions for clean, crisp CL programs.

Align all + continuation symbols so they stand out in the source code. In Figure 24.2, I've aligned all + continuation symbols in column 69. Not only does alignment give your programs a uniform appearance, but it also clearly identifies commands that are continued on several lines.

Use the shorthand symbols ||, |<, and |> instead of the corresponding *CAT, *TCAT, and *BCAT operators. Concatenation can be messy when you use a mixture of strings, variables, and the *CAT, *BCAT, and *TCAT operatives. The shorthand symbols shorten and simplify expressions that use concatenation keywords and commands and clearly identify breaks between strings and variables.

When continuing a string onto the next line in your source, I use the + symbol instead of the - for continuation, because the + better controls the number of blanks that appear when continuing a string of characters. Both symbols include as part of the string any blanks that immediately precede or follow the symbol on the same line. But the + symbol ignores blanks that precede the first nonblank character in the next record; the - continuation symbol includes them.

Highlight variables with distinct, lowercase names. An essential part of CL style concerns how you use variables in your programs. Here I don't have any hard-and-fast rules, but I do have some suggestions.

First, consider the names you assign variables, and give program parameters distinct names that identify them as parameters. In Figure 24.2, the two parameters processed by the CPP are &i_ql_outq and &i_ql_outf. The i_ in the names tells me both parameters are input-only (io_ would have indicated a return variable), and the ql_ tells me the parameters' values are qualified names (i.e., they include the library name). When program CVTOUTQCL calls program CVTOUTQR, it uses parameter &io_rtncode (A in Figure 24.2). The io_ prefix indicates the parameter is both an input and an output variable, and the rest of the name tells me program CVTOUTQR will return a value to the calling program.

A second guideline concerns variables that contain more than one value (e.g., a qualified name or the contents of a data area). You should extract the values into separate variables before using the values in your program. In Figure 24.2, input parameter &i_ql_outq is the qualified name of the output queue. Later in the program, you find the following two statements:

  CHGVAR  &outq     (%SST(&i_ql_outq  1 10))
  CHGVAR  &outqlib  (%SST(&i_ql_outq 11 10))

These two statements divide the qualified name into separate variables. The separate variables let you code a statement such as

  CHKOBJ  OBJ(&outqlib/&outq) OBJTYPE(*OUTQ)

instead of

  CHKOBJ  OBJ(%SST(&i_ql_outq 11 10)/ +
              %SST(&i_ql_outq 1 10)) OBJTYPE(*OUTQ)

You should also define variables to represent frequently used literal values. For example, define values such as 'x', ' ', and 0 as variables &@x, &@blanks, and &@zero, and then use the variable names in tests instead of repeatedly coding the constants as part of the test condition. This guideline lets you define all of a program's constants in one set of DCL statements, which you can easily create and maintain at the start of the program source. Notice the difference between the following statements:

  IF (&value = ' ') DO

and

  IF (&value = &@blanks) DO

You can more easily digest the second statement because it explicitly tells you what value will result in execution of the DO statement. You may find that defining frequently used variables not only improves productivity, but also promotes consistency, as programmers simply copy the variable DCL statements into new source members.

A final guideline concerning variables is to type variable names in lowercase. Although typing the names in lowercase may not be easy using SEU, the contrast in type will greatly improve the program's readability: the lowercase variable names contrast nicely with the uppercase commands and parameters.

Use blank lines liberally to make code more accessible. Blank lines don't cost you processing time, so feel free to space, space, space. Spacing between blocks of code and between comment lines and code can really help programmers identify sections of code, distinguish one command from another, and generally "get into" the program.
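Pulling several of these guidelines together, a short fragment written in this style might look like the following. This is an illustration of the conventions, not an excerpt from Figure 24.2:

  /*==============================================================*/
  /*  Split the qualified output queue name into its parts        */
  /*==============================================================*/

    DCL      &i_ql_outq  *CHAR  20
    DCL      &outq       *CHAR  10
    DCL      &outqlib    *CHAR  10
    DCL      &@blanks    *CHAR  10  VALUE(' ')

    CHGVAR   &outq       (%SST(&i_ql_outq  1 10))
    CHGVAR   &outqlib    (%SST(&i_ql_outq 11 10))

    IF       (&outqlib = &@blanks) DO
       CHGVAR &outqlib   ('*LIBL')
    ENDDO

    CHKOBJ   OBJ(&outqlib/&outq) OBJTYPE(*OUTQ)

The boxed comment, commands starting in column 3, aligned DCL and CHGVAR groups, lowercase variable names, and the &@blanks constant all come straight from the guidelines above.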

Compare Figure 24.1 with Figure 24.2 again. Which code would you like to encounter the next time you examine a CL program for the first time? Remember: When you're trying to read a program you didn't write, appearance can be everything. I hope you can use these guidelines to create a consistent CL style from which everyone in your shop can benefit.

Sidebar: CL Coding Suggestions
Sidebar: Command, RPG Program, and Physical File Associated with CL Program CVTOUTQCL Shown in Figure 24.2

Chapter 25 - CL Programming: The Classics

Since the inception of CL on the S/38 in the early eighties, programmers have been collecting their favorite and most useful CL techniques and programs. Over time, some of these have become classics. They provide functions almost always needed and welcomed by MIS personnel at any AS/400 installation. In this chapter, we'll visit three timeless programs and five techniques essential to writing classic CL. The five techniques:

• Error/exception message handling
• String manipulation
• Outfile processing
• IF-THEN-ELSE and DO groups
• OPNQRYF (Open Query File) command processing

When I consider the CL programs I would label as classic, I find these techniques being employed to some degree. You may recognize the classic programs we'll visit as similar to something you have created. The programs are useful, and the techniques are valid on the S/38 as well, although some of the details will be different (e.g., the syntax of qualified object names and some outfile file and field names). If you are new to the AS/400, I guarantee you will get excited about CL programming after you experience the power of these tools. And if you are an old hand at CL, you may have missed one of these classics. Let's see.

Classic Program #1: Changing Ownership

If you ever face the problem of cleaning up ownership of objects on your system, you will find the CHGOBJOWN (Change Object Owner) command quite useful. You will also quickly discover that this command works for only one object at a time: you must identify the objects that will have a new owner and then enter the CHGOBJOWN command for each of those objects. Or is there another way? When the solution includes the repetitious use of a CL command, you can almost always use a CL program to improve or automate that solution. To that end, try this first classic CL program. (You may have discovered the CHGLIBOWN (Change Library Owner) tool in library QUSRTOOL. This IBM-provided tool offers a similar function, and the "Change Owner of Object(s)" tool is definitely handy.)

When you execute the command CHGOWN (Figure 25.1a), it invokes the command-processing program CHGOWNCPP (Figure 25.1b). CHGOWNCPP demonstrates three of the fundamental CL programming techniques: message monitoring, string handling, and outfile processing. Let's take a quick look at how the program logic works and then examine how each technique is implemented.

Program Logic. The DSPOBJD (Display Object Description) command generates the outfile QTEMP/CHGOWN based on the values for variables &OBJ and &OBJTYPE received from command CHGOWN. The CHKOBJ (Check Object) command verifies that the value in &NEWOWN is an actual user profile on the system. If the CHKOBJ command can't find the user profile, message CPF9801 is issued, a MONMSG command traps CPF9801, an escape message is sent to the calling program using the SNDPGMMSG command, and the CPP terminates.

The program then processes the outfile until message CPF0864 ("End of file") is issued. For each record in the outfile, the CPP executes a CHGOBJOWN command to give ownership to the user profile specified in variable &NEWOWN. The variables &ODLBNM and &ODOBNM, obtained from fields in the outfile record format QLIDOBJD, contain the object's library and object name. The value in variable &CUROWNAUT specifies whether the old owner's authority should be revoked or retained. (Note: The CUROWNAUT parameter does not exist on the S/38 CHGOBJOWN command, so you would need to eliminate it, along with variable &CUROWNAUT in CHGOWNCPP.)

When the CHGOBJOWN command is successful, the program sends a completion message to the calling program's message queue and reads the next record from the file. If the CHGOBJOWN command fails, the error message causes a function check, and the program-level message monitor passes control to the RSND_LOOP label. This section of the program is a loop that receives the unexpected error messages and resends them to the calling program's message queue; you encounter the RSND_LOOP label only if an unexpected error occurs. After all records have been read, the next RCVF command generates error message CPF0864, and the command-level message monitor causes the program to branch to the FINISH label.

The Technique

Message Monitoring. The first fundamental technique we will examine is error/exception message handling. Monitoring for system messages within a CL program is a technique that both traps error/exception conditions and directs the execution of the program based on the error conditions detected. The CL MONMSG (Monitor Message) command provides this function. Program CHGOWNCPP uses both command-level and program-level message monitoring.

A command-level message monitor lets you monitor for specific messages that might occur during the execution of a single command. For instance, in program CHGOWNCPP, MONMSG CPF9801 EXEC(DO) immediately follows the CHKOBJ command to monitor specifically for message CPF9801 ("Object not found"). If the CHKOBJ command can't find the user profile on the system, the message monitor traps the message and invokes the EXEC portion of the MONMSG command -- in this instance, a DO command. Within the DO group, an escape message is sent to the calling program using the SNDPGMMSG command, and the program ends in error. Another example in the same program is the MONMSG command that comes immediately after the RCVF statement. If the RCVF command causes error message CPF0864, the message monitor traps the error and invokes the EXEC portion of that MONMSG -- in this instance, GOTO FINISH.

A program-level message monitor is a MONMSG command placed immediately after the last declare statement in a CL program. In our program example, there is a program-level MONMSG CPF9999 EXEC(GOTO RSND_LOOP). This MONMSG handles any unexpected error, since all errors that are unmonitored at the command level eventually cause a function check. What happens if an error occurs on a command and there is no command-level MONMSG to trap the error? An error message is issued that then generates function check message CPF9999 -- and if no program-level MONMSG for CPF9999 exists, the program ends in error. In CHGOWNCPP, if the CHGOBJOWN command fails, the unexpected error causes function check message CPF9999; the program-level MONMSG traps this function check, and the EXEC command instructs the program to resume at label RSND_LOOP and process those error messages.

For more information on monitoring messages, see IBM's AS/400 manual Programming: Control Language Programmer's Guide (SC41-8077) or Appendix E of the AS/400 manual Programming: Control Language Reference (SC41-0030).

String Handling. Another fundamental technique program CHGOWNCPP employs is string manipulation. The program demonstrates two forms of string handling -- substring manipulation and concatenation.

The first is the %SST (Substring) function, which returns to the program a portion of a character string. (%SST is a valid abbreviated form of the function %SUBSTRING -- both perform the same job.) The %SST function has three arguments: the name of the variable containing the string, the starting position, and the number of characters in the string to extract. The CL program uses the %SST function in the CHGVAR (Change Variable) command (A in Figure 25.1b) to extract the object name and library name from the &OBJ variable into the &OBJNAM and &OBJLIB variables, respectively. This works because when the command CHGOWN passes the argument &OBJ to the CL program, the variable exists as a 20-character string containing the object name in positions 1 through 10 and the library name in positions 11 through 20.

The second form of string handling in this program is concatenation. The control language interface supports three distinct, built-in concatenation functions:

• *CAT (||): Concatenate -- concatenates two string variables end to end.
• *TCAT (|<): Trim and concatenate -- concatenates two strings after trimming all blanks off the end of the first string.
• *BCAT (|>): Blank insert and concatenate -- concatenates two strings after trimming all blanks off the end of the first string and then adding a single blank character to the end of the first string.

The only limitation is that variables used with concatenation functions must be character variables, because they will be treated as strings for these functions. You must convert any numeric variables to character variables before you can use them in concatenation. To see how these functions work, let's apply them to these variables (where /b designates a blank):

  &VAR1 *CHAR 10 VALUE('John/b/b/b/b/b/b')
  &VAR2 *CHAR 10 VALUE('Doe/b/b/b/b/b/b/b')

The results of each operation are as follows:

  &VAR1 || &VAR2  =  John/b/b/b/b/b/bDoe
  &VAR1 |< &VAR2  =  JohnDoe
  &VAR1 |> &VAR2  =  John Doe

The SNDPGMMSG command (B in Figure 25.1b) uses concatenation to build a string for the MSGDTA (Message Data) parameter. Notice that you can use a combination of constants and program variables to construct a single string during execution. If the variables &ODLBNM, &ODOBNM, and &NEWOWN in the SNDPGMMSG command contain the values MYLIB, MYPROGRAM, and USERNAME, the SNDPGMMSG statement generates the message "Ownership of object MYLIB/MYPROGRAM granted to user USERNAME."

Outfile Processing. The final fundamental technique demonstrated in program CHGOWNCPP is how to use an outfile. You can direct certain OS/400 commands to send output to a database file instead of to a display or printer. The file declared in the DCLF (Declare File) command is QADSPOBJ, the system-supplied file in library QSYS that serves as the externally defined model for the outfile generated by the DSPOBJD command. (Note: To get a list of the model outfiles provided by the system, you can execute the command DSPOBJD QSYS/QA* *FILE.) Because file QADSPOBJ is declared in this program, the program will include the externally defined field descriptions when you compile it, allowing it to recognize and use those field names during execution.

The next step in using an outfile in this program is creating the contents of the outfile using the DSPOBJD command. DSPOBJD uses the object name and type passed from command CHGOWN to create outfile QTEMP/CHGOWN. This file contains the full description of any objects selected. The outfile name is arbitrary; I make a practice of giving an outfile the same name as the command or program that creates it. The program then executes the OVRDBF (Override with Database File) command to specify that the file QTEMP/CHGOWN is to be accessed whenever a reference is made to QADSPOBJ. This works because QTEMP/CHGOWN is created with the same record format and fields as QADSPOBJ. Now when the program reads record format QLIDOBJD in file QADSPOBJ, the actual file it reads will be QTEMP/CHGOWN.

These three fundamental CL techniques give you a good start in building your CL library.
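Putting the three techniques together, here is a skeleton of the outfile-processing pattern CHGOWNCPP uses. This is a simplified sketch, not the actual Figure 25.1b code: the error-resend loop is omitted, and the escape message uses the common CPF9898/QCPFMSG idiom rather than whatever the real program sends.

  PGM        PARM(&obj &objtype &newown)
    DCL      &obj      *CHAR  20
    DCL      &objtype  *CHAR  10
    DCL      &newown   *CHAR  10
    DCL      &objnam   *CHAR  10
    DCL      &objlib   *CHAR  10
    DCLF     FILE(QADSPOBJ)            /* model outfile for DSPOBJD  */
    MONMSG   MSGID(CPF9999) EXEC(GOTO FINISH) /* program-level trap  */

    CHKOBJ   OBJ(&newown) OBJTYPE(*USRPRF)
    MONMSG   MSGID(CPF9801) EXEC(DO)   /* new owner doesn't exist    */
       SNDPGMMSG  MSGID(CPF9898) MSGF(QCPFMSG) MSGTYPE(*ESCAPE) +
                    MSGDTA('User profile' *BCAT &newown *BCAT +
                    'not found')
    ENDDO

    CHGVAR   &objnam  (%SST(&obj  1 10)) /* object name, pos. 1-10   */
    CHGVAR   &objlib  (%SST(&obj 11 10)) /* library, pos. 11-20      */
    DSPOBJD  OBJ(&objlib/&objnam) OBJTYPE(&objtype) +
               OUTPUT(*OUTFILE) OUTFILE(QTEMP/CHGOWN)
    OVRDBF   FILE(QADSPOBJ) TOFILE(QTEMP/CHGOWN)

  RCD_LOOP:
    RCVF                               /* reads format QLIDOBJD      */
    MONMSG   MSGID(CPF0864) EXEC(GOTO FINISH) /* end of file         */
    CHGOBJOWN  OBJ(&ODLBNM/&ODOBNM) OBJTYPE(&ODOBTP) NEWOWN(&newown)
    GOTO     RCD_LOOP

  FINISH:
    DLTOVR   FILE(QADSPOBJ)
  ENDPGM

The skeleton shows the essential shape: check the inputs, build the outfile, override the model file to it, and loop with RCVF until CPF0864 ends the read cycle.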

Classic Program #2: Delete Database Relationships

The second utility is a real timesaver: the "Delete Database Relationships" tool provided by command DLTDBR and CL program DLTDBRCPP. DLTDBR uses the same three fundamental techniques described above and adds a fourth: the IF-THEN clause. Let's take a quick look at the program logic and then discuss the IF-THEN technique.

Program Logic. When you execute command DLTDBR (Figure 25.2a), the command-processing program DLTDBRCPP (Figure 25.2b) is invoked. The DSPDBR (Display Database Relations) command generates an outfile based on the file you specify when you execute the command DLTDBR. The CPP then processes this outfile until message CPF0864 ("End of file") is issued.

For each record in the outfile, the program performs two tests as decision mechanisms for program actions. Both tests check whether or not the record read is a reference to a physical file (&WHRTYP = &PFTYPE). The first test (A in Figure 25.2b) determines whether dependencies exist for this physical file; &WHNO represents the total number of dependencies. If &WHNO is equal to zero, there are no dependencies for this file, and the program sends a message (using the SNDPGMMSG command) to that effect. The second test (B) checks whether &WHNO is greater than zero. If it is, the record represents a dependent file, and you can delete the file named in variables &WHRELI (dependent file library) and &WHREFI (dependent file name) with the DLTF (Delete File) command. If the file is not a physical file, the program takes no action for that record; it just reads the next record. The GOTO RCD_LOOP command sends control to the RCD_LOOP label to read the next record.

When the DLTF is successful, the program sends a completion message to the calling program's message queue. If the DLTF command fails, the error message causes a function check, and the program-level message monitor directs the program to resume at the RSND_LOOP label. As in CHGOWNCPP, a program-level MONMSG handles unexpected errors, and you encounter the RSND_LOOP label only if an unexpected error occurs. After all records have been read, the RCVF command generates error message CPF0864, and the command-level message monitor causes the program to branch to the FINISH label, where the program ends.

The Technique

IF-THEN-ELSE and DO Groups. The IF-THEN clause lets you add decision support to your CL coding via the IF command, which has two parameters: COND (the conditional statement) and THEN (the action to be taken when the condition is satisfied). A simple IF-THEN statement would be

  IF COND(&CODE = 'A') THEN(CHGVAR VAR(&CODE) VALUE('B'))

In this example, if the value of variable &CODE is A, the CHGVAR command changes that value to B. To create code that is easier to read and interpret, it is usually best to omit the use of the keywords COND and THEN. The above example is much clearer when written as

  IF (&CODE = 'A') CHGVAR VAR(&CODE) VALUE('B')

Conditions can also take more complex forms, such as

  IF ((&CODE = 'A' *OR &CODE = 'B') *AND +
      (&NUMBER = 1)) GOTO CODEA

This example demonstrates several conditional tests. The *OR connective requires at least one of the alternatives -- (&CODE = 'A') or (&CODE = 'B') -- to be true to satisfy the first condition. The *AND connective then requires that (&NUMBER = 1) also be true before the THEN clause can be executed. If both conditions are met, the program executes the GOTO command. (For more information about how to use *AND and *OR connectives, see the AS/400 manual Programming: Control Language Reference (SC41-0030) or the Programming: Control Language Programmer's Guide.)

You can also use the IF command to process a DO group. Examine the following statements:

  IF (&CODE = 'A') DO
     CALL PGMA
     CALL PGMB
     CALL PGMC
  ENDDO

If the condition in the IF command is true, the program executes the DO group until it encounters an ENDDO.

The ELSE command provides additional function to the IF command. Examine these statements:

  IF (&CODE = 'A') CALL PGMA
  ELSE CALL PGMB

The program executes the ELSE command if the preceding condition is false. The DO command also works with the ELSE command, as this example shows:

  IF (&CODE = 'A') DO
     CALL PGMA
     CALL PGMB
  ENDDO
  ELSE DO
     CALL PGMD
     CALL PGME
     CALL PGMF
  ENDDO

For more information about IF and ELSE commands, see Chapter 2 of the AS/400 manual Programming: Control Language Programmer's Guide.

Classic Program #3: List Program-File References

The last of the classic CL programs and fundamental techniques, the "Display Program References" tool, provided via the LSTPGMREF (List Program References) command and the LSTPRCPP CL program, brings us face-to-face with one of the most powerful influences on CL programming -- the one and only OPNQRYF (Open Query File) command. As this program demonstrates, this classic technique is one of the richest and most powerful tools available through CL. Let's take a quick look at the program logic for this tool. Then we can take a close look at the OPNQRYF command.

Program Logic. When you execute command LSTPGMREF (Figure 25.3a), the command-processing program LSTPRCPP (Figure 25.3b) is invoked. LSTPRCPP uses the DSPPGMREF (Display Program References) command to generate an outfile based on the value you entered for the PGM parameter. The outfile LSTPGMREF then contains information about the specified programs and the objects they reference. Notice that this program does not use the DCLF statement: there is no need to declare the file format, because the program will not access the file directly. Because a CL program cannot send output to a printer, LSTPRCPP must call a high-level language (HLL) program to print the output.

You will also notice that the program uses the OVRDBF command, which references file QADSPPGM, but the SHARE(*YES) parameter has been added. The OVRDBF is required so the HLL program can find outfile QTEMP/LSTPGMREF. The override must specify SHARE(*YES) to ensure that the HLL program will use the Open Data Path (ODP) created by the OPNQRYF command instead of creating a new ODP and ignoring the work the OPNQRYF has performed. Files used with OPNQRYF require SHARE(*YES).

After the DSPPGMREF command is executed, file LSTPGMREF contains records for program-file references as well as program references to other types of objects. The next step is to build an OPNQRYF selection statement in variable &QRYSLT that selects only *FILE object-type references and optionally selects the particular files named in the FILE parameter. Then the CPP determines the sequence of records desired (based on the value entered for the OPT parameter on the LSTPGMREF command) and uses the OPNQRYF command to select the records and create access paths that will allow the HLL program to read the records in the desired sequence. The outfile will appear to contain only the selected records, and they will appear to be sorted in the desired sequence. The CL program then calls HLL program LSTPRRPG to print the selected records (I haven't provided code here -- you will need to build your own version based on your desired output format).

The Technique

The OPNQRYF Command. Without doubt, one of the more powerful commands available to CL programmers is the OPNQRYF command. OPNQRYF uses the same system database query interface SQL uses on the AS/400. The command provides many functions, including selecting records and establishing keyed access paths without using an actual logical file or DDS. These two basic functions are the bread-and-butter classic techniques demonstrated in program LSTPRCPP.

Record selection is accomplished with OPNQRYF's QRYSLT parameter. If you know the exact record selection criteria when you write the program, filling in the QRYSLT parameter is easy, and the selection string will be compiled with the program. In LSTPRCPP, the requirement to include only references to physical files is a given, so you can use the statement

  CHGVAR VAR(&QRYSLT) VALUE('WHOBJT = "F"')

to initially provide a value for &QRYSLT to satisfy that requirement. But the real strength of OPNQRYF's record selection capability is that you can construct the QRYSLT parameter at runtime to match the particular user requirements specified during execution. Program LSTPRCPP demonstrates both the compile-time and runtime capabilities of OPNQRYF.

When you write program LSTPRCPP, what are the possible values for the FILE parameter on command LSTPGMREF?

• You may specify a value of *ALL. If you do, you should not add any selection criteria to the QRYSLT parameter. The &QRYSLT value would be 'WHOBJT = "F"'

• You may specify a generic value, such as IC* or AP??F*. If you enter a generic value, the CL program must determine that &FILE contains a generic name and then use OPNQRYF's %WLDCRD (wildcard) function to build the appropriate QRYSLT selection criteria. The %WLDCRD function lets you select a group of similarly named objects by specifying an argument containing a wildcard (e.g., * or ?). For instance, if you wanted to select all files beginning with the characters IC, you would use the argument IC*. An example of the &QRYSLT variable for this generic selection would be 'WHOBJT = "F" *AND WHFNAM = %WLDCRD("IC*")'

• You may specify an actual file name. The &FILE value is unknown until execution time, so the code must allow this selection criteria to be specified dynamically: the CL program must first determine that fact and then simply use the compare function in OPNQRYF to build the value for the QRYSLT parameter. An example for this &QRYSLT variable would be 'WHOBJT = "F" *AND WHFNAM = "FILE_NAME"'

Examining the program, you will see that it performs a series of tests on the variable &FILE to determine how to build the QRYSLT parameter; LSTPRCPP uses IF tests to construct the selection statement. If *ALL is the value for &FILE, all other IF tests are bypassed. If the program QCLSCAN finds the character * in the string &FILE, it uses the %WLDCRD function to build the appropriate QRYSLT parameter. If the program does not find *ALL and does not find a * in the name, the value of &FILE is assumed to represent an actual file name, and the program compares the value of &FILE to the field WHFNAM for record selection. Therefore, the power of the QRYSLT parameter is in the hands of those who can successfully build the selection value based on execution-time selections.
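Here is a condensed sketch of the runtime idea -- not the book's actual Figure 25.3b code. It assumes the DSPPGMREF outfile has already been created in QTEMP/LSTPGMREF, and it detects a generic name with a simple scan loop rather than the QCLSCAN call the real program uses:

    DCL      &file     *CHAR  10
    DCL      &qryslt   *CHAR  200
    DCL      &generic  *LGL   VALUE('0')
    DCL      &i        *DEC   (2 0) VALUE(1)

    CHGVAR   &qryslt  ('WHOBJT = "F"')       /* physical files only  */
    IF       (&file *EQ '*ALL') GOTO RUNQRY  /* no further selection */

  SCAN:
    IF       (%SST(&file &i 1) *EQ '*') CHGVAR &generic ('1')
    CHGVAR   &i (&i + 1)
    IF       (&i *LE 10) GOTO SCAN

    IF       (&generic) CHGVAR &qryslt (&qryslt *BCAT +
               '*AND WHFNAM = %WLDCRD("' *CAT &file *TCAT '")')
    ELSE     CHGVAR &qryslt (&qryslt *BCAT +
               '*AND WHFNAM = "' *CAT &file *TCAT '"')

  RUNQRY:
    OVRDBF   FILE(QADSPPGM) TOFILE(QTEMP/LSTPGMREF) SHARE(*YES)
    OPNQRYF  FILE((QTEMP/LSTPGMREF)) QRYSLT(&qryslt)
    CALL     PGM(LSTPRRPG)             /* HLL program prints the list */
    CLOF     OPNID(LSTPGMREF)
    DLTOVR   FILE(QADSPPGM)

Note how *TCAT trims the trailing blanks from &file before the closing quotation mark is appended -- exactly the kind of concatenation work the shorthand operators were made for.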

The second basic bread-and-butter technique is using OPNQRYF to build a key sequence without requiring additional DDS. Program LSTPRCPP tests the value of &OPT to determine whether the requester wants the records listed in *FILE (file library/file name) or *PGM (program library/program name) sequence. When &OPT is equal to &FILESEQ (which was declared with the value F), the OPNQRYF statement sequences the file using the field order WHLNAM (file library), WHFNAM (file name), WHLIB (program library), WHPNAM (program name). When &OPT equals &PGMSEQ (declared with the value P), the key fields are in the order WHLIB, WHPNAM, WHLNAM, WHFNAM. The appropriate OPNQRYF statement is executed based on the result of these tests (see A in Figure 25.3b). No DDS is required, and the HLL program called to process the opened file can provide internal level breaks based on the option selected.

For more information concerning the use of the OPNQRYF command with database files, refer to IBM's Programming: Control Language Reference or Programming: Data Base Guide (SC41-9659).

Classic CL programs and techniques are a part of the S/38 and now AS/400 heritage, but they're not simply oldies to be looked at and forgotten. Studying these programs and mastering these techniques will help you hone your skills and write some classic CL code of your own.

Chapter 26 - Processing Database Files with CL

Once you've learned to write basic CL programs, you'll probably try to find more ways to use CL as part of your iSeries applications. In contrast to operations languages such as a mainframe's Job Control Language (JCL), which serves primarily to control steps, sorts, and parameters in a job stream, CL offers more: CL is more procedural and lets you extend the operating-system command set with your own user-written commands. In this article, we examine one of those fundamental differences of CL -- its ability to process database files. You'll learn how to declare a file, extract the field definitions from a file, read a file sequentially, and position a file by key to read a specific record. With this overview, you should be able to begin processing database files in your next CL program.

Why Use CL to Process Database Files?

Before we talk about how to process database files in CL, let's address the question you're probably asking yourself: "Why would I want to read records in CL instead of in an HLL program?" In most cases, you probably wouldn't. But sometimes, such as when you want to use data from a database file as a substitute value in a CL command, reading records in CL is a sensible programming solution.

Say you want to perform a DspObjD (Display Object Description) command to an output file and then read the records from that output file and process each object using another CL command, such as DspObjAut (Display Object Authority) or MovObj (Move Object). Because executing a CL command is much easier and clearer from a CL program than from an HLL program, you'd probably prefer to write a single CL program that can handle the entire task. We'll show you just such a program a little later.

I DCLare!

Perhaps the most crucial point in understanding how CL programs process database files is knowing when you need to declare a file in the program. The rule is simple: If your CL program uses the RcvF (Receive File) command to read a file, you must use the DclF (Declare File) command to declare that file to your program. DclF tells the compiler to retrieve the file and field descriptions during compilation and make the field definitions available to the program. DclF supports both database file (read-only) and display file (both read and write) processing. The command has only one required parameter: the file name. To declare a file, you need only code in your program either

  DclF File(YourFile)

or

  DclF File(YourLib/YourFile)

When using the DclF command, you must remember three implementation rules. First, the DclF statement must come after the Pgm (Program) command in your program and must precede all executable commands (the Pgm and Dcl, or Declare CL Variable, commands are not executable). Second, you can declare only one file -- either a database file or a display file -- in any CL program. This doesn't mean your program can't operate on other files -- for example, using the CpyF (Copy File), OvrDbF (Override with Database File), or OpnQryF (Open Query File) command. It can; however, you can use the RcvF command to process only the file named in the DclF statement. The third rule is that the declared file must exist when you compile the CL program. If you don't qualify the file name, the compiler must be able to find the file in the current library list during compilation.

Extracting Field Definitions

When you declare a file to a CL program, the program can access the fields associated with that file. When the file is externally described, the compiler uses the external record-format definition associated with the file object to identify each field and its data type and length. Fields in a declared file automatically become available to the program as CL variables -- there's no need to declare the variables separately. All CL variables, including those implicitly defined using the DclF command and the file field definitions, require the & prefix when referenced in a program.

Figure 1 shows the DDS for sample file TestPF. To declare this file in a program, you code

  DclF TestPF

The system then makes the following variables available to the program:

  Variable   Type
  &Code      *Char 1
  &Number    *Dec 5,0
  &Field     *Char 30

Your program can then use these variables despite the fact that they're not explicitly declared. For instance, you could include in the program the statements

  If ((&Code *Eq 'A') *And +
      (&Number *GT 10))    +
    ChgVar &Code ('B')

Notice that when you refer to the field in the program, you must prefix the field name with the ampersand character (&).

What about program-described files -- that is, files with no external data definition? Suppose you create the following file using the CrtPF (Create Physical File) command

  CrtPF File(DiskFile) RcdLen(258)

and then you declare file DiskFile in your CL program. As it does with externally defined files, the CL compiler automatically provides access to program-described files. Because there's no externally defined record format, however, the compiler recognizes each record in the file as consisting of a single field. That field is always named &FileName, where FileName is the name of the file. For instance, if you code

  DclF DiskFile

your CL program recognizes one field, &DiskFile, with a length equal to DiskFile's record length. You can then extract the subfields with which you need to work. In CL, you extract the fields using the built-in function %Sst (or %Substring). The statements

  ChgVar &Field1 (%Sst(&DiskFile  1 10))
  ChgVar &Field2 (%Sst(&DiskFile 11 25))
  ChgVar &Field3 (%Sst(&DiskFile 50  1))

extract three subfields from &DiskFile's single field.

When using program-described files, you must remember two rules. First, the CL substring operation supports only character data, and you must ensure that the variable's data type matches that of the data you're extracting. If a certain range of positions holds numeric data, you must retrieve that data as a character field and then move the data into a numeric field. You can use the ChgVar (Change Variable) command to move character data to a numeric field. When you use this technique, be sure that the substring contains only numeric data. If it doesn't, the system will send your program a distinctly unfriendly error message, and your program will end abnormally.

Assume that positions 251-258 in file DiskFile hold a numeric date field eight digits long. To extract that date into a numeric CL variable requires the following Dcl statements and operations:

  Dcl  &DateChar  *Char  (8)
  Dcl  &DateNbr   *Dec   (8 0)
    .
    .
  ChgVar  &DateChar  (%Sst(&DiskFile 251 8))
  ChgVar  &DateNbr   (&DateChar)

The first ChgVar command extracts the substring from the single &DiskFile field and places it in variable &DateChar. The second ChgVar places the contents of field &DateChar into numeric field &DateNbr.

Second, you must extract the subfields every time you read a record from the file. Unlike HLLs, CL has no global data-structure concept that can be applied for every read.

Reading the Database File

Reading a database file in CL is straightforward. The only complication is that you need a program loop to process records one at a time from the file, and CL doesn't directly support structured operations such as Do-While or Do-Until. However, CL does offer the primitive GoTo command and lets you put labels in your code. Using these two capabilities, you can write the necessary loop.

Figure 2 shows part of a program to process file TestPF. The first time through the loop, the RcvF command opens the file and processes the first record. With each read cycle, the program uses a manual loop to return to the ReadBegin label and process the next record in the file. The loop continues until the RcvF statement causes the system to send error message "CPF0864 End of file detected." When the MonMsg (Monitor Message) command traps this message, control skips to ReadEnd, thus ending the loop.

Rules for Database File Processing in CL

• If your CL program uses the RcvF (Receive File) command to read a database file, you must use the DclF (Declare File) command to declare that file to your program.
• The DclF statement must appear after the Pgm command and precede all executable commands (Pgm and Dcl are not executable commands).
• You can declare only one file -- either a database file or a display file -- in any CL program.
• The declared file must exist when you compile your CL program.
• The CL substring operation supports only character data.
• When using program-described files, you must extract the subfields every time you read a record from the file.

Unlike RPG, CL doesn't let you reposition the file for additional processing after the program receives an end-of-file message. Once the system issues message CPF0864, any ensuing RcvF command simply elicits another end-of-file message. (Although you can execute an OvrDbF command containing a Position parameter after your program receives the message, it won't help, because overrides take effect only when a file is opened.) Two possible workarounds to this potential problem exist, but each has its restriction.

You can use the first workaround if, and only if, you can ensure that the data will in no way change while you're reading the file. The technique involves use of the RtvMbrD (Retrieve Member Description) command. Using this command's NbrCurRcd (CL variable for NBRCURRCD) parameter, you can retrieve into a program variable the number of records currently in the file. Then, in your loop to read records, you can use another variable to count the number of records read, comparing it with the number of records currently in the file. When the two numbers are equal, the program has read the last record in the file. Although the program has read the last record, the end-of-file condition is not yet set: the system sets this condition and issues the CPF0864 message indicating end-of-file only after attempting to read a record beyond the last record. This technique therefore gives you a way to avoid the end-of-file condition. You can then use the PosDbF (Position Database File) command to set the file cursor back to the beginning of the file -- and you can read the file again! Remember, use this technique only when you can ensure that the data in the file will remain static for the duration of the read cycles.
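Here is a minimal sketch of this first workaround, using file TestPF from the earlier examples. The second-pass logic (variable &Pass) is an illustrative addition to keep the sketch from looping forever; the real program would organize its passes to suit the application.

  Pgm
    DclF     File(TestPF)
    Dcl      &NbrRcds  *Dec  (10 0)
    Dcl      &RcdCnt   *Dec  (10 0) Value(0)
    Dcl      &Pass     *Dec  ( 1 0) Value(1)

    RtvMbrD  File(TestPF) NbrCurRcd(&NbrRcds) /* records in member */

  ReadBegin:
    RcvF
    MonMsg   MsgId(CPF0864) Exec(GoTo ReadEnd) /* safety net        */
    ChgVar   &RcdCnt (&RcdCnt + 1)
    /* ...process the record here...                                */
    If       (&RcdCnt *LT &NbrRcds) GoTo ReadBegin

    /* Last record read; end-of-file is NOT yet set. Reposition.    */
    If       (&Pass *GE 2) GoTo ReadEnd
    ChgVar   &Pass   (&Pass + 1)
    ChgVar   &RcdCnt (0)
    PosDbF   OpnId(TestPF) Position(*Start)
    GoTo     ReadBegin

  ReadEnd:
    Return
  EndPgm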
and you can read the file again! Remember. any ensuing RcvF command simply elicits another end-of-file message. you can ensure that the data in the file will remain static for the duration of the read cycles. To retrieve records by key. but each has its restriction. sort of. the system retrieves the record that matches the largest previous value. The second circumvention is perhaps even trickier because it requires a little application design planning. File Positioning One well-kept secret of CL file processing is that you can use it to retrieve records by key . Consider a simple CL program that does nothing more than perform a loop that reads all the records in a database file and exits when the end-of-file condition occurs (i.receives an end-of-file message. . Using this command's NbrCurRcd (CL variable for NBRCURRCD) parameter. The OvrDbF command's Position parameter lets you specify the position from which to start retrieving database file records. and each time the system detects end-of-file. When possible. Simply specify *Start for the Position parameter. the program has read the last record in the file. Then. *KeyBE (key-before or equal) — The first record retrieved is the one identified by the search values. you supply four search values in the Position parameter: a key-search type. Two possible workarounds to this potential problem exist. If no record matches those values. or to a particular record using a key. and the key value. the end-of-file condition is not yet set. Notice that with this technique. When the two numbers are equal. comparing it with the number of records currently in the file. . you can use another variable to count the number of records read. The technique involves use of the RtvMbrD (Retrieve Member Description) command. . in your loop to read records.e. and only if. The system sets this condition and issues the CPF0864 message indicating end-of-file only after attempting to read a record beyond the last record. thereby reading the file again. the process restarts. to a particular record using relative record number. You can position the file to *Start or *End (you can also use the PosDbF command to position to *Start or *End). This approach is the clearest and least errorprone method. this technique gives you a way to avoid the end-of-file condition. you can retrieve into a program variable the number of records currently in the file. as appropriate). if you need to read a database file multiple times. the program should accept the &Stop parameter. we advise you to construct your application in such a way that you can call multiple CL programs (or one program multiple times. If you replace the statement MonMsg (CPF0864) Exec(GoTo End) with MonMsg If ChgVar TfrCtl EndDo (CPF0864) Exec(Do) (&Stop *Eq 'Y') GoTo End &Stop ('Y') Pgm(YourPgm) Parm(&Stop) where YourPgm is the name of the program containing the command. you must add code to the program to prevent an infinite loop. Each of these programs (or instances of a program) can then process the file once. You also must add code to ensure that only those portions of the code that you want to be executed are executed. use this technique only when you can ensure that the data will in no way change while you're reading the file. the name of the record format that contains the key fields. the number of key fields. the system starts the program over again. You can then use the PosDbF (Position Database File) command to set the file cursor back to the beginning of the file. Therefore. 
File Positioning

One well-kept secret of CL file processing is that you can use it to retrieve records by key -- sort of. The OvrDbF command's Position parameter lets you specify the position from which to start retrieving database file records. You can position the file to *Start or *End (you can also use the PosDbF command to position to *Start or *End), to a particular record using relative record number, or to a particular record using a key.

To retrieve records by key, you supply four search values in the Position parameter: a key-search type, the number of key fields, the name of the record format that contains the key fields, and the key value. The key-search type determines where in the file the read-by-key begins by specifying which record the system is to read first. You can specify one of the following five key-search types:

• *KeyB (key-before) -- The first record retrieved is the one that immediately precedes the record identified by the other Position parameter search values.
• *KeyBE (key-before or equal) -- The first record retrieved is the one identified by the search values. If no record matches those values, the system retrieves the record that matches the largest previous value.
• *Key (key-equal) -- The first record retrieved is the one identified by the search values.
• *KeyAE (key-after or equal) -- The first record retrieved is the one identified by the search values. If no record matches those values, the system retrieves the record with the next highest key value.
• *KeyA (key-after) -- The first record retrieved is the one that immediately follows the record identified by the search values.

As a simple example, let's assume that file TestPF has one key field, Code, and contains the following records:

    Code    Number    Field
    A       1         Text in Record 1
    B       100       Text in Record 2
    C       50        Text in Record 3
    E       27        Text in Record 4

The statements

    OvrDbF  Position(*Key 1 TestPFR 'B')
    RcvF    RcdFmt(TestPFR)

specify that the record to be retrieved has one key field as defined in DDS record format TestPFR (Figure 1) and that the key field contains the value B. These statements will retrieve the second record (Code = B) from file TestPF. If the key-search type were *KeyB instead of *Key, the same statements would cause the RcvF command to retrieve the first record (Code = A). Key-search types *KeyBE, *KeyAE, and *KeyA would cause the RcvF statement to retrieve records 2 (Code = B), 2 (Code = B), and 3 (Code = C), respectively.

Now let's suppose that the program contains these statements:

    OvrDbF  Position(&KeySearch 1 TestPFR 'D')
    RcvF    RcdFmt(TestPFR)

Here's how each &KeySearch value affects the RcvF results:

• *KeyB -- returns record 3 (Code = C)
• *KeyBE -- returns record 3 (Code = C)
• *Key -- causes an exception error because no match is found
• *KeyAE -- returns record 4 (Code = E)
• *KeyA -- returns record 4 (Code = E)

Using the Position parameter with a key consisting of more than one field gets tricky, especially when one of the key fields is a packed numeric field. You must code the key string to match the key's definition in the file, and if any key field is other than a character or signed-decimal field, you must code the key string in hexadecimal form. For instance, suppose the key consists of two fields: a one-character field and a five-digit packed numeric field with two decimal positions. You must code the key value in the Position parameter as a hex string equal in length to the length of the two key fields together (i.e., 1 + 3, because a packed 5,2 field occupies three positions). For example, the value

    Position(*Key 2 YourFormat X'C323519F')

tells the system to retrieve the record that contains values for the character and packed-numeric key fields of C and 235.19, respectively. (To see why: X'C3' is the EBCDIC code for the character C, and the packed value 235.19 is stored as the digits 23519 followed by the sign nibble F, giving X'23519F'.)

As we've mentioned, a CL program can position the database file and then call an HLL program to process the records. For example, the CL program can use OvrDbF's Position parameter to set the starting point in a file and then call an RPG program that issues a read or read-previous to start reading records at that position. (If your CL program calls an HLL program that issues a read-previous operation, the called program will retrieve the preceding record.)

Having this capability doesn't necessarily mean you should use it, though. One of our fundamental rules of programming is this: Make your program explicit and its purpose clear. Thus, we avoid using the OvrDbF or PosDbF command to position a file before we process it with an HLL program when we can more explicitly and clearly position the file within the HLL program itself. There's just no good reason to hide the positioning function in a CL program that may not clearly belong with the program that actually reads the file. However, at times, when you process a file in a CL program, positioning the file therein can simplify the solution.

What About Record Output?

Just about the time you get the hang of reading database files, you suddenly realize that your CL program can't perform any other form of I/O with them. In other words, when you process a file in a CL program, CL can only read database files. CL provides no direct support for updating, writing, or printing records in a database file. Some programmers use command StrQMQry (Start Query Management Query) to execute a query management query or use the RunSQLStm (Run SQL Statement) command to effect one of these operations from within CL. To use these techniques, you must first create the query management query or enter the SQL source statements to execute with RunSQLStm.

A Useful Example

Now that you know how to process database files in a CL program, let's look at a practical example. Security administrators would likely find useful a program that prints the object authorities for selected objects in one or more libraries. Figure 3A shows the command definition for the PrtObjAut (Print Object Authorities) command, which does just that. Figure 3B shows PrtObjAut's command processing program (CPP), PrtObjAut1.

In the CPP, the DspObjD command creates output file ObjList, whose file description includes record format QLiDObjD and fields from the QADspObj file description. Notice that the CPP declares file QADspObj in the DclF statement. (Remember: You can declare only one file in the program.) This IBM-supplied file resides in library QSys and is a model for the output file that the DspObjD command creates. In other words, when you use DspObjD to create an output file, that output file is modeled on QADspObj's record format and associated fields. Because we declare file QADspObj in the program, and file ObjList did not exist at compile time, the CPP uses the OvrDbF command to override QADspObj to newly created file ObjList in library QTemp. When the RcvF command reads record format QLiDObjD, the override causes the RcvF to read records from file ObjList; because we declare file QADspObj in the program, that's the file we must process. As it reads each record, the CL program substitutes data from the appropriate fields into the DspObjAut command and prints a separate authority report for each object represented in the file.
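Figures 3A and 3B aren't reproduced here, but the heart of such a CPP might look like the following sketch. This is an assumption-laden outline, not the book's actual code: the &Lib parameter is simplified from the real command, error handling is omitted, and ODLBNM, ODOBNM, and ODOBTP are assumed to be the standard library, object, and type fields of the QADspObj outfile format.

    Pgm        Parm(&Lib)
    Dcl        &Lib  *Char 10
    DclF       File(QADspObj)                     /* Model for DspObjD's outfile */
    DspObjD    Obj(&Lib/*All) ObjType(*All) +
                 Output(*OutFile) OutFile(QTemp/ObjList)
    OvrDbF     File(QADspObj) ToFile(QTemp/ObjList)
    ReadLoop:  RcvF                               /* Override reads from ObjList */
    MonMsg     MsgID(CPF0864) Exec(GoTo CmdLbl(Done))
    DspObjAut  Obj(&ODLBNM/&ODOBNM) ObjType(&ODOBTP) Output(*Print)
    GoTo       CmdLbl(ReadLoop)
    Done:      DltOvr File(QADspObj)
    EndPgm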
Processing database files in CL is a handy ability that, at times, may be just the solution you need. We're sure you'll find uses for the CL techniques you've learned in this article.

Chapter 27: CL Programs and Display Files

In Chapter 26, I talked about processing database files using a CL program. I discussed declaring a file, extracting field definitions (both externally described and program-described), reading database files, and processing database records. In this chapter, I want to examine how CL programs work with display files.

CL is an appropriate choice for certain situations that require displays. CL works well with display files for menus because CL is the language used to override files, modify a user's library list, submit jobs, and check authorities -- all common tasks in a menu environment. CL is also a popular choice for implementing a friendly interface at which users can enter parameters for commands or programs that print reports or execute inquiries. It is much easier to build and execute complex CL commands in CL than it is in other languages, especially RPG/400 and COBOL/400. When you want users to enter substitution values for use in an arcane command such as OPNQRYF (Open Query File), it is almost imperative that you let them enter selections in a format they understand (e.g., a prompt screen) and then build the command string in CL. For example, a CL program can present an easily understood panel to prompt the user for a beginning and ending date; the program can then format and substitute those dates into a STRQMQRY (Start Query Management Query) command to produce a report covering a certain time period.

CL Display File Basics

As with a database file, you must use the DCLF (Declare File) command to tell your CL program which display file you want to work with (for more details about declaring a file, see Chapter 26). Declaring the file lets the compiler locate it and retrieve the field and format definitions. Figure 27.2 shows part of a compiler listing for a CL program that declares USERMENUF. Notice that the field and format definitions immediately follow the DCLF statement on the compiler listing.

The default for DCLF's RCDFMT parameter, *ALL, tells the compiler to identify and retrieve the descriptions for all record formats in the file. If your display file has many formats and you plan to use only one or a few of them, you can specify up to 50 different record formats in the RCDFMT parameter instead of using the *ALL default value. Doing so reduces the size of the compiled program object by eliminating unnecessary definitions.

After you declare a display file, you can output record formats to a display device using the SNDF (Send File) command, read formats from the display device using the RCVF (Receive File) command, or perform both functions with the SNDRCVF (Send/Receive File) command. These commands parallel RPG/400's WRITE, READ, and EXFMT opcodes, respectively. For instance, to present a record format named PROMPT on the display and read the user's input, you could code your CL as

    SNDF    RCDFMT(PROMPT)
    RCVF    RCDFMT(PROMPT)

or as

    SNDRCVF RCDFMT(PROMPT)

If there is only one format in the file, you can use RCDFMT's default value, *FILE, and then simply use the SNDF, RCVF, or SNDRCVF command without coding a parameter. To send more than one format to the screen at once (e.g., a standard header format, a function key format, and an input-capable field), you use a combination of the SNDF and SNDRCVF commands as you would use a combination of WRITE and EXFMT in RPG/400:

    SNDF    RCDFMT(HEADER)
    SNDF    RCDFMT(FKEYS)
    SNDRCVF RCDFMT(DETAIL)

Notice that the RCDFMT parameter value in each statement specifies the particular format for the operation.

CL Display File Examples

Let's look at an example of how to use CL with a display file for a menu and a prompt screen. Figure 27.1 shows the DDS for a sample display file, USERMENUF. From the DDS, you can see that record format MENU displays the list of menu options, record format PROMPT01 is a panel that lets the user enter selection values for Batch Report 1, and record formats MSGSFL and MSGCTL control the IBM-supplied message subfile function that displays messages sent to the program message queue.

Figure 27.3 shows a menu based on the DDS in Figure 27.1, and Figure 27.4 shows CL program USERMENU, the program driver for this menu. As you can see, USERMENU sets up work variable &pgmq, displays the menu, and then, depending on user input, either executes the code that corresponds to the menu option selected or exits the menu.
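Figure 27.4 isn't reproduced here, but a skeletal driver for such a menu might look like this sketch. The &OPTION field name and the &IN03 indicator (F3=Exit) are assumptions about the DDS, not definitions from the book:

    PGM
    DCLF       FILE(USERMENUF)
    DISPLAY:   SNDRCVF RCDFMT(MENU)        /* Show the menu; wait for input */
    IF         COND(&IN03) THEN(GOTO CMDLBL(EXIT))
    IF         COND(&OPTION *EQ '1') THEN(DO)
       /* ... option 1: e.g., display PROMPT01 and submit Batch Report 1 ... */
    ENDDO
    /* ... other options ... */
    GOTO       CMDLBL(DISPLAY)
    EXIT:      RETURN
    ENDPGM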

The message subfile is a special form of subfile whose definition includes some predefined variables and special keywords. The message subfile record format is format MSGSFL (B in Figure 27.1). It uses the standard subfile keywords (e.g., SFLSIZ, SFLPAG, SFLDSP, SFLINZ) along with the SFLPGMQ keyword. This record format establishes the message subfile for this display file with a SFLSIZ value of 10 and a SFLPAG value of 1; in other words, the message subfile will hold up to 10 messages and will display one message on each page. If more than one page of messages exists, the user can scroll through the pages by pressing Page up and Page down. You can alter the SFLSIZ and SFLPAG values to display as many messages as you like and have room for on the screen.

The keyword SFLMSGRCD(23) tells the display file to display the messages in this subfile beginning on line 23 of the panel. You can specify any line number for this keyword that is valid for the panel you are displaying. The associated SFLMSGKEY keyword and the IBM-supplied variable MSGKEY support the task of retrieving a message from the program message queue associated with the SFLPGMQ keyword (i.e., the message queue named in variable PGMQ) and displaying the message in the form of a subfile.

In the DDS, I specified which program message queue to associate with the message subfile. The CL program assigns the value USERMENU to variable &pgmq (A in Figure 27.4), thus specifying that the program message queue to be displayed is the one associated with program USERMENU; by changing the value of variable &pgmq, you could display a different program's message queue. You can think of the message subfile as simply a mechanism by which you can view the messages on the program message queue.

You might be asking, "What does program USERMENU have to do to fill the message subfile?" The answer: Absolutely nothing! This fact often confuses programmers new to message subfiles because they can't figure out how to load the subfile. In other words, the PGMQ and MSGKEY variables coded in the MSGSFL record format cause all messages to be retrieved from the program message queue and presented in the subfile.

At B in Figure 27.4, the RMVMSG command clears *ALL messages from the current program queue (i.e., queue USERMENU). Clearing the queue at the beginning of the program ensures that old messages from a previous invocation do not remain in the queue.

Record format MSGCTL, the next record format in the DDS, controls the message subfile. (I discuss this record format in more detail in a moment.) Indicator 40 controls the SFLINZ and SFLEND keywords (C in Figure 27.1) to initialize the subfile before loading it and to display the appropriate + or blank to let the user know whether more subfile records exist beyond those currently displayed. (You can specify SFLEND(*MORE) if you prefer to have the message subfile use the "More..." and "Bottom" subfile controls after the last record, but be sure your screen has a blank line at the bottom so that these subfile controls can be displayed.)

The sample menu's menu options, option field, and function key description are all part of the MENU record format in the DDS. To display these fields to the user and allow input, program USERMENU uses the SNDRCVF command (C in Figure 27.4). Should the user enter an invalid menu option, select an option (s)he is not authorized to, or encounter an error, the program displays the appropriate message at the bottom of the screen by displaying message subfile record format MSGCTL (D). Immediately after D in Figure 27.4, you can see that I change indicator 40 (variable &in40) to &@on and then output format MSGCTL using the SNDF command. When the program outputs the MSGCTL format, the message subfile is initialized and displayed. That's all it takes. Because of the value of the SFLMSGRCD keyword in the MSGSFL format, the message will be displayed on line 23. The user can move the cursor onto a message and press the Help key to get secondary text and can scroll through all the error messages in the subfile.
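To make the send-and-display step concrete, a minimal sketch follows. The message text is illustrative, and &@ON stands for the program's constant for '1', per the text (SNDPGMMSG is covered in Chapter 30):

    SNDPGMMSG  MSG('Option not valid') TOPGMQ(*SAME)  /* Lands on USERMENU's queue  */
    CHGVAR     VAR(&IN40) VALUE('1')                  /* &@ON: condition SFLINZ/SFLEND */
    SNDF       RCDFMT(MSGCTL)                         /* Initialize and display subfile */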
Figure 27.5 shows a completion message at the bottom of the sample menu.

Figure 27.6 shows a prompt screen a user might receive to specify selections for a menu option that submits a report program to batch. The user keys the appropriate values and presses Enter to submit the report. If the program encounters an error when validating the values, the display file uses an error subfile to display the error message at the bottom of the screen, like the error message in Figure 27.7.

An error subfile provides a different function than a message subfile. You use the ERRSFL keyword in the DDS (A in Figure 27.1) to indicate an error subfile. The error subfile automatically presents any error messages generated as a result of DDS message or validity-checking keywords (e.g., ERRMSG, SFLMSG, CHECK, VALUES). The purpose of the error subfile is to group error messages generated by these keywords for a particular record format, not to view messages on the program message queue. (For more information about error subfiles, see the Data Description Specifications Reference, SC41-9620.)

Considerations

The drawbacks to using CL for display file processing are CL's limited database file I/O capabilities and its lack of support for user-written subfiles. As I explained in Chapter 26, CL can only read database files. The fact that you cannot write or update database file records greatly reduces CL's usefulness in an interactive environment. The lack of support for user-written subfiles also limits its usefulness in applications that require user interaction.

But in many common situations, CL's strengths more than offset these limitations. CL's command processing, message handling, and string manipulation capabilities make it a good choice for menus, prompt screens, and other nondatabase-related screen functions. While not always appropriate, for many basic interactive applications CL offers a simple alternative to a high-level language for display file processing. With this knowledge under your belt, you can choose the best and easiest language for applications that use display files.

Chapter 28: OPNQRYF Fundamentals

In this chapter, I give you the foundation you need to use the OPNQRYF (Open Query File) command, and then I leave you to discover the rewards as you apply this knowledge to your own applications.

OPNQRYF's basic function is to open one or more database files and present records in response to a query request. In essence, OPNQRYF works as a filter that determines the way your programs see the file or files being opened. Once opened, the resulting file or files appear to high-level language (HLL) programs as a single database file containing only the records that satisfy query selection criteria.

You can use the OPNQRYF command to perform a variety of database functions: joining records from more than one file, grouping records, selecting records before or after grouping, sorting records by one or more key fields, performing aggregate calculations such as sum and average, and calculating new fields using numeric or character string operations.

One crucial point to remember when using OPNQRYF is that you must use the SHARE(*YES) file attribute for each file opened by the OPNQRYF command. When you specify SHARE(*YES), subsequent opens of the same file will share the original open data path and thus see the file as presented by the OPNQRYF process. If OPNQRYF opens a file using the SHARE(*NO) attribute, the next open of the file will not use the open data path created by the OPNQRYF command, but instead will perform another full open of the file. Don't assume the file description already has the SHARE(*YES) value when you use the OPNQRYF command. Instead, always use the OVRDBF (Override with Database File) command just before executing OPNQRYF to explicitly specify SHARE(*YES) for each file to be opened. Be aware that the OPNQRYF command ignores any parameters on the OVRDBF command other than TOFILE, MBR, SEQONLY, INHWRT, LVLCHK, WAITRCD, and SHARE.
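As a concrete sketch of that advice, a typical OPNQRYF sequence might look like this (the file name CUSTMAST and program name CUSTRPT are hypothetical):

    OVRDBF     FILE(CUSTMAST) SHARE(*YES)
    OPNQRYF    FILE((CUSTMAST)) QRYSLT('CRDFLG *EQ "Y"')
    CALL       PGM(CUSTRPT)       /* HLL program opens CUSTMAST and sees only */
                                  /* the records selected by the query        */
    CLOF       OPNID(CUSTMAST)    /* Close the query open data path           */
    DLTOVR     FILE(CUSTMAST)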

The Command

Figure 28.1 shows the entire OPNQRYF command. If you compare OPNQRYF to SQL (page 351), you'll see that the OPNQRYF command is basically a complicated SQL front end that offers a few extra parameters. There are some strong, but awkwardly structured, parallels between OPNQRYF parameters and specific SQL concepts: the file and format specifications parallel the more basic functions of the SQL SELECT and FROM statements, the query selection expression parallels SQL's WHERE statement, the key field specifications parallel SQL's ORDER BY statement, and the grouping field names expression parallels the GROUP BY statement.

OPNQRYF has five major groups of parameters (specifications for file, format, key field, join field, and mapped field) and a few extra parameters not in a group. Using the OPNQRYF command is easier once you master the parameter groups.

Start with a File and a Format

For every query, there must be data, and for data, there must be a file. OPNQRYF's file specifications parameters identify the file or files that contain the data. Notice that the FILE parameter in Figure 28.1 has three separate parameter elements: the qualified file name, the data member, and the record format. A specified file must be a physical or logical file, an SQL view, or a Distributed Data Management file. You can enter a plus sign in the "+ for more values" field and enter multiple file specifications to be dynamically joined (as opposed to creating a permanent join logical file on the system).

A simple OPNQRYF command might name a single file, like this:

    OPNQRYF    FILE(MYLIB/MYFILE) ...

This partial command identifies MYLIB/MYFILE as the file to be queried. In the sample command above, I specify the qualified file name only and do not enter a specific value for the second and third elements of the FILE parameter; therefore, the default values of *FIRST and *ONLY are used for the member and record format, respectively. You can select a particular data member to be queried by supplying a member name. The default value of *ONLY for record format tells the database manager to use the only record format named in the file -- file MYFILE in our example. When you have more than one record format, you must use the record format element of the FILE parameter to name the particular record format to open.

The FORMAT parameter specifies the format for records made available by the OPNQRYF command. When you use the default value of *FILE for the FORMAT parameter, the record format of the file defined in the FILE parameter is used for records selected. To return to our example, if you key

    OPNQRYF    FILE(MYLIB/MYFILE) ...

the record format of file MYFILE would be used for the records presented by the OPNQRYF command. You cannot use FORMAT(*FILE) when the FILE parameter references more than one file. When joining more than one record format, you must supply a format whose fields are unique from those named in the FILE or MAPFLD parameter, and you must enter values in the join field specifications parameter (JFLD) to specify the fields the database manager will use to perform the join.

The FORMAT parameter can specify a qualified file name and a record format (e.g., (MYLIB/MYJOIN JOINR)), or it can simply name the file containing the format to be used (e.g., (MYJOIN)). For instance, if you use the command

    OVRDBF     FILE(MYJOIN) TOFILE(MYLIB/MYFILE) SHARE(*YES)

with this OPNQRYF command

    OPNQRYF    FILE(MYLIB/MYFILE) FORMAT(MYJOIN)

the database manager uses the record format for file MYJOIN. The format for MYJOIN would present records from MYFILE: the HLL program would open file MYJOIN, and the OVRDBF command would redirect the open to the queried file. Later, in the discussion of field mapping, I'll explain why you might want to do this. Because this chapter is only an introduction to OPNQRYF, I won't talk any more about join files. Instead, let's focus on creating queries for single-file record selection.

Record Selection

As I said earlier, the record selection portion of the OPNQRYF command parallels SQL's WHERE statement. The QRYSLT parameter provides record selection before record grouping occurs (record grouping is controlled by the GRPFLD parameter). The query selection expression can be up to 2,000 characters long and can consist of one or more logical expressions connected by *AND or *OR. Each logical expression must use at least one field from the files being queried. The OPNQRYF command also offers built-in functions that you can include in your expressions (e.g., %RANGE, %VALUES, %SST, and %WILDCARD).

This simple logical expression

    QRYSLT('DLTCDE = "D"')

instructs the database manager to select only records for which the field DLTCDE contains the constant value D. A more complex query might use the following expression:

    QRYSLT('CSTNBR *EQ %RANGE(10000 49999) *AND +
            CURDUE *GT CRDLIM *AND CRDFLG *EQ "Y"')

In this example, CSTNBR (customer number), CURDUE (current due), and CRDLIM (credit limit) are numeric fields, and CRDFLG (credit flag) is a character field. The QRYSLT expression uses the %RANGE function to determine whether the CSTNBR field is in the range of 10000 to 49999 and then checks whether CURDUE is greater than the credit limit. Finally, it tests CRDFLG against the value Y. When all tests are true for a record in the file, that record is selected.

You can minimize trips to the manual by remembering a few rules about the QRYSLT parameter. First, the entire query selection expression must be enclosed in apostrophes (because it comprises a character string for the command to evaluate). Second, enclose all character constants in apostrophes or quotation marks (e.g., 'char-constant' or "char-constant"). Also, differentiate between upper and lower case when specifying character variables: character variables in the QRYSLT parameter are case-sensitive. In the previous example, the expression tests CRDFLG against the value Y; if the file could contain lowercase values, you must either specify a "Y" or a "y" or provide for both possibilities.

For example, consider the following logical expression comparing a field to a character constant:

    CRDFLG *EQ "Y"

If you want to substitute runtime CL variable &CODE for the constant, you would code the expression as

    'CRDFLG *EQ "' *CAT &CODE *CAT '"'

After substitution and concatenation, quotation marks enclose the value supplied by the &CODE variable, and the expression is valid.

Numeric constants and variables cause undue anxiety for newcomers to the OPNQRYF command. Look again at this example:

    QRYSLT('CSTNBR *EQ %RANGE(10000 49999) *AND +
            CURDUE *GT CRDLIM *AND CRDFLG *EQ "Y"')

Two of the logical expressions use numeric fields or constants. In the first expression

    'CSTNBR *EQ %RANGE(10000 49999)'

notice there are no apostrophes or quotation marks around the numeric constants. Although these numbers appear in a character string (the QRYSLT parameter), they must appear as numbers for the system to recognize and process them. The second logical expression

    CURDUE *GT CRDLIM

compares two fields defined in the record format or mapped fields. Again, there are no quotation marks around the names of these numeric fields. This brings us to the third QRYSLT parameter rule: don't enclose numeric constants or variables in quotation marks if the value of a variable should be evaluated as numeric.

A dragon could rear its ugly head when you create a dynamic query selection in a CL or HLL program. Suppose you want to let the user enter the range of customer numbers to select from rather than hard-coding the range, and suppose the user enters the numeric values at a prompt provided by display file USERDSP. In this case, you would probably require the user to enter numeric values so you could ensure that all positions in the field are numeric. However, to build a dynamic QRYSLT, you must use concatenation, and concatenation can only be performed on character fields. This means that the variables that define the range of customer numbers must be converted to characters before concatenation, but later they must appear as numbers in the QRYSLT parameter so they can be compared to the numeric CSTNBR field.

Figure 28.2 shows one way to create the correct QRYSLT value. First, you use the CHGVAR (Change Variable) command to move the numeric values into character variables &LOWCHR and &HIHCHR. You can then use the character variables and concatenation to build the QRYSLT string in variable &QRYSLT. When the substitution is made, the numeric values appear without quotation marks, just as though the numbers were entered as constants.
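Figure 28.2 isn't reproduced here, but the technique might look roughly like this sketch (the numeric input variables and their lengths are assumptions):

    DCL        VAR(&LOWNBR) TYPE(*DEC)  LEN(5 0)
    DCL        VAR(&HIHNBR) TYPE(*DEC)  LEN(5 0)
    DCL        VAR(&LOWCHR) TYPE(*CHAR) LEN(5)
    DCL        VAR(&HIHCHR) TYPE(*CHAR) LEN(5)
    DCL        VAR(&QRYSLT) TYPE(*CHAR) LEN(256)
    ...
    CHGVAR     VAR(&LOWCHR) VALUE(&LOWNBR)    /* Numeric-to-character move */
    CHGVAR     VAR(&HIHCHR) VALUE(&HIHNBR)
    CHGVAR     VAR(&QRYSLT) +
               VALUE('CSTNBR *EQ %RANGE(' *CAT &LOWCHR *BCAT &HIHCHR *CAT ')')
    OVRDBF     FILE(MYFILE) SHARE(*YES)
    OPNQRYF    FILE((MYLIB/MYFILE)) QRYSLT(&QRYSLT)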

The GRPSLT parameter functions exactly like the QRYSLT parameter, except the selection is performed after records have been grouped. The same QRYSLT functions are available for the GRPSLT expression, and the same rules apply.

Key Fields

Besides selecting records, you can establish the order of the records OPNQRYF presents to your HLL program by entering one or more key fields in the key field specifications. The KEYFLD parameter consists of several elements. You must specify the field name, whether to sequence the field in ascending or descending order, whether or not to use absolute values for sequencing, and whether or not to enforce uniqueness. Any key field you name in the KEYFLD parameter must exist in the record format referenced by the FORMAT parameter. The key fields specified in the KEYFLD parameter can be mapped from existing fields, so long as the referenced field definition exists in the referenced record format.

Let's look at a couple of examples. The command

    OPNQRYF    FILE(MYLIB/MYFILE) QRYSLT('...') KEYFLD(CSTNBR)

would cause the selected records to appear in ascending order by customer number, because *ASCEND is the default for the key field order. The following OPNQRYF command

    OPNQRYF    FILE(MYLIB/MYFILE) QRYSLT('...') +
               KEYFLD((CURBAL *DESCEND) (CSTNBR))

would present the selected records in descending order by current balance and then in ascending order by customer number. The KEYFLD default value of *NONE tells the database manager to present the selected records in any order. Entering the value *FILE tells the query to use the access path definition of the file named in the FILE parameter to order the records.

Mapping Virtual Fields

One of the richer features of the OPNQRYF command is its support of field mapping. The mapped field specifications let you derive new fields (known as "virtual" fields in relational database terms) from fields in the record format being queried. You can use the resulting fields to select records and to sequence the selected records. Look at the following OPNQRYF statement:

    OPNQRYF    FILE(INPDTL) FORMAT(DETAIL) QRYSLT('LINTOT *GT 10000') +
               KEYFLD((CSTNBR) (INVDTE)) MAPFLD((LINTOT 'INVQTY * IPRICE'))

Fields INVQTY (invoice item quantity) and IPRICE (invoice item price) exist in physical file INPDTL. Mapped field LINTOT (line total) exists in the DETAIL format, which is used as the format for the selected records. As each record is read from the INPDTL file, the calculation defined in the MAPFLD parameter ('INVQTY * IPRICE') is performed, and the value is placed in field LINTOT. The database manager then uses the value in LINTOT to determine whether to select or reject the record.

You can map fields using a variety of powerful built-in functions. For example, %DIGITS converts numbers to characters, %SST returns a substring of the field argument, and %XLATE performs character translation using a translation table.

OPNQRYF Command Performance

Whenever possible, the OPNQRYF command uses an existing access path for record selection and sequencing. For example, if you select all customer numbers in a specific range and an access path exists for CSTNBR, the database manager will use that access path to perform the selection. However, if the system finds no access path it can use, the database manager must create a temporary one, and creating an access path takes a long time at the machine level, especially if the file is large. Likewise, when you specify one or more key fields in your query, the database manager will use an existing access path if possible, thus enhancing the performance of the OPNQRYF command; otherwise, it creates a temporary one, again degrading performance. OPNQRYF is a poor performer when many temporary access paths must be created to support the query request.

You may also need to weigh flexibility against performance to decide which record-selection method is best for a particular application. To help you make a decision, you can use these guidelines:

• If the application is interactive, use OPNQRYF when existing access paths support the selection and sequencing or when the files are relatively small (i.e., fewer than 10,000 records). Otherwise, use OPNQRYF sparingly, especially if the file is large.
• If the application runs frequently and in batch during normal business hours, ensure that existing access paths support the selection and sequencing. Use native database and HLL programming when the files are large (greater than 10,000 records) or when many (more than three or four) temporary access paths are required.
• If the application is a batch application run infrequently or only at night, you can use OPNQRYF without hesitation, especially if it eliminates the need for logical files used only to support those infrequent or night jobs.

Overall, the OPNQRYF command provides flexibility that is sometimes difficult to emulate using only HLL programming and the native database. The next time a user requests a report that requires more than a few selections and whose records must be in four different sequences, use the OPNQRYF command to do the work and write one HLL program to do the reporting. But remember, to be on the safe side, run the report at night!

Sidebar: SQL Special Features

Chapter 29: Teaching Programs to Talk

Speak, program! Speak! That's one way to try to get your program to talk (perhaps success is more likely if you reward good behavior with a treat). However, to avoid finding you actually barking orders, I want to introduce SNDUSRMSG (Send User Message), an OS/400 command you can use to "train" your programs to communicate.

The SNDUSRMSG command exists for the sole purpose of communicating from program to user and includes the built-in ability to let the user talk back. In Chapter 7, I covered the commands you can use to send impromptu messages from one user to another: SNDMSG (Send Message), SNDBRKMSG (Send Break Message), and SNDNETMSG (Send Network Message). Programs can also use these commands to send an informational message to a user, but because these commands provide no means for the sending program to receive a user response, their use for communication between programs and users is limited. In contrast, the SNDUSRMSG command lets a CL program send a message to a user or a message queue and then receive a reply as a program variable.

Basic Training

Figure 29.1 shows the SNDUSRMSG command screen. The message can be an impromptu message or one you've defined in a message file. To send an impromptu message, just type a message of up to 512 characters in the MSG parameter. To use a predefined message, enter a message ID in the MSGID parameter. The message you identify must exist in the message file named in the MSGF parameter. Every AS/400 is shipped with QCPFMSG (a message file for OS/400 messages) and several other message files that support particular products. You can also create your own message files and message IDs that your applications can use to communicate with users or other programs. For more information about creating and using messages, see the AS/400 Control Language Reference (SC41-0030) and the AS/400 Control Language Programmer's Guide (SC41-8077).

The MSGDTA parameter lets you specify values to take the place of substitution variables in a predefined message. The character string specified in the MSGDTA parameter is valid only for messages that have data substitution variables. For example, message CPF2105 (Object &1 in &2 type *&3 not found) has three data substitution variables: &1, &2, and &3. When you use the SNDUSRMSG command to send this message, you can also send a MSGDTA string that contains the substitution values for these variables. If you supply these values in the MSGDTA string:

    'CSTMAST   ARLIB     FILE   '

the message appears as

    Object CSTMAST in ARLIB type *FILE not found

If you do not supply any MSGDTA values, the original message is sent without values (e.g., Object in type * not found). It is important that the character string you supply is the correct length and that each substitution variable is positioned properly within that string. The previous example assumes that the message is expecting three variables (&1, &2, and &3) and that the expected length of each variable is 10, 10, and 7, respectively, making the entire MSGDTA string 27 characters long. How do I know that? Because each system-defined message has a message description that includes detailed information about substitution variables, and I used the DSPMSGD (Display Message Description) command to get this information.
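Putting those pieces together, the complete command might look like this (the destination and message type are arbitrary choices for the sketch; note the blank padding to the 10-, 10-, and 7-character substitution-variable lengths):

    SNDUSRMSG  MSGID(CPF2105) MSGF(QCPFMSG) +
               MSGDTA('CSTMAST   ARLIB     FILE   ') +
               MSGTYPE(*INFO) TOMSGQ(*SYSOPR)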
The next parameter on the SNDUSRMSG command is VALUES, which lets you specify the value or values that will be accepted as the response to your message. When you specify MSGTYPE(*INQ) and a CL variable in the MSGRPY parameter (discussed later), the system automatically supplies a prompt for a response when it displays the message. The system then verifies the response against the valid values listed in the VALUES parameter. If the user enters an invalid value, the system displays a message saying that the reply was not valid and resends the inquiry message. To make sure the user knows what values are valid, you should list the valid values as part of your inquiry message.

In the DFT parameter, you can supply a default reply to be used for an inquiry message when the message queue that receives the message is in the *DFT delivery mode or when an unanswered message is deleted from the message queue. The default value in the SNDUSRMSG command overrides defaults specified in the message description of predefined messages. The system uses the default value when the message is sent to a message queue that is in the *DFT delivery mode, when a system reply list entry is used that specifies the *DFT reply, or when the message is inadvertently removed from a message queue without a reply. Inquiry messages to batch jobs will automatically be answered with the default value, or with a null value (*N) if no default is specified.

Oddly enough, the default value need not match any of the supplied values in the VALUES parameter. If the system supplies a default value not listed in the VALUES parameter, it is accepted; however, if a user types that same default value as a reply, the system will notify the user that the reply was invalid. To avoid such a mess, I strongly recommend that you use only valid values (those listed in the VALUES parameter) when you supply a default value.

The MSGTYPE parameter lets you specify whether the message you are sending is an *INFO (informational, the default) or *INQ (inquiry) message. Both kinds appear on the destination message queue as text, but an inquiry message also supplies a response line and waits for a reply.

You can use the MSGRPY parameter to specify a CL character variable (up to 132 characters long) to receive the reply to an inquiry message. An inquiry message reply must be a character (alphanumeric) reply. Make sure that the length of the variable is at least as long as the expected length of the reply. If the reply is too short, it will be padded with blanks to the right; but if the reply exceeds the length of the variable, it will be truncated. The first result causes no problem, whereas a truncated reply may cause an unexpected glitch in your program. If your application requires the retrieval of a numeric value, it is best to use DDS and a CL or high-level language (HLL) program to prompt the user for a reply; this approach ensures that validity checking is performed for numeric values.

Alas, the SNDUSRMSG command also exhibits another oddity: If you don't specify a MSGRPY variable but do specify MSGTYPE(*INQ), the command causes the job to wait for a reply from the message queue but doesn't retrieve the reply into your program.

The TOMSGQ parameter names the message queue that will receive the message. You can enter the name of any message queue on the local system, or you can use one of the following special values:

• * -- instructs the system to send the message to the external message queue (*EXT) if the job is interactive or to message queue QSYS/QSYSOPR if the program is being executed in batch.
• *EXT -- instructs the system to send the message to the job's external message queue.
• *SYSOPR -- tells the system to send the message to the system operator message queue, QSYS/QSYSOPR.

Keep in mind that although messages can be up to 512 characters long for first-level text, only the first 76 characters will be displayed when messages are sent to *EXT.

The TOUSR parameter is similar to TOMSGQ but lets you specify the recipient by user profile instead of by message queue. You can enter the recipient's user profile, specify *SYSOPR to send the message to the system operator at message queue QSYS/QSYSOPR, or enter *REQUESTER to send the message to the current user profile for an interactive job or to the system operator message queue for a batch job.

This oddity presents some subtle problems for programmers. One problem emerges when using the SNDUSRMSG command to communicate with a user from the batch job environment. In the interactive environment, both the TOUSR and TOMSGQ parameters supply values that let you communicate easily with the external user of the job; the SNDUSRMSG command can simply direct the message to the external user by specifying TOUSR(*REQUESTER). In the batch environment, however, the only values provided for TOUSR and TOMSGQ direct messages to the system operator as the external user. There are no parameters to communicate with the user who submitted the job.

The program in Figure 29.2 solves this problem. When you submit a job, the MSGQ parameter on the SBMJOB (Submit Job) command tells the system where to send a job completion message. You can retrieve this value using the RTVJOBA (Retrieve Job Attributes) command and the SBMMSGQ and SBMMSGQLIB return variables. The CL code in Figure 29.2 uses the RTVJOBA command to retrieve the name of the message queue and tests variable &type to determine whether the current job is a batch job (&type=&@batch). If so, SNDUSRMSG can send the message to the message queue defined by the &sbmmsgq and &sbmmsgqlib variables. If the job is interactive, the SNDUSRMSG command can simply direct the message to the external user by specifying TOUSR(*REQUESTER).

The last parameter on the SNDUSRMSG command is TRNTBL, which lets you specify a translation table to process the response automatically. The default translation table is QSYSTRNTBL, which translates lowercase characters (X'81' through X'A9') to uppercase characters. With the default table in effect, you can check only for uppercase replies (e.g., Y or N) rather than having to code painstakingly for all lowercase and uppercase possibilities (e.g., Y, y, N, n).
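Figure 29.2 itself isn't reproduced here, but the heart of the technique might look like this sketch. The message text is illustrative, and &@BATCH is assumed to stand for the constant '0', the value RTVJOBA's TYPE variable returns for a batch job:

    DCL        VAR(&TYPE)       TYPE(*CHAR) LEN(1)
    DCL        VAR(&SBMMSGQ)    TYPE(*CHAR) LEN(10)
    DCL        VAR(&SBMMSGQLIB) TYPE(*CHAR) LEN(10)
    RTVJOBA    TYPE(&TYPE) SBMMSGQ(&SBMMSGQ) SBMMSGQLIB(&SBMMSGQLIB)
    IF         COND(&TYPE *EQ '0') THEN(DO)   /* Batch: notify submitter's queue */
       SNDUSRMSG  MSG('Daily report complete') +
                  TOMSGQ(&SBMMSGQLIB/&SBMMSGQ) MSGTYPE(*INFO)
    ENDDO
    ELSE       CMD(DO)                        /* Interactive: message the user   */
       SNDUSRMSG  MSG('Daily report complete') TOUSR(*REQUESTER) MSGTYPE(*INFO)
    ENDDO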

Putting the Command to Work

Figure 29.3 shows how the SNDUSRMSG command might be implemented in a CL program. The job determines whether or not the daily report has already been run for that day and, if it has, prompts the user to verify that the report should indeed be run again. Notice that SNDUSRMSG is first used for an inquiry message. The program explicitly checks for a reply of Y or N and takes appropriate action. Some people might argue that this is overcoding, because if you specified VALUES('Y' 'N') and you check for Y first, you can assume that N is the only other possibility. Although you can make assumptions, it is best if all the logical tests are explicit and obvious to the person who maintains the program.

Also notice that the SNDUSRMSG command is used again in Figure 29.3 to send informational messages that let the user know which action the program has completed (the completion of the task or the cancellation of the request to process the daily report, depending on the user reply). The message is sent to *REQUESTER to make sure the entire message text is displayed on the queue.

Knowing When To Speak

As shown in the CL program example, using SNDUSRMSG to prompt the user for a simple reply makes good use of the command's capabilities. However, I don't recommend using the SNDUSRMSG command to retrieve data for program execution (e.g., order number range, branch number, date range); this function is somewhat different from prompting for data when you submit a job. Because SNDUSRMSG offers minimal validity checking and is not as user-friendly as a DDS-coded display file prompt can be, you should instead create prompts for data as display files (using DDS) and process them with either a CL or HLL program.

In a nutshell, the SNDUSRMSG command is best suited to sending an informational message to the user to relate useful information (e.g., "Your job has been submitted. You will receive a message when your job is complete.") or to sending an inquiry message that lets the user choose further program action. You will find that supplying informational program-to-user messages will endear you to your users and help you avoid headaches (e.g., multiple submissions of the same job because the user wasn't sure the first job submission worked).

The SNDUSRMSG command can teach your programs to talk. Now that you know how to train your program to talk to users, you can save the biscuits for the family pooch. Who says you can't teach an old dog new tricks? The next challenge: teaching your programs to communicate with each other! You'll be able to master that after I explain how to use the SNDPGMMSG (Send Program Message) command.
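Figure 29.3 itself isn't reproduced here, but the inquiry at its heart might look like this sketch (message text and variable name are illustrative):

    DCL        VAR(&REPLY) TYPE(*CHAR) LEN(1)
    SNDUSRMSG  MSG('The daily report has already been run today. +
                    Run it again? (Y or N)') +
               VALUES('Y' 'N') MSGTYPE(*INQ) MSGRPY(&REPLY)
    IF         COND(&REPLY *EQ 'Y') THEN(DO)
       /* ... run or submit the daily report ... */
       SNDUSRMSG  MSG('Daily report request submitted') +
                  TOUSR(*REQUESTER) MSGTYPE(*INFO)
    ENDDO
    IF         COND(&REPLY *EQ 'N') THEN(DO)
       SNDUSRMSG  MSG('Daily report request canceled') +
                  TOUSR(*REQUESTER) MSGTYPE(*INFO)
    ENDDO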
Chapter 30: Just Between Us Programs

In Chapter 7, I explained that you can use the SNDMSG (Send Message), SNDBRKMSG (Send Break Message), or SNDNETMSG (Send Network Message) command to communicate with someone else on your AS/400. In Chapter 29, "Teaching Programs to Talk," I showed how to use one of these commands or the SNDUSRMSG (Send User Message) command to have a program send a message to a user. But when you want to establish communications between programs, none of these commands will do the job. Instead, you need the SNDPGMMSG (Send Program Message) and RCVMSG (Receive Message) commands.

Program messages are normally used for one of two reasons: to send error messages to the calling program (so it knows when a function has not been completed) or to communicate the status or successful completion of a process to the calling program. In this chapter, you'll learn how a job stores messages, what types of messages a program can send, how to have one program send a message to another, and what actions they can require a job to perform. Now I want to introduce the SNDPGMMSG command, which lets you send messages from program to program with information such as detected program errors and requirements for continued processing (see Chapter 31 for a discussion of the RCVMSG command). But first, you need to understand the importance of job message queues.

"Teaching Programs to Talk. TOPGMQ consists of two values: relationship and program. you can specify *PRV (indicating the message is to go to the target program's caller or requester). You can enter an impromptu message (up to 512 characters long) on the MSG parameter. For example. as well as other essential job information. Figure 30.1 illustrates a sample job message queue. or you can use the MSGID. see Chapter 29. see "Understanding Job Logs". *SAME (the message is to be sent to the target program itself).. The second value specifies the target program and can be either the name of a program within the sending program's job or the special value *. OS/400 automatically creates a job message queue that consists of an external message queue (*EXT. (For more information about job logs. The default value for the TOMSGQ parameter. MSGF.") The TOPGMQ parameter is unique to the SNDPGMMSG command and identifies the program queue to which a message will be sent. through which a program communicates with the job's user) and a program message queue (for each program invocation in that job). The job message queue becomes the basis for the job log produced when a job is completed.e. Although programs can use nonprogram message queues to communicate to other programs. For each job on the system. For this value. *TOPGMQ. either a workstation or a user message queue). Assuming that PGM_D is the active program. Because of the introduction of ILE (Integrated Language Environment) support in OS/400 V2R3. Let's look at the job message queues shown in Figure 30. tells the system to refer to the TOPMGQ parameter to determine the destination of the message. OS/400 creates a nonprogram message queue when a workstation device or a user profile is created. Both users and programs can send messages to and receive them from nonprogram message queues. let's suppose PGM_D executes the following SNDPGMMSG command: SNDPGMMSG MSG('Test message') TOMSGQ(*SYSOPR) MSGTYPE(*INFO) + . You can use the SNDPGMMSG command in a CL program to send a program message to a nonprogram or program message queue. OS/400 also creates a program message queue when a program is invoked and deletes it when the program ends (before removing the program from the job invocation stack). OS/400 creates an external message queue when a job is initialized and deletes the queue when the job ends.Job Message Queues All messages on the AS/400 must be sent to and received from a message queue. (To review predefined messages. OS/400 provides a more convenient means of communication between programs in the same job. you might want to create a message queue for communication between programs that aren't part of the same job.2 shows the parameters associated with the SNDPGMMSG command prior to OS/400 V2R3. which tells OS/400 to use the sending program's name. User-to-user and program-touser messages are exchanged primarily via nonprogram message queues (i.1. Or you might want to create a central message queue to handle all print messages. the SNDPGMMSG command now includes some additional parameter elements that address specific ILE requirements (see the section "ILE-Induced Changes"). and MSGDTA parameters to send a predefined message. You can also use the CRTMSGQ (Create Message Queue) command to create nonprogram message queues. The job log includes all messages from the job message queue. or *EXT (the message is to go to the target job's external message queue). 
The first value specifies the relationship between the target program and the sending program.) The SNDPGMMSG Command Figure 30.

Now consider the command

    SNDPGMMSG  MSG('Test message') TOMSGQ(*TOPGMQ) +
               TOPGMQ(*SAME *) MSGTYPE(*INFO)

Here, the parameter TOMSGQ(*TOPGMQ) tells OS/400 to use the TOPGMQ parameter to determine the message destination. Because TOPGMQ specifies *SAME for the relationship and * for the target program, the system sends the message "Test message" to program message queue PGM_D; you entered this special value in the "relationship" parameter element of the SNDPGMMSG command to target the sending program's own queue.

Now consider the command

    SNDPGMMSG  MSG('Test message') +
               TOPGMQ(*PRV *) MSGTYPE(*INFO)

In this case, the message is sent to program message queue PGM_C, because PGM_C is PGM_D's calling program (*PRV). (Notice that this time I chose not to specify the TOMSGQ parameter, but to let it default to *TOPGMQ.)

ILE-Induced Changes

In OS/400 V2R3, the SNDPGMMSG TOPGMQ parameter is expanded to include ILE (Integrated Language Environment) support. Figure 30.3 presents the new TOPGMQ parameter structure. The first element, "relationship," works just the same as in V2R2. The second element is now called "Call stack entry identifier" and is expanded to multiple fields that help identify the exact program message queue to receive the message.

The first entry field is the "Call stack entry" field and is similar to the V2R2 implementation; this field represents the name of the program or procedure message queue. If this entry is a procedure name, the name can be a maximum of 256 characters, and the system will begin searching for this procedure in the most recently called program or procedure. If more qualifications are needed to correctly identify the procedure message queue, you can use the next two items, "Module name" and "Bound program name," to specifically point to the exact procedure message queue. The module identifies the module into which the procedure was compiled; the bound program name identifies the program into which this procedure was bound. Note that, when using the new SNDPGMMSG command, it is in this third item that you can enter the single value *EXT to tell OS/400 to send the message to the external message queue of the current job.

As on the SNDUSRMSG command, SNDPGMMSG's TOUSR parameter lets your program send a message to a particular user profile. You can specify *ALLACT for the TOUSR parameter to send a message to each active user's message queue. Although this value provides an easy way for a program to send a message to all active users, it does not guarantee that users immediately see the message; each user message queue processes the message based on the DLVRY attribute specified for the message queue.

Message Types

The next parameter on the SNDPGMMSG command is MSGTYPE. You can use six types of messages in addition to the informational and inquiry message types that you can create with the SNDMSG and SNDUSRMSG commands. Figure 30.4 lists the message types and describes the limitations (message content and destination) and normal uses of each. Each message type has a distinct purpose and communicates specific kinds of information to other programs or to the job's user.

You can send an informational message (*INFO) to any user, workstation, or program message queue. Because inquiry messages (*INQ) expect a reply, you can send an inquiry message only to a nonprogram message queue (i.e., a user or workstation message queue) or to the current job's external message queue.

A completion message (*COMP) is usually sent to inform the calling program that the requested work is complete. It can be sent to a program message queue or to the job's external message queue, and the sending program does not require a response.

Diagnostic messages (*DIAG) are sent to program or external message queues to describe errors detected during program execution. Typically, escape messages follow diagnostic messages, telling the calling program that diagnostic messages are present and that the requested function has failed.

An escape message (*ESCAPE) specifically identifies the error that caused the sending program to fail. An escape message can be sent only to a program message queue, and the escape message terminates the sending program -- the message causes the sending program to end whether or not the receiving program monitors for it. If the program receiving the message monitors for this message (using the MONMSG (Monitor Message) command, which I will discuss in Chapter 31), it can act on the error condition. Note that MSGTYPE(*ESCAPE) cannot be specified if the MSG parameter is specified -- in other words, all escape messages must be predefined.

OS/400 uses notify messages (*NOTIFY) to describe a condition in the sending program that requires a correction or a reply. When a notify message is sent to a program message queue, it functions as a warning message. If the program receiving the notify message monitors for it, the message functions as a warning, and control returns to the receiving program. If the receiving program doesn't monitor for the message, the default reply for that message is sent, and control returns to the sending program. You can either define the default reply in the message description or specify it on the system reply list. If the notify message is sent to an interactive job's external message queue, the message acts like an inquiry message and waits for a reply, which the sending program can then receive.

You can send a request message (*RQS) to any message queue as a command request. You must use an impromptu message on the MSG parameter to send the request. (For more information about request messages and request message processing, see Chapter 8 of the Control Language Programmer's Guide, SC41-8077.)

Status messages (*STATUS) describe the status of the work that the sending program performs. When a status message is sent to a program message queue and the receiving program monitors for it, the message functions as an escape message by terminating the sending program. If the program receiving the status message does not monitor for that message, processing continues, and the system immediately returns control to the sending program. When a program sends a status message to an interactive job's external message queue, the message is displayed on the workstation screen; if the message is sent to a batch job's external message queue, control simply returns to the sending program.

The KEYVAR parameter specifies the CL return variable containing the message key value of the message sent by the SNDPGMMSG command. The return variable must be defined as TYPE(*CHAR) and LEN(4). You can receive or remove a specific message by using a key value to identify that message. To understand how key variables work, examine the following CL statement:

SNDPGMMSG   MSG('Test message') +
              TOPGMQ(*PRV *) +
              MSGTYPE(*INFO) +
              KEYVAR(&MSGKEY)

The SNDPGMMSG command places the message on the calling program's message queue, and OS/400 assigns to that message a unique message key that is returned in the &MSGKEY variable. In the example

RMVMSG      PGMQ(*PRV *) +
              MSGKEY(&MSGKEY) +
              CLEAR(*BYKEY)

the RMVMSG (Remove Message) command uses the &MSGKEY value to remove the correct message from the queue.
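Putting those two commands together -- a minimal sketch; the message text and surrounding program are illustrative, not from the book's figures -- a program could post a message for its caller and clean it up when the work finishes:

PGM
  DCL         VAR(&MSGKEY) TYPE(*CHAR) LEN(4)
  /* Send an informational message to the caller and capture its key */
  SNDPGMMSG   MSG('Posting batch started') TOPGMQ(*PRV *) +
                MSGTYPE(*INFO) KEYVAR(&MSGKEY)
  /* ... do the work ... */
  /* Remove that one message, identified by its key, when the work is done */
  RMVMSG      PGMQ(*PRV *) MSGKEY(&MSGKEY) CLEAR(*BYKEY)
ENDPGM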

The Receiving End

The next parameter on the SNDPGMMSG command is RPYMSGQ, which lets you specify the program or nonprogram message queue to which the reply should go. The only valid values are *PGMQ, which specifies that the reply is to go to the sending program's message queue, or a qualified nonprogram message queue name.

Program Message Uses

Now that you're acquainted with SNDPGMMSG parameters, let's look at a few examples that demonstrate how to use this command. As I mentioned in my discussion of the MSGTYPE parameter, you must supply a valid message ID for the MSGID keyword when you send certain message types (to review which types require a message ID, see Figure 30.3). Because this means you cannot simply use the MSG parameter to construct text for these message types, OS/400 provides a special message ID, CPF9898, to handle this particular requirement. The message text for CPF9898 -- &1. -- means that substitution variable &1 will supply the message text. When the program sends the message, the MSGDTA text becomes the message through substitution into the &1 data variable. (For a more complete explanation of message variables, see Chapter 29, "Teaching Programs to Talk," the Programming: Control Language Reference, SC41-0030, and the Programming: Control Language Programmer's Guide, SC41-8077.)

In the following example, the current program sends a completion message to the calling program to confirm the successful completion of a task:

SNDPGMMSG   MSGID(CPF9898) +
              MSGF(QCPFMSG) +
              MSGDTA('Copy of spooled files is complete') +
              TOPGMQ(*PRV) +
              MSGTYPE(*COMP)

Notice that the message text in the preceding example is constructed in the MSGDTA parameter.

The following sample status message goes to the job's external message queue and tells the job's user what progress the job is making:

SNDPGMMSG   MSGID(CPF9898) +
              MSGF(QCPFMSG) +
              MSGDTA('Copy of spooled files in progress') +
              TOPGMQ(*EXT) +
              MSGTYPE(*STATUS)

When you send a status message to an interactive job's external message queue, OS/400 displays the message on the screen until another program message replaces it or until the message line on the display is cleared.

The following is a sample diagnostic message:

SNDPGMMSG   MSGID(CPF9898) +
              MSGF(QSYS/QCPFMSG) +
              MSGDTA('Output queue' |> +
                     &outqlib |< +
                     '/' || +
                     &outq |> +
                     'not found') +
              TOPGMQ(*PRV) +
              MSGTYPE(*DIAG)

In this example, I have concatenated constants (e.g., 'Output queue' and '/') and two variables (&outqlib and &outq) to construct the diagnostic message "Output queue &outqlib/&outq not found." The current program sends this message to the calling program, which can receive it from the program message queue after control returns to the calling program.

The next example is an escape message that might follow such a diagnostic message:

SNDPGMMSG   MSGID(CPF9898) +
              MSGF(QCPFMSG) +
              MSGDTA('Operations ended in error.' |> +
                     'See previously listed messages') +
              TOPGMQ(*PRV) +
              MSGTYPE(*ESCAPE)

OS/400 uses an escape message to terminate a program when it encounters an error. When a program sends an escape message, the sending program is immediately terminated, and control returns to the calling program.
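To sketch the other side of that exchange (the program name CPYSPL and the label ERROR are hypothetical), the calling program could monitor for the CPF9898 escape message and branch to its own error handling:

CALL        PGM(CPYSPL)
MONMSG      CPF9898 EXEC(GOTO ERROR)
/* execution continues here when CPYSPL completes normally */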

Although you may be ready to send messages to another program, you have only half the picture. In Chapter 31, you will learn how programs receive and manipulate messages, and I'll give you some sample code that contains helpful messaging techniques.

Chapter 31

Any Messages?

On the AS/400, each program, as well as each job, has its own "mailbox." One program within the job can leave a message for another program or for the job -- "Hello. Any messages?" Within a job, each program or job can "listen" to messages in its mailbox, and programs can remove old messages from the mailbox. In other words, sending and receiving program messages functions much like phone mail.

In Chapter 30, I explained how programs can send messages to other program message queues or to the job's external message queue. In this chapter, we look at the "listening" side of the equation -- the RCVMSG (Receive Message) and MONMSG (Monitor Message) commands.

Receiving the Right Message

You can use the RCVMSG command in a CL program to receive a message from a message queue and copy the message contents and attributes into CL variables. Why would you want to do this? You may want to look for a particular message in a message queue to trigger an event on your system. Or you may want to log specific messages received at a message queue. Or you may want to look for messages that would normally require an operator reply, and instead have your program supply the reply. Whatever the reason, the place to begin is the RCVMSG command.

Figure 31.1 lists the RCVMSG command parameters. The first six parameters -- PGMQ (program queue), MSGQ (message queue), MSGTYPE (message type), MSGKEY (message key), WAIT (time to wait), and RMV (remove message) -- determine which message your program will receive from which message queue and how your program processes a message.

Figure 31.2 illustrates a job message queue comprised of the job's external message queue and five program message queues. For our purposes, each message queue contains one message. Let's suppose that PGM_D is the active program and that it issues the following command:

RCVMSG

Because no specific parameter values are provided, OS/400 would use the following default values for the first six parameters:

RCVMSG      PGMQ(*SAME *) +
              MSGQ(*PGMQ) +
              MSGTYPE(*ANY) +
              MSGKEY(*NONE) +
              WAIT(0) +
              RMV(*YES)
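If PGM_D also wants the text of the message it receives -- a minimal sketch; the variable name and length are mine -- it can add the MSG return variable and keep every other default:

DCL         VAR(&MSG) TYPE(*CHAR) LEN(132)
RCVMSG      MSG(&MSG)
/* &MSG now holds the text of the first message on PGM_D's own queue */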

The PGMQ parameter of the pre-V2R3 RCVMSG command, which consists of two values -- relationship and program -- lets you receive a message from any program queue active within the same job or from the job's external message queue (see Figure 31.3). The first value specifies the relationship between the program named in the second value and the receiving program. You can specify one of three values:

• *PRV to indicate the program is to receive the message from the message queue of the program that called the program named in the second value

• *SAME to indicate the program is to receive the message from the message queue of the named program

• *EXT to indicate the program is to receive the message from the job's external message queue

The value for the second element of the PGMQ parameter can be either the name of a program within the current program's job or the special value *, which tells OS/400 to use the current program's name.

In our example, because the PGMQ value is (*SAME *), PGM_D would receive a message from the PGM_D message queue. According to Figure 31.2, there is only one message to receive -- "First message on PGM_D queue."

In the example above, the value MSGTYPE(*ANY), combined with the value MSGKEY(*NONE), instructs the program to receive the first message of any message type found on the queue regardless of the key value (for more information about the MSGTYPE and MSGKEY parameters, see "RCVMSG and the MSGTYPE and MSGKEY Parameters").

You can use the WAIT parameter to specify a length of time in seconds (0 to 9999) that RCVMSG will wait for the arrival of a message. (You can also specify *MAX, which means the program will wait indefinitely to receive a message.) The value WAIT(0) in the example tells the program to wait 0 seconds for a message to arrive on the message queue. If RCVMSG finds a message immediately, or before the number of seconds specified in the WAIT value elapses, RCVMSG receives the message. If RCVMSG finds no message on the queue during the WAIT period, it returns blanks or zeroed values for any return variables.

The last parameter value in the sample command, RMV(*YES), tells the program to delete the message from the queue after processing the command. You can use RMV(*NO) to instruct OS/400 to leave the message on the queue after RCVMSG receives the message. OS/400 then marks the message as an "old" message on the queue. A program can receive an old message again only by using the specific message key value to receive the message or by using the value *FIRST, *NEXT, *LAST, or *PRV for the MSGTYPE parameter.

Note on the V2R3 RCVMSG Command Parameter Changes

As with the SNDPGMMSG command, the RCVMSG command parameters also changed in V2R3 to accommodate ILE. For more information about the new parameter items you see for the PGMQ parameter in Figure 31.3, see Chapter 30. Because this book is at an introductory level and many of you will not use ILE until ILE RPG and/or ILE COBOL is available, I will not discuss these parameter changes in detail.

Receiving the Right Values

All the remaining RCVMSG parameters listed in Figure 31.1 provide CL return variables to hold copies of the actual message data or message attributes. You normally use the RCVMSG command to retrieve the actual message text or attributes to evaluate that message and then take appropriate actions.
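For instance, here is a sketch of a small "watcher" program that blocks on a nonprogram message queue until a message arrives (the queue name MYLIB/MYMSGQ is a placeholder, and the action taken on each message is elided):

PGM
  DCL         VAR(&MSGID) TYPE(*CHAR) LEN(7)
LOOP:
  RCVMSG      MSGQ(MYLIB/MYMSGQ) WAIT(*MAX) RMV(*YES) MSGID(&MSGID)
  /* test &MSGID here and trigger whatever event the message calls for */
  GOTO        LOOP
ENDPGM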

For example, the following command:

RCVMSG      MSGQ(MYMSGQ) +
              MSGTYPE(*COMP) +
              RMV(*NO) +
              MSG(&MSG) +
              MSGDTA(&MSGDTA) +
              MSGID(&MSGID) +
              SENDER(&SENDER)

retrieves the actual message text, the message identifier, the message data, and the message sender data into the return variables &MSG, &MSGID, &MSGDTA, and &SENDER, respectively. In this particular example, the current program might be looking for a particular completion message on a nonprogram message queue (MYMSGQ) to determine whether or not a job has completed before starting another job. After RCVMSG is processed, the program can use these return variables.

Notice the SENDER parameter used in this example. When you create a return variable for the SENDER parameter, the variable must be at least 80 characters long and will return the following information:

Positions 1 through 26 identify the sending job:
  1-10  = job name
  11-20 = user name
  21-26 = job number
Positions 27 through 42 identify the sending program:
  27-38 = program name
  39-42 = statement number
Positions 43 through 55 provide the date and time stamp of the message:
  43-49 = date (Cyymmdd)
  50-55 = time (hhmmss)
Positions 56 through 69 identify the receiving program (when the message is sent to a program message queue):
  56-65 = program name
  66-69 = statement number
Positions 70 through 80 are reserved for future use.

The SENDER return variable can be extremely helpful when processing messages. For instance, during the execution of certain programs, it is helpful to know the name of the calling program without having to pass this information as a parameter or hardcode the name of the program into the current program. You can use the technique in Figure 31.4 to retrieve that information: The current program sends a message to the calling program and then immediately uses RCVMSG to receive that message from the *PRV message queue. Positions 56 through 65 of the &SENDER return value contain the name of the program that received the original message; thus, you have the name of the calling program.
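Figure 31.4 isn't reproduced here, but a sketch of the technique it describes might look like this (the variable names are mine):

PGM
  DCL         VAR(&MSGKEY) TYPE(*CHAR) LEN(4)
  DCL         VAR(&SENDER) TYPE(*CHAR) LEN(80)
  DCL         VAR(&CALLER) TYPE(*CHAR) LEN(10)
  /* Send any message to the caller's queue, keeping its key */
  SNDPGMMSG   MSG('Who called me?') TOPGMQ(*PRV *) +
                MSGTYPE(*INFO) KEYVAR(&MSGKEY)
  /* Receive that same message back and examine the sender data */
  RCVMSG      PGMQ(*PRV *) MSGKEY(&MSGKEY) RMV(*YES) SENDER(&SENDER)
  /* Positions 56-65 name the program that received the message -- the caller */
  CHGVAR      VAR(&CALLER) VALUE(%SST(&SENDER 56 10))
ENDPGM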

Another RCVMSG command parameter that you will use frequently is RTNTYPE (return message type). The program may receive messages while looking for a particular message identifier, but when you use RCVMSG to receive messages with MSGTYPE(*ANY), your program can use a return variable to capture and interrogate the message type value. For example, in the following command:

RCVMSG      PGMQ(*SAME *) +
              MSGTYPE(*ANY) +
              MSG(&MSG) +
              RTNTYPE(&RTNTYPE)

the variable &RTNTYPE returns a code that provides the type of the message that RCVMSG is receiving. The possible codes that are returned are:

01 Completion
02 Diagnostic
04 Information
05 Inquiry
08 Request
10 Request with prompting
14 Notify
15 Escape
21 Reply, not checked for validity
22 Reply, checked for validity
23 Reply, message default used
24 Reply, system default used
25 Reply, from system reply list

As you can see, IBM did not choose to return the "word" values (e.g., *ESCAPE, *DIAG, *NOTIFY) that are used with the MSGTYPE parameter on the SNDPGMMSG (Send Program Message) command but instead chose to use codes. Thus, when you write a CL program that must test the RTNTYPE return variable, you should avoid writing code that appears something like

IF (&rtntype = '02') DO
...
ENDDO
ELSE IF (&rtntype = '15') DO
...
ENDDO

Instead, you should include a standard list of variables, such as the CL code listed in Figure 31.5, to make your CL program easier to read and maintain. Then, you can change the code above to appear as

IF (&rtntype = &@diag) DO
...
ENDDO
ELSE IF (&rtntype = &@escape) DO
...
ENDDO
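Figure 31.5 isn't shown here, but such a standard list might look something like this sketch (the &@ names follow the text's convention; the code values come from the list above):

DCL         VAR(&@COMP)   TYPE(*CHAR) LEN(2) VALUE('01')
DCL         VAR(&@DIAG)   TYPE(*CHAR) LEN(2) VALUE('02')
DCL         VAR(&@INFO)   TYPE(*CHAR) LEN(2) VALUE('04')
DCL         VAR(&@INQ)    TYPE(*CHAR) LEN(2) VALUE('05')
DCL         VAR(&@RQS)    TYPE(*CHAR) LEN(2) VALUE('08')
DCL         VAR(&@NOTIFY) TYPE(*CHAR) LEN(2) VALUE('14')
DCL         VAR(&@ESCAPE) TYPE(*CHAR) LEN(2) VALUE('15')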

Monitoring for a Message

The MONMSG command is available only in a CL program. It provides a technique to trap error/exception conditions by monitoring for escape, notify, and status messages. It also provides a technique to direct the execution of the program based on the particular error conditions detected. Figure 31.6 lists the MONMSG command parameters.

You can use the MSGID parameter to name from one to 50 specific or generic message identifiers for which the command will monitor. A specific message identifier is a message ID that represents only one message, such as CPF9802, which is the message ID for the message "Not authorized to object &2." A generic message identifier is a message ID that represents a group of messages, such as CPF9800, which includes all messages in the CPF9801 through CPF9899 range. For example, the command

MONMSG      CPF9802 +
              EXEC(GOTO ERROR)

monitors for the specific message CPF9802, whereas the command

MONMSG      CPF9800 +
              EXEC(GOTO ERROR)

monitors for all escape, notify, and status messages in the CPF9801 through CPF9899 range.

The second parameter on the MONMSG command is the CMPDTA parameter. You have the option of using this parameter to specify comparison data that will be used to check against the message data of the message trapped by the MONMSG command. If the message data matches the comparison data (actually, only the first 28 positions are compared), the MONMSG command is successful and the action specified by the EXEC parameter is taken. For example, the command

MONMSG      CPF9802 +
              CMPDTA('MAINMENU') +
              EXEC(DO)

monitors for the CPF9802 message identifier but executes the command found in the EXEC parameter only if the CMPDTA value 'MAINMENU' matches the first eight positions of the actual message data of the trapped CPF9802 message.

The EXEC parameter lets you specify a CL command that is processed when the MONMSG traps a valid message. If no EXEC value is found, the program simply continues with the next statement found after the MONMSG command. That may be appropriate for some programs. For example, you might code the following:

DLTF        QTEMP/WORKF
MONMSG      CPF2105

to monitor for the CPF2105 "File not found" message. In this example, if the CPF2105 error is found, the program simply continues processing as if no error occurred.

You can use the MONMSG command to monitor for messages that might occur during the execution of a single command. This form of MONMSG use is called a command-level message monitor. It is placed immediately after the CL command that might generate the message and might appear as

CHKOBJ      &OBJLIB/&OBJ &OBJTYPE
MONMSG      CPF9801 EXEC(GOTO NOTFOUND)
MONMSG      CPF9802 EXEC(GOTO NOTAUTH)

The MONMSG commands here monitor only for messages that might occur during the execution of the CHKOBJ command. When a command-level MONMSG traps a message, you can then take the appropriate action in the program to continue or end processing. You should use this implementation to anticipate error conditions in your programs.

Now, examine the following code:

CHKOBJ      QTEMP/WORK *FILE
MONMSG      CPF9801 EXEC(DO)
CRTPF       FILE(QTEMP/WORK) RCDLEN(80)
ENDDO
CLRPFM      QTEMP/WORK

This code uses the MONMSG command to determine whether or not a particular file exists. If the file does not exist, the program uses the CRTPF (Create Physical File) command to create the file. The program then uses the CLRPFM (Clear Physical File Member) command to clear the existing file (if the program just created the new file, the member will already be empty).

In addition to using the command-level message monitor to plan for errors from specific commands, you can use another form of MONMSG to catch other errors that might occur, including errors triggered by CPFxxxx escape messages, MCHxxxx escape messages (machine errors), and escape messages from other message identifier groups. This form of MONMSG use is called a program-level message monitor. When you implement a program-level message monitor, you must position it immediately after the last declare statement and before any other CL commands. Figure 31.7 illustrates the placement of a program-level message monitor.

For the program-level message monitor, I recommend that you use the message identifier CPF9999 instead of the widely used CPF0000. Using CPF9999 provides two important functions over CPF0000. First, CPF9999 catches some messages that CPF0000 will not catch because CPF9999 is the "Function Check" error, which occurs only after some other program error; CPF0000 only monitors for actual CPFxxxx messages. Second, the CPF9999 "Function Check" message provides the actual failing statement number, which is not available from the CPFxxxx error message. Specifying the CPF9999 message ID as the program-level message monitor makes this additional information available.
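Figure 31.7 isn't reproduced here, but the placement it illustrates looks roughly like this sketch (the cleanup and resend logic is elided; GLOBAL_ERR follows the text's naming, and the escape message mirrors the CPF9898 technique from Chapter 30):

PGM
  DCL         VAR(&MSG) TYPE(*CHAR) LEN(132)
  /* Program-level monitor: immediately after the last DCL */
  MONMSG      CPF9999 EXEC(GOTO GLOBAL_ERR)
  /* ... normal processing ... */
  RETURN
GLOBAL_ERR:
  SNDPGMMSG   MSGID(CPF9898) MSGF(QCPFMSG) +
                MSGDTA('Operation ended in error') +
                TOPGMQ(*PRV) MSGTYPE(*ESCAPE)
ENDPGM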

Working with Examples

Figure 31.8 is a portion of a CL program that provides several examples of program message processing to help you tie together the information I've presented here and in Chapter 30. The program contains a standard list of DCL statements for defining variables used in normal message processing. You may want to place these variables in a source member that you can copy into CL programs as needed. (Remember, you have to use your editor to do the copying because CL has no /COPY equivalent.)

The program-level message monitor (A in Figure 31.8) is coded to handle any unexpected errors. If an unexpected error occurs during program execution, this MONMSG causes execution to continue at the label GLOBAL_ERR. At GLOBAL_ERR, the program first prevents an infinite loop by testing to determine whether the program has already initiated the error-handling process. (An infinite loop might occur when an unexpected error occurs during the error-handling process that the program-level MONMSG controls.) The &msg_flag variable controls the overall message process. The program sets the value of &msg_flag to &@true and continues at label CLEAN_UP.

Figure 31.8 also contains several examples of command-level message monitors. The first example is the MONMSG CPF9801 that follows the CHKOBJ command (B). If the CPF9801 "Object not found" message is trapped, the program has found an error condition. The program first removes this message by using RCVMSG with RMV(*YES) and then sends a more meaningful message to the program queue using the SNDPGMMSG command. Notice that the value for the TOPGMQ parameter on the SNDPGMMSG command is *SAME to direct the message to the current program queue. The &msg_flag is set to &@true, and GOTO CLEAN_UP sends control of the program to the CLEAN_UP label, where cleanup and then error message processing occurs. Your programs should have a mechanism for cleaning up any temporary objects, whether the program ends normally or abnormally with an error.

Another example of the command-level message monitor is the MONMSG CPF0864 that appears immediately after the RCVF statement (C). If the MONMSG traps the CPF0864 "End of file" message, the program removes this message from the current program message queue using RCVMSG with RMV(*YES). Because the "End of file" message is expected and not an error, it is appropriate to remove that message from the program queue to prevent confusion in debugging any errors. Next the program uses the GOTO RCD_END statement to pass control of the program to the RCD_END label, where the program sends a normal completion message to the calling program message queue.

After processing the statements at CLEAN_UP, the program continues at label RSND_BGN. Notice that RCVMSG is used to receive each message from the program queue (D). As the program receives each message, the return variable &RTNTYPE is tested, and only the messages that have a &RTNTYPE of &@diag or &@escape are processed. The SNDPGMMSG command sends each processed message to the calling program message queue as a *DIAG message. The program thus sends each message on the current program message queue to the calling program (which in turn might continue to send the messages back in the program stack to the command processor or some other program that either ends abnormally or displays the messages to the user who requested the function). The MONMSG CPF0000 is used here to catch any error that might occur during the RCVMSG command process and immediately go to the end of the program without attempting to receive any other messages. Finally, at the RSND_END label, if the &msg_flag variable is &@true, the program sends one generic escape message -- "Operation ended in error ..." -- to the calling program. That escape message terminates the current program and returns control to the calling program.

What Else Can You Do with Messages?

Now that you understand the mechanics, you may want to know what else you can do with messages. Listed below are three possible solutions using messages:

• Create a message break-handling program for your message queue. (See Chapter 8.)

• Use the SNDPGMMSG and RCVMSG commands to send and receive data strings between programs (see the sketch below). This avoids having to submit a job or call a program. For instance, you might send a string of order data to a message queue where the order print program uses RCVMSG to receive and print the order data. The order print program simply waits for messages to arrive on the queue. This functions much like data queue processing, but is simplified because you can display message information (you cannot display a data queue without writing a special program to perform that task).

• Create a request message processor (a command processor like QCMD). See Chapter 8 of the Control Language Programmer's Guide (SC41-8077).
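As a sketch of the second idea (the queue ORDLIB/ORDPRTQ, the variable name, and its length are placeholders), the submitting program queues the order data, and the print program blocks until data arrives:

/* In the sending program */
SNDPGMMSG   MSG(&ORDERDATA) TOMSGQ(ORDLIB/ORDPRTQ)

/* In the order print program */
RCVMSG      MSGQ(ORDLIB/ORDPRTQ) WAIT(*MAX) +
              RMV(*YES) MSG(&ORDERDATA)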

These are only examples of how you might use messages to perform tasks on the system. With the mechanics under your belt, it's time for you to explore how you can use messages to enhance your own applications.

Chapter 32

OS/400 Commands

OS/400 commands -- friend or foe? That's the big question for anyone new to the AS/400. It is certainly understandable to look at the IBM-supplied system commands and wonder just how many there are, why so many are needed, which commands are worth learning, and how you are ever going to remember them all. You might easily decide that the procedures you've already memorized on another system are certainly better and fail to see why IBM would think the OS/400 commands could possibly be helpful!

Well, I can empathize with you. After recently trying to navigate my way around an HP3000, I kept thinking, "Why didn't Hewlett-Packard think to provide the WRKSPLF (Work with Spooled Files) command, or why not say DSPFD (Display File Description) instead of this 'LISTF .2' stuff?" Anyway, after stumbling around for days, calling everyone I could think of, and scouring the books for information, I finally managed to memorize a few of the needed commands and complete the "short" job I had set out to do. So if you get frustrated when you find the procedures you are accustomed to have been twisted into something that seems foreign, remember that being uncomfortable doesn't mean you're incompetent -- it only makes you feel that way!

With that said, and realizing that many of you need to master the AS/400 sooner or later, let me introduce OS/400 commands and give you a few helpful tips and suggestions for customizing system commands to make them seem more friendly.

Commands: The Heart of the System

The command is at the heart of the AS/400 operating system. Whether you are working with an output queue, displaying messages, creating an object, or creating a subsystem, you are executing a command. When you select an option from an OS/400 menu or from a list panel display, you are using an OS/400 command. Let me give you a couple of examples.

Figure 32.1 shows the AS/400 User Tasks menu. Next to each menu option I have added the command the system executes when you select that option. You can choose menu options to find and select the command you need, or you can simply key in the command to achieve the same results.

In Figure 32.2 you see the familiar Work with Output Queue display. Below the screen format, I have listed the available options and the command the system executes for each. For example, if you enter a "6" next to a spooled file entry on the list, the system releases that spooled file. Instead, you can type in RLSSPLF (Release Spooled File), prompt it, and fill in the appropriate parameters to accomplish the same thing. Obviously, this example is not typical of all OS/400 commands: typing in the RLSSPLF command is much more time consuming than entering a "6" in the appropriate blank. But in many cases, it's quicker and easier to key in the command than it is to use the menus. To know which technique to use, it's helpful to have a firm grasp of how commands are organized and how they can be used.

Before I continue with this chapter, let me say something about how system commands are organized and named. OS/400 commands consist basically of a verb and a noun (e.g., CRTOUTQ -- Create Output Queue), and more than two-thirds of the existing commands are constructed using just 10 verbs (CRT, DLT, WRK, CPY, CHG, ADD, RMV, DSP, STR, and END). This is good news if you are worried about remembering all the commands. I recommend that you first familiarize yourself with the various objects that can exist on the system. Once you understand most of those objects, you can quickly figure out what verbs can operate upon each object type.
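For example, once you know the object type output queue (*OUTQ), the standard verbs predict an entire family of real, IBM-supplied commands:

CRTOUTQ  (Create Output Queue)
CHGOUTQ  (Change Output Queue)
WRKOUTQ  (Work with Output Queue)
DLTOUTQ  (Delete Output Queue)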

For help identifying and using OS/400 commands, try using one or more of the following resources:

• On any command line, type "GO CMDxxx", where you fill in the xxx with either a verb or an object (e.g., GO CMDPTF for PTF-related commands, GO CMDWRK for "work with" commands). OS/400 will present you with a list of those commands.

• On any command line, press F4 (Prompt). OS/400 will present you with a menu of the major command groups.

• Type a command on the command line and press F1 (Help). OS/400 offers online help for all CL commands.

• Execute the SLTCMD (Select Command) command to find commands using a generic name (e.g., STR*, WRK*). If you are on V2R3 (or beyond), you can enter a generic name directly on the command line (e.g., CRTDEV*) and press Enter. OS/400 will present you with a list of commands that begin with the same letters you specify before the asterisk.

• If you are on V2R3 (or beyond), use InfoSeeker. You access InfoSeeker by pressing F11 on any Help Display Panel, by typing STRSCHIDX (Start Search Index) on a command line and pressing Enter, or by selecting option 20 from the Information Assistant menu (to get this menu, type "GO INFO"). InfoSeeker helps you find further command help and related information.

• Refer to the IBM reference guide Programming: Reference Summary (SX41-0028).

Tips for Entering Commands

Once you've become acquainted with some of the most frequently used commands, it's often easier to key them in on the system command line than it is to go through the menus. By putting a little time and effort into learning a few phrases in this new language, you'll be comfortable and productive with day-to-day tasks on the AS/400. Following these tips for entering commands will help ensure correct syntax and get you up to speed:

• Be sure to enter values for required parameters.

• Specify values for positional parameters unless you want to use the default values.

• When entering parameter values positionally (i.e., without keywords), key them in the same order as they appear in the command syntax diagram. The number of allowed positional parameters is designated in the syntax diagram by a "P" in a box. If the symbol does not appear in the syntax diagram, you can code all parameters positionally. If you exceed the number of allowed positional parameters, an error message is issued.

Keeping the above guidelines in mind, let's practice a few commands. First, consider the DSPOBJD (Display Object Description) command. Type "DSPOBJD" and press F4 to prompt the command. In the resulting screen (Figure 32.3), the line next to "Object" will be in bold, indicating that Object is a required parameter. Notice that the keywords appear beside each field (e.g., OBJ for object name and OBJTYPE for object type). The OBJ keyword requires a qualified value, which means that you must supply the name of the library in which the object is found. The default value *LIBL indicates that if you don't enter a specific library name, the system will search for the object in the job's library list.

Now you can key in the values QGPL and QSYS for the object name and the library name, respectively, and the value *LIB for the OBJTYPE parameter. Then press Enter, and the screen displays the object description for library QGPL, which exists in library QSYS.

Now press F11, and you will see the screen shown in Figure 32.4. Notice that the keyword OUTPUT is not in bold, showing that it is an optional parameter. The default value for OUTPUT is an asterisk (*), which instructs the system to display the results of the command on the screen.

Now, using only the command line, type in the same command as follows:

DSPOBJD QSYS/QGPL *LIB

or

DSPOBJD QGPL *LIB

Either command meets the syntax requirements. Keywords aren't needed because all the parameters used are positional, and the order of the values is correct.

Suppose you type

DSPOBJD QGPL *LIB *FULL

Will this work? Sure. In this example, you have entered values for the two required parameters and the value (*FULL) for the optional, positional parameter (DETAIL). What if you want to direct the output to the printer, and you type

DSPOBJD QGPL *LIB *FULL *PRINT

Will this work? No! You have to use the keyword (OUTPUT) in addition to the value (*PRINT), because OUTPUT is beyond the positional coding limit:

DSPOBJD QGPL *LIB *FULL OUTPUT(*PRINT)

Let's say you skip *FULL and just enter

DSPOBJD QGPL *LIB OUTPUT(*PRINT)

Because you haven't specified a value for the positional parameter DETAIL, you would get the description specified by the default value (*BASIC).

Customizing Commands

Taking our discussion one step further, let's explore how you might create friendlier versions of certain useful system commands. Most of the time you will probably prompt commands, but learning how to enter a few frequently used commands with minimal keystrokes can save you time. For example, some (translation: "many") IBM-supplied commands are long, requiring multiple keystrokes. Consider which would be faster: to prompt WRKOUTQ just to enter the output queue name, or to enter "WRKOUTQ outq_name"? Should you prompt the WRKJOBQ (Work with Job Queue) command just to enter the job queue name, or should you simply enter "WRKJOBQ job_name"? In both cases you will save yourself a step (or more) if you simply enter the command rather than prompt for the parameters.

You might also want to shorten the commands you use most often. For example, you could shorten the command WRKSBMJOB (Work with Submitted Jobs) to WSJ or JOBS, the command WRKOUTQ could become WO, and the command DSPMSG (Display Messages) could become MSG. How can you accomplish this without renaming the actual IBM commands or having to create your own command to execute the real system command? Easy! Just use the CRTDUPOBJ (Create Duplicate Object) command.

Before trying this command, take a few minutes to look over the CRTDUPOBJ command description in Volume 3 of IBM's Programming: Control Language Reference manual. CRTDUPOBJ lets you duplicate individual objects, all objects in a particular library, or multiple object types, or you can duplicate objects generically (i.e., by an initial character string common to a group of objects, followed by an asterisk). When you duplicate an object, all the object's attributes are duplicated, so the new command functions just the same as the IBM-supplied command. This means that the command processing program for WO is the same as for WRKOUTQ.

Next, create a library to hold all your new customized versions of IBM-supplied commands. You could name your new library USRCMDLIB, or CMDLIB, or anything that describes the purpose of the library. Don't place the new command in library QSYS or any other system-supplied library: New releases of OS/400 replace these libraries, and your modified command will be lost. And you should include the new library in the library list of those who will use your modified commands.

When the destination library is ready, use the CRTDUPOBJ command to copy the commands you want to customize into the new library. To rename the WRKOUTQ command, enter

CRTDUPOBJ WRKOUTQ QSYS *CMD USRCMDLIB WO

In this example, WRKOUTQ, QSYS, and *CMD are values for required parameters that specify the object, the originating library, and the object type, respectively. Or, enter

CRTDUPOBJ   OBJ(WRKOUTQ) FROMLIB(QSYS) OBJTYPE(*CMD) +
              TOLIB(USRCMDLIB) NEWOBJ(WO)

Either of these commands places the new command (WO) into library USRCMDLIB.
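Because CRTDUPOBJ accepts generic names, you could also -- as a sketch, reusing the same USRCMDLIB library -- duplicate a whole family of commands in one pass and rename individual duplicates afterward:

CRTDUPOBJ   OBJ(WRK*) FROMLIB(QSYS) OBJTYPE(*CMD) +
              TOLIB(USRCMDLIB)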

Modifying Default Values

The final touch for tailoring commands is to modify certain parameter default values when you know that you will normally use different standard values for those parameters. You may want to change default values for the CRTxxx (Create) commands especially. For example, for every physical file created, you may want to specify the SIZE parameter as (1000 1000 999), or you may want the SHARE parameter to contain the value *YES rather than the IBM-supplied default *NO.

You can change these defaults by using one of two methods. The first method requires that everyone who uses a command remember to specify the desired values instead of the defaults for certain parameters. Although you can place such requirements in a data processing handbook or a standards guide, this method relies on your staff to either remember the substitute values or look up the values each time they need to key them in.

The other method is to modify the default values of IBM-supplied commands with the CHGCMDDFT (Change Command Default) command. Take a few minutes to read the command description in IBM's Programming: Control Language Reference. CHGCMDDFT simply modifies the default values that will be used when the command is processed. To make the changes mentioned above for the CRTPF (Create Physical File) command, you would type

CHGCMDDFT CMD(CRTPF) NEWDFT('SIZE(1000 1000 999) SHARE(*YES)')

Using CHGCMDDFT is an effective way to control standards. However, you should be cautious when using this command because it affects all uses of the changed command (e.g., a vendor-supplied software package might be affected by a change you make). You might want to use a good documentation package to find all uses of specific commands and to evaluate the risk of changing certain default values. To be safer, you should duplicate the command into a different library and then change the command defaults; if you have retained the CL command names rather than renaming the commands, list the library before QSYS on the system library list.

You could also use CHGCMDDFT to enhance the WO command you created earlier. Suppose that you usually use the WO command to work with your own output queue. Why not change the default value of *ALL for the OUTQ parameter to be the name of your own output queue? Then, rather than having to type

WO your_outq

you can simply type "WO" (of course, this personalized command should exist only in your library list). If you want to work with another output queue, you can still type in the queue name to override the default value. See? Commands can be fun!

One more tip: You can modify your user profile attribute USROPT to include the value *CLKWD if you want the CL keywords to be displayed automatically when you prompt commands (rather than having to press F11 to see them). To modify this user profile attribute, someone with the proper authority should enter the CHGUSRPRF (Change User Profile) command as follows:

CHGUSRPRF user_profile USROPT(*CLKWD)

For more information about the USROPT keyword, see IBM's Programming: Control Language Reference, Volume 2.

When you use the CHGCMDDFT or CRTDUPOBJ command to customize CL commands, you should create a CL source program that performs those changes. Then, whenever a new release of OS/400 is installed, you can run the CL program, thus duplicating or modifying the new version of the system commands. The system commands on the new release might have new parameters, different command processing programs, or new default values.

The AS/400 provides a function-rich command structure that lets you maneuver through the operations of your system. I don't happen to believe that everyone should be able to enter every command without prompting or using any keywords. But I am convinced that having a good working knowledge of the available OS/400 commands not only will help you save time but also will make you more productive on the system.

Chapter 33

OS/400 Data Areas

I'd like to have a dollar for every time I've put something in a special place only to forget where I put it. Someone living in my old house in Florida will someday find that special outlet adaptor I never used. He will probably also find the casters I removed from the baby bed, a stash of new golf balls, and several little metal doohickeys I removed from the back of my PS/2 when I installed feature cards. I hope he gets some use out of them!

This tendency to misplace things also finds its way into the world of computer automation, but fortunately for those of us who need a place to keep some essential chunk of information, OS/400 provides a simple solution. If you ever write applications that use data such as the next available order number, a software version identification number, the current job step in progress for a long-running job, or a serial number, you should know about OS/400 data areas.

A data area is an AS/400 object you can create to store information of a limited size. A data area exists independently of programs and files and therefore can be created and deleted independently of any other objects on the system.

Data areas typically are used to store some incremental number. For example, a payroll application might use a data area to store the next available check number. Each time the application writes or records a check, it can get the next check number from the data area and then update the data area to reflect the use of that check number.

Another use for data areas is to emulate the S/36's IF-ACTIVE feature, which lets you check a program's execution status. You can create and name a data area for each program whose status you need to know. For instance, if PRP101 is a program to be checked, you can create a data area named PRP101 in a user library. Then you can modify program PRP101 to acquire a shared update (*SHRUPD) lock on the data area using the ALCOBJ (Allocate Object) command. An application needing to check the execution status of program PRP101 can simply try to acquire an exclusive (*EXCL) lock on the data area. If the allocation attempt fails, it means the data area object is currently allocated by program PRP101, indicating that the program is active.

Creating a Data Area

The best way to acquaint you with data areas is to walk you through the process of creating one. To create a data area object named MYDTAARA in library QGPL and initialize it with the value 'ABCDEFGHIJ', you would type the following command:

CRTDTAARA   DTAARA(QGPL/MYDTAARA) +
              TYPE(*CHAR) +
              LEN(10) +
              VALUE('ABCDEFGHIJ') +
              TEXT('Data Area to store ABCDEFGHIJ')

This data area object can contain 10 characters of data and can be referenced by any user or program authorized to use library QGPL. Notice that you can specifically identify the data area with the TEXT parameter on the CRTDTAARA (Create Data Area) command. Just as you may forget where you've placed a special object in your home, you can easily forget what you've created a data area for. Wise use of the TEXT parameter can help you effectively document data area objects.

Besides CRTDTAARA, other OS/400 commands associated with data areas are the DSPDTAARA (Display Data Area), CHGDTAARA (Change Data Area), RTVDTAARA (Retrieve Data Area), and DLTDTAARA (Delete Data Area) commands. Suppose you wanted to display the data area we just created. You would execute the following command:

DSPDTAARA QGPL/MYDTAARA

The system would then display the description and contents of the data area on your workstation.

You can use the CHGDTAARA command interactively or from within a program. The DTAARA parameter on this command lets you either replace the contents of a data area or change only a portion (substring) of the data area. For example, the command

CHGDTAARA   DTAARA(QGPL/MYDTAARA) +
              VALUE('123')

would replace the entire contents of the data area; the new value of the data area would be '123       ', padded to the right with blanks. In contrast, the command

CHGDTAARA   DTAARA(QGPL/MYDTAARA (1 3)) +
              VALUE('123')

replaces only the first three positions of the data area with the value '123'; the original value of MYDTAARA would be modified to '123DEFGHIJ'.

The RTVDTAARA command provides a simple way for CL programs to retrieve the data area value. Because the command provides return variables, it can be executed only from within a CL program. Here again, the DTAARA parameter lets you reference all or only a portion of the data area.
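Before looking at RTVDTAARA in detail, here is a sketch tying these commands back to the check-number example (the data area PAYLIB/CHECKNBR is hypothetical and is assumed to have been created with TYPE(*DEC)):

PGM
  DCL         VAR(&CHKNBR) TYPE(*DEC) LEN(7 0)
  /* Get the next available check number */
  RTVDTAARA   DTAARA(PAYLIB/CHECKNBR) RTNVAR(&CHKNBR)
  /* ... print the check using &CHKNBR ... */
  /* Record that the number has been used */
  CHGVAR      VAR(&CHKNBR) VALUE(&CHKNBR + 1)
  CHGDTAARA   DTAARA(PAYLIB/CHECKNBR) VALUE(&CHKNBR)
ENDPGM

In a multi-user environment, you would also want to wrap the retrieve-and-update pair in ALCOBJ/DLCOBJ with an *EXCL lock so two jobs cannot draw the same number.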

The CL program statement

RTVDTAARA   DTAARA(QGPL/MYDTAARA) +
              RTNVAR(&ALL)

would retrieve the entire contents of MYDTAARA ('ABCDEFGHIJ') into return variable &ALL, starting in the left-most position of the return variable. If the value being placed into the return variable is shorter than the return variable, the value is padded to the right with blanks. Now consider the following CL program statement:

RTVDTAARA   DTAARA(QGPL/MYDTAARA (4 2)) +
              RTNVAR(&JUST2)

This RTVDTAARA command retrieves from MYDTAARA a substring of two characters, starting with position 4. The variable &JUST2 would return the value 'DE'.

One performance tip to remember when using the RTVDTAARA command is that it is more efficient to retrieve the entire data area into a single CL variable and use several CHGVAR (Change Variable) commands to pull substrings from that variable than it is to execute several RTVDTAARA commands to retrieve multiple substrings. Every RTVDTAARA command must access the data area -- a time-consuming operation.

To delete this data area from your system, type the command

DLTDTAARA QGPL/MYDTAARA

You can also use high-level language programs to retrieve and modify data area values. Figure 33.1 provides sample RPG/400 code to retrieve information from a data area named INPUT. In this example, the INPUT data area data structure defines fields internally to the program. During program execution, the program implicitly reads and locks the data area when the program is initialized and then implicitly writes and unlocks it when the program ends. RPG's DEFN, IN, and OUT opcodes provide a method for explicitly retrieving and updating a data area and for explicitly controlling the lock status of a data area object. For more information about how to use these opcodes with data areas, see IBM's AS/400 Languages: RPG/400 Reference (SC09-1349) and AS/400 Languages: RPG/400 User's Guide (SC09-1348).

Local Data Areas

A local data area (LDA) is a special kind of data area automatically created for each job on the system. Unlike the data area object discussed earlier, the LDA is dependent on a particular job. You cannot manually create or delete an LDA, nor does it have an associated library (not even QTEMP). The LDA is a character-type data area 1,024 characters long and initialized with blanks. As long as the job is running, the LDA is associated with that job; it cannot be displayed or modified by any other job on the system. The LDA is simply maintained as part of the job's process access group (a group of internal objects that is associated with a job and that holds essential information about that job).

When one job submits another job, the system creates an LDA for the submitted job and copies the contents of the submitting job's LDA into it. Thus, you can pass a data string from a given job to every job it submits.

The LDA is often used to store static information that must be available to many different programs executed within a job. For example, when an employee signs on to a workstation, an initial program might retrieve information relating to that employee (e.g., branch number or employee number) and put it into the LDA. Any subsequent programs the job invokes that require this information can simply retrieve it from the LDA rather than performing additional file I/O. You can perform any number of retrievals and changes on the LDA; however, keep in mind that only one copy of the LDA exists for each job.

The RTVDTAARA statements in Figure 33.2 retrieve two substrings from the LDA, putting the first into variable &FIELD1 and the second into &FIELD2. The CHGDTAARA command replaces positions 101 through 150 of the LDA with the value of variable &NEWVAL.
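Figure 33.2 isn't reproduced here, but its statements might look something like this sketch (the positions and lengths are illustrative; note that *LDA needs no library qualifier):

DCL         VAR(&FIELD1) TYPE(*CHAR) LEN(10)
DCL         VAR(&FIELD2) TYPE(*CHAR) LEN(30)
DCL         VAR(&NEWVAL) TYPE(*CHAR) LEN(50)
RTVDTAARA   DTAARA(*LDA (1 10)) RTNVAR(&FIELD1)
RTVDTAARA   DTAARA(*LDA (11 30)) RTNVAR(&FIELD2)
CHGDTAARA   DTAARA(*LDA (101 50)) VALUE(&NEWVAL)

The performance tip mentioned earlier applies to the LDA, too: one retrieval of the whole area plus CHGVAR substringing usually beats several RTVDTAARA calls.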

Group Data Areas

If an interactive job becomes a group job (via the CHGGRPA (Change Group Attributes) command), the system creates an additional data area called the group data area (GDA). Similar to the LDA, the GDA is a blank-initialized character-type data area, but it is only 512 characters long, and it has no associated library. You cannot create or delete a GDA (although you can modify it). Another unique limitation is that you cannot use it in a substring function on a parameter; however, you can retrieve the entire GDA and then use the CHGVAR command to reference particular portions of the data. The GDA is accessible from any job in the group and is deleted when the last job in the group has ended or when the job is no longer part of a group job. For jobs that run as group jobs, the GDA simply provides additional temporary storage (beyond the LDA that exists for each job).

These are the basics you need to begin exploring OS/400 data areas. As you begin to think of reasons to use data areas on your system, you may want to look at data areas that already exist there. Check your libraries to see whether your software provider has supplied any data areas. If so, how are they used? You may discover that you have used OS/400 data areas all along.
