SG24-4526-01

AS/400 Client/Server Performance using the Windows Clients December 1996

This soft copy for use by IBM employees only.




Take Note!
Before using this information and the product it supports, be sure to read the general information in Appendix F, “Special Notices” on page 501.

Second Edition (1995, 1996)

This edition applies to OS/400 Version 3, Release 1 and OS/400 Version 3, Release 6.

Comments may be addressed to:
IBM Corporation, International Technical Support Organization
Dept. JLU Building 107-2
3605 Highway 52N
Rochester, Minnesota 55901-7829

When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you.

© Copyright International Business Machines Corporation 1996. All rights reserved.
Note to U.S. Government Users — Documentation related to restricted rights — Use, duplication or disclosure is subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corp.


Contents
Preface   xi
How This Redbook Is Organized   xi
The Team That Wrote This Redbook   xii
Comments Welcome   xiv

Chapter 1. Application Design   1
1.1 Client/Server Application Design for Performance   1
1.1.1 Online Analytical Processing (OLAP)   2
1.1.2 Exploit Strengths of the Client   3
1.1.3 Event Driven Application Processing   4
1.1.4 Exploit Strengths of the AS/400 Server   4
1.1.5 I/O Considerations - AS/400 versus Client/Server Applications   5
1.1.6 Serial Processing in the Client   6
1.1.7 Overlap Client Requests for Database Processing   7
1.2 Client/Server Application Models   8
1.2.1 Database Serving (Remote Data and Distributed Data)   9
1.2.2 Distributed Display   12
1.2.3 Distributed Logic   12
1.2.4 Distributed Logic and Data   13
1.2.5 Data Placement Considerations   13
1.2.6 Distributed Logic Methods   13
1.2.7 Other Considerations for the AS/400 System as a Server   14
1.2.8 Commitment Control   15
1.3 Database Design for Performance   16
1.3.1 Referential Integrity (RI)   16
1.3.2 Triggers   18
1.3.3 Indexes and Views   19
1.3.4 Denormalization   20
1.3.5 Database Design Conclusion   21
1.4 AS/400 SQL Performance   22
1.4.1 SQL Catalog   22
1.4.2 Optimizer Makes the Decisions   22
1.4.3 Terms   23
1.4.4 Access Plan   24
1.4.5 Tools Available to Verify SQL Processing   24
1.5 Understanding V3R1 Enhancements for SQL Processing   30
1.5.1 The WHERE Clause   30
1.5.2 Multi-key Row Positioning with OR Criteria   32
1.5.3 Multi-key Row Positioning and JOIN   33
1.5.4 HAVING to WHERE   34
1.5.5 Data Space Scan Selection   34
1.5.6 Extended Join Support for SQL   35
1.5.7 Conflicting Index Requirements   36
1.6 Tips and Techniques for SQL Queries   38
1.6.1 Sort   38
1.6.2 Temporary Index Creation   38
1.6.3 Maintain Useful Indexes Over Tables   38
1.6.4 Avoid Temporary Results   38
1.6.5 Data Skew   39
1.6.6 Dynamic SQL   39
1.6.7 Minimize Data Movement   39

© Copyright IBM Corp. 1996



1.6.8 Avoid Data Conversion   39
1.6.9 SQL Program Compiles   40
1.6.10 Keep Predicates Clean   40
1.6.11 Other Considerations   40

Chapter 2. Communications Performance   41
2.1 Introduction to Communication SNA   41
2.1.1 What Is SNA?   41
2.1.2 The Basics   42
2.1.3 What Is an LU?   43
2.1.4 What Is LU 6.2?   43
2.1.5 What Is a Session?   43
2.1.6 What Is a TP?   43
2.1.7 What Is a Conversation?   44
2.1.8 Attach Manager   44
2.1.9 Parallel Sessions   44
2.1.10 What Is a Mode?   44
2.1.11 What Is an RU (Request/Response Unit)?   48
2.1.12 SNA Pacing   49
2.1.13 Line Protocol Frame Size and Response Requirements   50
2.1.14 AS/400 Error Recovery Parameters   52
2.1.15 What Is an APPC Verb?   53
2.1.16 Types of APPC Verbs   53
2.1.17 SNA Layers over Token-Ring Network   54
2.1.18 SNA RU and Line Protocol Frame Analogy   56
2.2 Sockets Communications Support over TCP/IP   57
2.2.1 Establishing Client/Server Communications   57
2.2.2 Server Mapper Daemon   58
2.2.3 Server Daemons   58
2.2.4 Service Table   59
2.3 Where Do You Begin with Unacceptable Performance?   60
2.4 Components of Communications   60
2.4.1 Communications Tasks CPU Time   61
2.4.2 Line Time   62
2.4.3 Wait Time   63
2.4.4 Autostart Jobs   63
2.4.5 Prestart Jobs   63
2.4.6 Communications Trace as a Performance Tool   64
2.5 Performance Recommendations   65

Chapter 3. Work Management   67
3.1 Performance Concepts   67
3.1.1 Queuing Concepts   67
3.1.2 Response Time Curve   69
3.1.3 Components of Response Time   70
3.2 Subsystems   72
3.2.1 IBM Supplied Subsystems   73
3.2.2 QBASE Subsystem   73
3.2.3 QCTL, QBATCH, QINTER Subsystems   74
3.2.4 QSPL Subsystem   74
3.2.5 QSNADS Subsystem   74
3.2.6 QSYSWRK Subsystem   75
3.2.7 QLPINSTALL   78
3.2.8 QPGMR   78
3.2.9 Client/Server Subsystems   78



3.2.10 Prestart Jobs   80
3.3 Memory Management   83
3.4 Job Execution   84
3.4.1 Identifying Database Server Job   85
3.4.2 Identifying Database Server Jobs Using WRKOBJLCK   86
3.4.3 Identifying Server Jobs Using Sockets Communications Support   87

Chapter 4. Client/Server Application Serving   89
4.1 Introduction   89
4.1.1 Application Serving APIs   89
4.2 Program-to-Program Communications (APPC)   90
4.2.1 APPC Programming Options   90
4.2.2 APPC Conversations   91
4.2.3 Personal Computer Programming   92
4.2.4 Personal Computer Programming Examples   93
4.2.5 Visual Basic Example Code   93
4.2.6 C++ Example Code   95
4.2.7 AS/400 Processing   97
4.2.8 Example DDS for the ICF File   98
4.2.9 ICF Creation   98
4.2.10 Example RPG Code   98
4.3 Program-to-Program Communications (Sockets)   100
4.3.1 Sockets Flow of Events   102
4.3.2 Sockets Performance   103
4.4 Data Queues Interface   103
4.4.1 Data Queues Implementation   104
4.4.2 Remote Data Queue Function   104
4.4.3 PC API Interfaces   105
4.4.4 Data Queue Implementation   106
4.4.5 Commonly Used Data Queue APIs   107
4.5 Distributed Program Call Interface   110
4.5.1 Distributed Program Call Flow   111
4.5.2 Typical Code for DPC   112
4.5.3 Visual Basic Example Code for DPC   112
4.5.4 C++ Example Code for DPC   114
4.5.5 Comparison of Techniques   115
4.6 Summary   116

Chapter 5. Client/Server Database Serving   119
5.1 Introduction   119
5.1.1 AS/400 Database Serving   119
5.1.2 Client Access/400 Servers   120
5.2 Remote SQL Interface   121
5.2.1 Remote SQL Architecture   122
5.2.2 Remote SQL Enhancements   122
5.2.3 SQL Verbs   123
5.2.4 Other SQL Verbs Supported   123
5.2.5 Example Program Flow   124
5.2.6 Example Code   124
5.2.7 Remote SQL Summary   125
5.3 Open Database Connectivity (ODBC) Interface   125
5.3.1 ODBC Interface   126
5.3.2 ODBC Components   126
5.3.3 Types of ODBC Drivers   127
5.3.4 ODBC Conformance Levels   127


5.3.5 PC Support/400 V2R3 ODBC Driver   130
5.3.6 Client Access/400 Windows 3.1 ODBC Driver   131
5.3.7 ODBC Support   132
5.3.8 Calling ODBC Functions   132
5.3.9 Basic Application Steps   132
5.3.10 Simplified ODBC Application Structure   132
5.3.11 Basic Application Flow   133
5.3.12 Programming Prerequisites   134
5.3.13 Using ODBC DLL Functions   134
5.3.14 API Conformance - Core   135
5.3.15 API Conformance - Level 1   136
5.3.16 API Conformance - Level 2   137
5.3.17 Environments, Connections, Statements   139
5.3.18 Simplified C Example - Data Entry   140
5.3.19 ODBC Function Return Codes   141
5.3.20 General ODBC Application Flow   144
5.3.21 Coding the ODBC APIs   146
5.3.22 ODBC Stored Procedures   151
5.3.23 Calling Stored Procedures Using Result Sets   154
5.3.24 Examples Using Stored Procedure with Result Sets   155
5.3.25 Using Parameters with AS/400 Languages   160
5.3.26 Commitment Control Considerations   161
5.3.27 Using Optimistic Record Locking   161
5.3.28 Using Stored Procedures to Run Commands   164
5.3.29 Extended Fetch   165
5.3.30 Block Insert   166
5.3.31 Visual Basic Controls and Database Objects   166
5.3.32 Configuring an ODBC Data Source for Windows 3.1   168
5.3.33 Performance Tuning IBM's ODBC Driver   171
5.3.34 Windows 3.1 ODBC Administrator   171
5.3.35 ODBC.INI   172
5.3.36 Configuring an ODBC Data Source for Windows 95   173
5.3.37 ODBC Parameters   178
5.3.38 Using the Predictive Query Governor from ODBC   184
5.3.39 Exit Programs   186
5.3.40 Running 16 Bit ODBC Applications under Windows 95   188
5.3.41 Comparison of ODBC Techniques Using Windows 3.1   189
5.3.42 Comparison of ODBC Techniques Using Windows 95   190
5.4 OLTP Serving   191
5.5 Client/Server 4GL and Middleware   192
5.6 Query Download/Upload (Database File Transfer)   193
5.6.1 Query Download   193
5.6.2 Query Upload   194
5.7 Summary - Database Serving   194

Chapter 6. Client/Server File Serving   195
6.1 AS/400 Client/Server Options   195
6.2 Integrated File System (IFS) Overview   196
6.2.1 IFS File Server   196
6.2.2 AS/400 Integrated PC Server (FSIOP) Concepts   196
6.2.3 LAN IOP Response Time   197
6.2.4 LAN IOP Throughput   199
6.3 AS/400 File Serving Performance   199
6.4 File Serving Performance Positioning   200
6.4.1 File Serving Workloads and Configurations   200


6.5 Client Access/400 File Serving Performance   201
6.5.1 Performance Data   202
6.5.2 Conclusions and Recommendations   202
6.5.3 CA/400 for Windows 95 File Serving   203
6.5.4 Performance Tips/Techniques for Client Access/400 File Serving   204
6.6 LAN Server/400 and FSIOP File Serving Performance   207
6.6.1 LAN Server/400 and FSIOP Sizing Guidelines   208
6.6.2 BAPCo5 Workload File Serving Comparisons   211
6.6.3 Conclusions and Recommendations   211
6.7 Multimedia File Serving   213
6.7.1 Conclusions and Recommendations   215
6.8 FSIOP Performance Monitor Query - Cache   215
6.9 FSIOP Performance Monitor Query - CPU   216
6.10 FSIOP Recommendations   220
6.11 Save/Restore Considerations   222
6.12 OS/400 Integration for Novell NetWare   222
6.12.1 Configurations   223
6.12.2 Workload Descriptions   223
6.12.3 Measurement Results   223
6.12.4 Conclusions   224
6.13 OS/400 Integration of Lotus Notes   225
6.13.1 Number of Notes Clients Supported   225
6.13.2 Workload Scenario Descriptions   225
6.13.3 Conclusions and Recommendations   227
6.13.4 Client and Server Configurations   229
6.14 Lotus Notes DB2 Integration Performance   229
6.14.1 Importing DB2/400 Data To a Lotus Notes Database   229
6.14.2 Shadowing DB2/400 Data To a Lotus Notes Database   231
6.14.3 Conclusions and Recommendations   232
6.14.4 Exit Program: Data from a Notes Database to DB2/400   233

Chapter 7. Client/Server Performance Tuning   235
7.1 AS/400 Utilization Guidelines   236
7.2 AS/400 Server Tuning   237
7.2.1 Workload and Memory   237
7.2.2 Assigning a Storage Pool   238
7.2.3 Expert Cache   239
7.2.4 Set Job Access (SETOBJACC) Command   240
7.2.5 Prestart Jobs   242
7.2.6 Parallel Pre-Fetch   243
7.2.7 Communications - SNA   244
7.2.8 Communications - TCP/IP   246
7.3 Client Tuning   249
7.3.1 The SmartDrive (SMARTDRV) Command   249
7.3.2 Microsoft Software Diagnostics (MSD)   250
7.3.3 The Defragment (DEFRAG) Program   251
7.3.4 Data Placement   251
7.3.5 Application Design   251
7.3.6 Client Hardware Performance Comparison   251
7.3.7 Client Check List   252

Chapter 8. Client/Server Performance Analysis   255
8.1 A Methodology Overview   255
8.2 Data Collection Tools   259
8.2.1 AS/400 Server Performance Data   261


8.2.2 AS/400 Communications Trace   267
8.2.3 AS/400 Job Trace   271
8.2.4 Detailed Job Information - Server   271
8.2.5 Client Access/400 Client Tools - ODBC Trace   276
8.2.6 Client Access/400 Client Tools - Start Debug   277
8.2.7 Client Response Time Log   277
8.3 Print Reports   279
8.3.1 Performance Reports   279
8.3.2 ODBC API Trace Example   293
8.3.3 SQL Package   301
8.3.4 Communications Trace Reports   304
8.4 Data Collection Checklist   309

Chapter 9. Client/Server Capacity Planning   313
9.1 Client/Server Modeling   313
9.2 Creating a Model Using User-defined Job Classification   317
9.2.1 Exercise 1: Creating the Model from Performance Data   317
9.2.2 Exercise 2: Calibrating the Model   323
9.2.3 Exercise 3: Saving the ODBCWL User-Defined Workload   328
9.3 Growth Analysis   330
9.3.1 Exercise 1: Increasing the ODBCWL Number of Users   330
9.3.2 Exercise 2: Manually Modeling CISC Traditional to RISC Server   336
9.3.3 Exercise 3: Automatically Upgrading CISC to RISC   343
9.4 AS/400 Performance in a Server Environment   348
9.4.1 Impact of Interactive Work on Server Model Performance   349
9.5 Client/Server Capacity Planning Summary   350

Chapter 10. Case Study   351
10.1 Overview of the Application   351
10.1.1 The Company   351
10.2 CPW Benchmark Database Layout   353
10.2.1 District   353
10.2.2 Customer   354
10.2.3 Order   355
10.2.4 Order Line   355
10.2.5 Item (Catalog)   355
10.2.6 Stock   356
10.3 Database Terminology   356
10.4 CPW (New Order) Application Example   357
10.5 Case Study Tests   360
10.6 Case Study Analysis   361
10.6.1 Response Time Log   361
10.6.2 Performance Reports   361
10.6.3 ODBC Trace   371
10.6.4 SQL Package   375
10.6.5 Query Optimizer Decisions   376
10.6.6 JOB LOG   377
10.6.7 Job Trace   381
10.6.8 Communication Trace   384
10.6.9 Client/Server Order Entry Benchmark Test Results   390
10.6.10 Conclusions   391

Appendix A. Example Programs   393
A.1 Database Serving Using Visual Basic and Windows 3.1   394
A.1.1 Client/Access ODBC Using Visual Basic Database Objects   394


A.1.2 Client/Access ODBC Using ODBC APIs   397
A.1.3 Client/Access ODBC Using Stored Procedures   399
A.2 Database Serving Using Visual Basic and Windows 95   402
A.2.1 Client/Access ODBC Using Visual Basic Database Objects   402
A.2.2 Client/Access ODBC Using ODBC APIs   405
A.2.3 Client/Access ODBC Using Stored Procedures   408
A.3 Application Serving Using Visual Basic with Windows 3.1   410
A.3.1 Using APPC and Visual Basic to Access the AS/400 System   410
A.3.2 Application Serving Using DPC   412
A.3.3 Application Serving Using Visual Basic and Data Queues   414
A.4 Database Serving Using Visual C++ and Windows 3.1   414
A.4.1 Client/Access ODBC Using ODBC APIs   415
A.4.2 Client/Access ODBC With Block Inserts and Extended Fetch   417
A.4.3 Client/Access ODBC Using Stored Procedures   418
A.5 Application Serving Visual C++ With Windows 3.1   420
A.5.1 Visual C++ Example Using APPC With Windows 3.1   420
A.5.2 Application Serving Using DPC   422
A.5.3 Application Serving Using Data Queues   423
A.6 AS/400 Programs   424
A.7 Running the Application   425

Appendix B. Communications Trace Examples   429
B.1.1 SNA ODBC Communication Trace (No Blocked Inserts) Example   430
B.1.2 SNA ODBC Communication Trace (Blocked Insert) Example   444
B.1.3 TCP/IP ODBC Communication Trace Example   448

Appendix C. ODBC Trace Example   467

Appendix D. BEST/1 CISC to RISC Conversion Example   489
D.1 Evaluating High CPU or Disk I/O Workloads   489
D.1.1 BEST/1 Workload Calculation Example   490

Appendix E. Database Server Function Code Summaries   493
E.1.1 SQL Server Functions   494
E.1.2 SQL Attribute Functions   495
E.1.3 SQL Reply Functions   496
E.1.4 SQL RPB Functions   497
E.1.5 Native Database (NDB) Server Functions   498

Appendix F. Special Notices   501

Appendix G. Related Publications   503
G.1 International Technical Support Organization Publications   503
G.2 Redbooks on CD-ROMs   503
G.3 Other Publications   503

How To Get ITSO Redbooks   505
How IBM Employees Can Get ITSO Redbooks   505
How Customers Can Get ITSO Redbooks   506
IBM Redbook Order Form   507

List of Abbreviations   509

Index   511


Preface
Achieving maximum performance with AS/400 client/server applications is a key concern for application developers. This redbook focuses on how to optimize client/server performance and provides guidance in resolving performance problems. It covers key performance-related application development topics for both the AS/400 platform and the personal computer platform. Also covered are communications, AS/400 work management, performance measurement, and AS/400 capacity planning. Many examples are given throughout the document, and PC media containing sample code is included. This redbook was written for technical personnel who are responsible for developing, maintaining, or performance tuning AS/400 client/server applications. Some knowledge of SQL, PC technology, AS/400 work management, and communications is assumed.

How This Redbook Is Organized
This redbook contains 517 pages. It is organized as follows:

Chapter 1, “Application Design” This chapter covers client/server application design with good performance being a key requirement.

Chapter 2, “Communications Performance” This chapter looks at some of the key communications issues within the client/server environment.

Chapter 3, “Work Management” This chapter deals with the work management issues for the client/server environment.

Chapter 4, “Client/Server Application Serving” This chapter describes application serving performance issues. The APPC, Data Queue and Distributed Program Call interfaces are discussed.

Chapter 5, “Client/Server Database Serving” This chapter describes database serving performance issues. The Remote SQL and ODBC interfaces are discussed.

Chapter 6, “Client/Server File Serving” This chapter describes file serving performance issues. The Integrated File System and the FSIOP options are discussed.

Chapter 7, “Client/Server Performance Tuning” This chapter describes performance tuning for both the AS/400 system and the client system.

Chapter 8, “Client/Server Performance Analysis” This chapter describes performance analysis. AS/400 performance measurement tools, communication traces, and ODBC traces are discussed.

Chapter 9, “Client/Server Capacity Planning”

© Copyright IBM Corp. 1996

xi

This soft copy for use by IBM employees only.

This chapter describes capacity planning for client/server. The use of BEST/1 is discussed for a client/server environment.

Chapter 10, “Case Study” This chapter presents a performance analysis case study. It offers a step by step performance analysis of an AS/400 client/server application.

Appendix A, “Example Programs” This appendix describes the content of the included PC media of example programs.

Appendix B, “Communications Trace Examples” This appendix has some sample communications traces.

Appendix C, “ODBC Trace Example” This appendix has some sample ODBC traces.

Appendix D, “BEST/1 CISC to RISC Conversion Example” This appendix has examples of BEST/1 CISC to RISC conversion factors for capacity planning.

Appendix E, “Database Server Function Code Summaries” This appendix provides a set of tables that contain the "database server function codes" that are exchanged between the OS/400 server and the client workstation requesting the functions.

The Team That Wrote This Redbook
This redbook was produced by a team of specialists from around the world working at the International Technical Support Organization Rochester Center.

Bob Maatta is a Consulting International Technical Support Specialist for AS/400 from the United States at the International Technical Support Organization, Rochester Center. He writes extensively and teaches IBM classes worldwide on all areas of AS/400 client/server. Before joining the ITSO 2 years ago, he worked in the U.S. AS/400 National Technical Support Center as a Consulting Market Support Specialist. He has over 20 years of experience in the computer field and has worked with all aspects of personal computers for the last 10 years.

Jim Cook is a Consulting International Technical Support representative from the U.S. with 30 years of experience within IBM. He has been assigned to the Rochester ITSO since 1994, working primarily in the performance area. He presents ITSO skills transfer sessions for the AS/400 system and has advised and authored several redbooks.

Ana Cristina Dias de Carvalho is a Product System specialist in Brazil. She has worked at IBM for almost 4 years and has 3 years of experience in the AS/400 field. Her areas of expertise include performance, database, and Client Access/400.

Klaus Subtil joined IBM Germany in October 1985 after graduating with an M.S. in Computer Science and Economics from University Fredericiana, Karlsruhe, Germany. He spent his initial year at IBM in the IBM Distribution Center supporting international applications and their users. From 1987 to the end of 1992, Klaus worked in the German Field Support Center for System/36 and AS/400, respectively. From 1992 to 1996, Klaus was on assignment to the International Technical Support Organization, Rochester, MN. He is currently a Technical Marketing Specialist for AS/400 in IBM Germany.

The authors of the first edition of this redbook were:
Bob Maatta, ITSO Rochester
Jim Cook, ITSO Rochester
Klaus Subtil, ITSO Rochester
Trevor Campbell, Kaz Australia
Klas Karlsson, IBM Sweden
Oliver Fortuin, IBM South Africa
Lloyd Perera, IBM Australia
Lee Recknor, IBM Rochester
Gottfried Schimunek, IBM Germany

The advisor of the first edition of this redbook was:
Bob Maatta, International Technical Support Organization, Rochester Center

Thanks to the following people for their invaluable contributions to this project:
Linda Allen
Lance Amundsen
John Bazey
John Broich
Bob Driesch
Randy Egan
Dave Johnson
Janet Krueger
Ray Morse
Bob Nelson
Rick Peterson
Tom Schreiber
Bob Schuster
John Sears
Susan Tomanek
Mark Williamson
Larry Youngren
All from IBM Rochester

Carmen Sandin, IBM Spain Bruce Wassell, IBM UK

Comments Welcome
We want our redbooks to be as helpful as possible. Should you have any comments about this or other redbooks, please send us a note at the following address: redbook@vnet.ibm.com Your comments are important to us!


Chapter 1. Application Design
This chapter looks at the design considerations for producing a well-performing client/server application. The chapter does not attempt to be complete, because some of the performance considerations are general client/server considerations, while others are specific to the AS/400 system.

1.1 Client/Server Application Design for Performance
What are the different components to be aware of when considering the performance of your application?

Understanding application design issues:
− What are the performance objectives?
− What is the communication line time impact?
− How to establish efficient database serving.
− How to use different application serving approaches.

Implementation of proper database design techniques:
− Understanding the effect of data placement alternatives.
− Implementing indexing to improve performance.
− Considerations when normalizing and denormalizing.

Create an awareness of performance considerations in client/server applications.

This chapter introduces you to a number of issues regarding the design of client/server applications to achieve maximum performance. Various performance-related aspects of this scenario are addressed. You are given insight into a variety of different considerations such as understanding the performance objectives for different kinds of client/server applications. You are also given a greater understanding of the impact that communications has on response time in this environment, which probably is the most important factor affecting performance. Various database serving approaches are discussed and you will learn techniques to establish well-performing application serving applications. One major component in creating well-performing client/server applications is to consider performance aspects in your database design techniques. You will learn advantages and considerations regarding distribution of data, as well as an understanding of the use of correct indexing. You will be prepared to denormalize the database to gain better performance. You will also get an idea of the effect of using advanced database design issues, such as referential integrity, triggers, and stored procedures, regarding both performance and the enforcement of business rules.


1.1.1 Online Analytical Processing (OLAP)

Figure 1. OLAP - Online Analytical Processing

One of the first issues to consider when developing a client/server application is to understand what the end user expects from the application in terms of performance. The term client/server has become one of the most popular buzzwords in the industry today. There is, however, no such thing as the client/server application. End users have different needs, expectations, and objectives when it comes to not only the function, but also the performance, of their applications. E.F. Codd & Associates have developed the theory of Online Analytical Processing (OLAP) in a paper that expands on the popular theory of relational databases. The study discusses the need for efficient tools for the continuing analysis of a corporation′s performance. A summary of the results from this study shows that end users in an organization have different requirements regarding:
• The tools
• The response time
• The currency of the data

One way of describing these requirements is to divide end users and their client/server applications into three major categories:

Executive Information Systems:
− There is no need for sub-second response time; queries can sometimes be long-running (15 minutes or more).
− The user must be able to create drill-down queries, where information on one level is used to select more detailed information on the next level.
− The entry-level query can often be summarized, but drilling down should always lead to production data.
− The application is mostly database-serving oriented.
− Access to the database is mostly read-only.

Decision support:
− There is no need for sub-second response time, but the result must be returned to the end user in an intermediate time frame (less than 5 minutes).
− Drill-down capabilities are not always essential; the user often runs queries on one level of aggregation.
− The application is mostly database-serving oriented.
− Access to the database is mostly read-only.

Transaction processing:
− Sub-second response times are often a must.
− The application has a great impact on the organization.
− To gain the best performance, application logic often needs to be split between the client and server.
− Access to the database is read/write.

One conclusion drawn from this is that user expectation regarding response time is a vital factor to consider when designing client/server applications. The most common method used today is to design the application from a database serving approach. Unfortunately, this often does not meet the requirements of a transaction-processing application. Other terms used in technical publications and the industry that you can relate to the OLAP concept are data mart and data warehouse, where data is aggregated or denormalized.

1.1.2 Exploit Strengths of the Client
Customers moving to a client/server environment want to take advantage of:
• Dedicated processor
• Intuitive Graphical User Interface (GUI)
• High-function keystroke and mouse processing
• Tailoring the workplace to the user
• Application portfolio:
  − Personal applications: word processors, spreadsheets
  − Workgroup applications: mail, calendar
  − Business applications: Executive Information Systems, decision support, ad hoc queries, transaction processing
• Event-driven application processing

Even if the PC′s processor is dedicated to serving only one user, it is no longer dedicated to one single task on the client. With the new operating systems available on the market, such as OS/2 and Windows 95, users start multiple applications on their workstations. This naturally degrades performance, because all of these applications share the resources in the client. The intuitive Graphical User Interface employed in current client applications offers undisputed ease of use and a very short learning curve. Mouse processing, however, is not the most efficient way to enter data into the system. Keystroke processing is still faster and should be used for data entry applications such as Order Entry.

The vast number of applications available on the client lets the user perform a variety of tasks, both personal and workgroup related. Business applications are often adapted to specific needs and, in some instances, even tailor-made. EIS, decision support, and ad hoc queries are among the types of applications that use standard tools, sometimes customized for a specific purpose. The performance requirements for these types of applications are often measured in minutes rather than seconds. Transaction processing requires the shortest response time and has the greatest impact on business. It is this type of application that poses the greatest challenge with regard to achieving sub-second response time, and it is often the number one choice when a business decides to modernize its applications. The remainder of this chapter concentrates on the design of transaction-processing applications.

1.1.3 Event Driven Application Processing
Application processing in a multi-tasking client/server environment is very different from that in a procedural application written for non-programmable terminals.
• There is one application function per window.
• Click on the window to select the function.
• The end user controls the application flow.
• There is increased workload on the network and server.

1.1.3.1 Increased User Productivity
The main difference between traditional application design and modern graphical application design is that designers no longer control the work flow of the end user. You simply supply the user with a number of functions (windows), from which users choose in whatever sequence they prefer. This means that the design of each function has to be very robust, since you cannot predict which part of the application the user comes from or exits to. If your design objective is to achieve maximum throughput, you will probably experience a significant increase in the workload placed on both the network and the server. Take this into consideration, especially if these resources are already highly utilized, since it can have a major adverse effect on the overall performance of your application.

1.1.4 Exploit Strengths of the AS/400 Server
The AS/400 system as a server provides many services in the following areas:
• Multiprogramming experience
• Large system algorithms
• Security
• Integrated File System:
  − "Root" file system (stream files)
  − Relational DBMS (sharing, integrity, ...)
  − Office (mail, calendar, ...)
  − POSIX (UNIX/AIX)
  − LAN Server (domain controller, network servers)
  − Recovery (database, indexes, system, ...)
  Refer to Database 2/400 Advanced Database Functions, GG24-4249, for more on DB2 for OS/400.
• System Management:
  − ADSTAR Distributed Storage Manager
  − Backup/Recovery Media Services
  − DataPropagator
  − Client Access/400
  − Manageware/400
  − LANServer/400
  − SystemView SystemManager/400
• Networking
• Scalability

The AS/400 system is designed for multiprogramming applications, where one program serves multiple users while still keeping job information separate. Its algorithms are designed to handle a larger number of concurrent users than most PC-based server operating systems. In a client/server environment, this is exploited by using stored procedures for an SQL-based application, and by using data queues or message queues for SNA/APPC applications. The integrated security functions provide a single access point to secure all of the objects on the AS/400 system on a user or resource basis. The integrated relational database, DB2/400, is a state-of-the-art DBMS that incorporates a number of features important in the client/server environment.

• Referential integrity and database triggers, which enforce business rules at the database level and eliminate the need to incorporate them into the client/server application.
• Stored procedures, which reduce the network traffic to the essential data.

The System Management products give you a number of important functions to manage:
• Backup (server, client, and network servers)
• Users
• Security
• Data placement (file transfer and DB propagation)
• Object distribution (PC update)

1.1.5 I/O Considerations - AS/400 versus Client/Server Applications
The impact of data transfer internally between the application program and the database files is of less importance in a traditional application. High-capacity hardware and software functions yield efficient communication. The program calls the DBMS directly, and you have only one queue or wait point that can slow down your application. Depending on hardware utilization, you still have to be careful not to initiate too much DASD or communications I/O per transaction if you want to obtain sub-second response time. The actual maximum number is dependent on the operating environment, such as system and disk models. Please refer to AS/400 Performance Management V3R1, GG24-3723, for additional guidelines.

The client/server database-serving application requires more hardware and software resources to be involved in each database request. In addition to the internal time in the server, which is about the same as in a traditional application, you must consider communications resources. Before the application program′s request for data reaches the DBMS, it has to be handled by:
• Client operating system
• Client communication software
• Client communication hardware
• LAN
• Server IOP
• Server communication software
• Server program

In this environment, you are faced with a multitude of queue or wait points, each one a potential bottleneck. You also use slower connections, because the LAN time is added on top of the internal bus time. This leads to the conclusion that, in this environment, you have to use fewer communication I/O requests per transaction than in the traditional application if you want sub-second response time. The exact number of communication I/Os depends not only on the speed of the communications link, but also on the communication line utilization, which can have a dramatic negative effect on response time. Another important factor is the physical layout of the network. The use of bridges and routers can significantly degrade performance. You should also be aware of the limited performance of WANs compared to LANs.
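To make the arithmetic concrete, the effect of communications time being added to every I/O request can be sketched as follows. All of the per-request times are assumed values for illustration only, not measured AS/400 figures:

```python
# Back-of-the-envelope response-time arithmetic for client/server I/O.
# All per-request costs are assumed (hypothetical) values.
def response_time_ms(requests, server_ms_per_io, line_ms_per_io):
    """Every I/O request pays server time plus a communications round trip."""
    return requests * (server_ms_per_io + line_ms_per_io)

# Traditional application: the program calls the DBMS directly, no line time.
traditional = response_time_ms(20, 15, 0)          # 300 ms
# The same 20 I/Os over a LAN at an assumed 40 ms per round trip.
lan = response_time_ms(20, 15, 40)                 # 1100 ms
# Cutting the transaction to 10 requests recovers half the line time.
lan_fewer_requests = response_time_ms(10, 15, 40)  # 550 ms
```

The sketch shows why reducing the number of requests per transaction matters far more in the client/server case: the line time multiplies with the request count, while the server time is roughly the same in both environments.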

1.1.6 Serial Processing in the Client
The obvious design choice for an application is to send off a request to the server and then receive the answer, as shown in Figure 2 on page 7. This is repeated until all requests for a given transaction have been processed. Because each database request puts the application on hold, this can result in excess wait time while every result set is transmitted back to the client application.


Figure 2. Serial Processing in the Client

An alternative approach is to overlap the client processing with the database processing. This requires the use of techniques other than just plain SQL requests, such as:
• Data queues
• Stored procedures
• APPC
• Message queues

You will find more information on application serving in this publication.

1.1.7 Overlap Client Requests for Database Processing
To overlap, you may separate the database requests from the active window and issue them from a secondary (hidden) window or windows. This enables the user to continue working in the active window without having to wait for the server to respond. The response time before the result is visible in the active window is unchanged, but the user can continue working in the meantime.


Figure 3. Overlap Client Requests for Database Processing

Depending on the application requirement, you may consider using these methods to achieve the response time you require:
• Background windows for database requests
• Single thread of execution
• Multiple threads of execution

Single thread of execution means that the database request flows only from client to server, with no reply needed (for example, INSERT). Multiple threads of execution are used for database requests that return a result set to the client (for example, SELECT). Be aware that using these methods can increase the workload on both the network and the server, and can have the opposite effect on the overall performance of your application. Use the available tools to measure and predict performance, and make sure you have sufficient hardware resources in both the server and the network.
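A minimal sketch of the multiple-threads-of-execution idea follows, using Python threads with a simulated database request. The query name, timing, and result rows are hypothetical stand-ins; the point is only that the active window keeps working while the hidden thread waits for the result set:

```python
import threading
import time

results = {}

def fetch_result_set(request_id):
    """Stand-in for a database request that returns a result set."""
    time.sleep(0.1)                      # simulated server + line time
    results[request_id] = ["row1", "row2"]

# Issue the request from a secondary ("hidden") thread of execution...
worker = threading.Thread(target=fetch_result_set, args=("order-query",))
worker.start()

# ...while the active window keeps responding to the user.
keystrokes_handled = sum(1 for _ in range(1000))

worker.join()                            # the result set is now available
```

The same structure applies whether the background unit of work is a thread, a hidden window, or an asynchronous ODBC call: the user-facing path never blocks on the server round trip.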

1.2 Client/Server Application Models
Many studies have been done to group client/server applications into categories. The categorization used here is the following one from the Gartner Group.


Figure 4. Gartner Group Client/Server Model

1.2.1 Database Serving (Remote Data and Distributed Data)
This is the most common solution for client/server applications, and for many C/S application development tools it is the only one available. It is also the easiest way for PC developers to take advantage of the capabilities of the AS/400 system:

One control point for database
From the client program′s perspective, it deals with only one DBMS, even if the actual data is split over multiple nodes.

Most mature tool set
Be aware of how 4GL tools generate I/O requests; some place a severe load on both the network and the server by issuing more requests than others.

Database recovery and restart
With a single DBMS serving your application, disaster recovery and restarting the database are less challenging.

Security
Database access authority can be controlled through the server′s security administration facility.

Portability
Using a standard data access language such as SQL makes your application largely independent of the server′s operating system, data management, and storage management.

Consider the following aspects for a database serving application:

Database performance
Database performance is a critical factor, because the server is not dedicated to a single client. Make sure that you have sufficient resources to give adequate service to all of the clients accessing the server.

PC hardware cost and performance

Communications link performance
Communications link performance is of the utmost importance in this environment. Remember that communications time is added to every I/O request. Use Performance Tools to trace lines and applications to determine the actual flow, and try to minimize the number of requests transmitted.

Efficiency of application development tool
The quality of code generated by application development tools varies widely. Generated SQL statements, in particular, can affect database serving performance on the AS/400 system; it is essential to use the SQL parameter marker facility instead of specifying literals in an SQL statement.
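The parameter marker point can be illustrated with any SQL interface that supports `?` markers. The sketch below uses Python′s bundled SQLite driver purely so the example is runnable; this is an assumption for illustration, not the CA/400 ODBC driver, though the `?` marker syntax is the same one a prepared ODBC statement reuses:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, qty INTEGER)")

# Literals: every execution is a textually different statement, so the
# DBMS cannot reuse a prepared statement for the next row.
for i in range(3):
    conn.execute(f"INSERT INTO orders VALUES ({i}, {i * 10})")

# Parameter markers: one statement, prepared once, re-executed with new
# values each time -- the pattern the text recommends.
rows = [(i, i * 10) for i in range(3, 6)]
conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
```

With literals, the server must parse and optimize a new statement for every row; with markers, the prepared statement (and, on the AS/400 system, its entry in the SQL package) is reused.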

1.2.1.1 Database Serving - SQL Packages
SQL Packages are objects unique to the DB2 product family. They contain information about how your SQL statements are run, which means that you have an already-prepared statement for later execution. Depending on the application enabler used, you either have to create the SQL package object yourself or it is created automatically for you. With Distributed Relational Database Architecture (DRDA), packages are created at compile time or after the application program has been successfully compiled. The AS/400 ODBC driver creates packages the first time they are run. All users of a given client application share the same package. Package support is available through the following services:
• DRDA for clients with DB2 for OS/2 and DDCS/2
• ODBC for Windows and OS/2 clients
• For CA/400 clients using the *PKG APIs
• For clients using the AS/400 Extended Dynamic SQL APIs

Depending on which platforms your application runs on, you can choose from these interfaces. DRDA implies that both the application requester and the application server run the DB2 product. ODBC requires that the data source DBMS supports these calls. The CA/400 and AS/400 APIs require lower-level programming skills. Significant aspects of packages are:

Located on the AS/400 database server. Using packages allows you to have statements prepared on the server for subsequent use. Use the PRTSQLINF CL command to analyze how your SQL statements are run on the server.

Contains control structures and access plans. A SQL Package contains control structures and access plans necessary to process SQL statements on an AS/400 database server when running in a distributed environment. The contents of a package are derived from SQL statements embedded in a single source program.


• Used for remote SQL database access to DB2 for AS/400.
• The AS/400 object type is *SQLPKG.
• For DRDA, created using CRTSQLxxx or CRTSQLPKG:
  The relational database (RDB) parameter identifies the relational database on which the SQL package is to be created. The SQL package is created using CRTSQLxxx as a precompile step during the creation of an HLL program, or using CRTSQLPKG if the package was not created during a precompile. The program name specified in the command must have been previously created using CRTSQLxxx.

Package name. The default package name and library is the program name and library.

ODBC package creation. With the CA/400 ODBC driver, the creation of the package is controlled by entries in the ODBC.INI file.

Packages can be created for non-DB2/400 databases (the RDB parameter does not refer to an AS/400 system). In this case, it is possible to use SQL statements that are unique to the remote database manager. However, the GENLVL parameter in the CRTSQLxxx command should be set to 30 to allow for any SQL statements not supported by DB2/400. If an error is generated above severity 30, it is likely that the statement is not valid for any relational database. Check the precompiler listing very carefully to ensure that the statements are valid for the database to be accessed.
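As a sketch, creating a package on a remote relational database with the error-severity threshold raised might look like the following CL command; the library, program, and relational database names here are hypothetical:

```
CRTSQLPKG  PGM(APPLIB/ORDENT)  RDB(REMSYS)  GENLVL(30)
```

The PGM, RDB, and GENLVL parameters correspond directly to the points made in the text: the program must already exist from a CRTSQLxxx precompile, the RDB parameter names the target database, and GENLVL(30) tolerates statements that DB2/400 itself does not support.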

If the program contains only the following statements, a package is not required and is not generated:
− DESCRIBE TABLE
− COMMIT/ROLLBACK
− CONNECT/DISCONNECT
− SET CONNECTION
− RELEASE

Consistency token. A program and its associated package contain a consistency token that is checked when the package is referenced by the program. This prevents the program and its associated package from becoming unsynchronized.

Refer to Chapter 25 in DB2/400 SQL Programming, SC41-3611, for more information on SQL packages.

1.2.1.2 Serial/Parallel Database Serving
DRDA and the CA/400 APIs always wait for a server response to each request. Parallel processing is achieved with ODBC asynchronous execution and multiple windows with overlapping functions. Please check for the availability of AS/400 asynchronous ODBC execution. Database recovery can present a problem with multiple connections, because database requests are then processed asynchronously from the application logic. Serial processing can be implemented through the following application enablers:


• DRDA
• CA/400 APIs

Parallel processing can be implemented through:
• ODBC with asynchronous execution (check availability)
• Multiple windows and threads

1.2.2 Distributed Display
This is the easy way to develop C/S applications, but it results in heavy communications, which often prohibits sub-second response time. It also limits your possibilities to efficiently exploit the strengths of the client PC, and merely replaces the "green screen" with a more modern look without improving the function of the application. You may achieve the following benefits through a distributed display implementation:
• Low-cost PC hardware
• One control point for the database
• One control point for logic
• Off-load some display processing
• Eliminates the need for workstation controllers
• Database recovery and restart

1.2.3 Distributed Logic
Much better performance may be achieved if the logic is split between the client and the server. Few 4GL tools support this function. Using DATAQs, APPC, or stored procedures allows you to reduce data flow on the communications link, and can result in a performance boost compared with the pure database serving approach. The following lists the advantages of a distributed logic implementation:
• One control point for the database
• Possible parallel processing for logic
• Off-loads display processing from the server
• Database recovery and restart
• Optimized interaction with the server

Considerations for the distributed logic approach:

Communication link performance. Because your application controls most of the communication flow between client and server, it is also your responsibility to utilize this resource appropriately.

Maturity of tool set. Application development tools in this category are rare and sometimes lack the function and stability needed for deployment in a production environment.

Complexity of design. This approach requires programming skills on both client and server.

Dual-maintenance of applications. Application maintenance on both client and server is required.


1.2.4 Distributed Logic and Data
Downloading part of the database to the client enhances performance even more. Be careful to select data that either is unique to a given client, or is opened for input only and not required to be totally current. Consider using a tool such as DataPropagator to distribute the database files to the clients.

Advantages:
− Possible parallel database processing
− Possible parallel processing for logic
− Less interaction with the server

Considerations:
− Database sharing
− PC hardware cost
− PC software costs (DBMS)
− Maturity of tool set
− Database recovery and restart (DRDA)
− Backup
− High complexity

1.2.5 Data Placement Considerations
Placing data on the client can significantly improve performance, because the communications part of the I/O request is eliminated. You have to consider the problem of actuality and redundancy, because you have to duplicate data or find a way to divide data so that only data pertinent to each client is downloaded. The easiest way to do this separation might be to store transaction records on the client for subsequent upload to the server asynchronously from the transaction program. Some considerations are:
• For maximum parallel processing through multiple processes
• Smaller tables and files
• Frequently accessed transaction tables that are:
  − NOT shared, or
  − Shared but not for UPDATE, or
  − Shared for UPDATE where replication works
• Changes in one place only
• Replicate changes on clients (for example, Data Propagator)

1.2.6 Distributed Logic Methods
This seems to be the most complex application scenario in a client/server environment. The following services are available on the AS/400 system:
• Advanced Program-to-Program Communication (APPC)
• Data queues
• Stored procedures
• Message queues
• Triggers
• Distributed Computing Environment (DCE) Remote Procedure Call
• Referential integrity

Chapter 1. Application Design

13

This soft copy for use by IBM employees only.

Synchronous processing, such as stored procedures and triggers, results in a WAIT for the result to be returned to your application. If you do not require an immediate answer, but only want to send information to the server, you should consider using any of the other methods to enable the user to continue working without waiting for the server response. To separate server jobs per client, you need one of the following:
• One server job or job group per client
• One data queue per client
• Keyed data queues to separate clients
• Separate data queues for requests and replies
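As a sketch of the keyed data queue approach (the object and library names are hypothetical), the server could own one keyed queue for requests and one for replies, with a client identifier as the key:

```
CRTDTAQ DTAQ(APPLIB/REQQ) MAXLEN(512) SEQ(*KEYED) KEYLEN(10)  /* requests */
CRTDTAQ DTAQ(APPLIB/RPYQ) MAXLEN(512) SEQ(*KEYED) KEYLEN(10)  /* replies  */
```

Programs then send and receive entries with the QSNDDTAQ and QRCVDTAQ APIs, using the client identifier as the key, so one server job can serve many clients while each reply still reaches the right requester.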

1.2.7 Other Considerations for the AS/400 System as a Server
Some additional considerations on the AS/400 server follow.

1.2.7.1 Server Program Considerations

If you write stored procedures or triggers using 3GL languages, take the following aspects into consideration:
• To keep programs activated for subsequent executions, use these techniques:
  − C/400: return()
  − RPG/400: RETURN with *INLR off
  − COBOL/400: EXIT PROGRAM, keeping the RUN UNIT active
• Do NOT close files until the application ends.
• Use a prestarted job with pre-opened files.
• In the ILE environment, create the program with ACTGRP(*CALLER).

Try to minimize the traffic on the communications link by grouping parameters, either in parameter lists or in data structures, so that fewer I/O requests are needed.

1.2.7.2 Communications Performance Considerations
In a client/server environment, the communications medium is likely to become the slowest wait point, so use as fast a medium as you can and minimize communication traffic as much as possible.

Use the fastest available carrier:
− LAN: FDDI, SDDI, ATM
− WAN: frame relay, DIGINET

Minimize the communication requests:
− Block multiple record transfers.
− Group requests and replies whenever possible.

Minimize the amount of data transferred:
− Send only the data required.


− Use views containing only pertinent data.

These are more or less the same techniques you would consider for traditional applications, especially for remote terminals, but they have an even higher impact on a client/server application due to more frequent I/O requests in the client/server environment.

1.2.7.3 Record Locks
Even when you read with the intent to update, consider releasing the record immediately and saving a timestamp, then re-reading the record immediately before the update. Use the timestamp to check whether the record has been altered in the meantime. Note that some of the most popular 4GL tools use an optimistic approach: they simply do not lock records to ensure data integrity, so you have to provide the locking mechanism in your application. Remember:
• Read for update sets an exclusive lock.
• There is overhead to set a lock.
• There is an increased probability of lock wait delays.
• For files opened for update or add:
  − If you do not intend to update, READ with NO lock.
  − Release the lock when it is no longer needed.
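The timestamp technique above can be sketched in embedded SQL as follows; the file, column, and host variable names are hypothetical:

```sql
-- Hypothetical ORDHDR file with a LASTCHG timestamp column.
-- 1. Read without holding a lock; remember the row's timestamp.
SELECT ORDSTS, LASTCHG INTO :STS, :OLDTS
  FROM APPLIB/ORDHDR WHERE ORDNO = :ORD

-- 2. Later, update only if nobody changed the row in the meantime.
UPDATE APPLIB/ORDHDR
   SET ORDSTS = :NEWSTS, LASTCHG = CURRENT TIMESTAMP
 WHERE ORDNO = :ORD AND LASTCHG = :OLDTS

-- 3. If no row was updated (SQLCODE 100), the record was altered
--    in the meantime: re-read it and let the user decide.
```

The update succeeds only when the saved timestamp still matches, so no lock is held between the read and the update.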

1.2.8 Commitment Control
You have the choice to compromise between data integrity and concurrency. LCKLVL(*ALL) gives you the best integrity but causes the most lock waits for others. LCKLVL(*NONE) does the opposite. The following transaction isolation levels are ordered by increasing performance impact:

1. Uncommitted (LCKLVL(*NONE))
   • Highest performance and lowest data integrity.
   • No commitment control at all.
   • Every record read, updated, or inserted is immediately unlocked.
2. Uncommitted read (LCKLVL(*CHG))
   • High performance and low data integrity.
   • Read rows are not locked.
   • Can change read rows.
   • Can read updated and inserted rows.
   • Uncommitted data visible.
3. Cursor stability (LCKLVL(*CS))
   • Low performance and high data integrity.
   • Read rows are released.
   • Can change read rows.
   • Uncommitted data not visible.
4. Read stability (LCKLVL(*ALL))
   • Lowest performance and highest data integrity.
   • Every record read is locked until COMMIT/ROLLBACK.
   • Lock waits possible on all rows.
   • Uncommitted data not visible.
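As a minimal CL sketch of running under one of these lock levels (the program and library names are hypothetical, and the files involved must already be journaled):

```
STRCMTCTL LCKLVL(*CHG)     /* start commitment control at uncommitted read */
CALL PGM(APPLIB/UPDORD)    /* program updates files under commitment control */
COMMIT                     /* or ROLLBACK to back out the changes */
ENDCMTCTL
```

Choosing *CHG instead of *CS or *ALL here trades integrity for concurrency, as described in the list above.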


Please note also that with higher isolation levels the blocking capabilities of storage and data management decrease.

1.2.8.1 Memory Resident Data and Programs
Provided that sufficient main storage is available, the AS/400 system gives you several options to keep data, indexes, and programs in memory. Please refer to the appropriate AS/400 publications for more details on the following options:
• SETOBJACC CL command
• Expert cache
• Use when main storage is not over-committed
• CHGQRYA DEGREE(*ANY)

The SETOBJACC command allows you to reserve main storage space for database files and programs. It is most useful for small indexed database files that are processed randomly. Asynchronous PURGE maintains data integrity. Clear the pool with the CLRPOOL command when the file or program is no longer needed. Setting a storage pool's paging option to *CALC causes the system to examine the way records are accessed and try to read ahead, bringing data pages into main memory.
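A sketch of pinning a small index and a program into a shared pool; the object names and pool size are hypothetical, and the exact parameters should be checked against your release:

```
CHGSHRPOOL POOL(*SHRPOOL2) SIZE(4000)   /* reserve a shared pool (size in KB, hypothetical) */
SETOBJACC OBJ(APPLIB/CSTIDX) OBJTYPE(*FILE) POOL(*SHRPOOL2) MBRDATA(*BOTH)
SETOBJACC OBJ(APPLIB/ORDPGM) OBJTYPE(*PGM) POOL(*SHRPOOL2)
/* ... run the workload ... */
CLRPOOL POOL(*SHRPOOL2)                 /* release the pool when no longer needed */
```

MBRDATA(*BOTH) brings both the data and the access path of the file member into the pool.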

1.3 Database Design for Performance
The following section gives you an overview of some database design considerations that are more or less AS/400 system-specific:
• Referential integrity
• Trigger programs
• Indexes
• Denormalization

1.3.1 Referential Integrity (RI)
RI is a new database function in Version 3 Release 1 (V3R1) that allows you to set up rules to prevent, for example, accidental deletion of records that have dependent records; you should not be able to delete a customer master record if there are any outstanding invoices for that customer. RI is enforced with constraints that are specified as either UPDATE or DELETE rules.
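As a sketch (the file, library, and key names are hypothetical), a referential constraint tying an INVOICE file to its CUSTOMER parent could be added with the ADDPFCST command:

```
ADDPFCST FILE(APPLIB/INVOICE) TYPE(*REFCST) KEY(CSTNBR)
         PRNFILE(APPLIB/CUSTOMER) PRNKEY(CSTNBR)
         DLTRULE(*RESTRICT) UPDRULE(*RESTRICT)
```

With this constraint in place, DB2/400 itself rejects the deletion of a customer that still has invoices, so no checking code is needed in the client.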


Figure 5. Referential Integrity
• Ensures that data values are in line with business rules.
• Defines the relationship between data values.
• DB2/400 enforces the rules.
• Improved performance compared to HLL checking.
• No need to code the checks in the client.
• Less I/O.
• Less complexity in the client application.

Prior to OS/400 V3R1, programmers maintained data integrity by incorporating various checks in their applications. This meant that referential integrity was checked on an application basis rather than on a company basis. It resulted in considerable overhead, and the checks were often not maintained consistently throughout all of the applications on the system. Different programmers might have used different methods, and some utilities and tools allowed users to break business rules and thus corrupt the data. The accelerating use of PCs and client/server applications aggravates this problem.

Considering the previous discussion of communication line impact on response time, it is obvious that checking for RI from within the application generates excessive traffic. Enforcing RI at the database level, on the other hand, does not affect communications, and the overhead in the AS/400 system is insignificant in comparison. You also find that your application becomes less complex and easier to maintain.

When considering RI implementation, there is more than the performance aspect to look at. The integrity aspect might weigh heavier than performance. Imagine an information provider giving read/write access to a database, for example, through the Internet. The unknown client application accessing the database might not enforce the same business rules as required by this information provider. Such a company is better off enforcing its business rules at the database level through RI than relying on the accessing application not to corrupt the database integrity.


For more information on this, refer to Database 2/400 Advanced Functions Guide, GG24-4249.

1.3.2 Triggers
Because triggers are stored in the database, the actions performed by triggers do not have to be coded in each application. Once a trigger has been defined and coded, all of the applications manipulating that database file reuse that definition. This results in a faster application development process and easier application maintenance.

Figure 6. Trigger Programs

Reasons for implementing trigger programs for your application may be:
• Enforce business rules.
• Validate input data.
• Generate shadow records on different files.
• Create an audit trail.
• Faster application development.
• Easier maintenance.
• Improved performance in a client/server environment.

Because the triggers and their associated programs all run in the AS/400 system before the result is returned to the client, performance is improved compared to coding the same functions in the client. Fewer communication requests and less complex code in the client are the result.

One example where you would consider using triggers to speed up your client/server application is when the end user requests order detail lines to be inserted before the order header is entered. By specifying a trigger program that checks for the existence of the order header and inserts a standard header record if needed, you do not have to do the same checks in your application. When the user later wants to insert the header (remember that with event-driven applications, the user is in control and determines the actual application flow), you simply update the header record previously inserted by the trigger program.
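The order header example above could be wired up as follows; the file, library, and program names are hypothetical, and the trigger program itself must be written separately:

```
ADDPFTRG FILE(APPLIB/ORDDTL) TRGTIME(*BEFORE) TRGEVENT(*INSERT)
         PGM(APPLIB/CHKHDR)
```

Before each insert into the order detail file, DB2/400 calls CHKHDR, which can insert a standard order header record if none exists yet.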


1.3.3 Indexes and Views
Creating indexes for WHERE and ORDER BY columns almost always results in improved performance. The smaller the number of rows selected, the greater the performance improvement. The book DB2/400 SQL Programming, SC41-3411, gives you detailed information on where indexes are required for statement processing and where indexes improve performance. The following section describes two examples of index usage.

Figure 7. View Example

Some development tools do not support the data types typically used in DB2/400. A numeric key field in DB2/400 is stored as packed decimal; a Visual C++ program describes this field as a floating-point field. If this happens, no indexes created over the key field are used; SQL instead creates a temporary index to be used for the query. To prevent this from happening, you have to create a logical file over the physical file in which you map the database fields to match the corresponding program fields, and create the required index over the converted field. This enables the SQL optimizer to use the existing index. The most important factor affecting performance in client/server processing is communications time. To reduce the amount of data being transferred, you should consider using views containing only the data needed by your application.


Figure 8. View Example

Create views with:
• Selected rows to minimize communication.
• JOINed tables to minimize communication.

In a LAN environment, it is even more important to reduce the number of I/O requests flowing between the client and the server. This is accomplished by preparing already-JOINed views on the server from which you request your data.
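A sketch of such a pre-joined view; the files, columns, and library are hypothetical:

```sql
-- Hypothetical files: ORDDTL (order detail), CSTMR (customer), ITEM (item master).
CREATE VIEW APPLIB/ORDINFO (ORDNO, CSTNAM, ITMDSC, QTY) AS
  SELECT D.ORDNO, C.CSTNAM, I.ITMDSC, D.QTY
    FROM APPLIB/ORDDTL D, APPLIB/CSTMR C, APPLIB/ITEM I
   WHERE D.CSTNBR = C.CSTNBR
     AND D.ITMNBR = I.ITMNBR
```

The client then issues a single SELECT against ORDINFO instead of three separate requests, so the join runs on the server and only the joined rows cross the communication line.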

1.3.4 Denormalization
Your application most likely has to request data from several files in order to present the complete picture to the end user. By stepping back to 2NF, you are able to reduce the number of I/Os needed to retrieve the information. You should avoid doing this for master files.


Figure 9. Denormalization

If you decide to use this method, choose transaction files, such as Order Detail, and include data from the Item Master in each order detail record. Please note that by doing this, you not only occupy more disk space, but you also create a data consistency problem. A change to the item description in the Item Master file is not automatically propagated to all Order Detail records referring to that item number. Adding a trigger program to ensure the correct item description resolves the integrity problem but adds to the user's response time.

1.3.5 Database Design Conclusion
It all boils down to the fact that the application designers have to set performance objectives and develop a set of rules to achieve these objectives for one of the application categories:
• Executive information systems
• Decision support
• Transaction processing

Optimize database performance through the use of the following DB2/400 services:
• Indexes
• Views
• Referential integrity
• Triggers
• Stored procedures


1.4 AS/400 SQL Performance
The following section discusses more features of SQL performance on the AS/400 system. It might be helpful to have the manual DB2 for OS/400 SQL Programming, SC41-4611, handy while reading this section.

1.4.1 SQL Catalog
Prior to OS/400 V3R1, an SQL catalog was available only for SQL collections, scoped to a single collection or library. With V3R1, a system-wide catalog has been introduced for DB2 for OS/400. The SQL catalog provides consistency with the ANSI/ISO standard referred to as the Information Schema. The information is held in eight physical files (QADB*) in library QSYS and made available through 12 views in library QSYS2:
• SYSCOLUMNS - column attributes
• SYSINDEXES - indexes
• SYSKEYS - index keys
• SYSPACKAGES - packages
• SYSTABLES - tables and views
• SYSVIEWDEP - view dependencies on tables
• SYSVIEWS - view definitions
• SYSCST - constraints
• SYSCSTCOL - columns referenced in constraints
• SYSCSTDEP - constraint dependencies
• SYSKEYCST - unique, primary, and foreign keys
• SYSREFCST - referential constraints

The database manager ensures the integrity of the catalog information. When a database object is created in an SQL collection, the information is reflected in both the system-wide catalog and the collection catalog table. When a database object is created in an AS/400 library, only the catalog in library QSYS2 provides this information. Please note that not all database objects are reflected in the SQL catalog, although they are registered in the QADB* files. Examples of objects that you cannot find in the SQL catalog are logical files with select/omit access paths or multi-format join logical files created through data description specifications (DDS). Refer to Appendix E in AS/400 DB2/400 SQL Programming, SC41-3611-00, for more information.
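As a sketch of querying the system-wide catalog (the library name is hypothetical, and the column names follow the Information Schema naming, which may vary by release):

```sql
-- List the tables and views that the catalog records for a given library.
SELECT TABLE_NAME, TABLE_TYPE
  FROM QSYS2/SYSTABLES
 WHERE TABLE_SCHEMA = 'APPLIB'
```

Because the catalog is itself a set of views, it can be queried with ordinary SQL from any client.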

1.4.2 Optimizer Makes the Decisions
The SQL optimizer is an essential component of all relational database management systems. The DB2 for OS/400 query optimizer:
• Makes the key decisions that affect database performance.
• Identifies the techniques that can be used to implement the query.
• Selects the most efficient technique based on its algorithms.
• Decides how to access the queried data.
• Works best when information is provided (that is, indexes).


Figure 10. DB2/400 Optimizer

The SQL statements specified in your program determine the result set; that is, they specify what data is returned to the application, not how to get to that data. The database management system provides algorithms to efficiently retrieve the data. These algorithms, or access methods, are the tools the optimizer can choose from, based on statistics provided by the DBMS. For compiled programs or SQL packages, the access method for each SQL statement is chosen by the optimizer and stored in an access plan. The techniques used by the optimizer to perform this task include:
• Cost estimation
• Access plan validation
• Join optimization

Note: Unlike other RDBMSs, the DB2/400 optimizer does not retrieve the information and statistics from the SQL catalog but directly from AS/400 data management. For detailed information on how the optimizer performs its functions, please refer to the information starting on page 23-11 in Chapter 23 of DB2/400 SQL Programming, SC41-3611.

1.4.3 Terms
The following table lists some terms that are important for a good understanding of how SQL works with DB2/400.
Table 1. Table of SQL Terms

Term              Meaning
----              -------
Table             Data repository. Created through CRTPF or CREATE TABLE.
Data space        Another term for data repository.
Index             Binary tree built over a table to order particular columns
                  (keys) of the table and to allow quick binary searches.
                  Created through CRTLF or CREATE INDEX.
Temporary index   Index built ″on the fly″ by the optimizer.
Temporary result  Copy of data from an intermediate query step. Needed to
                  complete the query.
Access plan       Plan generated by the optimizer on how to access the tables
                  being queried.
ODP               Open Data Path. Active path through which query data is read.
Reusable ODP      ODP kept open when an SQL query is requested to be closed.
                  Used the next time the query is opened.

The term access path is used as a synonym for index.

1.4.4 Access Plan
Before an application program with SQL statements is run, a relationship between the program/SQL statement and the referenced tables, views, and indexes must be established. This process is referred to as binding. The result of a successful bind is an access plan. This access plan is created during program creation for static SQL or during statement preparation for dynamic SQL. It contains internal structures and information about the access methods used to run a specific SQL statement. An access plan may become invalid if changes to the database are detected. An example of a change is creating or deleting an index for a referenced table.

SQL statement ───────────┐
                         │
                   Bind  ├──────────────  Access Plan
                         │
Tables/Views/Indexes ────┘
The access plan is a control structure and information that specifies how to run each SQL request in the most efficient manner. The access plan may be stored in the program (non-distributed SQL) or in an SQL package (distributed SQL). The process of binding:
• Validates the SQL statements using the database description.
• Selects the required indexes.
• Builds the access plan.

1.4.5 Tools Available to Verify SQL Processing
The following section describes OS/400 tools available to analyze the processing of SQL statements. Figure 11 on page 25 gives you an overview of the recommended tools.


Figure 11. Roadmap to Verify SQL Processing

Unique to the AS/400 is that it provides two different database access methods:
• The so-called native language I/O (using READ/WRITE operations from third-generation languages)
• SQL queries

Generally, the use of SQL results in increased memory and CPU requirements when compared to similar native DB operations. However, SQL as the standard interface to relational databases gives you other benefits, such as readability and easy connection to applications on other database platforms. Because the AS/400 can hold very large databases and support many concurrent users, it is key to the success of an SQL application that it be well designed and tuned. For this reason, the focus of this chapter is the performance verification phase, where optimum performance is a major consideration. The performance verification tools described in the following sections are:
• PRTSQLINF
• DEBUG
• CHGQRYA
• DSPJOB
• TRCJOB
• Line trace

For your test environment during the performance verification phase you should select parameters as close to the production environment as possible:
• Subsystem, pools, and so on
• Database size
  Typically, the development system does not have the capacity to hold a database the size of the production environment. However, you should have tables/files that are large enough to trigger the same optimizer decisions as in the production environment. This means that you should not use test tables with 10 or 15 rows while the production tables have millions.
• Concurrent users
  Do not expect to extrapolate the performance results of a single user.

1.4.5.1 PRTSQLINF
The Print Structured Query Language Information (PRTSQLINF) command allows the user to print the access plan information about embedded SQL statements stored with programs, SQL packages, or service programs. The report includes:
1. Parameter settings used to create the object
2. The SQL statements
3. The access plan identifying the access method for each SQL statement
The tool can be viewed as the DB2/400 version of the EXPLAIN function provided by other RDBMSs. It creates a spooled file output that can be used as an audit trail during application development to verify changes made to the object.

Example of PRTSQLINF output: Issue the following commands against one of the supported object types:

PRTSQLINF OBJ(LIB1/TSTPGM)
WRKSPLF

The following shows a sample output for a program. Note that the access plan information is generated only for the SELECT statement in the DECLARE CURSOR statement. Statements such as OPEN cursor, FETCH, or CLOSE do not generate an access plan.

DECLARE C1 CURSOR FOR
  SELECT KEYFLD FROM LIB1/TABLE1 XXX
  WHERE CHAR1 = :HV1 AND CHAR2 = :HV2
SQL4021  Access plan last saved on 09/01/94 at 14:49:52.
SQL4020  Estimated query run time is 1 seconds.
SQL4017  Host variables implemented as reusable ODP.
SQL4006  All access paths considered for file 1.
SQL4008  Access path INDEX2 used for file 1.
SQL4011  Key row positioning used on file 1.
OPEN C1
FETCH C1 INTO :KEYFLD
CLOSE C1


1.4.5.2 Debug Messages

In addition to the traditional debug function, the AS/400 debug tool provides a means to verify SQL processing and performance. By issuing the STRDBG command without specifying a program name, your interactive job is put into debug mode. In this mode, the SQL optimizer writes messages to your job log similar to the information retrieved with PRTSQLINF. For example, an SQLCODE of -204 causes message SQL0204 to appear in the job log. Depending on your query, the fact that this occurs at run time can be both an advantage and a disadvantage. The important part of these messages is the second-level help text, which in many cases gives you a comprehensive explanation of the optimizer decisions. The debug messages describe the query implementation method:
• Indexes
• File join order
• Temporary results
• Access plans
• ODPs (Open Data Paths)

To monitor the processing of batch jobs or jobs initiated from a client, the STRSRVJOB CL command with a subsequent STRDBG can be used. The messages returned are CPI4321-CPI432E and SQL7910-SQL7919; the message text can be found in files QCPFMSG and QSQLMSG.

Hint: To find the message text for messages received by an application, use the DSPMSGD or WRKMSGF commands. The messages are found in message file QCPFMSG for messages starting with CPF and CPI, and in message file QSQLMSG for messages starting with SQL. To match an SQL message code received by an application with the appropriate message description, prefix a three- or four-digit code with ″SQL″ and a five-digit code with ″SQ″. For example, use SQL0100 for message 100 and SQ30080 for message 30080. Examples of commands for message files are WRKMSGF QSQLMSG and DSPMSGD SQ30080 QSQLMSG.

Example of Debug Messages: This example shows how to put your job in debug mode, the execution of SQL statements in interactive SQL, and the display of the job log. Please note again that the important part of the message is hidden in the second-level help text.

1. STRDBG UPDPROD(*YES)
2. STRSQL
   ---> select busnam, cstfst from lib1/table1
   ---> where busnam='NORTON' order by cstfst
3. DSPJOBLOG
   ...
   All access paths were considered for file TABLE1.
   Access path of file INDEX1 was used by query.

   ODP created.
   Blocking used for query.
   SQL cursors closed.
   ...

1.4.5.3 PRTSQLINF versus DEBUG Messages
Table 2. PRTSQLINF versus DEBUG Messages

PRTSQLINF                                         DEBUG MESSAGES
Available without running the query (after        Only available when the query is run.
the access plan has been created).
Displayed for all queries in the program,         Displayed only for those queries that
whether executed or not.                          are executed.
Improved information on host variable             Limited information on the implementation
implementation.                                   of host variables.
Available only to SQL users with programs,        Available to all query users (OPNQRYF,
packages, or service programs.                    SQL, QUERY/400).
Messages printed to spool file.                   Messages displayed in job log.
SQL statement precedes access plan.               Difficult to match statement with access plan.
Easier to identify query implementations          Difficult to distinguish messages if
involving unions and subqueries.                  subqueries or unions are involved.
Little information about dynamic SQL              Works the same for static or dynamic SQL
statements.                                       statements.

1.4.5.4 Other Helpful Tools
The CHGQRYA command allows an execution time limit to be specified. If the estimated time to run a query exceeds the specified limit, the query is not run: inquiry message CPA4259 is issued, and DEBUG information messages are added to the job log. This prevents the unintentional running of queries that are likely to consume massive system resources.
• The query time limit is set for a job by the CHGQRYA CL command.
• The time limit is checked against the estimated elapsed query time before initiating a query.
• It controls the amount of parallel query processing allowed (discussed later).
• By default, it produces inquiry message CPA4259, which informs the user about the expected run time of the query and which operations the query performs (such as creating a temporary index).
• If the user decides to cancel, debug messages are written to the job log, providing hints on how to fix the performance problem of this query.
• It can be set up using the system reply list to immediately cancel the query.

CHGQRYA QRYTIMLMT(0) DEGREE(*NONE)
DSPJOB options allow a user to examine the following information while the job is active:
• *OPNF - files and indexes open
• *JOBLCK - job and row locks that are in force
• *CMTCTL - the list of active commitment definitions
• *CMNSTS - input/output operations over communications lines, particularly for ICF applications


TRCJOB on the AS/400 system affords the facility to review the succession of program calls and returns during the running of a specific job. The information provided includes:
• Time
• Library and program
• Resource utilization by program:
  − CPU utilization
  − Database and non-database reads
• Data access mode (through program name):
  − QDBGETKY - get-by-key
  − QDBGETSQ - get-sequential
  − QDBGETM - get-multiple
  − QDBOPEN - open index or database file
• ODP usage:
  − QDMCRODP - create ODP

1.4.5.5 Enable Parallel I/O with CHGQRYA
This reduces query time by bringing necessary data from disk in parallel during query execution. It also improves performance for I/O-bound queries. To enable:

1. Enter the WRKSYSSTS command and change the paging option on the desired pool to *CALC (starts expert cache).
2. Enter CHGQRYA DEGREE(*ANY) for jobs where parallel read should be enabled.

Types of parallel I/O:

• Parallel pre-fetch for data space scan queries. Parallel I/O pre-fetch uses multiple input streams for the table to pre-fetch the data when doing a table scan. This method is most effective when the following are true:
  − The data is spread across multiple disk devices.
  − The query is not CPU-processing intensive.
  − There is an ample amount of main storage available to hold the data collected from every input stream.
  To enable:
  − Enter the WRKSYSSTS command and change the paging option on the desired pool to *CALC (starts expert cache).
  − Enter CHGQRYA DEGREE(*ANY) for jobs where parallel read should be enabled.
• Parallel pre-load:
  − Pre-load is similar to pre-fetch except that an entire index or table can be preloaded in its entirety into main memory in parallel.
  − After the table or index is loaded into memory, random access to the data is achieved without further I/O.
  − Pre-load can significantly reduce the run time of I/O-bound join queries or GROUP BY queries.


1.5 Understanding V3R1 Enhancements for SQL Processing
To improve SQL query performance, it is important to have an understanding of the DB2/400 implementation of SQL and how the optimizer analyzes statements to determine the access method. The following section covers:
• The importance of the WHERE clause
• Extended join support for SQL
• Query function versus performance (conflict resolution)
• Available tools
• General tips

1.5.1 The WHERE Clause
For SQL processing, it is important to understand that an index (access path or key) does not necessarily have to exist to run a statement. Note, however, that for certain clauses, such as ORDER BY or GROUP BY, and certain operations, such as joins, an index is required to process the statement. If there is no permanent index available, the optimizer chooses to create a temporary index. For statements where an index is not required, such as the following example, the best performance improvements revolve around the WHERE clause (QRYSLT in OPNQRYF).

SELECT * FROM CSTMR WHERE CDID=:HV
An index on CDID allows a (quick) binary search using the predicate, rather than a table scan, to access the requested data. The WHERE clause is the most important factor in making the optimizer choose an existing index rather than a table scan. As you see from the examples in 1.5.1.1, “Index Usage Examples,” the optimizer even rearranges your queries and moves HAVING to WHERE in order to be able to use the most efficient method to perform the query. The WHERE clause:
• Reduces the query result set.
• Makes permanent indexes most useful.
• Is elusive, because an index is not required.
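A sketch of an index that lets the statement above use binary search instead of a table scan (the index name is arbitrary; the file and column come from the example):

```sql
CREATE INDEX CSTIDX ON CSTMR (CDID)
```

With this index in place, the optimizer can position directly to the entries matching CDID = :HV rather than scanning the whole table.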

1.5.1.1 Index Usage Examples
The following examples illustrate the index or access path usage for SQL queries. Although DB2 for OS/400 indexes are implemented as binary trees, it might be helpful to imagine a multi-level index list similar to the one at the end of this book. Just as it is very unlikely that a human being would read through a book index sequentially to find a specific entry, the DBMS access methods try to position within an index to limit the search to only the data requested. Note: The OPTIMIZE clause is used in these examples to indicate to the optimizer that it should return the complete result set.

Example 1


CREATE INDEX X1 ON EMPLOYEE (WORKDEPT)

DECLARE BROWSE2 CURSOR FOR
  SELECT * FROM EMPLOYEE
  WHERE WORKDEPT = 'E01'
  OPTIMIZE FOR 99999 ROWS
Index X1 is used to position to the first index entry with WORKDEPT = 'E01', and each row is accessed randomly from the data space.

Example 2

CREATE INDEX X1 ON EMPLOYEE(WORKDEPT)

DECLARE BROWSE2 CURSOR FOR
  SELECT * FROM EMPLOYEE
  WHERE WORKDEPT = 'E01' AND FIRSTNAME = 'DAVID'
  OPTIMIZE FOR 99999 ROWS
In this case, index X1 is used to position to the first index entry with WORKDEPT = 'E01', and the additional selection predicate on FIRSTNAME is applied from the data space. Imagine that department E01 has 100 employees but only one David. The DBMS has to read 100 index entries and retrieve the records from the data space to find one David, which shows that index X1 is not the most efficient index for this query. An index on WORKDEPT and FIRSTNAME allows the DBMS to retrieve only the records matching the selection predicate.

Example 3

CREATE INDEX X1 ON EMPLOYEE(WORKDEPT)

DECLARE BROWSE2 CURSOR FOR
  SELECT * FROM EMPLOYEE
  WHERE WORKDEPT BETWEEN 'E01' AND 'E11'
  OPTIMIZE FOR 99999 ROWS
Index X1 is used to position to the first index entry with WORKDEPT = 'E01', and rows are selected until the last index entry with WORKDEPT = 'E11'.

Example 4

CREATE INDEX X1 ON EMPLOYEE(WORKDEPT)

DECLARE BROWSE2 CURSOR FOR
  SELECT * FROM EMPLOYEE
  WHERE WORKDEPT BETWEEN 'E01' AND 'E11'
     OR WORKDEPT BETWEEN 'A00' AND 'B01'
  OPTIMIZE FOR 99999 ROWS
This is an example of multi-range key support, where the positioning is performed more than once.

Example 5

Multiple keys can also be handled, but the key fields must be contiguous with the left-most key field.


CREATE INDEX X2 ON EMPLOYEE(WORKDEPT,LASTNAME,FIRSTNAME)

DECLARE BROWSE2 CURSOR FOR
  SELECT * FROM EMPLOYEE
  WHERE WORKDEPT = 'E01' AND LASTNAME = 'JONES'
  OPTIMIZE FOR 99999 ROWS
Here the multiple key fields are contiguous with the left-most key field, and multiple key-field positioning occurs. The DBMS is able to retrieve only the records matching the selection predicate from the data space.

Example 6

CREATE INDEX X3 ON EMPLOYEE(WORKDEPT,LASTNAME,FIRSTNAME)

DECLARE BROWSE2 CURSOR FOR
  SELECT * FROM EMPLOYEE
  WHERE WORKDEPT = 'E01' AND FIRSTNAME = 'DAVID'
  OPTIMIZE FOR 99999 ROWS
In this situation, however, multiple key-field positioning does not occur, because the second key field used (FIRSTNAME) is not contiguous with the left-most key field. Since no selection predicate is specified for LASTNAME, it works like a wild card. Try to imagine how you would look through a three-level book index to find such entries.

Example 7

The following example presents a situation where the SQL optimizer analyzes the WHERE clause and re-arranges the selection predicates into a more efficient form that can use multiple-key positioning:

CREATE INDEX X2 ON EMPLOYEE(WORKDEPT,FIRSTNAME)

DECLARE BROWSE2 CURSOR FOR
  SELECT * FROM EMPLOYEE
  WHERE WORKDEPT = 'E01'
    AND FIRSTNAME IN ('DAVID', 'BRUCE', 'WILLIAM')
  OPTIMIZE FOR 99999 ROWS
The preceding WHERE clause is re-written to an equivalent form that provides multiple ranges of the contiguous multiple key fields.

CREATE INDEX X2 ON EMPLOYEE(WORKDEPT,FIRSTNAME)

DECLARE BROWSE2 CURSOR FOR
  SELECT * FROM EMPLOYEE
  WHERE (WORKDEPT = 'E01' AND FIRSTNAME = 'DAVID')
     OR (WORKDEPT = 'E01' AND FIRSTNAME = 'BRUCE')
     OR (WORKDEPT = 'E01' AND FIRSTNAME = 'WILLIAM')
  OPTIMIZE FOR 99999 ROWS
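The multiple-key positioning used in Examples 5 through 7 can be sketched in Python, modeling the index as a sorted list of (WORKDEPT, LASTNAME, FIRSTNAME) tuples (the sample entries are invented):

```python
from bisect import bisect_left, bisect_right

# Hypothetical entries of an index like X3(WORKDEPT, LASTNAME, FIRSTNAME).
entries = sorted([
    ("E01", "JONES", "DAVID"), ("E01", "JONES", "MARY"),
    ("E01", "SMITH", "DAVID"), ("E11", "BROWN", "BRUCE"),
])

def position(prefix):
    # One positioning per contiguous left-most key prefix: binary-search
    # the lower and upper bound of the matching key range.
    pad = ("\uffff",) * (3 - len(prefix))  # sorts after any real value
    return entries[bisect_left(entries, prefix):bisect_right(entries, prefix + pad)]

# Example 5: predicates on WORKDEPT and LASTNAME are contiguous keys,
# so positioning lands exactly on the matching entries.
exact = position(("E01", "JONES"))

# Example 6: no LASTNAME predicate, so LASTNAME acts as a wild card; the
# whole WORKDEPT range must be read and FIRSTNAME filtered afterwards.
filtered = [e for e in position(("E01",)) if e[2] == "DAVID"]
```

The second query touches every entry in the E01 range before filtering, which is exactly the extra work the text describes for a non-contiguous key field.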

1.5.2 Multi-key Row Positioning with OR Criteria
The following example shows a WHERE clause with an OR selection, where the most efficient index is not obvious.


CREATE INDEX X2 ON ORDLIN(OLWID, OLDLVD, OLSPWH)

SELECT SUM(OLAMT) FROM ORDLIN
  WHERE OLSPWH = :HV_EMP
    AND ( (OLDLVD IN (:HV_DATE1, :HV_DATE2) AND OLWID = :HV_STORE1)
       OR (OLDLVD IN (:HV_DATE3, '08/03/95') AND OLWID = :HV_STORE2) )
The query optimizer analyzes the WHERE clause and re-writes it into an equivalent form. Unfortunately, the user has no means to see the WHERE clause as re-arranged by the optimizer, so finding the most efficient access path is left to the user's experience.

SELECT SUM(OLAMT) FROM ORDLIN
  WHERE (OLSPWH = :HV_EMP AND OLDLVD = :HV_DATE1 AND OLWID = :HV_STORE1)
     OR (OLSPWH = :HV_EMP AND OLDLVD = :HV_DATE2 AND OLWID = :HV_STORE1)
     OR (OLSPWH = :HV_EMP AND OLDLVD = :HV_DATE3 AND OLWID = :HV_STORE2)
     OR (OLSPWH = :HV_EMP AND OLDLVD = '08/03/95' AND OLWID = :HV_STORE2)
An index with three keys - OLSPWH, OLDLVD, and OLWID (in any order) - is used by the database manager to position to the rows matching the selection criteria. This example shows how the SQL optimizer analyzes the WHERE clause and rewrites it into a more efficient form that can use multiple-key positioning.

1.5.3 Multi-key Row Positioning and JOIN
A join is one of the statements for which an index matching the join predicate is required to process the query. Prior to V3R1, this required index could not also be used for additional selection predicates in the WHERE clause. The following example illustrates the V3R1 enhancement for multi-key row positioning on join and non-join predicates:

CREATE INDEX XZ ON ORDLIN(OLWID, OLDLVD, OLSPWH)

SELECT A.CLAST, A.CFIRST
  FROM CSTMR A, ORDLIN B
  WHERE A.CID = :HV_CUS
    AND A.CID = B.OLSPWH        <-- join predicate
    AND B.OLWID = :HV_STORE     <-- non-join predicate
    AND B.OLDLVD = :HV_DATE     <-- non-join predicate

An index with three keys - OLDLVD, OLWID, and OLSPWH (in any order) - is used to satisfy all three criteria on B at once.


Prior to V3R1, non-join predicates for B would have been implemented after the join unless a temporary index was built from an existing index.

1.5.4 HAVING to WHERE
Prior to V3R1, the following query may not perform well, because the HAVING criteria would be implemented as selection on an intermediate buffer after GROUP BY processing. This is an example of predicates pulled from the HAVING clause into the WHERE clause:

SELECT OLSPWH, SUM(OLAMT) FROM ORDLIN
  GROUP BY OLSPWH
  HAVING OLDLVD = :HV_DATE AND OLWID = :HV_STORE
In V3R1, the query optimizer analyzes the HAVING clause and rewrites the query as follows:

SELECT OLSPWH, SUM(OLAMT) FROM ORDLIN
  WHERE OLDLVD = :HV_DATE AND OLWID = :HV_STORE
  GROUP BY OLSPWH
Because the selection is done with equal predicates, the selection fields OLDLVD and OLWID can also be used as place holders in the GROUP BY clause without affecting the grouping. Therefore, an index with three keys - OLDLVD, OLWID, and OLSPWH - can be used to satisfy the selection and GROUP BY criteria all at once.
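Why the rewrite pays off can be sketched in Python (the sample rows are invented): both forms return the same result, but the rewritten form applies the selection before grouping, so far fewer rows reach the GROUP BY step:

```python
from collections import defaultdict

# Hypothetical ORDLIN rows: (OLSPWH, OLDLVD, OLWID, OLAMT).
rows = [(1, "D1", "W1", 10.0), (1, "D2", "W1", 99.0),
        (2, "D1", "W1", 5.0), (2, "D1", "W2", 7.0)]

def group_then_filter(date, store):
    # Original form: every row is grouped (with the equal-predicate fields
    # carried as place holders), and selection is applied afterwards.
    sums = defaultdict(float)
    for spwh, dlvd, wid, amt in rows:
        sums[(spwh, dlvd, wid)] += amt
    return {k[0]: v for k, v in sums.items() if k[1] == date and k[2] == store}

def filter_then_group(date, store):
    # Rewritten form: rows failing the selection never reach the grouping.
    sums = defaultdict(float)
    for spwh, dlvd, wid, amt in rows:
        if dlvd == date and wid == store:
            sums[spwh] += amt
    return dict(sums)

print(filter_then_group("D1", "W1") == group_then_filter("D1", "W1"))  # True
```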

1.5.5 Data Space Scan Selection
In V3R1, a new algorithm for table scans has been introduced to retrieve the requested data. This new algorithm generates code that searches records in memory and returns each row matching the selection criteria. It results in significantly reduced CPU usage for queries where 20 to 30 percent of the rows are selected. There are some restrictions on the selection predicates, because the selection is done against the record image in memory. Watch out for these limitations in your database and application design:
•   No derived expressions.
•   No varying-length character columns.
•   No columns needing CCSID translation.
•   If both operands are numeric columns, they must be of the same type, scale, and precision. Numeric host variables or constants must be the same type as the column operand, and their scale and precision must be less than that of the column operand. Host variables or constants cannot be longer than the column operand.
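The idea of generated selection code can be sketched in Python (hypothetical records, not the actual OS/400 implementation): the query open builds one selection routine, which is then applied to every record image in memory:

```python
# Hypothetical record images already in main storage.
records = [
    {"OLDLVD": "D1", "OLWID": "W1", "OLAMT": 10},
    {"OLDLVD": "D2", "OLWID": "W1", "OLAMT": 99},
    {"OLDLVD": "D1", "OLWID": "W2", "OLAMT": 7},
]

def build_selection(date, store):
    # Built once when the query is opened; only plain column/constant
    # comparisons, mirroring the "no derived expressions" restriction.
    def selected(rec):
        return rec["OLDLVD"] == date and rec["OLWID"] == store
    return selected

selected = build_selection("D1", "W1")
matches = [r["OLAMT"] for r in records if selected(r)]
print(matches)
```

The per-row work is reduced to a simple compiled test, which is where the CPU savings of the new scan algorithm come from.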


1.5.6 Extended Join Support for SQL
Prior to V3R1, SQL supported only inner join. Now, left outer join and exception join are also supported with PTF SF22302:

Inner Join - Rows in the left table that have no corresponding row in the right table are not returned.

Left Outer Join - Rows in the left table that have no corresponding row in the right table are returned with the null value for each column of the right table.

Exception Join - Only rows in the left table that have no corresponding row in the right table are returned, with the null value returned for each column of the right table.

The following example illustrates the new syntax:

SELECT EMPNUM, LASTNAME, DEPTNAME, PROJNUM
  FROM EMPLOYEE XXX
  LEFT OUTER JOIN DEPARTMENT YYY ON XXX.DEPT = YYY.DEPTNUM
  LEFT OUTER JOIN PROJECT ZZZ ON XXX.EMPNUM = ZZZ.RESPEMP
  WHERE XXX.EMPNUM = YYY.MGRNUM
    AND YYY.DEPTNUM IN ('A00', 'D01', 'D11')
The new SQL clauses are:

JOIN or INNER JOIN Specifies that each row in the table to the left is joined with one or more rows in the table to the right using the join-condition. Any rows in the table to the left that do not have a corresponding row in the table to the right are not included in the result table.

LEFT JOIN or LEFT OUTER JOIN Specifies that each row in the table to the left is joined with one or more rows in the table to the right using the join-condition. Any row in the table to the left that does not have a corresponding row in the table to the right returns the null value for each column in the table to the right.

EXCEPTION JOIN Specifies that only the rows in the table to the left that have no corresponding rows in the table to the right using the join-condition are returned. The null value is returned for each column in every row for the table to the right.

ON join-condition Specifies the condition to apply to each combination of rows from the two tables being joined in determining which rows are to be added to the intermediate result table. If more than one join-predicate is specified for a LEFT OUTER JOIN or EXCEPTION JOIN, all of the comparisons in the join-condition must be the = condition.

Join-predicate Specifies one condition that must be satisfied in order to have a row added to the intermediate result table. Each expression must contain at least one column name from one of the tables in the current join. One expression must use a column from the table specified to the right of the JOIN keyword. The other expression must use a column from any of the tables specified in the current FROM clause prior to the JOIN keyword. Each column name must unambiguously identify a column in one of the tables in the from-clause. Column functions cannot be used in a join-predicate.

Note: The new syntax for inner join prevents the optimizer from rearranging the order in which files are joined. To give the optimizer additional information for these decisions, you might want to specify the join predicates redundantly, in both the JOIN clause and the WHERE clause.
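The semantics of the three join types can be sketched in Python, with tables as lists of dicts and None standing in for the SQL null value (the table contents are invented):

```python
def join(left, right, on, kind="INNER"):
    """Sketch of inner, left outer, and exception join semantics."""
    null_row = {key: None for key in right[0]}  # assumes right is non-empty
    out = []
    for l in left:
        matches = [r for r in right if on(l, r)]
        if matches and kind in ("INNER", "LEFT OUTER"):
            out.extend({**l, **r} for r in matches)
        elif not matches and kind in ("LEFT OUTER", "EXCEPTION"):
            out.append({**l, **null_row})  # nulls for the right table
    return out

emp = [{"EMPNUM": 1, "DEPT": "A00"}, {"EMPNUM": 2, "DEPT": "Z99"}]
dept = [{"DEPTNUM": "A00", "DEPTNAME": "HQ"}]
on = lambda l, r: l["DEPT"] == r["DEPTNUM"]

inner = join(emp, dept, on, "INNER")          # only EMPNUM 1
louter = join(emp, dept, on, "LEFT OUTER")    # EMPNUM 1 and 2 (2 with nulls)
exception = join(emp, dept, on, "EXCEPTION")  # only EMPNUM 2, with nulls
```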

1.5.7 Conflicting Index Requirements
Often, index usage for a WHERE clause conflicts with clauses that also require an index, such as:
•   GROUP BY (GRPFLD)
•   ORDER BY (KEYFLD)
•   Join (already discussed)

Index conflict often results in the creation of temporary files and indexes, and in phone calls from customers. A simple ordering or grouping specification used to be able to totally destroy the performance of a query, because it severely limited the indexes that could be used. OS/400 V3R1 changes the way the optimizer analyzes such statements, and clauses can now be re-arranged in the following cases:

•   Fields in equal predicates of the WHERE clause can be implicitly added to, or eliminated from, the ORDER BY and GROUP BY specifications.
•   The order of the GROUP BY fields can be implicitly shuffled.
•   If the fields in equal predicates compose a unique key, the unique index guarantees a single result record, so the ORDER BY and GROUP BY specifications can be ignored entirely, allowing more optimization possibilities.

1.5.7.1 Example for Unique Key Optimization
Example of making use of unique-keyed indexes to allow more optimization possibilities:

SELECT OLSPWH, OLWID FROM ORDLIN
  WHERE OLDLVD = :HV_DATE
  GROUP BY OLSPWH, OLWID
  ORDER BY OLWID, OLSPWH
Assuming that a unique key on column OLDLVD exists, this information can be used to satisfy the selection. Note that the GROUP BY and ORDER BY are ignored, because the unique index guarantees one result record for this query.


1.5.7.2 Example of WHERE/ORDER BY Index Conflicts
In the following example, the required index for the ORDER BY clause on field CLAST conflicts with the selection on field CDID:

SELECT CLAST FROM CSTMR
  WHERE CDID = :HV
  ORDER BY CLAST
Prior to V3R1, a temporary index might have been built from an existing index. With V3R1, an index keyed on CDID and CLAST can be used to implement both the WHERE selection and the ORDER BY, because CDID can be implicitly added to the ORDER BY. The following shows a query where a key on CLAST and CDID helps efficiently retrieve the data. However, the selection conflicts with the index needed for the ORDER BY on fields CDID and CZIP:

SELECT * FROM CSTMR
  WHERE CLAST = :HV_NAME AND CDID = :HV_CDID
  ORDER BY CDID, CZIP

Prior to V3R1, a temporary index might have been built from an existing index. With V3R1, an index keyed on CLAST and CZIP can be used to implement both the WHERE selection and the ORDER BY, because CDID can be implicitly removed from the ORDER BY and CLAST can be added.

1.5.7.3 Example of WHERE/GROUP BY Index Conflict
This example shows index conflicts between WHERE and GROUP BY clauses. GROUP BY clauses also require an index, permanent or temporary, for processing.

SELECT CLAST FROM CSTMR
  WHERE CDID = :HV
  GROUP BY CLAST

The need for an index keyed on CDID conflicts with the index needed on CLAST for the GROUP BY clause. In V3R1, an index keyed on CDID and CLAST can be used to implement both the WHERE selection and the GROUP BY, because CDID can be implicitly added to the GROUP BY without affecting the result set.


1.6 Tips and Techniques for SQL Queries
This section gives you some tips and techniques you might want to consider to achieve better-performing SQL queries.

1.6.1 Sort
In many cases, sorting a snapshot of the data can be faster than using indexed access. Sorting also means that the underlying data can change while the sort is running, so the user might not see the most recent status. When compiling a program, you can specify the ALWCPYDTA(*OPTIMIZE) parameter to allow a sort to be used for ORDER BY.

1.6.2 Temporary Index Creation
In V3R1, the build of temporary indexes has been improved. Examine a temporary index created by the optimizer before assuming it is bad:
•   Larger page size (more efficient to use).
•   Query selection built in.
•   Brings the underlying table into main storage.
•   Always more efficient to use, but the savings have to overcome the creation time.

1.6.3 Maintain Useful Indexes Over Tables
To maintain useful indexes over tables:

•   Create a permanent (multi-key) index or indexes primarily to match the WHERE clause, and secondarily for GROUP BY/ORDER BY.
•   Remove indexes that are not often used (check the "Date last used" and "Days used count" values through DSPFD).

1.6.4 Avoid Temporary Results
Queries using the following clauses might require the system to work with a snapshot of the data and create a temporary result:
1. DISTINCT, UNION, or UNION ALL
2. ORDER BY columns from more than one table
3. GROUP BY columns from more than one table
4. Complex view or logical file being queried

Some of the ODPs created for these queries might not be reusable, which means that they have to be re-created for each subsequent processing. Temporary results are indicated by messages CPI4324 and CPI4325. Examine the second-level help text for these messages, as shown in Figure 12 on page 39, to understand more about temporary results. If the specified file selects few rows, usually less than 1000 rows, the row selection part of the query's implementation should not take a significant amount of resource and time. However, if the query is taking more time and resources than can be allowed, consider changing the query so that a temporary file is not required. One way to do this is to copy the records of the file to a physical file and then change the query to refer to the physical file.


                                                      System:   SYSASM01
Message ID . . . . . . . . . :   CPI4324
Message file . . . . . . . . :   QCPFMSG
  Library  . . . . . . . . . :     QSYS

Message . . . . :   Temporary file built for file *
Cause . . . . . :   A temporary file was built for member tools of file * in
  library DATABASE for reason code &4. This process took &5 minutes and &6
  seconds. The temporary file was required in order for the query to be
  processed. The reason codes and their meanings follow:
    1 - The file is a join logical file and its join-type (JDFTVAL) does not
        match the join-type specified in the query.
    2 - The format specified for the logical file references more than one
        physical file.
    3 - The file is a complex SQL view requiring a temporary file to contain
        the results of the SQL view.
Recovery  . . . :   You may want to change the query to refer to a file that
  does not require a temporary file to be built.
Figure 12. CPI4324 - Temporary File Built

1.6.5 Data Skew
Be aware of unbalanced distribution of values in a column during database design. An example is a table of accounts for several banks. If one bank has 100,000 accounts and the rest have 200 each, data skew, or unbalanced distribution, is present. To make the optimizer aware of this fact, you might want to create select/omit logical files (indexes) so that the optimizer has a better chance of recognizing the skew. For the preceding example, create a select/omit index keyed on BANK that selects only the bank with 100,000 accounts.
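The skew in the bank example can be made concrete with a short Python sketch: a uniform-distribution estimate (rows divided by distinct values) is far off for every bank in the table:

```python
from collections import Counter

# Hypothetical account table column: one big bank, four small ones.
bank_column = ["BIGBANK"] * 100_000 + ["B1", "B2", "B3", "B4"] * 200

counts = Counter(bank_column)
uniform_estimate = len(bank_column) / len(counts)  # rows / distinct keys

print(uniform_estimate)   # 20160.0 estimated rows per key
print(counts["BIGBANK"])  # 100000 actual
print(counts["B1"])       # 200 actual
```

Without extra information such as a select/omit index, an optimizer assuming uniformity would badly over-estimate the small banks and under-estimate the big one.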

1.6.6 Dynamic SQL
Dynamic SQL statements have additional overhead for syntax checking and access plan building at run time. If possible, use dynamic SQL sparingly, or use dynamic SQL with parameter markers.
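Parameter markers let one prepared statement text be reused with different values, so the syntax check and access plan build happen once rather than per execution. A sketch using Python's built-in sqlite3 module as a stand-in for a DB2/400 connection (the table and values are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE CSTMR (CDID INTEGER, CLAST TEXT)")
conn.executemany("INSERT INTO CSTMR VALUES (?, ?)",
                 [(1, "JONES"), (2, "SMITH")])

# One statement text with a ? parameter marker, executed many times;
# compare with rebuilding a new SQL string (and its plan) per value.
stmt = "SELECT CLAST FROM CSTMR WHERE CDID = ?"
names = [conn.execute(stmt, (cdid,)).fetchone()[0] for cdid in (1, 2)]
print(names)  # ['JONES', 'SMITH']
```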

1.6.7 Minimize Data Movement
For a client application accessing a remote server database, you should try to minimize data movement. That means do not use statements such as SELECT * FROM; list only the columns you really need for a transaction. For single-row retrieval, the SELECT INTO statement is recommended.

1.6.8 Avoid Data Conversion
When running SQL statements in debug mode, message SQL7919 indicates that data conversion occurs. This might happen when a client application uses host variables that do not match the data types of the columns selected. This also affects index usage for these statements. For index usage, you might want to investigate whether DB2 for OS/400 can map the column data type to the data type in the application.


1.6.9 SQL Program Compiles
The following precompile options for SQL programs might improve response times:
•   ALWCPYDTA - Allow copy of data (specify *OPTIMIZE).
•   CLOSQLCSR - Close SQL cursor (specify *ENDSQL or *ENDJOB).
•   ALWBLK - Allow blocking of data (specify *ALLREAD).
•   DLYPRP - Delay PREPARE for dynamic SQL (specify *YES).

The CL commands DSPPGM or PRTSQLINF show the precompile options used to create the program, service program, or SQL package.

1.6.10 Keep Predicates Clean
The following examples show what clean predicates mean. For a binary or integer field, operand attribute mismatch (CPI432E) occurs when decimal positions are specified:

WHERE BINFLD > 100       <------- YES
WHERE BINFLD > 100.00    <------- NO

Omit arithmetic expressions in WHERE clause:

WHERE SALARY > 21000           <------- YES
WHERE SALARY > 20000 * 1.05    <------- NO

Avoid wild card in first position with LIKE:

WHERE NAME LIKE 'J%son%'    <------- YES
WHERE NAME LIKE '%son%'     <------- NO

1.6.11 Other Considerations
The following list shows miscellaneous considerations when implementing an application:

•   Consider using Data Propagator to partition data across systems. It allows querying the data on other systems.
•   To clear data from a table, use CLRPFM rather than DELETE.
•   If a table or data space scan is used often for queries, use RGZPFM or CHGPF REUSEDLT(*YES) to remove deleted records.
•   Use ALWCPYDTA(*OPTIMIZE) for OPNQRYF when KEYFLD is specified.
•   Use OPTIMIZE(*FIRSTIO) for OPNQRYF to bias the optimizer to use an existing index rather than creating one.

Check out other AS/400 publications discussing DB2 for OS/400:
•   DB2/400 SQL Programming, SC41-3611-00
•   DB2/400 Advanced Database Functions, GG24-4249-00


Chapter 2. Communications Performance
This chapter covers the impact data communications has on a client/server application, in both the SNA and TCP/IP environments. It shows how to analyze communication performance problems and concludes with some communication performance recommendations. Additional communications performance tips are provided in 7.2.7, “Communications SNA” on page 244 and 7.2.8, “Communications - TCP/IP” on page 246.

2.1 Introduction to Communication SNA
This chapter introduces SNA and its terms from an SNA Logical Unit 6.2 (LU6.2) protocol viewpoint. This is the protocol used for APPC (Advanced Program-to-Program Communications) applications, which include the Client Access/400 server and client functions. It also discusses the basics of the line protocol (a lower level than SNA) being used, with a focus on the IBM token-ring LAN. Other documents go into much more detail than this redbook. Suggested AS/400 documentation includes:
•   AS/400 Communications Configuration, SC41-3401
•   AS/400 Local Area Network Support, SC41-3404

SNA APPC is described here by comparison with a telephone connection. The SNA APPC terms are presented because resolving a client/server application performance problem requires some basic understanding of the SNA APPC protocol. Appendix B, “Communications Trace Examples” on page 429 shows the communication data flow of a portion of an ODBC application. After reviewing this chapter, you should review this line trace; the information provided can assist in assessing the performance characteristics of a client/server application. You need to understand at least the key elements of SNA and token-ring LAN terms and data flow to interpret the line trace accurately. Other line types have similar SNA constructs, but data flow requirements may differ for a different application or communication line protocol. Later sections in this redbook provide more details on performance tuning and analysis; see Chapter 7, “Client/Server Performance Tuning” on page 235 and Chapter 8, “Client/Server Performance Analysis” on page 255.

2.1.1 What Is SNA?
Systems Network Architecture (SNA) is a design developed by IBM for telecommunications networks. It defines how computers in a network communicate. Within SNA are many "languages," or SNA Logical Unit (LU) type protocols, by which these computers can communicate. As with everyday languages, some are more popular than others, some are more widely used, and some are better at certain methods of communication than others. One popular SNA LU protocol is LU2, the protocol originally used for 3270 display device communications. This protocol has proven very satisfactory for display device functions. However, it generally is not as full-functioned as is required when communicating between intelligent systems, such as between most client/server applications. The discussion focuses on the most recent of the SNA Logical Unit protocol types, originally referred to as LU6.2 but now more generally referred to as Advanced Program-to-Program Communications (APPC).

2.1.2 The Basics
Before discussing specific SNA terminology, you need to understand key terms that are used over and over again throughout this chapter: network, data link, router/bridge, and node.

A node is a computer at the end point of a data link that wants to exchange information with another node. A network is a collection of computers, the data links, and, optionally, the routers/bridges that exchange information. A data link represents the physical connection between nodes and, if present, routers/bridges. Examples of these links include a token-ring local area network (LAN) and synchronous data link communications (SDLC) over a wide area network (WAN) connection. A router/bridge is typically a hardware and software device that connects physically separate data links or processes multiple data link protocols.

In "simple networks," there are no obvious routers/bridges. However, in many of today's "complex networks," one or more routers/bridges connect data links. For example, a router/bridge could connect two different LAN wiring systems, or it could connect a WAN set of nodes to a LAN set of nodes. Bridges and routers can connect these separate physical data links. Additional functions include carrying and routing multiple line protocols such as token-ring, Ethernet, and IPX. Although there are technical differences between routers and bridges, the key performance consideration is that they are a potential source of performance degradation that needs to be examined in complex networks. Some problem examples include:

•   Two high-speed LANs are connected by bridges over a WAN line that is of lower speed than the LANs. Performance is limited by the speed of the WAN line during peaks of activity.
•   A router/bridge handles multiple protocols between many nodes and many data links in a network. During peaks of activity, the router/bridge itself may become so busy that its CPU is over-utilized and becomes a performance bottleneck.

All of these network components are usually transparent to the applications exchanging data, so it is not always easy to determine that they may be responsible for lower than expected performance.


2.1.3 What Is an LU?
An LU (logical unit) is merely the SNA term for the software that manages the exchange of data between partner programs and acts upon commands (APPC verbs) from a user's program. If you compare the communication between computers with the communication between two persons, you can view an LU as the house phone that is used when one person calls another.

2.1.4 What Is LU 6.2?
LU6.2 is the most recent LU type. It provides the ability for two computers to communicate as peers, as opposed to the host-oriented communication used in early SNA, such as with 3270 (LU2) display devices communicating with a mainframe system. With LU6.2, there is no dependence on a host machine: either computer can begin the communications, and either computer can end them. LU6.2 is sometimes referred to as "independent LU" support. Following the precedent of comparing an LU with a phone, LU6.2 represents a new and improved phone. In fact, it can be perceived as a conference phone, because LU6.2 provides the ability for two computers communicating with each other to have several conversations active at the same time. APPC uses the LU6.2 protocol to perform its functions.

2.1.5 What Is a Session?
For two LUs to communicate, they must have some type of connection. In SNA, this connection is a session. The session manages the exchange of data between LUs, including the quantity of data exchanged, the security of the data, and the routing of the data. A session can be compared to the telephone wire that connects the two telephones, except that for LU6.2, there can be several sessions defined over the connection between two telephones (that is, between two LUs).

2.1.6 What Is a TP?
A transaction program (TP) is an instance of an application program. It performs a role in a specific transaction: it establishes a conversation and cleans up when the conversation ends. Continuing with the telephone analogy, one TP is the person placing the call and another TP is the person answering the call. In terms of AS/400 APPC functions, a user-written program is a TP, and the IBM-provided programs that perform 5250 Display Station Pass-Through, SNA Distribution Services (SNADS), Distributed Data Management (DDM), and Open Database Connectivity (ODBC) are each a TP.


2.1.7 What Is a Conversation?
A conversation exists when data flows through the session pipeline between two programs. It is possible for many conversations to be active at the same time over the single connection between two systems. Each conversation uses a unique session. When the conversation has completed, the session that was in use is freed up for use by a subsequent conversation. In the phone analogy, the words spoken by the two persons in each house represent the conversation. In terms of AS/400 APPC functions, a user-written program or an IBM-provided application, such as SNADS, ODBC, and so on, begins a conversation by issuing the Evoke function with an Intersystem Communications Facility (ICF) file interface or an Allocate/Attach function with a CPI-C interface. A Detach/Deallocate function ends the conversation.

2.1.8 Attach Manager
In APPC (LU 6.2), there is a component known as the Attach Manager. In the OS/2 implementation, the attach manager is externalized in user documentation and interfaces. This function exists in other implementations; however, it may or may not be externally referenced. The attach manager manages the incoming allocation/attach requests to begin and maintain a conversation between partner programs (TPs). In everyday phone terms, the attach manager is like a house operator who answers the phone, screens the call, and ensures that the caller speaks to the right person. It also has the capability to awaken the desired party if necessary. In terms of AS/400 APPC functions, the incoming attach is termed an incoming Evoke, or program start request. AS/400 system jobs QLUS and QLUR have primary responsibility for the Attach Manager functions.

2.1.9 Parallel Sessions
As said earlier, APPC supports multiple sessions between two LUs. In APPC terminology, this is referred to as parallel sessions. Although no equivalent capability exists in today's phone system, it would be as if the phone had multiple receivers and lines to talk to more than one person at the same time. This is similar to today's conferencing capability, except that the parallel sessions can (and probably do) carry totally unrelated conversations.

2.1.10 What Is a Mode?
In APPC, a mode defines the characteristics of a session (which influence an active conversation using that session). These characteristics may define the cost, the speed, or the security of a session. A mode also defines the number of sessions (and conversations) that can be active concurrently, whether data encryption is to be used, and some conversation buffering parameters. Important buffering parameters include the Request/Response Unit (RU) size and the number of RUs that can be sent or received before an indication (SNA pacing) that more data can be sent or received. (SNA RU and pacing are discussed later in this chapter.)

In reality, the route selection values (line speed, security, and so on) are only referenced by the mode. The mode names the Class of Service (COS) description, where the routing information is defined. This definition helps to determine which links are selected to transfer data during a conversation. For example, if you are exchanging very sensitive data, such as payroll information, you may be willing to pay a little more for the connection to ensure that you have a very secure link. On the AS/400 system, you can display and set up class-of-service descriptions with the Work with Class of Service Descriptions (WRKCOSD) command. Most IBM systems provide a set of these descriptions that are defaulted to by a mode description provided by IBM. #INTER, #BATCH, and #CONNECT are the typical class-of-service descriptions used.

With regard to the phone analogy, you have somewhat similar choices when dialing long distance. You can choose to dial direct, person-to-person, or collect. Also, different rates may apply depending upon the kind of call and the time of day called. When calling someone, you may be willing to pay more to relay important information. If you are making a casual call, you may decide to wait until the late evening hours when the rates are lower.

The mode itself defines the maximum number of concurrent (parallel) sessions and conversations active, and the conversation blocking and data encryption values. With APPC, the mode description parameter values are negotiated by the two connecting systems, typically at vary-on time. When one system has a higher value for a particular parameter, the negotiated value is always the lower value. On the AS/400 system, you can display and set up mode descriptions with the Work with Mode Descriptions (WRKMODD) command. Most IBM systems provide a set of these descriptions that are defaulted to by functions that are provided by IBM.
#INTER, #BATCH, QPCSUPP, and QCASERVER are the typical mode descriptions used.
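The "lower value wins" negotiation rule can be sketched in a few lines. This is an illustrative model only; the parameter names (MAXSSN, MAXCNV) and the dictionary representation are assumptions for the sketch, not an actual system interface.

```python
# Hypothetical sketch of APPC mode parameter negotiation: when the two
# systems' mode descriptions disagree on a value, the lower one is used.

def negotiate_mode(local: dict, remote: dict) -> dict:
    """Return the negotiated (minimum) value for each parameter both sides define."""
    return {name: min(local[name], remote[name])
            for name in local.keys() & remote.keys()}

local_mode = {"MAXSSN": 64, "MAXCNV": 64}
remote_mode = {"MAXSSN": 32, "MAXCNV": 64}
negotiated = negotiate_mode(local_mode, remote_mode)
print(negotiated["MAXSSN"])  # 32 - the lower of 64 and 32
```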

2.1.10.1 APPC Compression
As previously discussed, a mode may specify compression for APPC sessions.
•
•
•
•

Specify if allowed, required, requested, or dependent on line speed.
Inbound and outbound algorithm.
Run Length Encoding (RLE) using string control bytes.
Lempel-Ziv (LZ) builds a dictionary of codes to represent unique character strings. Supported by VTAM, the AS/400 system, and Client Access/400.

The following list contains the data compression parameter values within AS/400 network attributes.

DTACPR - compression for end node session.
  − *NONE, *ALLOW, *REQUEST, *REQUIRE, line speed.
DTACPRINM - intermediate node may suggest compression.
  − *NONE, *REQUEST, line speed.

Chapter 2. Communications Performance

45


There are two compression algorithms embedded in SNA: Run Length Encoding (RLE) and Lempel-Ziv (LZ). RLE uses String Control Bytes (SCBs) to encode duplicate, repetitive bytes of data. For RLE, a string of repetitive characters, such as ″ssssssssssssss″, can be compressed into two bytes: (1) the first byte is an SCB indicating 14 repetitive characters, and (2) the second byte is the repetitive character itself (′ s′). LZ assigns codes to represent unique character strings and uses tables to store these codes. The LZ tables come in three sizes, depending on the length of the codes.
• For LZ9, the codes are 9 bits long and the tables have 511 entries.
• For LZ10, the codes are 10 bits long and the tables have 1023 entries.
• For LZ12, the codes are 12 bits long and the tables have 4095 entries.

For LZ, repetitive character strings such as the string ″therefore″ can be compressed into a single code. These codes are transmitted to the partner as they are developed. Generally, LZ compresses better than RLE, but also costs more in terms of storage and processor cycles. The compression and decompression are done at the session level and on an RU basis. An intermediate node may (if the parameter is set to *REQUEST or line speed) suggest that compression be done because of a slow line, for example. This request flows back to the session originator in the BIND and is honored depending on the parameters at the end points. The compression either occurs from end-to-end, or it does not occur at all.
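The two schemes can be illustrated with small sketches. These are not the actual SNA implementations: the count byte below merely stands in for the real SCB format, and the LZ routine is a generic LZW-style dictionary builder; SNA's LZ9/LZ10/LZ12 fix the code width at 9/10/12 bits (tables of 511/1023/4095 entries), which the sketch only approximates with an entry limit.

```python
# Illustrative sketches of RLE and LZ compression as described above.

def rle_compress(data: bytes) -> bytes:
    """Replace each run of repeated bytes with (count, byte).
    The count byte stands in for the SNA String Control Byte."""
    out, i = bytearray(), 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])   # count byte, then the repeated byte
        i += run
    return bytes(out)

def lz_compress(text: str, max_entries: int = 511) -> list:
    """Emit one code per longest string already in the dictionary (LZW-style)."""
    table = {chr(c): c for c in range(256)}        # seed with single characters
    result, current = [], ""
    for ch in text:
        if current + ch in table:
            current += ch                          # extend the current match
        else:
            result.append(table[current])          # emit code for longest match
            if len(table) < max_entries + 256:
                table[current + ch] = len(table)   # learn a new string
            current = ch
    if current:
        result.append(table[current])
    return result

# 14 repeated characters compress to two bytes under RLE
print(rle_compress(b"s" * 14))                    # b'\x0es'
# repeated words share dictionary codes under LZ
print(len(lz_compress("therefore therefore")))    # fewer than 19 codes
```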

2.1.10.2 Mode Description Example
Figure 13 contains an example of the IBM-provided mode description QPCSUPP, which references the IBM-supplied class-of-service description #CONNECT. It is shown here for reference purposes.

Display Mode Description
Mode description . . . . . . . . . . . :   QPCSUPP
Class-of-service . . . . . . . . . . . :   #CONNECT
Maximum sessions . . . . . . . . . . . :   64
Maximum conversations  . . . . . . . . :   64
Locally controlled sessions  . . . . . :   0
Pre-established sessions . . . . . . . :   0
Maximum inbound pacing value . . . . . :   *CALC
Inbound pacing value . . . . . . . . . :   7
Outbound pacing value  . . . . . . . . :   7
Maximum length of request unit . . . . :   *CALC
Data compression . . . . . . . . . . . :   *ALLOW
Inbound data compression . . . . . . . :   *LZ10
Outbound data compression  . . . . . . :   *LZ10
Text . . . . . . . . . . . . . . . . . :   AS/400 PC Support mode entry

Figure 13. Example of IBM-provided Mode QPCSUPP

The maximum sessions parameter is negotiated between endpoints of an APPC session. If one side requests a larger number, the lower number is used.

46

AS/400 Client/Server Performance


The maximum conversations value may be larger than the maximum number of sessions, but this is not usually necessary. Only one conversation can be active over a session at a time. If one end of the conversation ends, and the other end is still doing work (but not communications), the conversation is tied up until that task ends the conversation. Therefore, the number of conversations is sometimes larger than the number of sessions. This value is not negotiated and may be different for each end of the session.

Locally controlled sessions are the number of sessions that this location owns and does not have to bid for to use. Either side can use the sessions controlled by the other, but has to bid to get permission. Locally controlled sessions stay available once they have been started. This means that the SNA BIND has already been done and the conversation (passthru, SNADS, and so on) starts up faster.

Pre-established sessions are started (BIND flows) as soon as the mode is started. A mode is started by the STRMOD command, or by a session request, such as the STRPASTHR command. The number of BINDs that flow is equal to the number of pre-established sessions in the mode. All pre-established sessions must also be locally controlled; therefore, the number of pre-established sessions can be at most the number of locally controlled sessions.

The maximum length RU value of *CALC uses a size that fits into the frame without segmentation.

2.1.10.3 Display Mode Status Example
Figure 14 shows the status of the IBM-provided mode description QPCSUPP for APPC device description ITSOPC. Option 9, Display mode status, was used from the Work with Configuration Status display for controller ITSOPC. This example shows that two target sessions are active for ITSOPC.

Display Details of Mode Status
                                                  System:   SYSASM01
Mode/status  . . . . . . . . :   QPCSUPP      Started
Device/status  . . . . . . . :   ITSOPC       ACTIVE
Local location/network ID  . :   SYSASM01     ITSCNET
Remote location/network ID . :   ITSOPC       ITSCNET

Conversations:                   Total    Source   Target
  Configured maximum . . . . :     64
  Number for device  . . . . :      2        0        2
  Number for location  . . . :      2        0        2

Sessions:                        Total    Local    Remote   Detached
  Configured limits  . . . . :     64       32
  Local maximum  . . . . . . :     64
  Negotiated limits  . . . . :     64
  Number for device  . . . . :      2        0        0        0
  Number for location  . . . :      2        0        0        0
                                                              Bottom

Figure 14. Display Mode Status Example Using QPCSUPP


This display shows a mode that has been started. There are three sessions started for this local and remote location pair and two of them are using this device description.

2.1.11 What Is an RU (Request/Response Unit)?
An SNA RU is a defined block of data that contains application data and, for APPC, an LU6.2 set of architected information that pertains to the application data. For example, LU6.2 defines a method for indicating the record length of the application data and identifying the TP Name that is to be Allocated/Attached (Evoked) when starting a conversation.

RUs can be marked as ″chained″, meaning that a set of RUs is to be treated together; an entire file of records, for example, may be transmitted as a chain of RUs. SNA supports marking an RU as Begin Chain, Middle of Chain, or End of Chain. When chaining is not used, as is the case with the exchange of 5250 display data, the RU is marked with both Begin Chain and End Chain.

The RU size is defined in the mode for APPC. The SNA Pacing value specified in the same mode tells how many RUs can be sent or received before a pacing response is required. Although the AS/400 system can support a maximum RU length of up to 16384 bytes, it is recommended that you use the *CALC default in the CRTMODD command. This enables the system to look at the communication line frame size and make the RU 9 bytes less than the frame size. These 9 extra bytes contain SNA RH (Request/Response Header) and SNA TH (Transmission Header) information. TH information includes such things as the ″conversation id″ used for routing between the partner TP programs. RH information identifies the RU as containing a request or a response, carries chaining indications, and tells whether SNA definite response or exception response mode is being used.

Most APPC applications run in exception response mode, in which the receiving TP does not send a response to each chain unless it detects an error (exception). Definite response mode means a response is required, indicating either successful or unsuccessful reception of the chain. As you can see, definite response mode can degrade performance in an interactive environment where line utilization is high.
On the AS/400 system, 5250 functions performing output-only operations and save/restore display functions use definite response mode. These workstation functions are associated with the Create Display File command parameter values DFRWRT(*NO) and RSTDSP(*YES). If you specify DFRWRT(*YES) (the default) and RSTDSP(*NO) (the default), OS/400 uses exception response mode. Most IBM-provided applications use exception response mode, though programming interfaces are available to use definite response mode as well.

In general, the RU size can span (be larger than) the frame size. This is called segmenting. While data can be successfully exchanged when segmenting is in effect, segmenting generally degrades performance and can result in further degradation if a bridge/router has to process the segments.

Line protocol frames are the actual physical blocks of data that are exchanged over a WAN or LAN line. Typically, LAN nodes can support larger frame sizes than WAN nodes, but that is not always the case. Because it is not always


easy to determine what the remote client node supports for maximum frame and RU size, it is recommended that the AS/400 line description specify a MAXFRAME size of not less than 1994. This gives very good performance in most cases and, like the RU size, is negotiated with the remote system. The agreed-to frame size is the smaller of the values supported by the two communicating nodes. On the AS/400 system, the system operator message CPF5908 indicates the control unit that was contacted, and its second-level text lists the agreed-to frame size.

In a stable network (error conditions are rare), where both nodes support very large frame sizes (the AS/400 system supports a MAXFRAME of 16393 bytes) and a significant amount of data is exchanged, you should specify very large frame sizes for maximum throughput. By making the RU size *CALC, you have the most flexible RU and frame size settings while delivering reasonably good performance when significant amounts of data are exchanged between nodes.
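The *CALC arithmetic described above is simple enough to state directly. The sketch below applies the two rules from the text: the agreed-to frame size is the smaller of the two nodes' MAXFRAME values, and the calculated RU is 9 bytes smaller, leaving room for the SNA TH and RH. The function names are illustrative.

```python
# Sketch of the *CALC rule: negotiated frame size, then RU = frame - 9.

SNA_HEADER_BYTES = 9   # TH + RH overhead for a peripheral node

def negotiated_frame(local_maxframe: int, remote_maxframe: int) -> int:
    """The agreed-to frame size is the smaller of the two MAXFRAME values."""
    return min(local_maxframe, remote_maxframe)

def calc_ru_size(frame_size: int) -> int:
    """*CALC makes the RU 9 bytes less than the frame size."""
    return frame_size - SNA_HEADER_BYTES

# AS/400 maximum (16393) negotiating with the recommended minimum (1994)
frame = negotiated_frame(16393, 1994)
print(frame, calc_ru_size(frame))  # 1994 1985
```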

2.1.12 SNA Pacing
Pacing is a conversation-level parameter defined in the mode for APPC. Pacing throttles the number of RUs sent, preventing buffer overflow at the receiver; it can also throttle one conversation to allow another conversation to use more of the bandwidth. For example, suppose you are running two conversations concurrently; one is primarily an interactive application and the other is primarily a batch ″file transfer″ or ″query download″ application. If the batch conversation has a larger pacing value than the interactive conversation, the batch conversation may get more communications line bandwidth (more of the available line speed). You may want this difference in pacing values, but remember that interactive response time is exposed to degradation while the batch conversation is active. You can have the batch application use a mode with a lower pacing value than that used by the interactive conversation if the shortest possible batch application runtime is not a customer priority.

Conversation 1    ----------->  RU
(Pacing = 7)      ----------->  RU
                  ----------->  RU
                  ----------->  RU
                  ----------->  RU
                  ----------->  RU
                  ----------->  RU
                  <-----------  Pacing response to conversation

Conversation 2    ----------->  RU with pacing indicator on
(Pacing = 1)      <-----------  Pacing response to conversation
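The cost of a small pacing value can be seen with a toy model: with at most `pacing` RUs outstanding per window, each window costs one pacing response from the receiver. This is an idealized count for illustration, not a protocol simulation.

```python
# Toy model of SNA session-level pacing: how many pacing responses are
# needed to move a given number of RUs with a given pacing value.

def pacing_responses(total_rus: int, pacing: int) -> int:
    """Number of pacing windows (hence responses) to send total_rus RUs."""
    return -(-total_rus // pacing)   # ceiling division

print(pacing_responses(14, 7))  # 2 responses with a pacing value of 7
print(pacing_responses(14, 1))  # 14 responses with a pacing value of 1
```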

APPN uses adaptive pacing, which means the APPN code may vary the pacing values as necessary to optimize the buffering within the source, target, and any intermediate nodes. This is performed without degrading performance from what the source and target system mode descriptions agreed to for pacing values.


2.1.13 Line Protocol Frame Size and Response Requirements
Pacing is by request unit (RU). At a lower level, a link-level response is required every several frames. In SDLC, this is once every seven frames (modulus 8) or once every 127 frames (modulus 128). In a token-ring LAN, LANMAXOUT and LANACKFRQ are the key parameters affecting these responses.

Think of the frame as an exchange of data between stations or control units, and the RU as the buffer exchanged between LUs or TPs. The RU must travel within a line protocol frame. As discussed previously, the AS/400 system supports a frame size of up to 16393 bytes over a LAN line. The LAN protocol also has parameters that work something like the SNA pacing value, but at the station level. Key parameters are LANMAXOUT, LANACKFRQ, and LANINACTMT on the sending system, and LANRSPTMR and LANACKTMR on the receiving system. On the AS/400 system, these parameters are specified on the controller description.

2.1.13.1 LANMAXOUT and LANACKFRQ Parameters
These two LAN parameters must be coordinated. LANMAXOUT works the same as the SDLC MAXOUT parameter - how many frames to exchange between control units before a response is exchanged.
• LANMAXOUT=2 sends two frames and waits for a response.
• The response may be an RR (Receiver Ready) frame, an I (Information) frame of data, or an SNA pacing response.

LANACKFRQ determines when, by a count, to acknowledge a frame that has been received; the count is reset if an I or S (Supervisory) frame is sent.
• LANACKFRQ must be less than the remote LANMAXOUT.
• LANACKFRQ=1 sends a response after receiving every frame.

These parameters affect all sessions sharing one controller description. Having exactly the same setting for these parameters on communicating nodes is required for best performance. In most cases, IBM communications software defaults to the same values.

Frame Size: As speed increases, so does the allowable frame size. A station may hold the token for 10 milliseconds; as the speed increases, a larger frame can be sent in those 10 milliseconds.
• As MAXFRAME increases, LANACKFRQ and LANMAXOUT can be lower.
• LANACKFRQ must always be less than or equal to LANMAXOUT on the partner system.

LANACKTMR   Timer used to send an RR response even if the LANACKFRQ count has not been reached. Should be less than the remote LANRSPTMR or performance may suffer.
LANRSPTMR   If no response has been received, send an RR with the poll bit on, requesting a response. A response should have been received by the time the remote LANACKFRQ count or LANACKTMR timer is reached.
− When waiting for a timer to cause a response to be sent, all work under that controller description waits.
If timers are used, your performance suffers. Counts (LANMAXOUT and LANACKFRQ) should normally be sufficient for sending link-level responses.


In general, AS/400 LANMAXOUT=2 (*CALC) offers the best performance for interactive environments and adequate performance for large transfer environments. For large transfer environments, increasing LANMAXOUT may improve batch performance. If you are using the newer IOPs (2619, 2617, 2618, and 6506), try a LANMAXOUT value of 6. Remember, both nodes must support the larger values to achieve a performance improvement over LANMAXOUT=2. If you are communicating with a client below a PS/2 Model 50 level of performance, there is little difference in performance. If you are using a Model 50 or higher level of performance, increasing LANMAXOUT above 2 may improve performance, but keep LANACKFRQ(*CALC). Never let LANACKFRQ on one node be greater than LANMAXOUT on the other node. It is best for the sending and receiving nodes to have the same LANMAXOUT and LANACKFRQ values. If changing LANMAXOUT and LANACKFRQ does not result in a noticeable performance improvement, change the values back to *CALC.
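The tuning rules above can be expressed as a small checking helper. The parameter names mirror the controller description keywords, but the checks themselves encode this book's guidance, not any system API.

```python
# Helper expressing the LAN tuning rules above: LANACKFRQ on one node
# must never exceed LANMAXOUT on the other, and matching settings on
# both nodes perform best.

def check_lan_params(local: dict, remote: dict) -> list:
    """Return a list of warnings for a pair of node settings."""
    warnings = []
    if local["LANACKFRQ"] > remote["LANMAXOUT"]:
        warnings.append("local LANACKFRQ exceeds remote LANMAXOUT")
    if remote["LANACKFRQ"] > local["LANMAXOUT"]:
        warnings.append("remote LANACKFRQ exceeds local LANMAXOUT")
    if (local["LANMAXOUT"], local["LANACKFRQ"]) != \
       (remote["LANMAXOUT"], remote["LANACKFRQ"]):
        warnings.append("settings differ between nodes; matching values perform best")
    return warnings

ok = check_lan_params({"LANMAXOUT": 2, "LANACKFRQ": 1},
                      {"LANMAXOUT": 2, "LANACKFRQ": 1})
print(ok)  # [] - matched settings raise no warnings
```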

2.1.13.2 SDLC Line and Controller Parameters
Although LAN line protocols are the focus of this redbook, information is also provided on key SDLC parameters that affect performance. You may find environments where SDLC lines, rather than LAN lines, are required. Line description parameters include MAXFRAME, MAXOUT, MODULUS, NPRDRCV, INACTMR, IDLTMR, CNNPOLLTMR, and FAIRPLLTMR. Control unit (station) parameters include MAXFRAME, POLLLMT, OUTLMT, CNNPOLLRTY, POLLPTY, and NDMPOLLTMR. The most important of these SDLC parameters are discussed here. Refer to the AS/400 documentation listed at the start of this topic for additional information.

MAXFRAME    Defines the maximum frame size, up to 2057 bytes for SDLC.
MAXOUT      Determines whether up to 7 frames or up to 128 frames can be sent before an SDLC protocol response is required.
NDMPOLLTMR  Specifies how long the primary station (AS/400 system) waits before sending a ″contact poll″ to a Vary on Pending station.
CNNPOLLTMR  Defines how long the primary waits for a response from a secondary station that is in the Vary on Pending state. Note: When the system is polling a controller (station), it is doing no meaningful work for other stations on the line that have data to be exchanged. If you have a large number of varied on pending controllers, this time is used for each one of these controllers every time NDMPOLLTMR expires. This can significantly prohibit successful data exchange from controllers ready to do work while the system is polling Vary on Pending controllers that may actually be powered off.
IDLTMR      Specifies how long the primary station (AS/400 system) waits for a response from a secondary station that was previously communicating with the system.
POLLLMT     Specifies the number of additional sequential polls sent to a secondary station that has already responded with the maximum frames as specified by the MAXOUT parameter. POLLLMT(0) is the AS/400 control unit description default value, which means up to MAXOUT (default 7) frames can be received and then the AS/400 system performs input or

output to another station on the line. This is normally a satisfactory value, especially if the controller supports a frame size of at least 530 bytes. However, there may be situations where you have a 5250 controller that supports a maximum frame size of only 265 bytes. In this case, you may want the AS/400 system to immediately poll the same control unit for up to another set of seven frames. POLLLMT(1) polls for up to a total of 14 frames before communicating with another control unit on the line.

OUTLMT      Specifies the number of additional consecutive frames that are sent to the control unit above the base MAXOUT frames. With OUTLMT(0) (OUTLMT defaults to the POLLLMT value), up to seven frames are sent to this control unit and then data is sent to another control unit if output data is available. An OUTLMT of 1 enables sending up to 14 consecutive frames to the control unit before the AS/400 system processes another control unit.

Larger than default values for POLLLMT and OUTLMT are typical if full-screen editing or client workstations are attached on an SDLC WAN line with several controllers active at the same time. When sending data to a display or TP, you commonly want to finish sending all of that data to the devices or TPs on that controller before sending data to another controller. However, you can vary this on a control unit basis for special circumstances.

2.1.14 AS/400 Error Recovery Parameters
The AS/400 system supports error recovery attempts, such as resending data previously sent or indicating to the remote node that the AS/400 system did not successfully receive incoming data. Recovery is controlled by the line protocol, line and control unit description timer parameters, and retry counts. This level of recovery is termed ″first-level″ on the AS/400 system.

When this first level of recovery completes unsuccessfully, the affected AS/400 application or applications receive a ″serious error″ indication and the AS/400 system (asynchronously to the application) tries to perform ″second-level″ recovery. Note that successful retries during first-level recovery enable the application to continue but may be a cause of poor performance. You can detect this lower level of recovery by reviewing the AS/400 error log through the Start Service Tools command, by changing the line description THRESHOLD parameter, and by observing line protocol error recovery indications in a communication line trace.

The system value QCMNRCYLMT, or the line description and controller description CMNRCYLMT parameter, controls second-level recovery once first-level recovery has not completed successfully. CMNRCYLMT(2 5) is the default, which tells the system to repeat first-level recovery two times within a time period of five minutes. If the second level of recovery is not successful within the specified time interval, a message is issued to the QSYSOPR message queue indicating that manual intervention is required.
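The CMNRCYLMT(count interval) policy is essentially "give up if more than `count` recovery attempts land inside an `interval`-minute window". The sketch below models that policy only; it is not how OS/400 implements it, and the function name is illustrative.

```python
# Illustrative model of the CMNRCYLMT(count interval) rule: more than
# `count` second-level recovery attempts within `interval` minutes
# means recovery is exhausted and the operator (QSYSOPR) is notified.

def recovery_exhausted(attempt_times: list, count: int = 2,
                       interval_minutes: float = 5.0) -> bool:
    """True if more than `count` attempts fall inside the sliding window
    ending at the most recent attempt (times are in minutes)."""
    if not attempt_times:
        return False
    window_start = attempt_times[-1] - interval_minutes
    recent = [t for t in attempt_times if t >= window_start]
    return len(recent) > count

print(recovery_exhausted([0.0, 1.0, 2.0]))  # True: 3 attempts within 5 minutes
print(recovery_exhausted([0.0, 10.0]))      # False: attempts are spread out
```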


2.1.15 What Is an APPC Verb?
APPC supports many functions at the conversation level that are specified with an APPC verb. An APPC verb is simply the command that tells the LU what function to perform on behalf of the program. The verb includes parameters that customize it for a particular function. For example, an ALLOCATE requires the name of the requested partner, the mode to use, and so on. The verb is issued directly from a program; an ALLOCATE can be thought of as a procedure call.

2.1.16 Types of APPC Verbs
There are several types of APPC verbs. Within a conversation, the Send and Read verbs control the flow of data. The Send verb has options that indicate the data length and whether a Definite Response is required. Normally, consecutive Sends buffer up data until the RU is filled or the program issues a Read verb. Additionally, AS/400 APPC provides Send verb options for sending the data ″now″: allow write, confirm, or force (flush).

Allow write sends the data to the remote node and indicates a change-of-direction so that the remote node can send data back to the AS/400 conversation. Confirm sends the data to the remote node and asks for a definite response reply; this is typically done only at the end of a long and important transmission. Force sends the data immediately but does not request a response or a confirmation. It is typically used when it is important that the data get to the remote node and be processed immediately, and the sending system does not want to send a change-of-direction or cause the overhead of a confirmation response.

In a heavy APPC data transfer environment, you want to keep confirmation and forcing of the data to a minimum, and not use allow write frequently when there is no application need. If you are writing your own APPC programs, you need to understand and use many of these APPC verbs. If you are using a higher-level interface, such as OS/400 SNA Distribution Services (SNADS), Client Access/400 Remote SQL, data queue, or ODBC functions, these functions use the appropriate APPC verbs without the user′s direct knowledge. However, a high-level understanding of these verbs can assist in AS/400 client/server performance problem analysis.

Table 3 on page 54 shows a comparison of performance measurement data for three different APPC scenarios. This test was done by sending data between two AS/400 systems, but the results also apply in a client/server environment.
The table shows elapsed time, CPU time (maximum/minimum), CPU utilization, and IOP utilization for the test scenarios. In the first test, the data was sent across the communications line by the normal APPC method. That is, when the APPC buffer was full, the data was sent. In this


case, the application programs do not have control over when the data is actually sent.

In the second test, the data was sent across the communication line for each write. This can be done under application program control using the Flush verb. As you can see from the table, forcing the data across the communication line adds communications overhead to the application.

In the final test, confirm processing was used. In confirm processing, the sending program sends a CONFIRM request to the receiving program; the receiving program responds with a CONFIRMED verb to indicate that it has completed some critical application-related processing. CONFIRM processing should be used at an application level and should not be used to determine whether the data has been successfully sent across the communications line. As you can see from the table, CONFIRM processing adds considerable overhead to an application.
Table 3. APPC Variations with AS/400 ICF, 2048 Records of 2048 Bytes (4MB)

Scenario        Line Speed    Frame Size   Time    CPU Time   CPU Util.   IOP Util.
                              (bytes)              (max/min)
Send/Normal     TRLAN 16MB    8K           0:08    7/7        81/81       41/55
Send/Force      TRLAN 16MB    8K           0:25    21/16      83/64       55/52
Send/Confirm    TRLAN 16MB    8K           1:16    35/32      46/42       31/31
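The gap between Send/Normal and Send/Force can be illustrated with a toy transmission count: normal sends accumulate in the RU buffer and go out only when the buffer fills, while a forced (flushed) send transmits on every write. This idealized model ignores headers and chaining; the numbers match the test's record and buffer sizes but not its timings.

```python
# Toy model of why Send/Force costs more than Send/Normal in Table 3.

def transmissions(record_count: int, record_size: int,
                  ru_size: int, force: bool) -> int:
    """Number of line transmissions to send record_count records."""
    if force:
        return record_count                      # one transmission per write
    records_per_ru = max(1, ru_size // record_size)
    return -(-record_count // records_per_ru)    # ceiling division

# 2048 records of 2048 bytes with an 8K buffer, as in the measurement
print(transmissions(2048, 2048, 8192, force=False))  # 512 buffered sends
print(transmissions(2048, 2048, 8192, force=True))   # 2048 forced sends
```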

2.1.17 SNA Layers over Token-Ring Network
The SNA frame is made up of three parts:
• The TH: contains addressing information.
• The RH: contains RU control information, such as whether the RU is a Request or a Response and whether Definite or Exception Response mode, chaining, and so on, are being used.
• The RU: contains the data or command sent from the application or end user.

Figure 15 on page 56 shows an SNA RU and its associated RH and TH control information placed within a token-ring LAN frame. The top portion represents the data being constructed at the application (TP) layer and packaged within the SNA header information and LAN frame information. The bottom half represents the ″breaking down″ of the information on the receiving node.

The BIU is the Basic Information Unit and contains the RH and RU. The PIU is the Path Information Unit and contains the TH and the BIU (that is, TH, RH, and RU). The token-ring frame has header (HDR) and trailer (TRLR) information and contains the PIU and controller information including:
• The remote node′s LAN address.
• The destination service access point (DSAP). This is the logical address this node sends to when it communicates with the remote controller node. This


address allows the controller node to properly route the data sent by this node.

• The source service access point (SSAP). This is the logical address the local system uses when it sends data to the remote controller node. This address allows the controller node to properly route the data that comes from this node.

For a frame bound for a peripheral node in an SNA network, such as a PC or the AS/400 system, the TH and RH are jointly 9 bytes long, whereas the RU is variable in length. The RU size should be the same size as the application buffer size or larger. When multiple messages are sent by the application, they are blocked into the same RU if the RU is large enough, which is good for performance.

Frame size can significantly affect performance. If the frame size is not large enough to hold the RU, the RU is split across multiple frames. This is called segmentation and is very costly in terms of performance. Also, for a given amount of data, many smaller frames use more CPU cycles than fewer large frames. It is, therefore, a good idea to use larger frames where possible.
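A quick way to see when segmentation kicks in is to count the frames an RU needs. The sketch below uses the 9-byte TH/RH figure from the text and simplifies by charging that overhead to every frame; the real segment format differs, so treat this as an estimate.

```python
# Sketch of segmentation: frames needed to carry an RU of a given size,
# charging 9 bytes of TH/RH overhead per frame (a simplification).

SNA_HEADER_BYTES = 9

def frames_needed(ru_size: int, frame_size: int) -> int:
    """More than one frame means the RU is segmented."""
    payload = frame_size - SNA_HEADER_BYTES   # room left for RU data
    return -(-ru_size // payload)             # ceiling division

print(frames_needed(1985, 1994))  # 1 frame: no segmentation
print(frames_needed(4096, 1994))  # 3 frames: segmentation occurs
```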


Figure 15. Example of an SNA Frame

2.1.18 SNA RU and Line Protocol Frame Analogy
The following is an analogy of SNA RU and line protocol Frame relationships to a freight train.


FRAME = train car. A frame is repeated (sent over again) if the remote node expecting the frame does not see it within a time value, or receives the frame but does not recognize it.

RU = cart that carries cargo to the train car. SNA Pacing says: do not send any more carts until I tell you I can receive more.

Chaining = multiple carts must be shipped together, so the cargo from those carts is loaded into train cars that are linked together for the whole trip.

Segmentation = a cart holds too much to fit in one train car, so multiple train cars are used, and the last car may be only partially filled.

2.2 Sockets Communications Support over TCP/IP
There are several concepts that pertain specifically to the sockets communications support used by the optimized servers. These concepts are described here at a high level. See OS/400 Server Concepts and Administration Version 3, SC41-3740, for more details.

2.2.1 Establishing Client/Server Communications
 Client                    Server Daemon
┌───────────────┐      ┌──────────────────┐
│               │  1   │                  │
│ Connect───────┼──────┼─ Listen          │
│               │      │     │            │
│               │      │     └─Attach─────┼──┐
│               │      └──────────────────┘  │
│               │                            │
│               │          Server Job    2   │
│               │      ┌────────────────┐    │
│               │      │ wait for attach┼────┘
│               │  3   │                │
│ Send──────────┼──────┼ validate/swap 4│
│               │      │  user profile  │
└───────────────┘      └────────────────┘
Figure 16. Establishing Client/Server Communications Using Sockets Support

1. The client connects to a particular server′s port number. A server daemon must be started (using the STRHOSTSVR command) to listen for and accept the client′s connection request.

2. The server daemon issues an internal request to attach the client′s connection to a server job. This server job may be a prestarted job or, if prestarted jobs are not used, a batch job. The server job handles any further communications with the client.

3. The server job connects to the client. The initial data exchange includes a request that identifies the user profile and password associated with the client user.


4. The server job swaps to this user profile and changes the job to use attributes defined for the user profile, such as accounting code and output queue.

The functions of connecting to the server daemon, attaching the client connection to a server job, exchanging data, and validating the user profile and password are comparable to those performed when an APPC program start request (PSR) is processed. Each type of server has its own server daemon, which starts the appropriate server job for incoming client connection requests. In addition, there is a server mapper daemon that listens on a specified port and is provided to permit a client to obtain the current port number for a specified server.
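The listen/connect/exchange pattern of Figure 16 can be sketched with ordinary sockets on the loopback interface. This is a generic illustration of the flow only: the "daemon" simply echoes an identification request, and the identification string is hypothetical; the real host servers exchange user profile and password data in a product-defined format that this sketch does not reproduce.

```python
# Minimal localhost sketch of the listen / connect / exchange pattern.

import socket
import threading

def server_daemon(listener: socket.socket) -> None:
    conn, _ = listener.accept()        # steps 1-2: accept and "attach"
    with conn:
        request = conn.recv(1024)      # step 3: initial data exchange
        conn.sendall(b"accepted:" + request)

listener = socket.socket()
listener.bind(("127.0.0.1", 0))        # ephemeral port, as if mapped for us
listener.listen(1)
port = listener.getsockname()[1]

t = threading.Thread(target=server_daemon, args=(listener,))
t.start()

client = socket.socket()
client.connect(("127.0.0.1", port))    # step 1: connect to the server's port
client.sendall(b"USER=ITSOUSER")       # hypothetical identification data
reply = client.recv(1024)              # step 4 would swap profiles server-side
client.close()
t.join()
listener.close()
print(reply)  # b'accepted:USER=ITSOUSER'
```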

2.2.2 Server Mapper Daemon
The server mapper daemon is a batch job that runs in the QSYSWRK subsystem. It provides a method for client applications to determine the port number that is associated with a particular server. When the client sends the service name, the server mapper:

1. Obtains the port number for the specified service name from the service table
2. Returns this port number to the client
3. Ends the communication
4. Returns to listen for another connection request

The client uses the port number received to make a connection to the specified server daemon. The server mapper daemon is started using the Start Host Server (STRHOSTSVR) command and ended using the End Host Server (ENDHOSTSVR) command.

2.2.3 Server Daemons
The server daemon is a batch job that is associated with a particular server type. There is only one server daemon per server type, but one server daemon can have many server jobs. Like the server mapper daemon, server daemons are started with the Start Host Server (STRHOSTSVR) command and ended with the End Host Server (ENDHOSTSVR) command. The server daemons must be active for client applications to establish a connection with a host server that uses sockets communications support.

All of the server daemons run in the QSYSWRK subsystem, except the database and file server daemons, which run in the QSERVER subsystem. The server daemon jobs run in the same subsystem as their corresponding server jobs. TCP/IP and the associated subsystem must be active when the server daemon job is started.


2.2.4 Service Table
The service table contains the port number of the server mapper daemon and the port numbers of each server daemon, together with their symbolic service names. The service table is updated with these entries when Host Server Option 12 is installed on the AS/400. The port numbers for each server daemon are not fixed; they can be modified. However, the service names must remain the same; otherwise, the server daemons cannot establish a port number on which to accept incoming client connection requests. The following table shows the initial service table entries provided for the optimized servers and server mapper.

┌────────────────┬─────────────────────────────────────────┬─────────┐ │ Service Name │ Description │ Port # │ ├────────────────┼─────────────────────────────────────────┼─────────┤ │ as-central │ Central server │ 8470 │ │ as-database │ Database server │ 8471 │ │ as-dtaq │ Data queue server │ 8472 │ │ as-file │ File server │ 8473 │ │ as-netprt │ Network print server │ 8474 │ │ as-rmtcmd │ Remote command/program call server │ 8475 │ │ as-signon │ Signon server │ 8476 │ │ as-svrmap │ Server mapper │ 449 │ └────────────────┴─────────────────────────────────────────┴─────────┘
Figure 17. Port Numbers for Host Servers and Server Mapper Using TCP/IP

You can use the Work Service Table Entries (WRKSRVTBLE) command to see the service names and their associated port numbers. Also, it is possible to display the alias name, and add or remove any service entry.

                        Work with Service Table Entries
                                                        System:   SYSNM000
 Type options, press Enter.
   1=Add   4=Remove   5=Display

 Opt   Service            Port    Protocol
       as-central         8470    tcp
       as-database        8471    tcp
       as-dtaq            8472    tcp
       as-file            8473    tcp
       as-netprt          8474    tcp
       as-rmtcmd          8475    tcp
       as-signon          8476    tcp
       as-svrmap           449    tcp
       auth                113    tcp
       auth                113    udp
       chargen              19    tcp


2.3 Where Do You Begin with Unacceptable Performance?
Where do you begin to look when you have unacceptable response times in your client/server application? If you suspect that communications might be the problem, start by checking the following. Note that some of the listed suggestions may not be possible in a client/server application. However, you should consider techniques such as running the same SQL functions requested by a client from a twinaxial or LAN-attached 5250 emulation display.
• How does the job perform without communications?
• Ignoring communications, determine the amount of time and resources used by the task.
• Compare performance of the same task run from a local workstation.
• Roughly figure, as a paper exercise, the amount of time used by the components of communications.

2.4 Components of Communications
What is added by doing the work over a communications line?
• Resource utilization of both the server and client communications tasks
• Transmission time for user data
• Transmission time for overhead
• Wait time

─────────────────── TOTAL JOB TIME ────────────────────── ┌───────────┬────────────┬────────────┬────────────┬────────────┐ │ │ │ │ │ │ │ LOCAL │ COMMUNIC- │TRANSMISSION│TRANSMISSION│ WAIT │ │ PROCESSING│ ATIONS │TIME FOR │TIME FOR │ TIME │ │ TIME │ TASKS │USER DATA │OVERHEAD │ │ │ │ │ │ │ │ │ │ │ │ │ │ └───────────┴────────────┴────────────┴────────────┴────────────┘
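As a rough paper exercise, these components simply add up to the total job time shown in the diagram. A minimal sketch with entirely hypothetical figures (none of these numbers come from measurements):

```python
# Hypothetical component times for one remote transaction, in seconds.
components = {
    "local processing":          0.30,
    "communications tasks":      0.05,
    "transmission (user data)":  0.12,
    "transmission (overhead)":   0.02,
    "wait time":                 0.06,
}

total = sum(components.values())
# Everything except local processing is the cost of going remote.
comm_added = total - components["local processing"]
print(f"total job time: {total:.2f}s, added by communications: {comm_added:.2f}s")
```

Comparing `comm_added` with the locally measured time tells you whether communications is a significant part of the response time at all.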

Local processing time - what would the response time be if the task were run locally? Every job has disk time, processing time, and wait time that have nothing to do with communications. If SQL functions are requested by the client, can you run these same SQL statements from a local workstation or program using the same data? If so, use the Start Debug (STRDBG) command support to see SQL query optimizer messages in the job log. If the optimizer is selecting the ″wrong index,″ it is not communications that is responsible for poor response time.

Communications tasks - as soon as the job is run from a remote workstation, or started through APPC, for example, there are additional AS/400 tasks involved to process the allocation requests received (job initiation) and exchange data over the communications line.

Transmission time for user data and overhead - this is the time it takes to move the data: the number of bytes sent or received over the line, the SNA pacing, and the number of transmissions required to complete the data exchange. Line


speed is an important factor; the amount of work being done by multiple systems over the same communications line and the number of transmissions between two partner applications all enter into the performance considerations. For SDLC-type lines, the number of stations on the line and the polling frequency for each station (controller) may impact performance.

LAN Communications: although client/server applications may exchange data over a wide variety of line types, this redbook focuses primarily on token-ring LAN lines.

Wait time - is the queuing for use of resources: communications IOP, line, and modem. In general, one fast resource is better for performance than two resources, each half the speed of the fast one. Using an SDLC example, one 19.2 Kbps line gives you better overall performance than two 9.6 Kbps lines. If a lot of data is being sent concurrently over the same line, especially by multiple client/server connections, the resulting high utilization may cause overall poor performance.

LAN environments may include a single LAN, or multiple LANs connected (transparently to the applications) through various hardware and software devices, such as bridges, routers, or APPN intermediate nodes. There may also be Wide Area Network lines connecting these LANs. In some cases, the WAN lines or the bridges/routers may become overloaded and cause additional delays in data exchange, or even cause error recovery that delays response times.

The following sections provide additional details on the performance components of communications.

2.4.1 Communications Tasks CPU Time
Tasks to do the communications work include:
• The tasks that run in the system to handle the communications process
• The time spent starting up
• The time spent in a task

On the AS/400 system, the initial connection between a client and server and job initiation (attach manager processing) can consume relatively high resource. Also, varying off an entire line with hundreds of stations (controllers) can take considerable resource. Most of this startup and take-down processing is performed asynchronously, but it is recommended that startup and take-down functions be kept to a minimum during any short period of time, such as a five-minute period. For example, you want to vary off an entire line when there is minimal overall system activity, and you do not want to end an APPC conversation after each order has been completed when the next order immediately follows.

There are configuration options where a gateway system is placed between the actual client workstations and the AS/400 system. The gateway can be customized so that a smaller number of client workstation control units are defined and active at the AS/400 system. For example, a single APPC control


unit is defined and each client workstation is an APPC device description associated with that control unit. This has performance advantages during client connection vary on and vary off processing and during periods of heavy error recovery on the line. In cases where all AS/400 systems and clients communicate with APPC, this has distinct performance advantages, though the single gateway system is an extra point of failure and maintenance within the network.

Where the AS/400 server and client are using different protocols, a possible performance impact is introduced. In some cases, the gateway system can become overloaded when heavy data traffic with the clients occurs. Additional overhead is introduced when the gateway system is translating between IBM token-ring LAN protocols and other protocols, such as TCP/IP, NETBIOS, or IPX.

Note that the CPU speed of each communicating node has an impact when a large amount of data is being exchanged. For example, a client workstation performs an ODBC SQL SELECT function that returns over 1000 records (rows). This is sometimes referred to as a ″query download.″ In this scenario, the speed of the client CPU has the greatest impact on throughput performance, and a 486 processor workstation would normally complete receiving all 1000 records much faster than a 386 processor workstation.

2.4.2 Line Time
Transmission time for user data includes the time it takes to transfer the data, including wait times. It is governed by the line speed, the processing power of the client and the server, and the total line utilization. Line utilization is determined by your application and by how many other concurrent users of the communications line are exchanging data at the same time. When ″short bursts″ of data are exchanged, such as with a 5250 display or a query that returns one or two records, the line speed has less impact on performance. For large amounts of data, the line speed and the processor power of both the client and the server have increased performance impact.

It is difficult to predict or model throughput rates (for example, bytes per hour) without testing in at least a controlled environment, because of the line protocol, the SNA parameter settings, and the design of the application. But as a high-level generalization, you can estimate the number of bytes (characters) transmitted based on knowledge of the application data and add 10-15% overhead for protocol-unique data. Then compare this to the rated speed of the line. The application performance is no faster than this quick estimate.

Communications overhead includes the traffic over the line, in addition to the user′s data, which can be caused by:
• Protocol-specific information
• Configuration
• Error retries
• Poor programming techniques
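The quick estimate described earlier (application bytes plus 10-15% protocol overhead, compared against the rated line speed) can be sketched as follows; the 12.5% default is simply the midpoint of the rule-of-thumb range, not a measured figure:

```python
def min_transfer_time(app_bytes, line_bps, overhead=0.125):
    """Best-case wire time: application data plus 10-15% protocol
    overhead (12.5% assumed here) over the rated line speed in bits/s."""
    wire_bits = app_bytes * (1 + overhead) * 8
    return wire_bits / line_bps

# e.g. 100 KB of result rows over a 19.2 Kbps SDLC line
t = min_transfer_time(100_000, 19_200)
print(f"{t:.1f} seconds minimum")
```

The application can never respond faster than this figure; if the measured time is close to it, the line itself, not the server, is the limit.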

Later sections of this redbook provide more details. See Chapter 7, “Client/Server Performance Tuning” on page 235 and Chapter 8, “Client/Server Performance Analysis” on page 255.


2.4.3 Wait Time
Communications wait time is the time spent waiting for the line, the IOP, and other components of a network.

Wait time can be affected by:
− Multiple controllers on an SDLC multi-dropped line.
− Modem turnaround (request-to-send, clear-to-send).
− A highly-utilized local area network.
− A highly-utilized line, IOP, 3745 NCP controller, and so on.
− Sending large frames with no pacing.

The only way to be absolutely clear about how much of the total response time or throughput time is accounted for by the AS/400 system is to run a communications line trace and examine the time stamps in the printed trace output. The trace can identify:

• The actual frame size that is being used to exchange the data.
• The time stamp at which the ″request for data or a function″ was received by the AS/400 IOP, and the time the AS/400 system sent the response/data from the IOP. By subtracting the receive time from the send time, for example, you can determine how fast the AS/400 system responded to an ODBC SQL SELECT request.
• How much data is actually being exchanged. Sometimes you are surprised by the amount of additional data being exchanged or the number of line turnarounds required to complete a function. In many performance-critical situations, especially with ODBC, you need to minimize the number of line turnarounds to accomplish the required function.

A sample line trace is included in Appendix B, “Communications Trace Examples” on page 429.
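Once the IOP receive and send time stamps are read from the trace output, the subtraction is straightforward. A sketch, assuming a simple HH:MM:SS.fff time stamp format (the actual printed trace layout differs):

```python
from datetime import datetime

def server_turnaround(received_at, sent_at, fmt="%H:%M:%S.%f"):
    """Time between the IOP receiving the request frame and sending the
    response frame: how fast the AS/400 answered, excluding line time."""
    t_in = datetime.strptime(received_at, fmt)
    t_out = datetime.strptime(sent_at, fmt)
    return (t_out - t_in).total_seconds()

# e.g. request frame logged at 10:42:05.120, response at 10:42:05.480
delta = server_turnaround("10:42:05.120", "10:42:05.480")
print(f"server turnaround: {delta:.3f}s")
```

The remainder of the user-perceived response time is then attributable to the line, the network devices, and the client itself.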

2.4.4 Autostart Jobs
The QSERVER subsystem has an autostart job defined that is needed for the file server and database server jobs. If this job is not running for some reason, the servers will not be able to start and the QSERVER subsystem will end. If a problem occurs with this job, you may want to end the QSERVER subsystem and restart it.

2.4.5 Prestart Jobs
Prestart jobs are helpful in improving performance because they complete AS/400 job initiation before the AS/400 system receives the program start request. Prestart jobs:

• May be used when the time to initiate the transaction is noticeably affecting the response time.
• Require changes to application programs.
• May be used for APPC, ASYNC, BSCEL, INTRA, Finance, Retail, Sockets, and SNUF.

Prestart jobs are designed to reduce the overhead required to process a program start request. A prestart job entry is created in the subsystem


containing the communications entry that receives the program start request. When the subsystem is started, the prestart job entries for that subsystem are started. Because the programs are already started and the files they use are opened, the time required to process a program start request is reduced, giving improved response times.

Prestart jobs are designed to be used for communications applications where the response time or overhead required to initiate a program start request needs to be reduced. They are also useful for frequently initiated jobs that perform a function and then end. For example, consider a credit authorization application where a teller in a retail environment uses a credit card reader that dials into an AS/400 system. The reader sends a program start request to the AS/400 system along with the credit card number and the amount of credit requested. The AS/400 system verifies the credit request, sends a response to the credit card reader indicating approval or denial with any other pertinent information, and terminates the session. In this environment, there could be hundreds of credit authorization requests coming into multiple communications ports on the AS/400 system each hour. Prestart jobs reduce the amount of time required to process each request, improving both the throughput of the AS/400 system and the response time the teller receives on the credit authorization.

Prestart jobs can be used with any HLL that supports the use of ICF or CPI-Communications: RPG/400, COBOL/400, C/400, FORTRAN/400, REXX/400, and CSP/AE. They can also be used for C/400 applications that use TCP/IP sockets for communications.

Prestart job entry commands:
• The ADDPJE command adds a prestart job entry to a specified subsystem that contains a communications entry so it can process program start requests.
• The RMVPJE command removes a prestart job entry from a specified subsystem.
• The CHGPJE command changes the characteristics of an existing prestart job entry.

Prior to V3R1, prestart jobs supported only APPC PSRs as the method to become active. Starting with V3R1, prestart jobs support the servers that use sockets communications support.

2.4.6 Communications Trace as a Performance Tool
The AS/400 system communications trace can be used to verify the amount of data going over the line, the time it takes for a response to be returned, and to check for abnormalities. For examples of how to start a communications trace and how to use the trace output to analyze a client/server application using ODBC, see Chapter 8, “Client/Server Performance Analysis” on page 255.


2.5 Performance Recommendations
IOP Considerations:
• Ethernet IOP 2617 has significantly greater capacity than older IOPs.
• TRLAN IOP 2619 is the highest capacity TRLAN IOP.
• The IOP can be the bottleneck for large transfer workloads.
• Use of large frames minimizes IOP utilization and overhead.
• Follow IOP utilization threshold recommendations:
  − Interactive environments: 60%.
  − Large transfer: >60%.
  − Use AS/400 Performance Tools to monitor.

Use a higher speed adapter such as the 2617 Ethernet/HP adapter or the 2619 token-ring/HP adapter. These adapters are capable of higher throughput, handle bigger frame sizes, and have higher utilization thresholds than the older communications IOPs, so they provide better performance at high throughput than the older LAN IOPs.

Frame Size:

The use of larger frames:
− Generally offers better overall performance.
− May reduce the number of frames required per transaction.
− Reduces the total amount of communications overhead.
− Is more efficient for the CPU, IOP, and media.
− Usually yields a higher transfer rate.
− May not work well on error-prone lines.
− Has no effect if the client adapter cannot support them.
− Causes a larger memory requirement on the client.

MAXFRAME parameter on the LIND and CTLD:
− TRLAN: Up to 16393 bytes (default=1994).
  - Increase to 4060 for 4Mbps TRLAN.
  - Increase to 8156 for 16Mbps TRLAN.
  - Increase to 16393 for 16Mbps TRLAN with a 2626 IOP or 2619 IOP.
− Ethernet: Up to 1496 bytes (default=1496).
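The effect of MAXFRAME on the number of frames is easy to estimate. In this sketch the fixed per-frame header allowance is an assumed round number for illustration, not the exact LLC/SNA header size:

```python
import math

def frames_needed(payload_bytes, maxframe, header_bytes=100):
    """Frames required to move a payload, assuming a rough fixed
    per-frame header allowance (assumed figure, for illustration)."""
    usable = maxframe - header_bytes
    return math.ceil(payload_bytes / usable)

# Moving 64 KB over token-ring: default vs. maximum MAXFRAME
small = frames_needed(65_536, 1994)
large = frames_needed(65_536, 16_393)
print(small, large)
```

Fewer frames mean fewer IOP interrupts and less per-frame overhead, which is the basis for the recommendations above.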

Using a bigger frame size reduces the number of frames that are put onto the communications network. Bigger frames also use fewer CPU cycles than many smaller frames, which frees up CPU cycles for use elsewhere.

Client/server communications:

• Use a local area network (LAN):
  − Minimize use of slower networks and delays.
  − Examples: Passthru, bridges, routers, and gateways.
• Minimize the number of communications requests.
  − Examples: Stored procedures, bundling, and chaining.
• Minimize the amount of data transferred:
  − But maximize the amount of data per request.
  − Send or receive only the data needed.
  − Group operations when possible.
  − Place data near the function.


TCP/IP Communications Support:

The TCP/IP protocol and application code always run in the *BASE pool on the AS/400. If the *BASE pool is not given enough storage, TCP/IP performance can be adversely affected.
− Configure the *BASE pool to use at least 4MB of storage.
− Or change the QTCP subsystem description in library QTCP to run in a user-defined pool.

Where possible, keep the network to a LAN (Local Area Network), as link speeds on a WAN (Wide Area Network) are generally several times slower than LAN link speeds. Also, when using a LAN, avoid bridges and gateway machines as much as possible; they are a potential bottleneck and add to the response time of the application. Minimize the number of communications flows by bundling requests together or by using bigger frame sizes.


Chapter 3. Work Management
Work management on the AS/400 system is the method used to manage AS/400 resources to achieve optimum throughput. The AS/400 system has many objects that interact with each other and the applications to process information efficiently. An understanding of the concepts of work management is an important prerequisite to maximizing system performance.

3.1 Performance Concepts
AS/400 work management concepts affecting performance are discussed in this section. However, it is not the intent of this section to cover all aspects of the subject. Education courses such as OS/400 STRUCTURE, TAILORING AND BASIC TUNING (Course Code S6023) are available for a greater understanding of the subject. The AS/400 Work Management Guide, SC41-3306, has detailed explanations of all aspects of work management. The objective of this section is to introduce the subject and enable the reader to appreciate the performance and tuning concepts discussed later.

3.1.1 Queuing Concepts
The work of a single job, or the transactions within that job, comprises several tasks or services. The invitation to perform the work required by a task is called a request, while the required work is performed by a server. The time taken to complete the work of the task is called the service time.

Queuing is a concept that applies to computer resources just as it does to people waiting in line at the supermarket or waiting to use a bank′s Automated Teller Machine (ATM). In general, how long it takes to get a request or unit of work serviced (whether it is a request to complete the purchase at the supermarket counter, complete a transaction at the ATM, perform a disk I/O operation, or use the CPU) depends on three primary parameters:

• The number of ″waiters″ in the line ahead of a new request.
• The number of servers responding to requests.
• The service time to complete a request once given to the server, which is a function of the speed of the server and the amount of work to do.

   Requester          Queue
     ┌─┐     ┌─┬─┬─┬─┬─┬─┬─┬─┬─┐    ┌──────┐    Request
     │ │──── │ │ │ │ │ │ │ │ │ │─── │Server│──── Serviced
     └─┘     └─┴─┴─┴─┴─┴─┴─┴─┴─┘    └──────┘
   Arrives at queue
Consider a service point where certain tasks are performed for requestors of service. Generally, requests for service are responded to in the order in which they arrive. Therefore, those arriving first are responded to first, and leave the service point first.


If the rate of arrival of requests is greater than the rate at which they leave after being serviced, a queue builds at the server. The total response time to have a request serviced is the sum of time spent in:

• The queue, waiting to be serviced.
• The request actually being serviced.

When the queue grows longer, the total time taken for a request to be serviced becomes longer, and more time is spent waiting in the queue. The following basic principles govern queuing:

• A single server can service only one request at a time.
• Multiple concurrent requests are queued for service.
• The higher the server utilization, the greater the wait or queuing time.
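The first two principles can be seen in a tiny deterministic simulation: when work arrives faster than one server can complete it, the backlog only grows. The arrival and service figures below are made up for illustration:

```python
def queue_length_over_time(arrival_interval, service_time, horizon):
    """Single FIFO server. Returns the queue length sampled each second.
    When arrivals outpace service, the backlog grows without bound."""
    lengths = []
    backlog = 0.0                                     # pending work, in seconds
    for _ in range(horizon):
        backlog += service_time / arrival_interval    # work arriving this second
        backlog = max(0.0, backlog - 1.0)             # one second of service done
        lengths.append(backlog / service_time)        # backlog in "requests"
    return lengths

# Requests every 1.0s, each needing 1.25s of service: 125% utilization
trend = queue_length_over_time(1.0, 1.25, horizon=10)
print(trend[0], trend[-1])
```

With utilization below 100% the backlog stays bounded; above it, the queue, and therefore the response time, grows steadily.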

In the AS/400 environment, examples of service requestors are:

• Applications
• System tasks

while examples of service providers are:

• CPU
• I/O processors
• Disk arms

The equivalent functions of requestors and servers are also present within the client system as well as the communications network. It is outside the scope of this document to discuss the mathematical equations used to determine the effect of queuing. The formula for computing the queuing multiplier assumes:

• Work arrives at random intervals.
• Requests for the resources are not all the same.

The Queuing Multiplier equation is:

    QM = 1 / (1 - U), where U = utilization.
As the utilization of a server increases (more work for the server), queuing can account for a much longer elapsed time for work (or request) completion. The queuing multiplier (QM) is a measure of queuing. The AS/400 Performance Management Redbook, GG24-3723, contains a table showing the approximate QM for a range of CPU utilization values. For example, for a CPU that is 67% utilized, QM = 3, meaning that on average there are three requests in the queue (you and two others ahead of you). Therefore, using an average of 0.2 seconds of CPU to service a request, an interactive transaction (response time) takes a minimum of 0.6 seconds to use the CPU.

The queuing multiplier is an important factor when projecting the impact of adding work or additional hardware on current system performance. Systems with performance problems often show resources with high queuing multiplier factors. The Performance Tools transaction report (job summary) lists the CPU queuing multiplier determined for the collected data.
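The worked example can be checked directly with the formula; note that the exact formula gives a QM of about 3.03 at 67% utilization, which the redbook's table rounds to 3:

```python
def queuing_multiplier(utilization):
    """QM = 1 / (1 - U), for a single server with random arrivals."""
    return 1.0 / (1.0 - utilization)

qm = queuing_multiplier(0.67)          # CPU is 67% busy
service = 0.2                          # seconds of CPU per transaction
print(f"QM = {qm:.1f}, minimum CPU portion of response = {qm * service:.2f}s")
```

The same one-liner also shows the knee discussed later: at 50% utilization QM is 2, at 80% it is 5, and at 90% it is already 10.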


The simplified queuing theory discussed above assumes a single queue of requestors and a single server. In the high-end models of the AS/400 product range, multiprocessor (N-way) systems have more than one central processor executing instructions, even though there is only a single queue of requestors (Task Dispatch Queue). In this situation, the increased number of servers reduces the queuing multiplier and the average queue length.
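One common rough approximation for an N-way system replaces U with U raised to the Nth power. This approximation is not taken from this redbook and is not exact multi-server queuing theory; it only illustrates the trend that more servers shorten the queue at a given utilization:

```python
def queuing_multiplier(utilization, n_servers=1):
    """Rough approximation: QM = 1 / (1 - U**n). With n = 1 this is the
    single-server formula; for n > 1 it is only an illustrative
    approximation, not an exact M/M/c result."""
    return 1.0 / (1.0 - utilization ** n_servers)

# Same 70% utilization on 1-way, 2-way, and 4-way systems
for n in (1, 2, 4):
    print(n, round(queuing_multiplier(0.70, n), 2))
```

The numbers fall quickly as servers are added, matching the statement above that an N-way system reduces both the queuing multiplier and the average queue length.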

3.1.2 Response Time Curve
Response time is the elapsed time between the request for a service and the completion of that service. In an interactive AS/400 environment, it is the time between the user pressing the Enter key (or a function key) and the keyboard unlocking with the information on the display.

3.1.2.1 Queuing Multiplier Effect
Response time is directly related to queue length, and the queuing multiplier is a measure of the queue length. Thus, response time varies with server utilization in much the same way as the queuing multiplier. A graph of the queuing multiplier against the utilization of the server is shown in the following figure:

Figure 18. Queuing Multiplier = 1 / (1 - U)

The queuing multiplier is a measure of the queue length, and U is the utilization of the resource providing the service. The queuing multiplier values used in the formulas for disk and CPU service time are shown graphically. The curve shows the multiplier at various utilizations and the significance of the knee.

The knee of the curve is the point where a change in utilization produces a correspondingly higher change in the queuing multiplier. That is, the change along the Y-axis (queuing multiplier) is significantly greater than the change along the X-axis (utilization). The knee of this curve is the maximum utilization to which a resource should be driven. Beyond the knee, service time becomes less stable and may increase dramatically for small utilization increases.

Not all resources react the same. There are different recommended maximum values for the different resources, such as CPU, disk, memory, controller, remote line, IOPs, and so on.

The AS/400 Performance Tools Guide provides more queuing information. The graph shows a simplified queuing formula and a curve derived from it highlighting the effect of increasing utilization on the queuing multiplier for a single server.

3.1.3 Components of Response Time
Following the discussion on queuing theory, it is necessary to visualize the effect of server utilization on queue length (as represented by the queuing multiplier) and consequently on response time.

 ┌──────────────────────────┐
 │ ┌────┐      ┌──────┐     │   Line time    ┌──────┐
 │ │User│ ──── │Client│ ────│─────────────── │Server│
 │ └────┘      │System│ ────│─────────────── │System│
 │             └──────┘     │   Line time    └──────┘
 └──────────────────────────┘
      Client/Server Entity

3.1.3.1 Client/Server Response Time
In a client/server environment, the response time perceived by the user is the total response time of the following service providers:

• Client system - When a user at a client system, such as a PC, requests information, that request is first processed by the PC and translated into a request to the server system.
• Communication line (to server system) - The request is sent through the line to the server (such as a database, application, or file server).
• Server system - The server system accepts the request and performs the requested functions.
• Communication line (from server system) - The server response is sent back to the client.
• Client system - The client then receives the information, performs further processing as necessary, and presents the final response to the user′s request.

Therefore, the total response time experienced by a client/server application user is the sum of the service times of the:

• Client
• Communication line
• Server

Typically, a server system functions in an environment with multiple requestors. The response time experienced by a requestor is affected not only by the function of the particular task, but also by the workload introduced by other concurrent requestors and the relative servicing priority assigned to them. Client PCs, on the other hand, are single-user systems where the contention for resources is minimal. However, with the introduction of multitasking operating systems, and more concurrent activity on the PCs, resource contention is becoming a significant contributor to overall client/server performance.


The number of times information has to move between the client and server (communications flows) before a response is completed also increases the response time.
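Summing the legs, with each communications flow paying the line's round-trip cost again, shows why chatty protocols hurt. All figures below are hypothetical:

```python
def total_response(client_s, server_s, line_one_way_s, flows):
    """Perceived response time: client work + server work + one round
    trip over the line per communications flow."""
    return client_s + server_s + flows * 2 * line_one_way_s

# Same client and server work, chatty vs. bundled protocol,
# over a line with 50 ms one-way latency
chatty  = total_response(0.3, 0.5, 0.05, flows=20)
bundled = total_response(0.3, 0.5, 0.05, flows=2)
print(f"20 flows: {chatty:.1f}s, 2 flows: {bundled:.1f}s")
```

Reducing the flow count (stored procedures, bundling, chaining, as recommended earlier) cuts response time even when the amount of work and data is unchanged.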

3.1.3.2 Components of AS/400 Response Time
Within a system, there are many functions contributing to response time of the system, including CPU time and disk I/O time. There are wait times associated with these servers, including waiting for CPU and waiting for disk I/O, that are associated with queuing. Each transaction uses communications line capacity, CPU time, main storage, and disk accesses, and has to be scheduled for the CPU using a priority classification. The interactive response time experienced by an AS/400 user is the total of many components:

• There is a transmission time delay for the transaction to reach the CPU. (This is significant in situations such as remote workstations.)
• Once the transaction reaches the system, the system′s response time measurement begins:
  − The job may have to wait for an activity level at the system.
  − Once the activity level has been entered, resource utilization begins, which includes:
    - CPU processing time (including queuing).
    - Disk I/O time (including queuing).
  − There may also be periods of inactivity when the transaction is waiting in a variety of states that are reported in the performance tools report and discussed in the AS/400 Performance Management Redbook, GG24-3723-02, including:
    - Ineligible activity time (Excs ACTM)
    - Short waits
    - Short waits - extended
    - Object or record seize or lock conflicts
• There is a transmission delay in the response reaching the user.
• Finally, time is taken by the user workstation to process the information for presentation.

 ──────────────── AS/400 System Response Time ────────────────
 ──────────┬──────────────────────────────────────────────────
   Active  │                      Wait
 ────┬──── │ ──────────┬───────┬────────┬───────────
 CPU │Disk │ Ineligible│ Short │ Short  │ Seize/lock
     │     │           │ wait  │ waitX  │ conflict

The components-of-response-time diagram shows that the CPU is only one of the resources (servers) involved in response time. Disk service time, disk utilization, and the disk QM must also be factored into response time expectations. Additional wait times, such as exceptional wait times, need to be factored in as well. These exceptional wait times (waiting for record or object locks, waiting for communications line data transmission, and so on) can play an


important part in actual performance results and must be included in analyzing performance problems and capacity planning.

3.2 Subsystems
A subsystem on the AS/400 system is an operating environment used to allocate main storage and provide a degree of isolation for jobs with similar processing characteristics (such as batch or interactive) to run in. This minimizes contention for system resources and increases efficiency. The following sections discuss some of the important components of a subsystem definition.

Storage Pool Definitions provide information about:
• Pool identification (within the subsystem)
• Pool size
• Maximum activity level

An Autostart Job runs a one-time initialization or performs a repetitive activity associated with a subsystem. The QSERVER subsystem uses autostart job QPWFSERVER to initiate the file and database servers.

Routing Entries in a subsystem provide a means of selecting the environment and program that is to be run. The environment includes:
• Entry selection criteria
• Program to run
• Memory pool number (within the subsystem)
• Execution class

A Class identifies aspects of the execution environment such as:
• Run priority
• Time slice
• Purge option

A Communications Job, from an AS/400 Work Management stand-point, is a batch job started by a program start request from a remote system. In the case of servers, the start request is initiated by a client or PC application. Communications work entries identify the sources from which the subsystem accepts start requests. A communications entry includes:
• Device
• Mode
• Job description
• Default user
• Maximum active jobs

A program start request using a mode entry of QSERVER is routed to the QSERVER subsystem, while users of other modes, including QCASERVR and QPCSUPP, are routed to the QCMN subsystem. A Prestart Job is a batch job that is started in anticipation of a program start request. The objective of the prestart job is to complete as much start-up activity as possible before the remote request is received.

3.2.1 IBM Supplied Subsystems
Several subsystem monitors are provided under OS/400. IBM licensed program applications may either place jobs into one of these subsystems or supply their own subsystems. This section gives a brief overview of key subsystem monitors. For applications, you must review application-specific documentation to determine their unique subsystem monitor requirements.

IBM-supplied subsystems may use shipped system values (such as QCTLSBSD) and subsystem description parameters to determine the work assigned to a particular subsystem. This section describes the typical assignments. See the Work Management Guide for detailed information.

Many of the IBM-supplied subsystem monitors have routing entries predefined for certain functions. Through manipulation of subsystem monitor description routing entries, class descriptions, and communication entries, the user may route IBM-provided applications to any subsystem and control the storage pool assignment and job priority within a subsystem. See the index entries for additional information on assigning work to subsystem monitors and controlling the storage pools and run priority assignments of user and IBM-supplied applications.

For information on communication entries, routing entries, class descriptions, and autostart job entries of subsystem descriptions supplied as part of OS/400, refer to the Work Management Guide - Version 3, SC41-3306.

Some of the subsystems included here have IBM-supplied job names for IBM-supplied functions, such as SNADS jobs. User profile names and job names are shown when the user profiles for IBM-supplied functions are defined by the function. This information may assist in collecting performance data for all of the jobs related to a specific function.

3.2.2 QBASE Subsystem
This subsystem can be the controlling subsystem on the AS/400 system (system value QCTLSBSD(QBASE) is specified) and is typically used in simple, single application environments. Running non-interactive and interactive applications in QBASE at the same time generally results in page faulting rates that approach the poor range. The system is shipped with QCTLSBSD(QBASE) specified. This redbook assumes a more sophisticated environment than can be supported with almost all work being performed in QBASE. For the remainder of this redbook, assume QCTLSBSD(QCTL) is specified and at least the following subsystems are active and QBASE is not:
• QCTL
• QBATCH
• QINTER

Other IBM subsystems, such as QSPL, QSNADS, and so on, and user defined subsystems may also be active. System value QCTLSBSD specifies whether QBASE, QCTL, or some user-defined subsystem is the controlling subsystem.

System value QSTRUPPGM specifies the startup program that is called soon after the controlling subsystem has started at the end of IPL. The startup program starts the subsystem monitors that both IBM and the customer want to have active after IPL. As shipped from IBM, the startup program starts subsystem monitors QBATCH, QINTER, QSPL, QSNADS, and QCMN. In V3R1, the shipped program also starts QSERVER (used by Client Access/400). The user can use the Retrieve CL Source (RTVCLSRC) command to retrieve the startup program source and determine the IBM subsystems automatically started. Optionally, the user can modify this program, including adding their own subsystem monitors to start.

3.2.3 QCTL, QBATCH, QINTER Subsystems
QCTL is the default controlling subsystem. It is the only subsystem active when the system is operating in a restricted state. The console device jobs default to run in QCTL. In some system environments, QCTL may also do interactive and batch user application work. The Performance Monitor defaults to running in QCTL. System cleanup jobs, such as QSYSSCD, set up in Operational Assistant, run in QCTL.

Most OS/400 commands that perform the Submit Job (SBMJOB) command default to placing the job request on job queue QBATCH, which is assigned to subsystem QBATCH. This means, by default, typical non-interactive jobs run in subsystem QBATCH.

QINTER is set up so that all interactive sessions default to run in subsystem QINTER. This includes local and remote dependent workstation displays (5250 and 3270 displays), 5250 display station pass-through sessions, 3270-based sessions (such as DHCF and SNA Primary Logical Unit (SPLS)), PC Support/400 or Client Access/400 Work Station Function (WSF) sessions, RUMBA/400 sessions, OS/2 Communication Manager 5250 emulation sessions, and ASCII Workstation Controller display devices.

3.2.4 QSPL Subsystem
This subsystem is shipped to control all spooled output work, such as Start Printer Writer (STRPRTWTR) command jobs.

3.2.5 QSNADS Subsystem
This subsystem performs document transmission, change management (SystemView System Manager/400 and SystemView Managed System Services/400) transmission, OS/400 Object Distribution Facility (ODF), and TCP/IP Simple Mail Transfer Protocol (SMTP) work. There are several routing jobs active and a job for each send distribution defined for a remote location.

QSNADS/QDIAnnnnnn jobs These jobs perform Document Interchange Architecture (DIA) functions such as routing local system distributions, routing "independent user" functions, and routing local system host printing functions.

QSNADS/QNFTP This job performs most ODF send and receive functions.

QSNADS/QROUTER In V2R3, this job provides all SNADS routing services for sending and receiving distributions with remote systems. In V3R1, this job continues to provide SNADS routing for change management functions. However, routing functions for normal SNADS distributions (for example, documents and ODF) are moved to the QMSF job or jobs that run in subsystem QSYSWRK.

QSNADS/QZDSTART This job is an autostart job when subsystem QSNADS is started. It starts the QDIA jobs, the QROUTER job, and the jobs for each remote system defined in the SNADS configuration distribution queues.

QSNADS/remote-location-name These jobs are the SNADS send jobs for each distribution queue defined in SNADS configuration distribution queues.

QSRVBAS/QESTP This job is activated as part of standard OS/400 support for receiving PTFs from either IBM or a customer service provider.

QGATE/remote-location-name These jobs are the SNADS send jobs for either SNADS bridge support, such as for the MVS/VM bridge, or change management distribution jobs when Managed System Services/400 or System Manager/400 are started for the local system. If you already had QSNADS/remote-location-jobs active, then you also have a corresponding QGATE/remote-location-job active when System Manager/400 or Managed System Services/400 is active.

QGATE/TCPIPLOC This job is activated when TCP/IP Simple Mail Transfer Protocol (SMTP) is activated for the local system.

3.2.6 QSYSWRK Subsystem
The QSYSWRK subsystem was introduced with V2R3 to be a common subsystem for various system jobs. In V3R1, additional jobs are placed in this subsystem. Because a large number of different jobs can be active within this subsystem, it is important to understand what is currently known about these job types. For a particular customer environment, changes to the default run priority or storage pool assignment may be necessary to improve overall system performance.

Subsystem description QSYSWRK is shipped with only the *BASE storage pool and is not included in the system-supplied startup program QSTRUP. QSYSWRK is started by the SCPF job during IPL unless the system is IPLing to the restricted state. Subsequent topics within this chapter discuss the advantages that separate main storage pools for any subsystem may provide in improving overall system performance.

The following V3R1 facilities cause jobs to run in subsystem QSYSWRK by default. The user profile name and job name (user-profile-name/job-name) are shown when the user profiles cannot be varied by the user.

ManageWare/400 jobs For information on ManageWare/400 jobs, refer to ManageWare/400 Administrator's Guide, SC34-4478.

Directory Shadowing support job (QDOC/QDIRSHDCTL) If defined, this job keeps distribution directories updated (shadowed) across the defined systems. For information on directory shadowing, refer to SNA Distribution Services - Version 3 , SC41-3410.

LAN Server/400 File Server I/O Processor monitor job There is one job active for each File Server I/O Processor varied on. The monitor job has the name of the network server description started for the File Server I/O Processor. For more work management information on the File Server I/O Processor support, see LAN Server/400: A Guide to Using the AS/400 as a File Server , GG24-4378. Performance considerations are also discussed in this redbook.

Operation Control Center/400 (SystemView System Manager/400 and SystemView Managed System Services/400) For a system defined as a service requester, job QSVSM/QECS is started. The following jobs that provide change management support under SystemView Managed System Services/400 support (managed site) or SystemView System Manager/400 support (manager site) may be active. For information on SystemView System Manager/400, refer to SystemView System Manager/400 Use - Version 3, SC41-3321. For information on SystemView Managed System Services/400, refer to SystemView Managed System Services/400 Use - Version 3, SC41-3323.

− QSVMSS/QCQEPMON This job monitors Managed System Services/400 work, including:
  - Completion of CL input streams run as a result of change request activities requested by the central site manager (such as V3R1 SystemView System Manager/400, V1R5 or later NetView Distribution Manager, and so on).
  - Scheduled jobs under change management support.
  - Notifying the central site manager that a scheduled job has completed.

− QSVMSS/QCQRCVDS This job receives change management distributions from subsystem QSNADS jobs.

− QSVMSS/QVARRCV This job accepts any remote command change request activities received from the central site manager.

− QSVMSS/QCQSVSRV This job processes change request activities received from the central site. There can be multiples of these jobs. You may control the number of these jobs concurrently active by changing job queue entry QNMSVQ.

− QSVSM/QCQROMGR This job sends remote commands to managed sites under V3R1 SystemView System Manager/400 if the Start Manager Services (STRMGRSRV) command has been issued on this local system.

− QSVSM/QNSCRMON This job monitors the change management requests and initiates the sending of these requests to the managed system. This job should be active only if the V3R1 SystemView System Manager/400 STRMGRSRV command has been issued.

Mail Server Framework (QMSF/QMSF) There are one or more mail server framework jobs. Typically there is only one job. The Start Mail Server Framework Job (STRMSF) command can be used to start multiple QMSF jobs. Multiple QMSF jobs may improve performance during periods of excessive sending and receiving of mail or SNADS distributions.

TCP/IP support In V3R1, TCP/IP support is included in OS/400. When the Start TCP/IP (STRTCP) command is issued, several jobs are started in QSYSWRK. For previous releases, TCP/IP work was performed in subsystem QTCP. This subsystem no longer exists in V3R1. For more information on the following TCP/IP jobs, refer to TCP/IP Configuration and Reference - Version 3, SC41-3420.

− TCP/IP Main Job (QTCP/QTCPIP)
− TCP/IP File Transfer Protocol (FTP) Server (QTCP/QTFTPxxxxx) There may be more than one active.
− TCP/IP TELNET Server (QTCP/QTGTELNETS) There may be more than one active. Multiple TELNET sessions (typically 20) are managed by a single QTGTELNETS job.
− Simple Mail Transfer Protocol (SMTP) client (QTCP/QTSMTPCLNT)
− Simple Mail Transfer Protocol (SMTP) server (QTCP/QTSMTPSVSR)
− Simple Mail Transfer Protocol (SMTP) bridge client (QTCP/QTSMTPBRCL)
− Simple Mail Transfer Protocol (SMTP) bridge server (QTCP/QTSMTPBRSR)
− Simple Network Management Protocol (SNMP) server (QTCP/QTSNMP)
− Simple Network Management Protocol (SNMP) server (QTCP/QTMSNMPRCV)
− Simple Network Management Protocol (SNMP) server (QTCP/QSNMPSAV)
− Line Printer Daemon (LPD) server (QTCP/QTLPDxxxxx) There may be more than one active.
− APPC over TCP/IP if AnyNet support is in use (QTCP/QAPPCTCP) AnyNet support is part of V3R1 OS/400 and, if configured, supports APPC data over TCP/IP and TCP/IP data over APPC. The network attributes must specify to allow AnyNet support (ALWANYNET(*YES)).

Subsystem QSYSWRK is shipped with several autostart job entries, including QSYSWRKJOB, QFSIOPJOB, and QZMFEJOB. These jobs run at the start of subsystem QSYSWRK and restart the LAN Server/400 jobs, other QSYSWRK processing, and the mail framework jobs previously listed. Once the normal production mode jobs are active, these autostarted jobs end normally.

If you issue ENDSBS QSYSWRK *IMMED, all jobs are abnormally terminated, which causes some system overhead in generating job logs. Reissuing STRSBS QSYSWRK automatically restarts all of the jobs discussed previously except the TCP/IP jobs. You must issue ENDTCP followed by STRTCP to make TCP/IP support operational again.

3.2.7 QLPINSTALL
This subsystem performs Licensed Program (LP) installation functions.

3.2.8 QPGMR
This subsystem is available for application development functions.

3.2.9 Client/Server Subsystems
The operating system is shipped with some specially described subsystems to provide the necessary support for Client Access/400 applications.

3.2.9.1 QCMN Subsystem
Subsystem QCMN supports most communications jobs. Additionally, all server jobs (except the file and database servers) run in this subsystem. QCMN is active when system value QCTLSBSD, specifying the controlling subsystem, is QSYS/QCTL. User-written client/server application serving jobs (for example, using APPC or data queues) run in the QCMN subsystem.

3.2.9.2 QSERVER Subsystem
Subsystem QSERVER is shipped with V3R1 OS/400 and runs the host server jobs for Client Access/400 file serving and database serving functions. There is one autostart job, one file server job for each active client, and one database server job for each active database serving session, as shown in the following list: SNA:

QPGMR/QSERVER This autostart job sets up the file serving and database serving environment on the AS/400 system.

User-id/QPWFSERV File serving support includes storing programs and files as a network drive (virtual disk) for the attached client.

QUSER/QZDAINIT There is one of these database server jobs for each active client session. QZDAINIT is implemented as a prestarted job.

TCP/IP:

User-id/QPWFSERVSO File serving support includes storing programs and files as a network drive (virtual disk) for the attached client.

User-id/QPWFSERVSD File Server daemon.

QUSER/QZDASOINIT There is one of these database server jobs for each active client session. QZDASOINIT is implemented as a prestarted job.

QUSER/QZDASRVSD Database Server Daemon.

Database servers that provide access to the AS/400 database use the following object names:
• Mode = QSERVER
• Job = QZDAINIT (SNA) or QZDASOINIT (TCP/IP)
• User = QUSER

These jobs normally run in subsystem QSERVER, but they may run in QCMN if the QSERVER subsystem is inactive or some error has occurred with the prestart jobs. However, when this occurs, the mode used is QPCSUPP and the user name corresponds to the client user initiating the job. This information is displayed using the WRKCFGSTS command. The remainder of this redbook assumes the database serving job runs in subsystem QSERVER.

If database serving jobs begin to run in QCMN, all functions are still performed correctly. However, it is recommended that these database serving jobs run in subsystem QSERVER for best overall performance. To enable the database server jobs to again run in the QSERVER subsystem, you must either IPL the system or perform the following actions:
• Quiesce all work using subsystem QSERVER.
• Quiesce all jobs using the database serving functions in subsystem QCMN.
• End subsystem QSERVER immediately.
• Vary off the APPC control unit and associated APPC device description for the affected client workstations (those previously running in subsystem QCMN).
• Vary on the APPC control units and devices just varied off.
• Start subsystem QSERVER.
• On the affected client workstations, start a connection to the AS/400 ODBC data source. The database serving jobs should now run in subsystem QSERVER.

File Servers (previously Shared Folder type-2 Server) provide PC users access to byte-stream files stored on the AS/400 system. Access to the AS/400 Integrated File System (IFS) is also supported through the File Servers. Various other subsystems are used to provide the execution environment for other applications. Prior to V3R1, subsystem QXFPCS was used to run shared folder type-2 jobs.

3.2.10 Prestart Jobs
A Prestart Job can reduce the time taken for the AS/400 server to respond to a program start request from a client. Prestarted jobs have the potential to complete program initiation and open database files before a request arrives. These jobs are described within the subsystems, and the entries include specifications that:
• Determine the memory pool.
• Identify the user profile.
• Determine the execution class.

Prestart jobs can be used with any communications type that supports program start requests: APPC, Asynchronous, BSCEL, Intrasystem, Finance, Retail, and SNUF. In V3R1 and later, prestart jobs also support the servers that use sockets communications. Prestart jobs can be used with any HLL that supports the use of ICF or CPI-Communications: RPG/400, COBOL/400, C/400, FORTRAN/400, REXX/400, and CSP/AE.

The ADDPJE command adds a prestart job entry to a specified subsystem that contains a communications entry to be able to process program start requests. The RMVPJE command removes a prestart job entry from a specified subsystem. The CHGPJE command changes characteristics of an existing prestart job entry.

The following table summarizes the prestart job default options shipped with OS/400 for SNA connections.

┌─────────────┬─────────────────────────────┬──────────┐
│ Subsystem   │            QCMN             │ QSERVER  │
├─────────────┼─────────┬─────────┬─────────┼──────────┤
│ Server      │ Network │ Rmt Cmd │ Central │ Database │
│             │ Print   │ Pgm Call│         │          │
├─────────────┼─────────┼─────────┼─────────┼──────────┤
│Library/     │QSYS/    │QIWS/    │QIWS/    │QIWS/     │
│Program      │QNPSERVR │QZRCSRVR │QZSCSRVR │QZDAINIT  │
├─────────────┼─────────┼─────────┼─────────┼──────────┤
│Initial Jobs │    1    │    1    │    1    │    1     │
├─────────────┼─────────┼─────────┼─────────┼──────────┤
│Threshold    │    1    │    1    │    1    │    1     │
├─────────────┼─────────┼─────────┼─────────┼──────────┤
│Addl Jobs    │    3    │    3    │    3    │    3     │
├─────────────┼─────────┼─────────┼─────────┼──────────┤
│Max Jobs     │ *NOMAX  │ *NOMAX  │ *NOMAX  │ *NOMAX   │
├─────────────┼─────────┼─────────┼─────────┼──────────┤
│Max Uses     │   200   │   200   │   200   │    1     │
├─────────────┼─────────┼─────────┼─────────┼──────────┤
│Pool ID      │    1    │    1    │    1    │    1     │
├─────────────┼─────────┼─────────┼─────────┼──────────┤
│Library/Class│QGPL/    │QGPL/    │QGPL/    │QSYS/     │
│             │QCASERVR │QCASERVR │QCASERVR │QPWFSERVER│
└─────────────┴─────────┴─────────┴─────────┴──────────┘
Figure 19. Prestart Jobs Default Options

The following table summarizes the Prestart job default options shipped with OS/400 for TCP/IP connection.

┌─────────────┬─────────────────────────────┬──────────┐
│ Subsystem   │            QCMN             │ QSERVER  │
├─────────────┼─────────┬─────────┬─────────┼──────────┤
│ Server      │ Network │ Rmt Cmd │ Central │ Database │
│             │ Print   │ Pgm Call│         │          │
├─────────────┼─────────┼─────────┼─────────┼──────────┤
│Library/     │QSYS/(*) │QIWS/    │QIWS/    │QIWS/     │
│Program      │QNPSERVS │QZRCSRVS │QZSCSRVS │QZDASOINIT│
├─────────────┼─────────┼─────────┼─────────┼──────────┤
│Initial Jobs │    1    │    1    │    1    │    1     │
├─────────────┼─────────┼─────────┼─────────┼──────────┤
│Threshold    │    1    │    1    │    1    │    1     │
├─────────────┼─────────┼─────────┼─────────┼──────────┤
│Addl Jobs    │    2    │    2    │    2    │    2     │
├─────────────┼─────────┼─────────┼─────────┼──────────┤
│Max Jobs     │ *NOMAX  │ *NOMAX  │ *NOMAX  │ *NOMAX   │
├─────────────┼─────────┼─────────┼─────────┼──────────┤
│Max Uses     │   200   │    1    │   200   │    1     │
├─────────────┼─────────┼─────────┼─────────┼──────────┤
│Pool ID      │    1    │    1    │    1    │    1     │
├─────────────┼─────────┼─────────┼─────────┼──────────┤
│Library/Class│QGPL/    │QGPL/    │QGPL/    │QSYS/     │
│             │QCASERVR │QCASERVR │QCASERVR │QPWFSERVER│
└─────────────┴─────────┴─────────┴─────────┴──────────┘
(*) For AS/400 CISC systems, network print is in the QSYS library. For AS/400 RISC systems, it is in QIWS.
Figure 20. Prestart Jobs Default Options

The following explains some of the terms used in the above tables:
• Library/Program - the program initiated by the prestart job.
• Initial Jobs - the number of prestart jobs that are started when the subsystem is started.
• Threshold - the minimum number of prestart jobs that remain active.
• Additional Jobs - the number of additional jobs that are started when the number of jobs drops below the threshold value.
• Maximum Jobs - the maximum number of jobs that may be active.
• Maximum Uses - the maximum number of times each prestart job is reused before it is ended.
• Pool ID - the memory pool number within the subsystem that the job uses.
• Library/Class - the execution class that the job uses.
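The interplay of Initial Jobs, Threshold, and Additional Jobs can be sketched as a small simulation. This is an illustrative model of the replenishment rules described above, not OS/400 code, and it ignores real-world details such as Maximum Uses and job recycling:

```python
# Illustrative model of prestart-job pool replenishment (not OS/400 code).
# Parameters mirror the table: Initial Jobs, Threshold, Additional Jobs.

class PrestartPool:
    def __init__(self, initial_jobs, threshold, additional_jobs, max_jobs=None):
        self.threshold = threshold
        self.additional = additional_jobs
        self.max_jobs = max_jobs          # None models *NOMAX
        self.waiting = initial_jobs       # prestarted, idle jobs
        self.active = 0                   # jobs serving a client

    def total(self):
        return self.waiting + self.active

    def program_start_request(self):
        """A client request consumes one waiting job; the subsystem starts
        Additional Jobs when the waiting count drops below the threshold."""
        if self.waiting == 0:
            raise RuntimeError("no prestart job available")
        self.waiting -= 1
        self.active += 1
        if self.waiting < self.threshold:
            room = (self.additional if self.max_jobs is None
                    else min(self.additional, self.max_jobs - self.total()))
            self.waiting += max(room, 0)

# Database server defaults from the SNA table: 1 / 1 / 3, Max Jobs *NOMAX.
pool = PrestartPool(initial_jobs=1, threshold=1, additional_jobs=3)
pool.program_start_request()   # waiting drops below threshold -> 3 more start
```

The sketch shows why the defaults respond well to bursts: the first client consumes the single initial job, which immediately triggers three replacements.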

┌───────────────┬──────────┬───────────┐
│ Class         │ QGPL/    │ QSYS/     │
│               │ QCASERVR │ QPWFSERVER│
├───────────────┼──────────┼───────────┤
│ Description   │CA Server │File Server│
│ Run Priority  │    20    │    20     │
│ Time slice(ms)│   500    │   3000    │
│ Purge option  │   *YES   │   *NO     │
└───────────────┴──────────┴───────────┘

The following tables list the programs used by the Client Access/400 servers. More information is available in the OS/400 Server Concepts and Administration Manual, SC41-3740-00.

┌──────────────────────────────────────────────────────────────────┐
│ SERVERS FOR ORIGINAL CLIENT                                      │
├──────────────────┬───────────────────────────────────────────────┤
│ Server           │ Program Name                                  │
├──────────────────┼───────────────────────────────────────────────┤
│ File             │ QIWS/QTFDWNLD                                 │
├──────────────────┼───────────────────────────────────────────────┤
│ Remote SQL       │ QIWS/QRQSRVX (post-V2R2 + V3R1 CA/400)        │
│                  │ QIWS/QRQSRV0 (pre-V2R2 with CMTCTL(*NONE))    │
│                  │ QIWS/QRQSRV1 (pre-V2R2 with CMTCTL(*ALL))     │
├──────────────────┼───────────────────────────────────────────────┤
│ Data Queue       │ QIWS/QHQTRGT                                  │
├──────────────────┼───────────────────────────────────────────────┤
│ Message Function │ QIWS/QMFRCVR   Receiver                       │
│                  │ QIWS/QMFSNDR   Sender                         │
├──────────────────┼───────────────────────────────────────────────┤
│ License Mgmt     │ QIWS/QLZPSERV                                 │
├──────────────────┼───────────────────────────────────────────────┤
│ Virtual Print    │ QIWS/QVPPRINT                                 │
├──────────────────┼───────────────────────────────────────────────┤
│ Shared Folders   │ QSYS/QPWFSERVER  QSERVER autostart job        │
│                  │ QSYS/QPWFSERV    Main server for each user    │
│                  │ QSYS/QPWFSTP0    S/flr type-2 (V3R1)          │
│                  │ QSYS/QCNTEDDM    S/flr type-0,1               │
├──────────────────┼───────────────────────────────────────────────┤
│ Remote Command   │ QSYS/QCNTEDDM                                 │
└──────────────────┴───────────────────────────────────────────────┘

┌──────────────────────────────────────────────────────────────────┐
│ SERVERS FOR OPTIMIZED CLIENTS                                    │
├──────────────────┬───────────────────────────────────────────────┤
│ Server           │ Program Name                                  │
├──────────────────┼───────────────────────────────────────────────┤
│ File             │ QSYS/QPWFSERVER  QSERVER autostart job        │
│                  │ QSYS/QPWFSERV    Main server for each user    │
│                  │ QSYS/QPWFSTP1    OS/2 client                  │
│                  │ QSYS/QPWFSTP2    Win 3.1 client               │
├──────────────────┼───────────────────────────────────────────────┤
│ Database         │ QIWS/QZDAINIT    All DB server requests       │
│                  │ QIWS/QZDANDB     Native DB requests           │
│                  │ QIWS/QZDAROI     Object info/catalog requests │
│                  │ QIWS/QZDASQL     SQL requests                 │
│                  │ QIWS/QZDCMDP     Command Processor            │
├──────────────────┼───────────────────────────────────────────────┤
│ Data Queue       │ QIWS/QHQTRG                                   │
├──────────────────┼───────────────────────────────────────────────┤
│ Network Print    │ QIWS/QNPSERVR                                 │
├──────────────────┼───────────────────────────────────────────────┤
│ Central          │ QIWS/QZSCSRVR                                 │
├──────────────────┼───────────────────────────────────────────────┤
│ Rmt Cmd/Dist Pgm │ QSYS/QZRCSRVR                                 │
├──────────────────┼───────────────────────────────────────────────┤
│ APPC P/word Mgmt │ QSYS/QACSOTP                                  │
└──────────────────┴───────────────────────────────────────────────┘

┌──────────────────────────────────────────────────────────────────┐
│ SERVERS FOR CLIENTS USING SOCKETS COMMUNICATIONS SUPPORT         │
├──────────────────┬───────────────────────────────────────────────┤
│ Server           │ Program Name                                  │
├──────────────────┼───────────────────────────────────────────────┤
│ File             │ QIWS/QPWFSERVSD  File Server daemon           │
│                  │ QIWS/QPWFSERVSO  Main server for each user    │
├──────────────────┼───────────────────────────────────────────────┤
│ Database         │ QIWS/QZDASRVSD   Database Server daemon       │
│                  │ QIWS/QZDASOINIT  All DB server requests       │
├──────────────────┼───────────────────────────────────────────────┤
│ Data Queue       │ QIWS/QHZHQSRVD   Data Queue Server daemon     │
│                  │ QIWS/QHZHQSSRV                                │
├──────────────────┼───────────────────────────────────────────────┤
│ Network Print    │ QSYS/QNPSERVD    Network Printer Server daemon│
│                  │ QSYS/QNPSERVS    (QIWS library for RISC)      │
├──────────────────┼───────────────────────────────────────────────┤
│ Central          │ QIWS/QZSCSRVSD   Central Server daemon        │
│                  │ QIWS/QZSCSRVS                                 │
├──────────────────┼───────────────────────────────────────────────┤
│ Rmt Cmd/Pgm Call │ QIWS/QZRCSRVS    Rmt cmd/Pgm call daemon      │
│                  │ QIWS/QZRCSRVS                                 │
├──────────────────┼───────────────────────────────────────────────┤
│ Signon           │ QIWS/QZSOSIGND   Signon Server daemon         │
│                  │ QIWS/QZSOSIGN                                 │
├──────────────────┼───────────────────────────────────────────────┤
│ Mapper Daemon    │ QIWS/QZSOSMSPD   Server Mapper daemon         │
└──────────────────┴───────────────────────────────────────────────┘

3.3 Memory Management
Memory management is a methodology to optimize the use of installed main memory and improve the efficiency of the system.

Memory Pools - The installed memory on the AS/400 system is partitioned into pools to minimize the contention for memory by jobs with different processing characteristics. For example, batch jobs are normally run in memory pools distinct from where interactive jobs run.

Activity Level - Determines the maximum number of jobs that may be active in the memory pool. An excessive value could result in increased page faulting due to many jobs competing for memory. A low value could result in jobs having to wait for an activity level to be freed.

SETOBJACC - The Set Object Access command allows information to be preloaded into a specified shared memory pool or private memory pool. The information can be programs or database files. This eliminates accessing disks for the preloaded objects.
A good knowledge of the applications and database is a prerequisite for effective use of this facility.

Expert Cache - This is a selectable option under OS/400 that enables the system's single-level storage support to use main memory as a cache. Expert cache is designed to reduce the number of physical disk I/Os but does not require a detailed understanding of the applications or database to be implemented. The operating system determines which objects (or portions of objects) are to remain in the shared storage pool where Expert Cache is enabled.

Please refer to the AS/400 Performance Management V3R1 - April 1995 Redbook, GG24-3723-02, for more information on these memory management functions. You can also refer to Chapter 7, “Client/Server Performance Tuning” on page 235 for examples of using these functions.
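Conceptually, expert cache works like any memory cache of disk data: recently referenced pages stay in main storage, so a repeat reference avoids a physical disk I/O. The following generic least-recently-used sketch illustrates that idea only; it is an assumption for illustration, not the actual OS/400 expert cache algorithm:

```python
from collections import OrderedDict

# Generic LRU page cache illustrating why caching reduces physical disk
# I/Os. This is NOT the OS/400 expert cache algorithm, which decides
# internally which objects (or portions) stay in the shared pool.

class PageCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()   # page_id -> cached marker, LRU first
        self.disk_reads = 0

    def read(self, page_id):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)   # cache hit: no disk I/O
            return
        self.disk_reads += 1                  # cache miss: physical read
        self.pages[page_id] = True
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)    # evict least recently used

# Six references to three pages; without a cache this is six disk reads.
cache = PageCache(capacity=2)
for page in ["A", "B", "A", "A", "C", "B"]:
    cache.read(page)   # only 4 of the 6 references go to disk
```

The larger the pool (capacity) relative to the working set, the more references are absorbed in memory, which is the effect expert cache aims for without manual tuning.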

3.4 Job Execution
The following elements control the selection of active jobs for processing by the AS/400 system.

Task dispatching and priority
− Single task dispatch queue: Regardless of the number of CPUs in the AS/400 system, there is only one task dispatcher and one task dispatching queue (TDQ). All system and user jobs are in this queue as Task Dispatch Entries (TDE), ordered basically by job priority.
− Job priority: Mainly determines the ordering of jobs in the TDQ, although other considerations affect the position of jobs of equal priority. Jobs with a lower numeric value have a higher priority in the TDQ and are processed ahead of jobs with a higher numeric value (lower priority), such as batch jobs.
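The priority ordering just described can be sketched with a simple priority queue. This is an illustrative model only, not OS/400 code; real dispatching also weighs factors beyond priority, and ties here break by arrival order:

```python
import heapq

# Illustrative sketch of a single task dispatching queue (TDQ) ordered by
# job priority: a lower numeric value means a higher priority. Not OS/400
# code; equal priorities dispatch here in arrival order.

def dispatch_order(jobs):
    """jobs: list of (priority, job_name); returns names in dispatch order."""
    heap = [(prio, seq, name) for seq, (prio, name) in enumerate(jobs)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

# Interactive jobs (priority 20) dispatch ahead of a batch job (priority 50)
# even though the batch job arrived first.
order = dispatch_order([(50, "BATCH1"), (20, "INTER1"), (20, "INTER2")])
```

This is why lowering a batch job's priority value in its class can starve interactive work: the dispatcher always prefers the numerically lower value.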

Activity levels: Jobs must occupy an activity level before becoming eligible to be processed by the CPU or CPUs. An activity level controls the maximum number of jobs that may run concurrently and is specified at the system, subsystem, or memory pool level.

Time slice: Specified in the job class; determines the number of CPU seconds the job is allowed to use to complete a task.

Job states: A currently executing job is in one of the following states:
− Active: actively processing.
− Waiting: waiting on an event to occur.
− Ineligible: not entitled to use the processor.

Job transitions: As the job runs in the system, it moves between these states, and the following transitions are displayed by the WRKSYSSTS command.

WAIT ──(event ended, activity level available)──> ACTIVE
WAIT ──(event ended, activity level not available)──> INELIGIBLE
ACTIVE ──(task completed)──> WAIT
ACTIVE ──(time slice exceeded)──> INELIGIBLE
INELIGIBLE ──(activity level available)──> ACTIVE
Figure 21. Job Transition States as Shown in WRKSYSSTS

− Active-wait: Once a job has been using the CPU and the required task is completed, the job enters a wait state until an event (such as the user pressing the Enter or a function key in an interactive job) occurs.
− Wait-ineligible: If the event the job was waiting on has completed but an activity level is not available, the job enters an ineligible state.
− Active-ineligible: If the job does not complete in the assigned time slice, it becomes ineligible.
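The transitions just described can be expressed as a tiny state machine. This is an illustrative model of the WRKSYSSTS transition diagram, not system code:

```python
# Illustrative model of the ACTIVE / WAIT / INELIGIBLE job state
# transitions described above (not OS/400 code).

TRANSITIONS = {
    ("ACTIVE", "task completed"):         "WAIT",
    ("ACTIVE", "time slice exceeded"):    "INELIGIBLE",
    ("WAIT", "event, level available"):   "ACTIVE",
    ("WAIT", "event, level unavailable"): "INELIGIBLE",
    ("INELIGIBLE", "level available"):    "ACTIVE",
}

def next_state(state, event):
    """Return the job's next state for a given (state, event) pair."""
    return TRANSITIONS[(state, event)]

# An interactive job: finishes a task, waits for Enter, then finds no
# activity level free, and finally regains one.
s = "ACTIVE"
s = next_state(s, "task completed")            # -> WAIT
s = next_state(s, "event, level unavailable")  # -> INELIGIBLE
s = next_state(s, "level available")           # -> ACTIVE
```

Walking a job through the table this way makes it clear that INELIGIBLE is always left via an activity level becoming available, which is why raising a pool's activity level can reduce ineligible time.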

3.4.1 Identifying Database Server Jobs
Database server jobs are initiated as prestarted jobs from a job initiated during IPL. All of these jobs have the user name QUSER. When a client/server application initiates a conversation with the AS/400 system, a binding process attaches the client application to the QZDAINIT job on the AS/400 system. Displaying the jobs in the QSERVER subsystem may show many entries with the same job name (QZDAINIT) and user name (QUSER).

                         Work with Subsystem Jobs              RCASM01
                                                     07/27/95  14:20:15
 Subsystem . . . . . . . . . . :   QSERVER

 Type options, press Enter.
   2=Change   3=Hold   4=End   5=Work with   6=Release
   7=Display messages   8=Work with spooled files   13=Disconnect

 Opt  Job         User        Type   -----Status-----  Function
      QPWFSERV    CS16        BATCH  ACTIVE
      QPWFSERV    ITSCID30    BATCH  ACTIVE
      QSERVER     QPGMR       AUTO   ACTIVE
      QZDAINIT    QUSER       PJ     ACTIVE

Note: The server application for the AS/400 Database Server is a prestart job (PJ). It has a default user of QUSER. The AS/400 server job associated with a particular PC client application is determined only after the connection between them has been established. At that time, the QZDAINIT server job appears on the display of the WRKCFGSTS command, associated with the controller name for the PC.

Note! For clients using sockets communications support, the database server job name is QZDASOINIT.

Chapter 3. Work Management

85

This soft copy for use by IBM employees only.

The following steps outline how to identify the server job on the AS/400 system that corresponds to a PC client application:

1. Identify the APPC controller name of the client, which is:
   •  The PC name against RTLN APPN in the CONFIG.PCS file for DOS.
   •  On the status bar of the Rumba window.
   •  The PC name against LOCALLUNAME in the NSD.INI file in the WINDOWS directory.
   •  Available using the WRKOBJLCK command for the user profile, but this may be tedious if the user profile is used for a concurrent signon at more than one PC. See the following figure.
                          Work with Object Locks
                                                      System:   SYSASM01
 Object:   CS15            Library:   QSYS            Type:   *USRPRF

 Type options, press Enter.
   4=End job   5=Work with job   8=Work with job locks

 Opt  Job       User      Lock     Status
      VP033     CS15      *SHRRD   HELD
      VP033     CS15      *SHRRD   HELD
      VP033S1   CS15      *SHRRD   HELD

2. Use the AS/400 command WRKCFGSTS *CTL appc-ctl-name. See the following figure.
3. Note the job number for the row with QSERVER under "Description" (it is actually the mode name).
   •  Job name - QZDAINIT
   •  User - QUSER
   •  Job number - unique AS/400 job number
                      Work with Configuration Status
 Opt  Description         Status           ----------Job-----------
      (Lin/Ctl/Dev/Mod)                    (Name)     (User)   (Number)
      ITSCTRN             ACTIVE
      VP4                 ACTIVE
      VP4                 ACTIVE/TARGET    VP4        CS15     135423
      QPCSUPP             ACTIVE
      QSERVER             ACTIVE/TARGET    QZDAINIT   QUSER    135597

Note: Prior to the client application connecting to the AS/400 system, only the appc-device-job is visible.

3.4.2 Identifying Database Server Jobs Using WRKOBJLCK
You can also use WRKOBJLCK to identify and work with your database server jobs. In this case, you need to know the user ID that was used to start Client Access/400 support. 1. Enter the following command:

WRKOBJLCK TEAMxx *USRPRF
2. You see a list of jobs holding locks; select the QZDAINIT job. There may be several QZDAINIT jobs; you will have to try each one until you find the currently active job.

3. Enter option 5 for the QZDAINIT job.
4. Enter option 10 from the "Work with Job" display to view the job log. The job log shows you information about access paths, ODPs, and database operations. This may be useful for debugging performance problems.

3.4.3 Identifying Server Jobs Using Sockets Communications Support
The WRKACTJOB command shows all active jobs. All server jobs are displayed, as well as the server daemons and the server mapper daemon. The following types of jobs are marked in the figure:

•  1 - Server mapper daemon
•  2 - Server daemons
•  3 - Prestarted server jobs

                           Work with Active Jobs                   02/15/96
 CPU %:   6.2     Elapsed time:   02:21:45     Active jobs:   195

 Type options, press Enter.
   2=Change   3=Hold   4=End   5=Work with   6=Release
   7=Display messages   8=Work with spooled files   13=Disconnect ...

      Subsystem/Job  User    Type  CPU %  Function              Status
      QSERVER        QSYS    SBS     .0                         DEQW
  2   QPWFSERVSD     QPGMR   BCH     .0   (FS Daemon)           SELW
      QSERVER        QPGMR   ASJ     .0   (DB APPC PJ)          EVTW
  3   QZDASOINIT     QUSER   PJ      .3   (DB TCP job)          DEQW
      QZDAINIT       QUSER   PJ      .0   (DB APPC job)         DEQW
      QZDAINIT       QUSER   PJ      .5   (DB APPC job)         DEQW
  2   QZDASRVSD      QUSER   BCH     .0   (DB svr Daemon)       SELW
      QSYSWRK        QSYS    SBS     .0                         DEQW
  1   QZSOSMAPD      QUSER   BCH     .0   (Svr Mapper Daemon)   SELW
      QZHQSRVD       QUSER   BCH     .0                         SELW
      QZRCSRVSD      QUSER   BCH     .0                         SELW
      QZSCSRVS       QUSER   PJ      .1                         TIMW

The following types of jobs are shown:

•  ASJ - Autostart job for subsystem
•  SBS - Subsystem monitor job
•  BCH - Server daemon jobs
•  PJ - Prestarted job

Chapter 4. Client/Server Application Serving

4.1 Introduction
Application Serving is characterized as a client/server application where there is distributed logic. The application is designed such that two or more processors are used to complete the application processing. Although the processors used to run a distributed logic application can be a combination of many different platforms, in this chapter, a personal computer is used, combined with an AS/400 system. The purpose of this chapter is to introduce you to the concepts and programming interfaces available to build client/server applications that use the AS/400 system as an application server. There are many application programming interfaces available for the purpose of building distributed logic applications. See Figure 22 for a list of some of the APIs. This chapter concentrates on three of the interfaces:
•  Advanced program-to-program communications (APPC)
•  Data queues
•  Distributed program call (DPC)

4.1.1 Application Serving APIs

Figure 22. CA/400 Application serving APIs
© Copyright IBM Corp. 1996


The Client Access/400 clients each support a number of Application Serving APIs. Figure 22 lists some of the more popular functions supported. An "X" indicates that the function is supported for that client.

4.2 Program-to-Program Communications (APPC)

Figure 23. Program-to-Program Communications.

4.2.1 APPC Programming Options
Advanced Program-to-Program Communications (APPC) provides an architected programming interface for writing program-to-program communication applications. APPC provides a standard interface for starting communications with another program, exchanging data, and ending communications. Both the personal computer and the AS/400 system provide programming support for writing APPC programs. On the personal computer platform, a number of options are available to write APPC programs that interface with AS/400 APPC programs:

•  Client Access Router API
   −  Base DOS
   −  Extended DOS
   −  Windows 3.1 client
•  Network Services/DOS
   −  DOS programs
•  Network Services/Windows



   −  Windows 3.1 programs
•  OS/2
   −  CM/2
   −  CM/400

Both platform-unique APIs and multi-platform APIs are provided. An example of a client-unique API is the Client Access router API, which provides a set of programming APIs that can only be used for the Client Access DOS or Windows environment. An example of a multi-platform API is the Common Programming Interface-Communications (CPI-C) API which provides a set of programming APIs that are used across many platforms. The CPI-C interface is gaining in popularity because it allows programming skills to be used on multiple platforms. The AS/400 system provides two ways to write APPC applications. They are CPI-C and Intersystem Communication Function (ICF) files. Although ICF programming is the most common way to write AS/400 communication programs, CPI-C is gaining in popularity because of its portability.

4.2.2 APPC Conversations
In order to write an APPC program, the programmer must become familiar with the APPC verb set. The APPC verb set defines a set of interfaces used to build communication programs. Examples of APPC verbs are ALLOCATE, SEND, and RECEIVE AND WAIT. Every APPC conversation has three basic parts:

•  Initializing a conversation with a partner program:
   −  ALLOCATE
      -  Remote system name
      -  Partner transaction program
•  Exchanging data:
   −  SEND DATA
   −  RECEIVE AND WAIT
   −  RECEIVE IMMEDIATE
•  Terminating a conversation:
   −  DEALLOCATE - Releases resources used for a conversation

APPC gives the programmer complete control over the communication session. However, it requires that the program be aware at all times of the state of the conversation. It results in the client program and the server program being very tightly coupled together. APPC conversations may be very complex and are the best choice for application development if there is a need for excellent response and for a complex data flow. For example, using APPC, the client can interrupt the server program and send a second request before the first request has completed. This type of programming does, however, require a higher level of skill on the part of the programmer than some of the other higher level APIs.


4.2.3 Personal Computer Programming
Client Access provides the router API for developing APPC applications for the DOS and Windows 3.1 clients. In the Windows 3.1 environment, CPI-C is also provided.

4.2.3.1 Router API
An application gains access to the router application programming interfaces by issuing commands or verbs to the router. There are two types of verbs: service verbs, which retrieve information about the communication environment, and conversation verbs, which control the communications conversation and send and receive data from a partner program.

4.2.3.2 Service Verbs
Service verbs are provided by the router API to allow the programmer to gain access to system information. For example, you can retrieve a list of the AS/400 systems to which the router currently has a connection or retrieve the name of the default system. The GetCapabilities interface returns the optimum block size to use for router API APPC conversations. Choosing the proper block size can help achieve maximum performance. The following functions are supported by the router API service verbs:

•  Retrieve attributes of the specified conversation:
   −  EHNAPPC_GetAttributes
•  Retrieve the default system name:
   −  EHNAPPC_GetDefaultSystem
•  Allow access to the list of active location names:
   −  EHNAPPC_QuerySystems
•  Allow access to the list of router capabilities:
   −  EHNAPPC_GetCapabilities
•  Determine whether the router is loaded:
   −  EHNAPPC_IsRouterLoaded
•  Retrieve the state of the specified conversation:
   −  EHNAPPC_QueryConvState
•  Retrieve the user ID used to connect to the specified system:
   −  EHNAPPC_QueryUserid

4.2.3.3 Conversation Verbs
The conversation verbs are used to control APPC conversations and to pass information between partner programs. Each APPC implementation supports a standard set of verbs. For example, the Allocate verb is used to initiate conversations with partner programs. All APPC implementations support the Allocate verb. The following list contains the APPC verbs supported by the router API.
•  Allocate
•  Deallocate
•  Receive and Wait
•  Send Data


•  Prepare to Receive
•  Receive Immediate
•  Confirm
•  Confirmed
•  Flush
•  Get Attributes
•  Request to Send
•  Send Error

4.2.4 Personal Computer Programming Examples
The following sections show code examples that demonstrate using APPC verbs. Complete example programs are provided on the PC media included with this redbook. See Appendix A, “Example Programs” on page 393 for more information on the example programs. Refer to the Client Access/400 for Windows 3.1 API and Technical Reference, SC41-3531, for complete descriptions of the available APIs.

4.2.5 Visual Basic Example Code
The following examples demonstrate executing APPC verbs using Visual Basic.

4.2.5.1 Allocate
In this example, a conversation is started with program APPCIXRPG in library TEAMxx. The communication buffer size is hard-coded as 1929 bytes. This is the proper size when using 2K frames on a token-ring connection (1920-byte RU + 9-byte SNA header). However, to make the application more flexible and able to adjust to different media and speeds, you could use the GetCapabilities verb to have the optimum buffer size returned. If the Allocate completes successfully, a Conversation ID (ConvId) is returned. The Conversation ID is used as input on all subsequent APPC verbs; this allows multiple conversations to be conducted concurrently. SysName is not supplied, which means that the default system is used; the default system is the first system to which we connect. We indicate that we are using a mapped conversation (EHNAPPC_MAPPED). Although the router API supports only basic conversations, you can allow mapped conversations to flow. On the PC side, you have to set the GDSID field to X'12FF' to conduct a mapped conversation flow; you are also responsible for setting the length field. See the Send example for how this is done. Using a mapped conversation flow allows you to simplify the AS/400 side and write your programs using mapped conversations. This means that the AS/400 application program does not have to set the length or GDSID fields; OS/400 handles them.

ProgramName = "APPCIXRPG.TEAMxx"   ' "TEAMxx/APPCIXRPG" is also allowed
SysName = ""

Function Allocate () As Integer
' This function allocates a conversation with AS/400 program
' APPCIXRPG in library TEAMxx
' The conversation has the following characteristics:
'   Mapped conversation
'   1929 byte router buffer
'   Sync Level - none
'   No pip data
' If successful, the conversation ID is returned

    PipData$ = Chr$(0)  ' This routine sends no pip data
    rc% = EHNAPPC_Allocate(hWnd, 1929, EHNAPPC_MAPPED, EHNAPPC_SYNCLEVELNONE, SysName, ProgramName, 0, "", ConvId)
    If rc% <> 0 Then
        MsgBox ("Allocate failed with return code " + Str$(rc%))
    End If
    Allocate = rc%
End Function

4.2.5.2 Send
Before the data is used by the AS/400 program, it must be converted to a format understood by the AS/400 system. This includes character data (EBCDIC) and other formats (packed decimal and zoned decimal). The conversion is done in either the client program or the server program. It is usually best to do it in the client because the client has a dedicated processor available. Client Access/400 provides a comprehensive set of conversion and transformation routines, documented in the API and Technical Reference, SC41-3531. See Appendix A, “Example Programs” on page 393 for examples of using these routines. These routines all start with the prefix EHNDT.

Function Send (Sdata As String) As Integer
' This function sends the specified data to the AS/400
' It does not wait for a reply
' The data sent will be preceded by the proper GDS header (llID)

    Dim GDSHdr As String * 4   ' GDS Header
    Dim FData As String        ' Formatted Data (header + data)
    Dim Slength As Integer     ' Work field
    Dim work As Integer        ' Work field

    Slength = Len(Sdata) + 4   ' Allow for header bytes
    work = Slength \ 256       ' Get number of multiples of 256
    Dlen$ = Chr$(work) + Chr$(Slength - (work * 256))  ' fmt Dlen
    GDSHdr = Dlen$ + Chr$(&H12) + Chr$(&HFF)
    ' Format of GDS header is llID where ID=x'12FF'
    FData = GDSHdr + Sdata     ' Put together formatted data
    rc% = EHNAPPC_SendData(hWnd, ConvId, Len(FData), FData, rqs%)
    If rc% <> 0 Then
        MsgBox ("Send failed with Return code " + Str$(rc%))
    End If
    Send = rc%                 ' Return return code
End Function

4.2.5.3 ReceiveWait
When receiving information from the partner program, either ReceiveAndWait or ReceiveImmediate can be used. ReceiveAndWait blocks until a response is received, tying up the client while it waits. ReceiveImmediate returns control immediately, and the program must poll to see whether the response has arrived.

Function RecvW (Rlen As Integer, buff As String) As Integer
' This function receives data from the Router for the length specified
' and returns it in the string specified.

    IntBuff$ = Space$(Rlen)  ' Set up an intermediate buffer to receive data
    rc% = EHNAPPC_ReceiveAndWait(hWnd, ConvId, EHNAPPC_BUFFER, Rlen, IntBuff$, WhatRec%, Rts%, ActLen%)
    If rc% <> 0 Then
        MsgBox ("Receive failed with return code " + Str$(rc%))
    Else
        buff = Left$(IntBuff$, ActLen%)  ' Move received data to output
        Rlen = ActLen%                   ' Return actual length
    End If
    RecvW = rc%                          ' Return return code
End Function

4.2.5.4 Deallocate
The Deallocate verb is used to end the conversation and inform the partner program that the conversation is ended. In 4.2.7, “AS/400 Processing” on page 97, you see the result of a Deallocate in the partner program.

Sub Deallocate ()
' This subroutine deallocates the active conversation
' specified in the ConvID global variable

    rc% = EHNAPPC_Deallocate(hWnd, ConvId, EHNAPPC_DEALLOCATESYNCLEVEL)
End Sub

4.2.6 C++ Example Code
The following examples demonstrate executing APPC verbs using Visual C++. Complete program examples are also available. Refer to Appendix A, “Example Programs” on page 393 for more information.

4.2.6.1 Allocate
strcpy(tpName, "APPCIXRPG.");
strcat(tpName, Lib_Name);
pipLen = 0;                                  // PIP data length
ret = EHNAPPC_Allocate(m_hWnd,
                       commBufferSize,       // Communication buffer size
                       EHNAPPC_MAPPED,       // Mapped conversation
                       EHNAPPC_SYNCLEVELNONE,// SYNC level - NONE
                       locationName,         // PC name
                       tpName,               // TP name
                       pipLen,               // PIP data length
                       pipData,              // PIP data
                       &conv_id);            // conversation ID

4.2.6.2 Send
BOOL CSpeedView::Send(char * szData)
{
    // This function sends the specified data to the AS/400
    // It does not wait for a reply
    // The data sent will be preceded by the proper GDS header (llID)
    unsigned char GDSHdr[4];    // GDS Header
    char * FData;               // Formatted Data (header + data)
    int iLength;                // Work field
    unsigned char rqs;

    iLength = strlen(szData) + 4;        // Allow for header bytes
    GDSHdr[0] = iLength / 0x100;         // length of the data stream
    GDSHdr[1] = iLength % 0x100;
    GDSHdr[2] = 0x12;                    // mapped conversation
    GDSHdr[3] = 0xff;
    FData = (char *)malloc(iLength);
    memcpy(FData, GDSHdr, 4);            // Put together formatted data
    memcpy(FData + 4, szData, iLength - 4);
    ret = EHNAPPC_SendData(m_hWnd, conv_id, iLength, FData, &rqs);
    free(FData);                         // Release the formatted buffer
    if (ret != EHNAPPC_OK)
    {
        char msg[100];
        sprintf(msg, "Send failed. RC = %d", ret);
        AfxMessageBox(msg, MB_ICONINFORMATION);  // Put out blue i.
        return 1;
    }
    else
        return 0;
}

4.2.6.3 ReceiveWait
BOOL CSpeedView::RecvW(int &iLen, char * buff)
{
    // This function receives data from the Router for the length specified
    // and returns it in the string specified.
    // iLen is passed by reference so the actual length reaches the caller.
    char * IntBuff;
    unsigned char rts;
    unsigned char WhatRec;
    unsigned short ActLen;

    IntBuff = (char *)malloc(iLen);
    ret = EHNAPPC_ReceiveAndWait(m_hWnd, conv_id, EHNAPPC_BUFFER, iLen,
                                 IntBuff, &WhatRec, &rts, &ActLen);
    if (ret != EHNAPPC_OK)
    {
        char msg[100];
        sprintf(msg, "Receive failed. RC = %d", ret);
        AfxMessageBox(msg, MB_ICONINFORMATION);  // Put out blue i.
        free(IntBuff);
        return 1;
    }
    else
    {
        memcpy(buff, IntBuff, ActLen);  // Move received data to output string
        iLen = ActLen;                  // Return actual length
        free(IntBuff);
        return 0;
    }
}

4.2.6.4 Deallocate
ret = EHNAPPC_Deallocate(m_hWnd, conv_id, EHNAPPC_DEALLOCATESYNCLEVEL);


4.2.7 AS/400 Processing
The AS/400 system offers two ways to write APPC programs: ICF files and the Common Programming Interface-Communications (CPI-C) APIs. ICF is probably the most common way to write APPC programs on the AS/400 system, but CPI-C is more flexible. ICF programming is only available on the AS/400 system, while CPI-C is used on many platforms: CPI-C is supported on the personal computer platform, the AS/400 platform, and the S/390 platform.

Figure 24. ICF Files

4.2.7.1 ICF Files
Intersystem Communications Function files are one method that programs on the AS/400 system can use to communicate with a client that is using APPC; CPI-C can also be used. ICF files are unique to the AS/400 system, so any applications developed or programming skills acquired using them are not transportable to other platforms. ICF files are device files that allow a program on one system to communicate with a program on another system. The other system can be any platform that supports APPC. Application programs write data to and receive data from the ICF file. An ICF file is used to identify the device to be used, describe record formats, and define keywords. The process required to use an ICF file is:

1. Use DDS to describe the ICF file.
2. Compile the ICF file to create an object.
3. Specify a program device to be used with the ICF file.
   •  The application program acquires the device.
   •  RMTLOCNAME(*REQUESTER) - the program acquires the requesting device.
4. Write an application program to read from and write to the ICF file.


4.2.8 Example DDS for the ICF File
     A                                      INDARA
     A          R ORDERIN                   RCVDETACH(82)
     A            WIDDIDCID      10A
     A            OLINES          3A
     A            ORDINFO       195A
     A          R ORDEROUT
     A            INFOBACK       61A
     A            INAME         360A
     A            STQTY          45A
     A            BORG           15A
     A            IPRICE         75A
     A            OLAMT         105A

The source for an ICF file is used to define keywords and to describe record formats. In the preceding example, the keyword RCVDETACH is used to control processing when a detach is received from the partner program. In this case, indicator 82 is turned on. Two record formats are also defined: ORDERIN which is used for input, and ORDEROUT which is used for output. The application program reads and writes to these record formats. Once the ICF file is created, a device entry must be added. The application program acquires the device entry in the ICF file. The Remote Location name of *REQUESTER is used on the device entry to allow the application program to acquire the device through which the program start request was received.

4.2.9 ICF Creation
CRTICFF FILE(CSDB/APPCFIL) SRCFILE(CSDB/QDDSSRC)
ADDICFDEVE FILE(CSDB/APPCFIL) PGMDEV(ICF01) RMTLOCNAME(*REQUESTER) CMNTYPE(*APPC)

4.2.10 Example RPG Code


          Acquire Session
                │
                ▼
      ┌──► Read PC Input
      │         │
      │         ▼
      │   Receive Detach(82)? ──YES──► Deallocate/End
      │         │ NO
      │         ▼
      │   Call AS/400 Program
      │         │
      │         ▼
      └── Format/Send Output
Figure 25. AS/400 Flow

The RPG example program uses an ICF file to communicate with the personal computer program. When the program is started on the AS/400 system, it first opens the ICF file and acquires the ICF device ICF01. ICF01 was created with a RMTLOCNAME of *REQUESTER. This allows the program to acquire the device through which the program start request was received. Once the conversation is established, the ICF file is used to exchange data between the client program and the AS/400 program. The AS/400 program issues Reads and Writes to interface with the client program. The record formats used for the Reads and Writes are described in the DDS source of the ICF file. The program reads the input from the client program, passes the input information in a call to program ″NEWORD″, and passes the returned values from ″NEWORD″ back to the client in the write operation. The program is conditioned to loop until Indicator 82 is set on. Indicator 82 is set on when the Deallocate is received from the client program.

H* AS/400 APPC TARGET PROGRAM APPCIXRPG WITH MAPPED CONVERSATION
H* AND ICF SUPPORT.
H*
FAPPCFIL   CF   E             WORKSTN USROPN
F                                     MAXDEV(*FILE)
F                                     INFDS(FEEDBK)
F***                                  INFSR(EXCPTH)
DADDLIBLE         C                   CONST('ADDLIBLE CSDB')
DFEEDBK           DS
D FMTNM                  38     45
D MAJMIN                401    404
D MAJCOD                401    402
D MINCOD                403    404
C*---------------------------------------------------------------
C* START OF PROGRAM
C                   CALL 'QCMDEXC'                        99
C                   PARM ADDLIBLE      COMMAND   64
C                   PARM 64            PARMLEN   15 5
C                   OPEN APPCFIL
 *
C     'ICF01'       ACQ  APPCFIL
 *
C     *INLR         DOUEQ'1'
 *
C                   READ ORDERIN                          LR88
 * Check if client has detached
C     *IN82         IFEQ '1'
C                   SETON                                 LR
C                   RETURN
C                   END
 *
C                   MOVE OLINES        OLINESDEC  3 0
C                   CALL 'NEWORD'
C                   PARM               WIDDIDCID
C                   PARM               OLINESDEC
C                   PARM               ORDINFO
C                   PARM               INFOBACK
C                   PARM               INAME
C                   PARM               STQTY
C                   PARM               BORG
C                   PARM               IPRICE
C                   PARM               OLAMT
C                   WRITE ORDEROUT
 *
C                   END
C* END OF JOB
C     'ICF01'       REL  APPCFIL
C                   CLOSE *ALL
C                   SETON                                 LR
C                   RETURN

4.3 Program-to-Program Communications (Sockets)


Figure 26. Sockets Concepts.

A socket provides a connection between two programs to allow data to be interchanged over a TCP/IP based network. Sockets were first introduced in Berkeley UNIX in 1982. The AS/400 system implementation is the Berkeley 4.3BSD definition. The Windows Socket (WinSock) standard is based very closely on the Berkeley model. Windows sockets programs and AS/400 sockets programs can interchange data with no trouble. A socket needs three primary pieces of information to define it:
•  The interface to which it is bound, specified by an IP address
•  The port number through which it will interchange data
•  The type of socket: stream or datagram

Sockets can use either connection-oriented or connectionless protocols. TCP (Transmission Control Protocol) provides a reliable connection-oriented protocol, where two programs must establish a logical connection with one another before communications can take place; this is also referred to as stream sockets. A connectionless service, sometimes called a datagram service, involves programs sending messages called datagrams. These are sent independently with no guarantee of delivery. In TCP/IP, UDP provides this service.


Figure 27. Sockets Program-to-Program Communications.

The above figure shows a typical flow of events on a connection oriented protocol. The server program will usually be started first so that it can wait for incoming calls.

4.3.1 Sockets Flow of Events

socket() Creates the socket; the address family, type, and protocol are input and a socket descriptor is returned. The server specifies a particular port number, one that has not already been assigned; the port number is used by the client to connect. If this is a service to multiple clients, it may become a "well-known" port number.

bind() Ties the socket to a local address (that is, an IP address)

listen() Invites incoming requests from the client

accept() Puts the server into a wait state until a client comes along to make a connection. When a client does connect, accept completes and returns a new socket descriptor which will be used to communicate with that particular client. Meanwhile the client has created a socket.

connect() Used by the client to specify the address of the server. The local port will usually be assigned automatically, but the remote port must be specified. After the server has accepted the connection, the two programs can interchange data over sockets. Writing and reading data from the socket descriptors is like writing and reading stream files.

close() Used to end the connection with a socket. The client may decide to end at that point. The server will typically close the client socket, but return to wait for another connection by re-issuing accept() against the original socket descriptor.

4.3.2 Sockets Performance
Socket programming provides performance comparable to APPC programming. Like APPC, sockets gives the programmer complete control over the communications session; however, it requires that the program be aware at all times of the state of the conversation, which results in the client program and the server program being very tightly coupled. Sockets has no verbs comparable to the APPC Allocate and Deallocate verbs, so the application developer must design into the application a way to start and end the server program. With the AS/400 system, this can be done using the remote command interface. Sockets conversations may be very complex and are the best choice for application development when there is a need for excellent response time and a complex data flow in a TCP/IP environment. This type of programming does, however, require a higher level of skill on the part of the programmer than some of the other higher-level APIs.

4.4 Data Queues Interface
A data queue is an AS/400 object that is used to pass data between application programs. Data queues can be used by an AS/400 program to communicate with another AS/400 program. With the remote data queue support provided by Client Access/400, an AS/400 program can communicate with a personal computer program through a data queue. Data queues can be used to develop distributed logic applications. The biggest advantage gained by using data queues is that the application programmer does not have to deal with the complexities of writing communications programs. The actual passing of the data between the systems is handled externally to the application program by the data queue support. Data queues also allow for the development of time-independent applications. Since the interface between the programs is always through the data queue, the programs can run independent of each other. The Client Access/400 data queue support is only available for the AS/400 system. If queuing support is required across multiple platforms, IBM Message Queue Interface products are also available. A number of operations are performed against data queue objects including:
•  Create
•  Delete
•  Send
•  Receive
•  Clear
•  Display
•  Retrieve Attributes


  Application 1              Data Q's            Application 2
 ┌──────────────┐                               ┌────────────────┐
 │ CRTDTAQ      │                               │                │
 │ CALL SendDTAQ│ ────────► ┌─────┐ ────────►   │ Call RcvDTAQ   │
 │ CALL SendDTAQ│           └─────┘             │                │
 │              │                               │                │
 │ CALL RcvDTAQ │ ◄──────── ┌─────┐ ◄────────   │ Call SendDTAQ  │
 │ DLTDTAQ      │           └─────┘             │                │
 └──────────────┘                               └────────────────┘

Figure 28. Data Queues

Manipulation of data queues is done using simple AS/400 CL commands or program calls. Access to data queues is available to all AS/400 applications, regardless of the language the application is written in. Data queues are a very fast and efficient method of passing data between programs.

4.4.1 Data Queues Implementation
Data queues offer a number of advantages including:
•  Time-independent processing.
•  Efficiency for interactive jobs being served by one batch job.
•  Data is completely free format - flexibility.
•  Fastest means of communication between two jobs.
•  Usable from any high-level language (including CL).
•  Many jobs may access the same data queue.
•  When receiving data, a wait time may be specified.
•  Ordering of entries (FIFO, LIFO, keyed).

There are, however, some disadvantages to using data queues. There is limited recovery of data queues across AS/400 system failures. When a data queue is created, it can be created using the Force option. When this option is used, the data queue is physically written to auxiliary storage each time it is changed. When a system failure occurs, the data queue data is recovered intact. However, using the Force option has a detrimental impact on overall system performance. Whether to use the Force option or not depends on the volume of transactions placed on the data queue. If a large number of transactions are expected, Force is not a good option. Instead, recovery should be built into the application. A common approach is to use a round-robin queue at the client to hold copies of the data queue transactions. When it is assured that the processing of the data queue transaction has completed successfully at the server, the backup transaction is deleted.

4.4.2 Remote Data Queue Function
The remote data queue functions built into Client Access/400 allow communication between AS/400 applications and PC (Personal Computer) applications, or even between two PC applications through an AS/400 data queue. The PC application has the same data queue function as an AS/400 application and access to the data queue by PC is completely transparent to the AS/400 application. A PC application can:
• Create an AS/400 data queue.
• Delete an AS/400 data queue.
• Send a message to an AS/400 data queue.
• Receive a message from an AS/400 data queue.


From an AS/400 application perspective, access to the data queues by the PC is completely transparent. It is as if the data queues are being accessed by AS/400 applications. No changes to existing AS/400 applications are required. The usefulness of this support is limited only by the application developer′s imagination. Some possibilities are:

• This support can be used in conjunction with the Submit Remote Command function. A remote command is submitted by the PC to the AS/400 application. The AS/400 application does its processing and returns the results on a data queue. The PC application can then read the results from the data queue.
• The user could implement his own remote SQL server by having a PC application place an SQL statement on one data queue and read the resulting database records from another data queue. An HLL program can be running in the background on the AS/400 system waiting for SQL statements on the incoming queue and writing the results to the specified output queue.
• An AS/400 application can distribute workload out to multiple PCs by writing requests (such as compile requests) to one queue which multiple PCs are monitoring. The next available PC gets the request, processes it, and returns the results on another data queue.
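The workload-distribution idea can be simulated with a minimal in-memory FIFO queue. The queue below merely stands in for an AS/400 data queue (which QSNDDTAQ and QRCVDTAQ would service in a real application); all names are illustrative assumptions:

```c
#include <assert.h>

/* Minimal in-memory stand-in for an AS/400 data queue holding work
   requests; a real implementation would use QSNDDTAQ/QRCVDTAQ. */
#define MAX_ENTRIES 32

typedef struct {
    int entries[MAX_ENTRIES];
    int head, tail, count;
} WorkQueue;

/* The AS/400 application writes one request per entry. */
static void wq_send(WorkQueue *q, int request)
{
    if (q->count < MAX_ENTRIES) {
        q->entries[q->tail] = request;
        q->tail = (q->tail + 1) % MAX_ENTRIES;
        q->count++;
    }
}

/* Each monitoring PC calls this; whichever client asks first gets the
   oldest request (FIFO), so work spreads to the next available machine. */
static int wq_receive(WorkQueue *q, int *request)
{
    if (q->count == 0)
        return 0;                  /* nothing queued: caller would wait */
    *request = q->entries[q->head];
    q->head = (q->head + 1) % MAX_ENTRIES;
    q->count--;
    return 1;
}
```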

4.4.3 PC API Interfaces
The data queue interface on the PC includes the following functions:
• EHNDQ_Send - send data to a remote data queue with acknowledgement.
• EHNDQ_SendKeyed - send data to a keyed data queue with acknowledgement.
• EHNDQ_Put/PutKeyed - same as Send, but with no acknowledgement.
• EHNDQ_Receive - retrieve a record from a specified remote data queue.
• EHNDQ_ReceiveKeyed - retrieve a record from a keyed data queue.
• EHNDQ_Clear - clear all records from a specified remote data queue.
• EHNDQ_Create - allow PC applications to create data queues.
• EHNDQ_CreateKeyed - allow PC applications to create keyed data queues.
• EHNDQ_Query - allow PC applications to query data queue attributes: maximum length, sequence, force, authority, sender ID, text, and key length.
• EHNDQ_Delete - delete all records and delete the data queue itself.
• EHNDQ_GetMessage - retrieve messages from the remote system.
• EHNDQ_Stop - allow the application to end the conversation with a remote system.
• EHNDQ_SetMode - set the data queues mode.
• EHNDQ_GetCapability - get the functional level of data queue support.
• EHNDQ_ReceiveRequest - receive a request for data.
• EHNDQ_ReceiveRequestKeyed - receive a request for keyed data.
• EHNDQ_ReceiveData - retrieve data previously requested.
• EHNDQ_ReceiveDataKeyed - retrieve keyed data previously requested.
• EHNDQ_CancelRequest - cancel a previous request.
• EHNDQ_CancelRequestKeyed - cancel a previous keyed request.


4.4.4 Data Queue Implementation

Figure 29. Data Queue Concepts

The application on the PC side sees only the data queues. It simply writes to or reads from a data queue in order to communicate with the AS/400 program. The AS/400 application also communicates by reading and writing to the data queues. The AS/400 program does not know that it is communicating with a PC program.

Figure 30. Data Queue Actual Implementation

The actual flow of the data across the communication lines is handled by programs provided by Client Access/400. A client program and an AS/400 program are provided. When a client application program writes to a data queue, the information is passed to the client data queue source agent, which in turn passes it to the server data queue target agent. The target agent physically places the record on the data queue.


Note that it now takes four programs, plus writing to and reading from the data queue, to pass the transaction from the client program to the server program. When you use an APPC implementation, the PC application passes the information directly to the AS/400 application. Because of this, response time and CPU utilization are greater for the data queue implementation. However, this may be a price worth paying to gain the time-independent processing and the simple interface provided by the data queue support.

4.4.5 Commonly Used Data Queue APIs
4.4.5.1 Set Mode
The set mode API is used to specify if messages are to be removed from the queue after they are read, and if the data is to be automatically converted from ASCII to EBCDIC. The example that follows sets the mode to translate and remove the records from the data queue when they are read. The translation support handles only character data and does not handle formats such as Packed Decimal or Zoned Decimal. If that support is required, you can use the Data Transform APIs provided by Client Access/400.

//Define AS/400 data queue names and AS/400 CP name
m_QueueLocation  = ″″;               //Default AS/400 CP name
m_QueueToAS400   = ″TEAMxx/DQINPT″;  //Outbound queue
m_QueueFromAS400 = ″TEAMxx/DQOUPT″;  //Inbound queue

//Set mode
rc = EHNDQ_SetMode(hWnd,               //Window handle
                   m_QueueLocation,    //AS/400 CP name
                   EHNDQ_XLATY_PEEKN); //Translate/Receive

if (rc != EHNDQ_SUCCESS)
{
   e_Message = ″SetMode failed. RC = ″;
   itoa(rc, returnchar, 10);
   e_Message += returnchar;
   DisplayMessage(e_Message, TRUE);
   return;
}
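The translation mode set above works byte by byte on character data, which is why packed or zoned decimal fields are not handled: their bytes are not character codes. A minimal sketch of such a character mapping (EBCDIC code page 37 values for upper-case letters, digits, and blank; this is an illustration, not the actual Client Access/400 translate table):

```c
#include <assert.h>

/* Minimal ASCII-to-EBCDIC sketch covering upper-case letters, digits,
   and blank only; a full translate table has 256 entries. */
static unsigned char ascii_to_ebcdic(unsigned char c)
{
    if (c >= 'A' && c <= 'I') return (unsigned char)(0xC1 + (c - 'A'));
    if (c >= 'J' && c <= 'R') return (unsigned char)(0xD1 + (c - 'J'));
    if (c >= 'S' && c <= 'Z') return (unsigned char)(0xE2 + (c - 'S'));
    if (c >= '0' && c <= '9') return (unsigned char)(0xF0 + (c - '0'));
    if (c == ' ')             return 0x40;  /* EBCDIC blank */
    return 0x3F;   /* EBCDIC substitute character for anything else */
}
```

Running a packed decimal field through a mapping like this would corrupt it, which is why the Data Transform APIs exist for non-character formats.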

4.4.5.2 Sending Data
When you place records on a data queue, two options are provided: SEND and PUT. A SEND receives an acknowledgement that the record was successfully placed on the queue. A PUT regains control immediately, so it is not assured that the record was placed on the data queue. A PUT is more efficient than a SEND because less traffic flows over the communications line. However, it is not a good idea to do many PUTs without being assured that the data queue support is functioning correctly. To gain performance, a combination of PUTs and SENDs can be used. For example, do 10 PUTs followed by a SEND to ensure that the data queue support is working, and repeat this combination continually.
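The 10-PUTs-then-SEND pattern can be sketched as follows. Here dq_put and dq_send are stand-ins for EHNDQ_Put and EHNDQ_Send; the wrapper and its names are assumptions for illustration:

```c
#include <assert.h>

#define PUTS_PER_SEND 10   /* unacknowledged PUTs allowed between SENDs */

/* Stand-ins for EHNDQ_Put/EHNDQ_Send; a real program would call the
   Client Access/400 APIs here. Both return 0 on success. */
static int dq_put(const char *data)  { (void)data; return 0; }
static int dq_send(const char *data) { (void)data; return 0; }

/* Writes n records; every PUTS_PER_SEND-th record goes out as an
   acknowledged SEND, confirming the data queue support still works.
   Returns the number of acknowledged SEND calls used. */
static int write_records(const char *records[], int n)
{
    int sends = 0;
    for (int i = 0; i < n; i++) {
        if ((i + 1) % PUTS_PER_SEND == 0) {
            if (dq_send(records[i]) != 0)
                break;             /* recover from the backup copies */
            sends++;
        } else {
            if (dq_put(records[i]) != 0)
                break;
        }
    }
    return sends;
}
```

With 20 records, records 10 and 20 go out as acknowledged SENDs and the rest as PUTs, so only two acknowledgements cross the communications line.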


//Call send data queue function
rc = EHNDQ_Send(hWnd,                //Window handle
                ″TEAMXX/CSDQ″,       //Outbound queue name
                ″SYSASMO1″,          //AS/400 CP name
                EBCDICBuffer,        //Data buffer
                Len(EBCDICBuffer));  //Data length
if (rc != EHNDQ_SUCCESS)
{
   e_Message = ″Send failed. RC = ″;
   itoa(rc, returnchar, 10);
   e_Message += returnchar;
   DisplayMessage(e_Message, TRUE);
   return;
}

4.4.5.3 Receive Information
There are a number of ways to retrieve data from a data queue:

EHNDQ_ReceiveRequest
   Requests a message from a non-keyed data queue but does not wait for the request to complete. You can call EHNDQ_ReceiveData to determine if the request has completed.

EHNDQ_ReceiveRequestKeyed
   Requests a message from a keyed data queue but does not wait for the request to complete. You can call EHNDQ_ReceiveDataKeyed to determine if the request has completed.

EHNDQ_ReceiveData
   Determines if a request for a data queue message has completed.

EHNDQ_ReceiveDataKeyed
   Determines if a request for a keyed data queue message has completed.

EHNDQ_Receive
   Retrieves a message from a non-keyed data queue.

EHNDQ_ReceiveKeyed
   Retrieves a message from a keyed data queue.

The difference between a Receive and a ReceiveRequest is that a ReceiveRequest returns control to the application program immediately, while the Receive waits for the call to complete or time out. A ReceiveRequest is followed by a ReceiveData to actually get the data. When programming in the Windows 3.1 environment, the ReceiveRequest is used to avoid tying up the PC CPU for the duration of the data queue read.

4.4.5.4 ReceiveRequest Example
rc = EHNDQ_ReceiveRequest(hWnd,             //Window handle
                          m_QueueFromAS400, //Inbound queue name
                          m_QueueLocation,  //AS/400 CP name
                          waitTime,         //Wait timer
                          senderID,         //Sender ID
                          buffer,           //Data buffer
                          lengthRecord,     //Record length
                          senderIDInfo);    //Sender ID info
if (rc==EHNDQ_COMMERROR || rc==EHNDQ_EXCEPTERROR || rc==EHNDQ_PCERROR)
{
}

rc = EHNDQ_ReceiveData(hWnd,
                       m_QueueFromAS400,
                       m_QueueLocation);

4.4.5.5 Receive Example
rc = EHNDQ_Receive(hWnd,             //Window handle
                   m_QueueFromAS400, //Inbound queue name
                   m_QueueLocation,  //AS/400 CP name
                   waitTime,         //Wait timer
                   senderID,         //Sender ID
                   buffer,           //Data buffer
                   lengthRecord,     //Record length
                   senderIDInfo);    //Sender ID info
if (rc==EHNDQ_COMMERROR || rc==EHNDQ_EXCEPTERROR || rc==EHNDQ_PCERROR)
{
}

4.4.5.6 RPG Data Queue Example
In this example, the program monitors a data queue named DQINPT in library TEAMxx. The QRCVDTAQ routine is called with the WAIT parameter set to a negative number, so the program waits forever for a message to be placed on the data queue. When a message arrives on the data queue, the QRCVDTAQ call completes. The program then regains control, receives the message, and chains to a database file. If the message placed on the data queue contains an item number found in the AS/400 database, the item is read. The AS/400 program uses the QSNDDTAQ routine to place information on an output data queue named DQOUPT in library TEAMxx. The program then waits for the next message to be placed on DQINPT.

H
FDBFIL   IF  E           K        DISK
E                    MSG         1   1 40
IDATAI       DS
I                                        1   5 ITEMNM
IDATAO       DS
I                                        1  25 PARTDS
I                                       26  30 PARTQ
I                                       31  70 ERRORD
I              ′ DQINPT′              C         DQINPT
I              ′ DQOUPT′              C         DQOUPT
I              ′ TEAMxx′              C         LIBL
C* START OF PROGRAM
C                     EXSR READR
C* END OF JOB
C           ENDRUN    TAG
C                     SETON                     LR
C                     SETON                     82
C                     RETRN
C* PROCESS INPUT
C           READR     BEGSR
C           *IN82     DOWEQ′ 0′
C                     MOVE DQINPT    QUEUEI 10
C                     MOVE LIBL      LIBLD  10
C                     Z-ADD5         FLDDL   50
C                     Z-ADD-9        WAIT    50
C                     CALL ′ QRCVDTAQ′
C                     PARM           QUEUEI
C                     PARM           LIBLD
C                     PARM           FLDDL
C                     PARM           DATAI
C                     PARM           WAIT
C           ITEMNM    CHAINDBRCD                98
C   98                EXSR RECNF
C  N98                EXSR SNDREC
C                     MOVE DQOUPT    QUEUEO 10
C                     MOVE LIBL      LIBLD  10
C                     Z-ADD70        FLDDL   50
C                     CALL ′ QSNDDTAQ′
C                     PARM           QUEUEO
C                     PARM           LIBLD
C                     PARM           FLDDL
C                     PARM           DATAO
C                     END
C                     ENDSR
C           SNDREC    BEGSR
C                     MOVELITEMD     PARTDS
C                     MOVELITEMQ     PARTQ
C                     MOVE *BLANK    ERRORD
C                     ENDSR
C           RECNF     BEGSR
C                     MOVE *BLANK    PARTDS
C                     MOVE *BLANK    PARTQ
C                     MOVELMSG,1     ERRORD
C                     ENDSR
**
THE NUMBER WAS NOT FOUND

4.5 Distributed Program Call Interface
The Distributed Program Call API gives PC application programmers simple access to functions on the AS/400 system. It enables a PC application to start non-interactive commands on the AS/400 system and to receive completion messages from these commands. Input, output, and input/output parameters are handled. Up to 10 reply messages can be sent by the AS/400 program.


Previous support (RMTCMD) supported only AS/400 commands, and no output was returned to the PC program. Distributed Program Call is a very easily-coded method of communicating between a PC program and an AS/400 program with a minimum of overhead:

• Commands are issued without an emulation session.
• User programs and system APIs are called without writing AS/400 programs.
• PC programs can call AS/400 objects:
  − AS/400 Command Language (CL) commands (*CMD) through QCMDEXC
  − AS/400 programs (*PGM)

The Distributed Program Call Interface is used when the client program needs a call or return interface to an AS/400 program or command. It is much simpler to use than an APPC conversation, but is not as flexible.

4.5.1 Distributed Program Call Flow
In order to use the Distributed Program Call Interface, the following steps are required:
1. Create a system object - EHNDP_StartSys().
2. Create an application object - EHNDP_CreatePgm().
3. Specify a parameter for the program - EHNDP_AddParm().
4. Execute the program - EHNDP_CallPgm().
5. Retrieve messages from the AS/400 system - message-related function calls.

The following APIs and declarations are described for DPC:

System related functions:
− EHNDP_StartSys()
− EHNDP_GetSysName()
− EHNDP_StopSys()

Program related functions:
− EHNDP_CreatePgm()
− EHNDP_AddParm()
− EHNDP_CallPgm()
− EHNDP_DeletePgm()
− EHNDP_GetParmCount()
− EHNDP_GetParm()
− EHNDP_GetPgmName()
− EHNDP_GetLibName()
− EHNDP_GetCallMode()
− EHNDP_SetParm()
− EHNDP_SetPgmName()
− EHNDP_SetLibName()
− EHNDP_SetCallMode()

Message related functions:
− EHNDP_GetMsgCount()
− EHNDP_GetMsgId()
− EHNDP_GetMsgType()
− EHNDP_GetMsgLen()
− EHNDP_GetMsgSev()
− EHNDP_GetMsgFile()


− EHNDP_GetMsgText()
− EHNDP_GetSubstTextLen()
− EHNDP_GetSubstText()

4.5.2 Typical Code for DPC
HANDLE hSystem;
WORD rc;
HANDLE hProgram;
unsigned int len;

rc = EHNDP_StartSys(″mysystem″, &hSystem);
rc = EHNDP_CreatePgm(″mypgm″, ″mylib″, &hProgram);
.
. set up parameter data
.
rc = EHNDP_AddParm(hProgram, EHNDP_Input, ulParmLength, &parm);
.
rc = EHNDP_CallPgm(hSystem, hProgram);
.
. check if successful; if not, retrieve message
.
rc = EHNDP_GetMsgLen(hProgram, &len);
.
. allocate message buffer of size len
.
rc = EHNDP_GetMsgText(hProgram, &message);
.
rc = EHNDP_DeletePgm(hProgram);
rc = EHNDP_StopSys(hSystem);

4.5.3 Visual Basic Example Code for DPC
4.5.3.1 Start System
′ Start a DPC connection
′ Allocates the connection and confirms success.
ret = EHNDP_StartSys(Me.hWnd, DataSource, ″NewOrder″, a_hSystem)
If ret <> 0 Then
    MsgBox (″Error on Start system call. Return code-″ & Str$(ret))
    a_bConnected = False
    Exit Sub
End If

4.5.3.2 Create Program
The program called is NEWORD in library CSDB.

′ Create the program object
ret = EHNDP_CreatePgm(Me.hWnd, ″NEWORD″, ″CSDB″, a_hProgram)
If (ret <> 0) Then
    MsgBox (″Error on Create program call. Return code-″ & Str$(ret))
    SQLInit = False
    Exit Function
End If

4.5.3.3 Specify Input Parameters
′ Add the input parameters
ret = EHNDP_AddParm(Me.hWnd, a_hProgram, EHNDP_INPUT, 10, WID_DID_CID)
If (ret <> 0) Then
    MsgBox (″Error on Add parameter call. Return code-″ & Str$(ret))
    SQLInit = False
    Exit Function
End If
ret = EHNDP_AddParm(Me.hWnd, a_hProgram, EHNDP_INPUT, 3, OLINES)
If (ret <> 0) Then
    MsgBox (″Error on Add parameter call. Return code-″ & Str$(ret))
    SQLInit = False
    Exit Function
End If
ret = EHNDP_AddParm(Me.hWnd, a_hProgram, EHNDP_INPUT, 195, ORDINFO)
If (ret <> 0) Then
    MsgBox (″Error on Add parameter call. Return code-″ & Str$(ret))
    SQLInit = False
    Exit Function
End If

4.5.3.4 Specify Output Parameters
′ Add the output parameters
ret = EHNDP_AddParm(Me.hWnd, a_hProgram, EHNDP_OUTPUT, 61, INFOBACK)
If (ret <> 0) Then
    MsgBox (″Error on Add parameter call. Return code-″ & Str$(ret))
    SQLInit = False
    Exit Function
End If
ret = EHNDP_AddParm(Me.hWnd, a_hProgram, EHNDP_OUTPUT, 360, INNAME)
If (ret <> 0) Then
    MsgBox (″Error on Add parameter call. Return code-″ & Str$(ret))
    SQLInit = False
    Exit Function
End If
ret = EHNDP_AddParm(Me.hWnd, a_hProgram, EHNDP_OUTPUT, 45, STQTY)
If (ret <> 0) Then
    MsgBox (″Error on Add parameter call. Return code-″ & Str$(ret))
    SQLInit = False
    Exit Function
End If
ret = EHNDP_AddParm(Me.hWnd, a_hProgram, EHNDP_OUTPUT, 15, BORG)
If (ret <> 0) Then
    MsgBox (″Error on Add parameter call. Return code-″ & Str$(ret))
    SQLInit = False
    Exit Function
End If


4.5.3.5 Call Program
ret = EHNDP_CallPgm(Me.hWnd, a_hSystem, a_hProgram)
If ret <> 0 Then
    MsgBox (″Error on Program call. Return code-″ & Str$(ret))
End If

4.5.3.6 Delete Program and Stop System
If a_bConnected = True Then
    ′ Free and drop all statements.
    ret = EHNDP_DeletePgm(Me.hWnd, a_hProgram) ′ Release the program storage
    ret = EHNDP_StopSys(Me.hWnd, a_hSystem)    ′ Stop a DPC connection
End If

4.5.4 C++ Example Code for DPC
4.5.4.1 Start System
ret = EHNDP_StartSys(m_hWnd, DataSource, ″NewOrder″, &a_hSystem);
if (ret != 0)
{
   char msg[100];
   sprintf(msg, ″Error on StartSys. Return code - %d″, ret);
   AfxMessageBox(msg);
   bAutoGo = FALSE;
   a_bConnected = FALSE;
}

4.5.4.2 Create Program
The program called is NEWORD in library CSDB.

ret = EHNDP_CreatePgm(m_hWnd, ″NEWORD″, ″CSDB″, &a_hProgram);
if (ret != 0)
{
   char msg[100];
   sprintf(msg, ″Error on Create program call. Return code - %d″, ret);
   AfxMessageBox(msg);
   bReturnFlag = FALSE;
   return(FALSE);
}

4.5.4.3 Specify Input Parameters
ret = EHNDP_AddParm(m_hWnd, a_hProgram, EHNDP_INPUT, 10,
                    (unsigned char __far *)&WID_DID_CID);
ret = EHNDP_AddParm(m_hWnd, a_hProgram, EHNDP_INPUT, 3,
                    (unsigned char __far *)&OLINES);
ret = EHNDP_AddParm(m_hWnd, a_hProgram, EHNDP_INPUT, 195,
                    (unsigned char __far *)&ORDINFO);


4.5.4.4 Specify Output Parameters
ret = EHNDP_AddParm(m_hWnd, a_hProgram, EHNDP_OUTPUT, 61,
                    (unsigned char __far *)&INFOBACK);
ret = EHNDP_AddParm(m_hWnd, a_hProgram, EHNDP_OUTPUT, 360,
                    (unsigned char __far *)&INNAME);
ret = EHNDP_AddParm(m_hWnd, a_hProgram, EHNDP_OUTPUT, 45,
                    (unsigned char __far *)&STQTY);
ret = EHNDP_AddParm(m_hWnd, a_hProgram, EHNDP_OUTPUT, 15,
                    (unsigned char __far *)&BORG);
ret = EHNDP_AddParm(m_hWnd, a_hProgram, EHNDP_OUTPUT, 75,
                    (unsigned char __far *)&IPRICE);
ret = EHNDP_AddParm(m_hWnd, a_hProgram, EHNDP_OUTPUT, 105,
                    (unsigned char __far *)&OLAMT);

4.5.4.5 Call Program
ret = EHNDP_CallPgm(m_hWnd, a_hSystem, a_hProgram);
if (ret != 0)
{
   char msg[100];
   sprintf(msg, ″Error on Call program. Return code - %d″, ret);
   AfxMessageBox(msg);
   return;
}

4.5.4.6 Delete Program and Stop System
if (a_bConnected == TRUE)
{
   // Free and drop all statements.
   ret = EHNDP_DeletePgm(m_hWnd, a_hProgram); // Release the program storage
   ret = EHNDP_StopSys(m_hWnd, a_hSystem);    // Stop a DPC connection
}

4.5.5 Comparison of Techniques
All tests were done using the following configuration:

Transaction type        New Order transaction with ten items in the order.
Client configuration    IBM ThinkPad (Intel DX4 100/33 MHz) with 40MB RAM,
                        Windows 3.1, DOS 7.0, Client Access/400 Windows 3.1
                        client.
Server configuration    AS/400 model 510 running OS/400 Version 3.6, PTF
                        C6220360.
Communications link     4 Mbps token-ring LAN.

The following utilities were used to derive the results:
• Client response time measured by the client program.
• Performance Tools/400 (5763-PT1 LPP)
  − Component Report
    - Number of database I/Os
    - Number of communications I/Os

Example Programs
All the example programs used for the performance tests are available with this redbook on the included PC media. Please refer to Appendix A, “Example Programs” on page 393 for more information and a guided tour of the application code.

Using the different techniques discussed can result in very different performance for your application. The following table shows the number of communication I/O operations and some response times for a “sample” order entry style operation. The identical application was written using several different methods: the first uses APPC, the second uses data queues, and the third uses Distributed Program Call. The table summarizes performance information for these applications in the Windows 3.1 environment. As the table shows, reducing I/O requests between the client and server can dramatically affect response time.

Data Queue Application: The data queue implementation uses a combination of ODBC and data queue support. This implementation does not do the new order processing online, but simply writes the new order information to a data queue for later processing. This demonstrates the idea of time independent processing and gives the end user fast response time. The response time that the end user sees is very fast, but not all the processing is completed.

Table 4. ODBC I/O and Response Times (Windows 3.1)

                            Logical I/O   Comm I/O   Response time
                            Count         Count      (secs)
APPC Visual Basic                27           2         .61
Data Queues Visual Basic          2           6         .71
DPC Visual Basic                 27          11         .77

4.6 Summary

• Program to Program (APPC or Sockets)
  − Do use:
    - For the best performance.
    - For online transaction processing.
    - For record oriented file transfer.
    - When programming flexibility is required.
    - When you have PC and AS/400 programmer skills.
  − Do not use:
    - For decision support applications. Use ODBC instead.
    - Unless communications programming skills are strong.

• Data Queues
  − Do use:
    - For time independent processing.
    - For passing free format data.
    - When you have PC and AS/400 programmer skills.
  − Do not use:
    - For multi-platform applications. Use the Message Queue Interface instead.
    - For decision support applications. Use ODBC instead.

• Distributed Program Call
  − Do use:
    - For issuing commands to the AS/400 system.
    - For starting programs on the AS/400 system.
    - For accessing AS/400 system APIs.
    - For online transaction processing.
  − Do not use:
    - For multi-platform applications.
    - For decision support applications. Use ODBC instead.


Chapter 5. Client/Server Database Serving

5.1 Introduction
Database Serving is characterized as a client/server application where a client application is accessing information from a server database management system. All of the application programming is done at the client. The server system provides database server programs that can accept information requests, access data, and return results to the client. The purpose of this chapter is to introduce you to the concepts and programming interfaces required to use Remote SQL and ODBC to build client/server applications that use the AS/400 system as a database server.

5.1.1 AS/400 Database Serving
DB2/400 is a multi-user relational database management system (DBMS) that runs on the AS/400 system. Structured Query Language is used to access data in AS/400 DBMS. IBM PCs and compatible systems can communicate with AS/400 database servers across a network through the Client Access/400 router. Client Access/400 allows users to access the AS/400 database from PCs without programming the AS/400 system. DB2/400 provides a very strong database management system on which to build client/server applications. Some major database improvements were implemented in V3R1. They include:
• Referential integrity
• Triggers
• Stored procedures
• Compliance to industry standards

The SQL language is an industry standard for database access and has become the de facto standard for client/server database access. The Client Access/400 for Windows 3.1 client and Client Access/400 for Windows 95 provide several interfaces for accessing the AS/400 database using SQL.

Remote SQL
   A proprietary API introduced with V2R1 that allows PC applications running in OS/2, DOS, or Windows 3.1 to access the AS/400 database. Remote SQL is not a strategic access method for database serving and you should not generally consider it for new client/server development.

Optimized SQL
   A proprietary API that allows PC applications running in the Optimized OS/2 or Windows 95 environments to access the AS/400 database. It is the replacement for Remote SQL for these clients. Optimized SQL provides a superset of the function that is provided in the ODBC interface. All the function of ODBC is provided, along with extensions that allow an application developer to take advantage of functions that are unique to the AS/400 system. The optimized SQL APIs provide access to the AS/400 system through a call level interface.

ODBC (Open Database Connectivity)
   A Microsoft architected database access interface that has become very widely accepted as the standard client/server access method for Microsoft Windows clients. It is also available under OS/2. ODBC is the recommended API to use for database serving applications.

5.1.2 Client Access/400 Servers
Beginning with V3R1, a number of new and enhanced servers have been introduced to Client Access/400. They include:
• File server
• Database server
• Network print server
• Data queues server
• Remote command and program call server
• APPC password management server
• Central server:
  − License management
  − Client management

A number of other enhancements have also been included:

• Since V3R1, Client Access/400 servers can use the OS/400 registration facility to register exit programs individually.
• Many of the new servers use prestarted jobs that improve startup performance for client applications. You can configure your system to increase or decrease the number of prestart jobs available for each Client Access/400 server.
• The servers are now written to a common data stream, which was designed for optimum performance.

Each Client Access/400 function has a client program that runs on the PC; the function also has a corresponding server program that runs on the AS/400 system. Up to V3R1, the server programs were packaged as part of PC Support/400 and were shipped mainly in the library QIWS. Now with Client Access/400, the server programs are shipped as optionally-installable features of the operating system. This was done to ensure that the servers were available to all customers using the AS/400 system. The repackaging enables other vendors to write their own client programs to the Client Access/400 servers. IBM licenses the server data streams. Vendors now are able to compete with their client packages without having the expense of writing their own server programs. Now vendors do not have to write their own ODBC or SQL server for the AS/400 system. They can focus their investment on writing the best client interface to the ODBC and SQL servers supplied with OS/400.


5.2 Remote SQL Interface
The Remote SQL API allows PC applications to issue SQL commands to run on the AS/400 system. It has the following features:
• Client Access provides a full-function AS/400 SQL server.
• It allows program-to-program communication between PC and AS/400 applications.
• It is supported for Extended DOS, Windows, or OS/2.
• It is a programming API only; no applications are provided.
• It is packaged as part of Client Access/400.


5.2.1 Remote SQL Architecture
┌─────────────────────────────────┐ │PC Application Program │ │Issues call to Remote SQL │ │(Example: EHNRQ_SELECT) │ └─────────────────────────────────┘

┌─────────────────────────────────┐ │Remote SQL LIB │ │(The Remote SQL LIB is a small │ │amount of code that gets linked │ │into the PC Application Program.)│ └─────────────────────────────────┘

┌────────────────────────┬─────────────────────┐ │ │ │ ┌──────┴────────┐ ┌───────┴─────────┐ ┌───────┴────────┐ │ (DOS) │ │ (OS/2) │ │ (Windows) │ │Remote SQL OVL │ │ Remote SQL DLL │ │ Remote SQL DLL │ │ (EHNRQAPI.OVL)│ │ (EHNRQAPI.DLL) │ │ (EHNRQW.DLL) │ └──────┬────────┘ └────────┬────────┘ └────────┬───────┘ │ │ │ └─────────────────────────┴─────────────────────┘

┌───────────────────────────────┐ │ APPC Communications │ └───────────────────────────────┘

┌───────────────────────────────┐ │ Remote SQL AS/400 Server Code │ └───────────────────────────────┘

┌───────────────────────────────┐ │ Dynamic SQL/400 │ │ (executes the SQL statement) │ └───────────────────────────────┘

Figure 31. Remote SQL Architecture

When using remote SQL, the programmer codes remote SQL verbs in the application program. A large number of verbs are available to support this environment. See section 5.2.3, “SQL Verbs” on page 123 for a list of functions supported. When the API is called, it calls the appropriate Client Access/400 support. The Client Access support on the client platform interfaces with the OS/400 server program to perform the requested function and returns the results and completion code to the application program.

5.2.2 Remote SQL Enhancements
Over the last few releases, a number of enhancements have been made to the Remote SQL Interface. Many of these enhancements have been aimed at improving the performance of remote SQL.

• V2R1.1
  − Parameter markers - frequently used statements can be reused to improve performance.
• V2R3
  − Performance
    - Block fetch on READ ONLY access
    - Preprocessing for UPDATE access
    - Reduced data flow for successful operations
  − Asynchronous processing
  − Canned SQL statements (SQL packages)
• V3R1
  − Remote SQL enhancements (up to three times faster)
  − APPC enhancements (up to three times faster)
  − TCP/IP enhancements (up to eight times faster)
  − AS/400 database enhancements

5.2.3 SQL Verbs
Remote SQL supports a large number of API calls that allow you to build flexible and powerful client/server applications. The following functions are supported:

Start SQL           (Start Remote SQL environment)
End                 (End Remote SQL environment)
Select              (Execute Remote SQL select stmt and open cursor)
Fetch               (Retrieve next row, unformatted)
Get Formatted       (Format and retrieve next row)
Get Attributes      (Retrieve attributes of columns)
Delete              (Delete row at current cursor)
Describe            (Obtain SQLDA - description of columns)
Get SQLCA           (Obtain SQLCA - ending conditions of last request)
Update Current      (Update current cursor row)
Close Cursor        (Close a cursor associated with a select)
Execute Immediate   (Run SQL stmt other than select)
Accept              (Accept APPC connection from host (OS/2 only))
Invoke              (Invoke AS/400 program)
Receive             (Receive block of data from host)
Send                (Send block of data to host)
Error               (Retrieve last error)
** New for V2R1.1
Execute PM          (Prep non-select SQL stmt with parameter markers)
Execute VAL         (Execute non-select SQL stmt with PM values)
Free PM             (Free and close prepared SQL statement)
Select PM           (Prep SQL select stmt with parameter markers)
Select VAL          (Execute SQL select stmt with PM values)
Set Rows            (Set the number of rows sent back per data transfer)
Start with UserID   (Start Remote SQL with override of user ID and password)
** New for V2R3
Prepare SQL package (Create new SQL packages or update existing packages)
Execute SQL package (Execute SQL packages)

Figure 32. Remote SQL APIs

5.2.4 Other SQL Verbs Supported

The following verbs are executed using the Execute Immediate (EHNRQEXEC) interface.

COMMENT ON*        DELETE         REVOKE*
COMMIT*            DROP*          ROLLBACK*
CREATE DATABASE*   GRANT*         UPDATE
CREATE INDEX*      INSERT
CREATE TABLE*      LABEL ON*
CREATE VIEW*       LOCK TABLE*

*SQL databases only

5.2.5 Example Program Flow
┌─────────────────────────────────┐ │Start connection to Host system │ │EHNRQ_START │ └───────────────┬─────────────────┘ │ ┌─────────────────────────────────┐ │Build SQL collection │ │EHNRQ_EXEC │ │Create collection RSQLxx │ └───────────────┬─────────────────┘ │ ┌─────────────────────────────────┐ │Create table │ │EHNRQ_EXEC │ │Create table RSQLxx/SALESPERS │ │ (NAME char(21), ....) │ └───────────────┬─────────────────┘ │ ┌─────────────────────────────────┐ │Insert data │ │EHNRQ_EXEC │ │Insert into RSQLxx/SALESPERS │ │ values(′ SMITH′ , . . . . ) │ └───────────────┬─────────────────┘ │ ┌─────────────────────────────────┐ │End Session │ │EHNRQ_END │ └─────────────────────────────────┘

Figure 33. Remote SQL Example

In this example, Remote SQL is used to create an SQL collection, create a table in the collection, and insert one row into the table. Before any SQL calls are made, a connection to the OS/400 server program must be established with the EHNRQ_START verb. When the session is complete, the EHNRQ_END verb ends the session. The EHNRQ_EXEC verb executes non-select SQL statements.

5.2.6 Example Code
This section shows some coding examples in the C language that use the Remote SQL interface. For a complete list of the Remote SQL verbs, see Client Access/400 for Windows 3.1 API and Technical Reference, SC41-3531-01. The following code starts and ends a Remote SQL session with the server AS/400 system. The AS/400 name is "SYSNM003."

char server[10] = "SYSNM003";
rc = EHNRQ_START(hWnd, server, 0, buffer_size, 0);
if (rc != 0) {
    pMainWnd->SetStatusMessage("EHNRQSTART error", msg2);
    UpdateData(FALSE);
    return;
}
pMainWnd->m_RouterLoaded = TRUE;
pMainWnd->SetStatusMessage("RSQL Connection", msg2);
rc = EHNRQ_END(hWnd, 0);

The following code executes a “SELECT” statement. This can be followed by calls to other APIs to return the selected data, and perform updates and other functions as required. For example, the Select statement returns the selected rows in a result set. The EHNRQ_FETCH verb is then used to retrieve the individual rows from the result set into the program buffers.

char sel2[50] = "SELECT * FROM RWM/DBFIL";
hcursor cursor;
short upflag = 0;   /* nonzero for update */

rc = EHNRQ_SELECT(hWnd, sel2, upflag, &cursor, 0);
if (rc != 0) {
    /* error processing */
}

5.2.7 Remote SQL Summary
Although Remote SQL provides a powerful programming interface for building an AS/400 client/server application, it should not be used for building new applications; ODBC is a better choice. Remote SQL is a platform-specific API and serves only as an interface to the AS/400 system, while ODBC applications are platform independent and can interface with any platform that supports ODBC. It is also more difficult to tune the performance of a Remote SQL application than an ODBC application. For example, package support is handled by the AS/400 server for ODBC, while in Remote SQL it is up to the client application developer to implement it.

5.3 Open Database Connectivity (ODBC) Interface
ODBC is a Microsoft architected database access interface that enables applications to access data using Structured Query Language (SQL) as a standard language. ODBC provides a consistent set of APIs that permit a single application to access different database management systems. The ODBC approach is:
• A program separate from the application to extract database information.
• A standard interface for applications to import the data.
• Database drivers, provided by the various database vendors or third parties:
  − Supplied as dynamic link libraries that an application can invoke.
  − Used to gain access to the database management system.


Client Access/400 provides drivers that can access the AS/400 database.

5.3.1 ODBC Interface
The ODBC interface defines a library of function calls that allow an application to:
• Connect to a DBMS.
• Execute SQL statements.
• Retrieve results.

The ODBC interface also provides for:
• SQL syntax.
• A standard set of error codes.
• A standard way to connect and log on to a DBMS.
• A standard representation for data types.

5.3.2 ODBC Components
┌────────────────────────────────────────────────────┐
│                    Application                     │
├────────────────────────────────────────────────────┤ ←── ODBC interface
│                   Driver Manager                   │
├────────────┬────────────┬────────────┬─────────────┤
│   Driver   │   Driver   │   Driver   │   Driver    │
└─────┬──────┴─────┬──────┴─────┬──────┴──────┬──────┘
      │            │            │             │
  ┌───────┐    ┌───────┐    ┌───────┐    ┌────────┐
  │ Data  │    │ Data  │    │ Data  │    │  Data  │
  │Source │    │Source │    │Source │    │ Source │
  └───────┘    └───────┘    └───────┘    └────────┘

Figure 34. ODBC Architecture

The components of an ODBC application are:

1. Application
   • Requests a connection or session with a data source.
   • Sends SQL requests to the data source.
   • Defines storage areas and data formats for the results of SQL requests.
   • Requests results.
   • Retrieves result column data.
   • Processes errors.
   • Reports results back to the user if necessary.
   • Requests commit or rollback operations for transaction control.
   • Terminates the connection to the data source.

2. Driver Manager
   • Maps a data source name to a specific driver dynamic link library (DLL).
   • Processes several ODBC initialization calls.
   • Provides entry points to ODBC functions for each driver function.
   • Provides parameter validation and sequence validation for ODBC calls.

3. Driver
   • Establishes a connection to the data source.
   • Submits requests to the data source.
   • Translates data to or from other formats, if requested by the application.
   • Returns results to the application.
   • Formats errors into standard error codes and returns them to the application.
   • Declares and manipulates cursors if necessary.
   • Initiates transactions if the data source requires explicit transaction initiation.

4. Data Source
   • The general features and functionality provided by an SQL database management system.
   • A specific instance of a combination of a DBMS product, remote operating system, and the networking necessary for access. Examples:
     − AS/400 DB2/400
     − Oracle DBMS running under OS/2
     − Tandem NonStop SQL DBMS running on the Guardian 90 operating system

5.3.3 Types of ODBC Drivers
ODBC drivers come in two basic types:

• Single-tier
  − The driver processes both ODBC calls and SQL statements.
  − The database file is processed directly by the driver.

• Multiple-tier
  − The driver processes ODBC calls and passes SQL statements to the data source.
  − Can reside on a single system, but is most often divided across platforms:
    - Application, driver, and driver manager on the client.
    - Database and RDBMS on the server.

The DB2/400 ODBC support is a multiple-tier, client/server implementation. The Windows 3.1 client driver (EHNODBC3.DLL) interfaces with the host driver program (QZDAINIT) to provide the AS/400 ODBC support. The Windows 95 client driver (CWBODBC.DLL) interfaces with the host driver programs (QZDAINIT/QZDASOINIT).

5.3.4 ODBC Conformance Levels
The ODBC standard allows for drivers to provide different levels of function. Conformance levels are used to define the function provided. These conformance levels cover both the API interface to ODBC and the SQL statements supported by the driver.

• API conformance levels
  − Core API
    - Allocate and free environment, connection, and statement handles.
    - Connect to a data source; use multiple statements on a connection.
    - Prepare and execute SQL statements; execute SQL statements immediately.
    - Assign storage for parameters in an SQL statement and for result columns.
    - Retrieve data from a result set and information about a result set.
    - Commit or roll back transactions.
    - Retrieve error information.
  − Level 1 API
    - Core API functionality.
    - Connect to data sources with driver-specific dialog boxes.
    - Set and inquire values of statement and connection options.
    - Send all or part of a parameter value (useful for long data).
    - Retrieve all or part of a result column.
    - Retrieve catalog information (columns, special columns, and tables).
    - Retrieve information about driver and data source capabilities.
  − Level 2 API
    - Core and Level 1 functionality.
    - Browse available connections and list available data sources.
    - Send arrays of parameter values.
    - Retrieve arrays of result column values.
    - Use a scrollable cursor.
    - Retrieve the native form of an SQL statement.
    - Retrieve catalog information (privileges, keys, and procedures).
    - Call a translation API.

• SQL conformance levels
  − Minimum SQL Grammar
    - Data Definition Language (DDL): CREATE TABLE and DROP TABLE.
    - Data Manipulation Language (DML): simple SELECT, INSERT, UPDATE SEARCHED, DELETE SEARCHED.
    - Expressions: simple (A > B + C).
    - Data types: CHAR.
  − Core SQL Grammar
    - Minimum SQL Grammar.
    - DDL: ALTER TABLE, CREATE INDEX, DROP INDEX, CREATE VIEW, DROP VIEW, GRANT, and REVOKE.
    - DML: full SELECT, positioned UPDATE, and positioned DELETE.
    - Expressions: subquery, set functions such as SUM and MIN.
    - Data types: VARCHAR, DECIMAL, NUMERIC, SMALLINT, INTEGER, REAL, FLOAT, DOUBLE PRECISION.
  − Extended SQL Grammar
    - Minimum and Core SQL Grammar.
    - DML: outer joins.
    - Expressions: scalar functions such as SUBSTRING and ABS; date, time, and timestamp literals.
    - Data types: LONG VARCHAR, BIT, TINYINT, BIGINT, BINARY, VARBINARY, LONG VARBINARY, DATE, TIME, TIMESTAMP.
    - Batch SQL statements.
    - Procedure calls.

The DB2/400 ODBC driver has the following conformance levels:
• API Conformance Level - Level 2
• SQL Conformance Level - Minimum


DB2/400 supports all of the core level of SQL grammar functions, with the exception of ALTER TABLE add/drop column and some RI features. DB2/400 also supports many parts of the grammar which are classified as Extended SQL. These include:
• Outer join.
• Parts of ALTER TABLE support.
• Positioned DELETE statement.
• Stored procedures.
• SELECT with FOR UPDATE OF clause.
• Many of the extended elements used in SQL statements.

Note: Starting with V3R6, DB2/400 has full ALTER TABLE support. The only parts of the syntax that are not supported are DEFAULT USER, column-level security, and check constraints. The Rochester development lab is evaluating these functions for a future release. The following list describes special handling of certain ODBC APIs by the CA/400 ODBC driver.

• SQLExtendedFetch
  − Cursors opened for update cannot call SQLExtendedFetch to read rows, but must use SQLFetch.
  − You cannot use SQLExtendedFetch in combination with SQLSetPos and SQLGetData.
  − SQL_FETCH_BOOKMARKS is not supported.
  − SQLExtendedFetch cannot be used to retrieve result sets for cataloging functions (SQLTables, SQLSpecialColumns, SQLStatistics, SQLColumns, SQLForeignKeys, SQLPrimaryKeys).
• SQLSetPos
  − The SQL_UPDATE, SQL_DELETE, and SQL_ADD options are not supported.
  − SQL_LOCK_EXCLUSIVE and SQL_LOCK_UNLOCK are not supported.
• SQLSetScrollOptions
  − SQL_CONCUR_ROWVER and SQL_CONCUR_VALUES are not supported for the concurrency option.
  − SQL_SCROLL_KEYSET_DRIVEN is changed to SQL_SCROLL_DYNAMIC.
• SQLColumnPrivileges and SQLTablePrivileges return SQL_SUCCESS; a subsequent fetch returns SQL_NO_DATA_FOUND.
• SQLSetStmtOption
  − SQL_USE_BOOKMARKS is not supported.
  − SQL_RETRIEVE_DATA is always set to SQL_RD_ON (the default).
  − SQL_SIMULATE_CURSOR is not supported.
• SQLSetConnectOption with SQL_TRANSLATE_DLL or SQL_TRANSLATE_OPTION is not supported.

Note: The same limitations that exist with AS/400 DB2/400 are imposed on the ODBC SQL grammar. For further information, refer to Appendix A of the DB2/400 SQL Reference, SC41-3612. To claim a conformance level, all of the features for that level must be supported. The DB2/400 ODBC driver is classified at the Minimum SQL conformance level, but it supports almost all requirements for the Core SQL Grammar and a large set of those required for the Extended SQL Grammar.


5.3.5 PC Support/400 V2R3 ODBC Driver
┌───────────────────────────┐
│     User Application      │
└─────────────┬─────────────┘
              │
┌─────────────┴─────────────┐
│ Driver Manager - ODBC.DLL │
└─────────────┬─────────────┘
              │
┌─────────────┴─────────────┐
│  Remote SQL - EHNRQW.DLL  │
└─────────────┬─────────────┘
              │
┌─────────────┴─────────────┐
│   Router - EHNAPPC.DLL    │
└─────────────┬─────────────┘
              │
┌─────────────┴─────────────┐
│ AS/400 Remote SQL Server  │
└─────────────┬─────────────┘
              │
┌─────────────┴─────────────┐
│      AS/400 Database      │
└───────────────────────────┘

Figure 35. V2R3 ODBC Architecture

The PC Support/400 V2R3 driver communicates with the AS/400 server through the remote SQL driver. ODBC calls are converted to remote SQL calls and the requests are submitted to the remote SQL server. The results are returned and converted back to the ODBC format and then returned to the application. This approach is also used with the V3R0M5 ODBC drivers. Because of the extra conversion used with this driver, performance is not optimum. This driver can be used for testing or prototyping, but should not be considered for a production environment.


5.3.6 Client Access/400 Windows 3.1 ODBC Driver
┌───────────────────────────┐
│     User Application      │
└─────────────┬─────────────┘
              │
┌─────────────┴─────────────┐
│ Driver Manager - ODBC.DLL │
└─────────────┬─────────────┘
              │
┌─────────────┴─────────────┐
│CA/400 Driver- EHNODBC3.DLL│
└─────────────┬─────────────┘
              │
┌─────────────┴─────────────┐
│CA/400 Router- EHNAPPC.DLL │
└─────────────┬─────────────┘
              │
┌─────────────┴─────────────┐
│    Data Access Server     │
└─────────────┬─────────────┘
              │
┌─────────────┴─────────────┐
│      AS/400 Database      │
└───────────────────────────┘

Figure 36. Windows 3.1 Architecture

ODBC has different versions. The version of ODBC defines which APIs are available for the application to use. Version 1.0, for instance, defines one set of APIs (functions). In version 2.0, most of the APIs are the same as the version 1.0 APIs, but some replace 1.0 APIs. The version is important because it defines the interfaces that the application uses, but it should not be confused with conformance levels.

With Client Access/400 for Windows 3.1 (V3R1M0) and the OS/2 Optimized client, an ODBC 1.0 driver is provided. Client Access/400 for Windows 3.1 (V3R1M1) and Client Access/400 for Windows 95 provide an ODBC 2.0 driver. These drivers enable applications to access data in an AS/400 database through the Open Database Connectivity (ODBC) interface. Figure 36 shows the ODBC architecture for the Windows 3.1 client.

On the AS/400 system, ODBC requests are submitted to the QSERVER subsystem, where QZDAINIT pre-started jobs run. If the connection is through TCP/IP using the Windows 95 client, the job name is QZDASOINIT. Each request to connect to a specific data source initiated by the Client Access/400 ODBC driver uses a QZDAINIT/QZDASOINIT pre-started job, servicing the user profile specified in the connection string. If the DOS extended client or the NetSoft router is used under Windows 3.1, QZDAINIT runs in the QCMN subsystem. Because the DB2/400 ODBC driver accesses the new optimized database server, it cannot be used with Version 2 systems.


5.3.7 ODBC Support
The ODBC support provided by V3R1 (or later) has been greatly improved over the support provided in V2R3. The ODBC driver performance improvements result because:
• The Remote SQL interface is no longer used.
• A new enhanced AS/400 data access server is provided.
• A new architected data stream to the server is used.
• The servers provide blocking and low-level fast-path support for fetches.
• Previously PREPARED statements are stored in packages on the server for faster execution.
• Pre-started server tasks are used for faster connects.
• More ODBC functions are available, such as extended fetch and stored procedures.

5.3.8 Calling ODBC Functions
ODBC API functions fall into several categories:
• Setting up the ODBC environment.
• Establishing connections to data sources.
• Executing SQL statements.
• Cleaning up the ODBC environment.

5.3.9 Basic Application Steps
An ODBC application needs to follow a basic set of steps in order to access a database server.

1. Connect to the data source.
2. Place the SQL statement string to be executed in a buffer. This is a text string.
3. Submit the statement for prepared or immediate execution.
   • Retrieve and process the results.
   • If there are errors, retrieve the error information from the driver.
4. End each transaction with a commit or rollback operation (if necessary).
5. Terminate the connection.

5.3.10 Simplified ODBC Application Structure


Figure 37. Simplified ODBC Application Structure

Figure 37 shows a simplified view of a typical ODBC application. The flow is from top to bottom, with the loop in the middle meant to imply that multiple SQL statements may be executed, each multiple times. The first stage involves asking Windows to allocate memory for the ODBC environment; here ODBC keeps track of connections to data sources and statement instances. The next stage is to connect to one or more data sources (in practice the connections may come and go over the life of the program; they do not all have to be made up front). Then comes the real work: executing SQL statements, either to interrogate the database catalog or the driver to see what it is capable of, or to access the database itself. Finally, the ODBC resources are released in the reverse order to that in which they were acquired. The relevant API functions are shown at the side. Remember that this is a highly simplified picture and many other APIs may be used in real applications. You will probably find, however, that about 15 to 20 APIs cover the majority of the programming you do.

5.3.11 Basic Application Flow
Before an application can use ODBC to connect to a database and its supporting database management system, a data source must be created. The data source defines how to access the database and the database management system. For more information on how to define a data source for the Windows 3.1 client, see section 5.3.32, “Configuring an ODBC Data Source for Windows 3.1” on page 168. For information on how to define a data source for the Windows 95 client, see section 5.3.36, “Configuring an ODBC Data Source for Windows 95” on page 173.


The ODBC interface defines a set of calls to be used for each of these application steps. For example, SQLAllocEnv is used to allocate the ODBC environment. Each of these API calls is documented in the Microsoft ODBC 2.0 Programmer's Reference and SDK Guide. This chapter examines some of the more common ODBC APIs. Coding examples are given using the C language. Also included in this chapter are examples using Visual Basic. For more complete examples in either language, refer to Appendix A, "Example Programs" on page 393.

5.3.12 Programming Prerequisites

• ODBC drivers installed and data sources configured
  − ODBC32.DLL is in the WINDOWS directory (for Windows 95)
  − ODBC.DLL is in the WINDOWS directory (for Windows 3.1)
• A language capable of DLL function calls
  − C is the reference standard
• The ODBC 2.0 Programmer's Reference available
  − A basic understanding of C is needed
• The ODBC Software Developer's Kit is extremely useful but not a requirement
  − Sample code, debugging and administration tools, online documentation
  − Needs a Level 2 Microsoft Developer Network subscription

The setup program of almost any ODBC-enabled application on your PC gives you the opportunity to install the standard ODBC files - the driver manager and the administrator as a minimum. Sometimes you are prompted if the setup program is about to overwrite existing components of the same name; sometimes you are not. Caution: Always check that you have the latest versions of these vital components and that they have not been replaced by earlier versions. This can be the cause of errors that are difficult to track. If you are planning to use a particular tool for your development and intend to use the API, check with the provider that the tool is capable of interfacing with Windows Dynamic Link Libraries. If the tool comes with its own database interface and claims to be ODBC enabled, check whether there are any restrictions (for example: "Can the tool generate code to call stored procedures, and if so, has the tool been tested against AS/400 stored procedures?"). C is the reference standard for the ODBC API, and the Programmer's Reference assumes a working knowledge of that language. Even if you are not a C programmer, make sure you understand how to read C function declarations and understand the C data types.

5.3.13 Using ODBC DLL Functions
This is a subset of the information provided in the ODBC Programmer's Reference for one particular API function. The intention is not to teach you the ramifications of SQLExecDirect, but to illustrate the way in which the functions are documented. Key to the effective use of any API, not least the ODBC API, is a clear understanding of the parameters involved, their data types (ODBC provides typedefs for the standard C data types, which are documented in the SQL.H and SQLEXT.H include files), and the possible return codes from the function. The Programmer's Reference describes these in detail.


C Language: Function prototypes, constant #defines, and so on can be found in SQL.H and SQLEXT.H, which come with Microsoft Visual C++ or the ODBC Software Developer's Kit.

Visual Basic: Function and constant declarations come with Visual Basic Enterprise Edition Version 4.0, the ODBC Software Developer's Kit, or as part of the accompanying examples.

Things are straightforward for the C or C++ Windows programmer, and reasonably easy for the Visual Basic programmer, as declarations of the ODBC API functions are readily available. Users of other development tools may need to check with the supplier of the tool to see if declarations appropriate to the tool are available because, unlike some of the other APIs documented in this manual, ODBC has a very large API set. Also, ODBC makes extensive use of manifest constants, and it is a large undertaking to translate the C definitions of each of these.

5.3.14 API Conformance - Core
All ODBC API functions start with SQL. Splitting them up into the three inclusive levels - Core, Level 1, Level 2 - is convenient, but slightly artificial from the programming point of view. The core API functions are:

SQLAllocConnect
    Obtains a connection handle.
SQLAllocEnv
    Obtains an environment handle. One environment handle is used for one or more connections.
SQLAllocStmt
    Allocates a statement handle.
SQLBindCol
    Assigns storage for a result column and specifies the data type.
SQLCancel
    Cancels an SQL statement.
SQLColAttributes
    Describes attributes of a column in the result set.
SQLConnect
    Connects to a specific driver by data source name, user ID, and password.
SQLDescribeCol
    Describes a column in the result set.
SQLDisconnect
    Closes the connection.
SQLError
    Returns additional error or status information.
SQLExecDirect
    Executes a statement.
SQLExecute
    Executes a prepared statement.


SQLFetch
    Returns a result row.
SQLFreeConnect
    Releases the connection handle.
SQLFreeEnv
    Releases the environment handle.
SQLFreeStmt
    Ends statement processing, closes the associated cursor, discards pending results, and optionally frees all resources associated with the statement handle.
SQLGetCursorName
    Returns the cursor name associated with a statement handle.
SQLNumResultCols
    Returns the number of columns in the result set.
SQLPrepare
    Prepares an SQL statement for later execution.
SQLRowCount
    Returns the number of rows affected by an insert, update, or delete request.
SQLSetCursorName
    Specifies a cursor name.
SQLSetParam
    Assigns storage for a parameter in an SQL statement.
SQLTransact
    Commits or rolls back a transaction.

5.3.15 API Conformance - Level 1
SQLColumns
    Returns the list of column names in specified tables.
SQLDriverConnect
    Connects to a specific driver by connection string or requests that the Driver Manager and driver display connection dialogs for the user.
SQLGetConnectOption
    Returns the value of a connection option.
SQLGetData
    Returns part or all of one column of one row of a result set. (Useful for long data values.)
SQLGetFunctions
    Returns supported driver functions.
SQLGetInfo
    Returns information about a specific driver and data source.
SQLGetStmtOption
    Returns the value of a statement option.
SQLGetTypeInfo
    Returns information about supported data types.


SQLParamData
    Returns the storage value assigned to a parameter for which data will be sent at execution time. (Useful for long data values.)
SQLPutData
    Sends part or all of a data value for a parameter. (Useful for long data values.)
SQLSetConnectOption
    Sets a connection option.
SQLSetStmtOption
    Sets a statement option.
SQLSpecialColumns
    Retrieves information about the optimal set of columns that uniquely identifies a row in a specified table, and the columns that are automatically updated when any value in the row is updated by a transaction.
SQLStatistics
    Retrieves statistics about a single table and the list of indexes associated with the table.
SQLTables
    Returns the list of table names stored in a specific data source.

5.3.16 API Conformance - Level 2
SQLBrowseConnect
    Returns successive levels of connection attributes and valid attribute values. When a value has been specified for each connection attribute, connects to the data source.
SQLColumnPrivileges
    Returns a list of columns and associated privileges for one or more tables.
SQLDataSources
    Returns a list of available data sources.
SQLDescribeParam
    Returns the description for a specific parameter in a statement.
SQLExtendedFetch
    Returns multiple result rows.
SQLForeignKeys
    Returns a list of column names that comprise foreign keys, if they exist for a specified table.
SQLMoreResults
    Determines whether there are more result sets available and, if so, initializes processing for the next result set.
SQLNativeSql
    Returns the text of an SQL statement as translated by the driver.
SQLNumParams
    Returns the number of parameters in a statement.
SQLParamOptions
    Specifies the use of multiple values for parameters.


SQLPrimaryKeys
    Returns the list of column names that comprise the primary key for a table.
SQLProcedureColumns
    Returns the list of input and output parameters, as well as the columns that make up the result set for the specified procedures.
SQLProcedures
    Returns the list of procedure names stored in a specific data source.
SQLSetPos
    Positions a cursor within a fetched block of data.
SQLSetScrollOptions
    Sets options that control cursor behavior.
SQLTablePrivileges
    Returns a list of tables and the privileges associated with each table.


5.3.17 Environments, Connections, Statements

Figure 38. Environments, Connections, Statements

Figure 38 shows the relationship between three important ODBC concepts:

Environment: The environment, as has already been described, involves Windows making available some memory for ODBC to keep track of its run-time information. There is only one environment per ODBC application (or per thread in Windows 95 or NT).

Connections: Within the environment there can be multiple connections, each to a data source. The connections may be to different physical systems, the same system, or any combination.

Statements: Within each connection, multiple statements can be executed, in some cases in parallel (if the database and driver support asynchronous execution of statements).

5.3.17.1 Handles
Handles are identifiers for storage areas allocated by the Driver Manager or individual drivers.

Environment Handle: Global information, including other handles. One per application.

Connection Handle: Information about a connection to a data source. Multiple such handles per environment.

Statement Handle: Information about a particular SQL statement. Multiple such handles per connection.

Those new to Windows programming sometimes find the idea of a handle difficult to grasp. Essentially, a handle can be thought of as an identifier for a resource - in this case an environment, connection, or statement - that is known to ODBC and for which ODBC provides an identifier (the handle) that you can use in your program. Exactly what ODBC decides to store in the handle (which is held as a long integer) does not matter to you; you must only take care that you do not change the value and that you give the variables holding the various handles unique names. Provided you pass the right variable into the API as the appropriate parameter, you will be OK. Some APIs set the handle (for example, SQLAllocEnv), and you must pass in a reference, or pointer, to the variable (C programmers do not need to be told this); some refer to a handle previously set (for example, SQLExecute), and this time you must pass in the variable by value.

5.3.18 Simplified C Example - Data Entry
/* Build up the SQL insert statement with values from dialog box */
sprintf(SQLStmt,
        "insert into cheque (chqno, payee, chqdate, amount) "
        "values(%s, '%s', '%s', %s)",
        szchqno, szpayee, szchqdate, szamount);

/* Allocate handles and connect to data source */
rc = SQLAllocEnv(&henv);
rc = SQLAllocConnect(henv, &hdbc);
rc = SQLConnect(hdbc, DataSource, strlen(DataSource), "", 0, "", 0);

/* Allocate a handle and execute the SQL statement */
rc = SQLAllocStmt(hdbc, &hstmt);
rc = SQLExecDirect(hstmt, SQLStmt, strlen(SQLStmt));

/* Clean up */
rc = SQLFreeStmt(hstmt, SQL_DROP);
rc = SQLDisconnect(hdbc);
rc = SQLFreeConnect(hdbc);
rc = SQLFreeEnv(henv);

5.3.18.1 Simplified C Example
This code closely follows the flow chart previously discussed in Figure 37 on page 133. The code is functional; what is missing is the user interface handling and the variable declarations. Also, no error trapping is included, so that the core functions stand out. The program is designed to insert a row into a table called cheque, which contains payment information for creditors. It is not designed for frequent use; no attempt has been made to optimize performance, for example through the use of prepared statements with parameter markers. Such techniques are discussed later. The first statement builds up the SQL insert statement. SQLAllocEnv and SQLAllocConnect ask ODBC to allocate environment and connection handles respectively. Having obtained a connection handle, it is then used by SQLConnect to make a connection to a data source, whose name has been previously placed in the DataSource variable. The final four parameters of the SQLConnect function call are the user ID, user ID length, password, and password length, to be used to log on to the data source. These have not been supplied in this example, so ODBC uses the Client Access common user ID by default.


SQLAllocStmt is used to get a statement handle, which is fed into SQLExecDirect to send the insert statement to the data source. This API is a good one to use for one-off execution of statements. For repeated execution, SQLExecute is more commonly used. Finally the statement and other handles are released.

5.3.19 ODBC Function Return Codes
Every ODBC API function returns a value of type SQLRETURN (a short integer). There are seven possible return codes, and associated with each is a manifest constant. The following list provides the meaning of each particular code; some codes can be interpreted as an error on the function call, some as success, and some as an indication that more information is needed, or is to come. A particular function may not return all possible codes - see the ODBC Programmer's Reference for the possible values and their precise interpretation for that function. It is very important that return codes be handled in your program, particularly those associated with the execution of SQL statements and the accessing of data from the data source. In many cases the return code is the only reliable way of determining the success of a function.

SQL_SUCCESS
    Function completed successfully; no additional information available.
SQL_SUCCESS_WITH_INFO
    Function completed successfully, possibly with a nonfatal error. The application can call SQLError to retrieve additional information.
SQL_NO_DATA_FOUND
    All rows from the result set have been fetched.
SQL_ERROR
    Function failed. The application can call SQLError to retrieve error information.
SQL_INVALID_HANDLE
    Function failed due to an invalid environment, connection, or statement handle. Programming error.
SQL_STILL_EXECUTING
    A function that was started asynchronously is still executing.
SQL_NEED_DATA
    The driver is asking the application to send parameter data values.

5.3.19.1 SQLError
SQLError returns error or status information. An application typically calls SQLError after a call to an ODBC function has returned SQL_ERROR or SQL_SUCCESS_WITH_INFO. However, there is nothing wrong with calling SQLError after any return code. Some function calls may generate multiple errors, in which case SQLError may be called repeatedly to retrieve information about each error.

Chapter 5. Client/Server Database Serving


As well as posting standard ODBC SQLSTATE information, SQLError will also retrieve native error codes and messages from the data source itself.

Argument        Use     Description
henv            Input   Environment handle or SQL_NULL_HENV.
hdbc            Input   Connection handle or SQL_NULL_HDBC.
hstmt           Input   Statement handle or SQL_NULL_HSTMT.
szSqlState      Output  SQLSTATE as null-terminated string.
pfNativeError   Output  Native error code (specific to the data source).
szErrorMsg      Output  Pointer to storage for the error message text.
cbErrorMsgMax   Input   Maximum length of the szErrorMsg buffer. This must be
                        less than or equal to SQL_MAX_MESSAGE_LENGTH - 1.
pcbErrorMsg     Output  Pointer to the total number of bytes (excluding the
                        null termination byte) available to return in szErrorMsg.

Returns SQL_SUCCESS, SQL_SUCCESS_WITH_INFO, SQL_NO_DATA_FOUND, SQL_ERROR, or SQL_INVALID_HANDLE.

5.3.19.2 Using SQLError
•   Can be used after any ODBC function call
•   Use to obtain error information after a function returns:
    −   SQL_ERROR
    −   SQL_SUCCESS_WITH_INFO
•   Functions can post multiple errors:
    −   Keep calling SQLError while it returns SQL_SUCCESS

The type of error information returned by SQLError depends on the contents of the handle parameters. If the error occurs before the handles are all allocated, it is still possible to call SQLError, but the handles have to be set as follows:

•   To retrieve errors associated with the environment, pass in the value of henv, and set hdbc and hstmt to SQL_NULL_HDBC and SQL_NULL_HSTMT respectively.
•   To retrieve errors associated with a connection, pass in the value of hdbc, and set hstmt to SQL_NULL_HSTMT. The value of henv is ignored.
•   To retrieve errors associated with a particular statement, pass in hstmt. The values of henv and hdbc are ignored.

5.3.19.3 Sample Visual Basic General Error Handler
The following example shows an attempt to provide a general error handler for use in Visual Basic ODBC applications. The routine is designed to be called after any ODBC function has failed to return SQL_SUCCESS, and retrieves any stacked error information and displays it to the user in a message box. It is perhaps of most use while debugging applications. You would certainly want to replace this routine with something more sophisticated in a finished application, returning application-defined error messages to the end user.


Sub DspSQLError (henv As Long, hdbc As Long, hstmt As Long, UserMsg As String)
    ' Uses SQLError() to return error or status messages to the user via MsgBox
    Dim rc As Integer
    Dim SQLState As String * 5
    Dim NativeErrorCode As Long
    Dim ErrorMsg As String * SQL_MAX_MESSAGE_LENGTH
    Dim pcbErrorMsg As Integer
    Dim MsgBoxText As String

    ' A statement execution may lead to several errors being posted.
    ' SQLError retrieves each one, itself returning SQL_SUCCESS.
    rc = SQLError(henv, hdbc, hstmt, SQLState, NativeErrorCode, ErrorMsg, _
                  SQL_MAX_MESSAGE_LENGTH - 1, pcbErrorMsg)
    If rc = SQL_NO_DATA_FOUND Then
        MsgBox "Error information not available", MB_ICONEXCLAMATION, UserMsg
    ElseIf rc = SQL_ERROR Then
        MsgBox "Error with SQLError", MB_ICONEXCLAMATION, UserMsg
    End If

    While rc = SQL_SUCCESS Or rc = SQL_SUCCESS_WITH_INFO
        MsgBox Left$(ErrorMsg, pcbErrorMsg), MB_ICONEXCLAMATION, UserMsg
        rc = SQLError(henv, hdbc, hstmt, SQLState, NativeErrorCode, ErrorMsg, _
                      SQL_MAX_MESSAGE_LENGTH - 1, pcbErrorMsg)
        If rc = SQL_ERROR Then
            MsgBox "Error with SQLError", MB_ICONEXCLAMATION, UserMsg
        End If
    Wend
End Sub


5.3.20 General ODBC Application Flow

Figure 39. General ODBC Application Flow

This is a more detailed version of the previous flow chart (Figure 37 on page 133) and shows the sequence of calls necessary for different types of SQL statement. It mentions a number of ODBC functions not yet discussed:

SQLPrepare
    Sends an SQL statement to the data source for preparation
SQLBindParameter
    Associates program variables with parameter markers
SQLBindCol
    Associates returned columns with program variables
SQLExecute
    Asks the data source to execute a prepared statement, using the current values of any parameter markers
SQLFetch
    Fetches a row of data from a result set
SQLRowCount
    Returns the number of rows affected by an UPDATE, INSERT, or DELETE
SQLNumResultCols
    Returns the number of columns in a result set
SQLDescribeCol
    Returns column name, type, precision, scale, and nullability for a column in the result set


SQLTransact
    Issues a commit or rollback for all active statements for a given connection

5.3.20.1 Parameter Markers
Parameter markers act as placeholders for values that are supplied by the program when the data source is asked to execute the SQL statement. Using SQLPrepare, the statement containing the parameter markers is passed to the data source to be prepared by the SQL optimizer, which builds a plan for the statement and holds it for later reference. Each parameter marker must have a program variable (strictly, a pointer to a program variable) associated with it, and SQLBindParameter is used for this purpose. SQLBindParameter is a complex function to use, and careful study of the relevant section of the ODBC Programmer's Reference is strongly recommended. With most SQL statements it is used to provide input information to the function, but with stored procedures it may also be used to receive data back. Having prepared the statement and bound the parameters, SQLExecute causes the current values of the associated variables to be sent to the data source, where the SQL plan is referenced and the statement executed.

5.3.20.2 SQLBindCol and SQLFetch
SQLBindCol provides a way of associating program variables with the returned columns of a row resulting from the call to SQLFetch, after an SQL select statement has been executed at the data source. SQLBindCol must be executed before SQLFetch.

5.3.20.3 SQLFetch and SQLGetData
SQLGetData provides an alternative to SQLBindCol for retrieving data from the columns of a fetched row. It must be executed after SQLFetch. As a general rule, SQLBindCol is preferable to SQLGetData because its performance overhead is lower: it need only be executed once, rather than after every fetch. However, there are special considerations for Visual Basic. As a housekeeping operation to conserve memory, Visual Basic moves character strings to different locations. So if a string variable is bound to a column, there is no guarantee that a subsequent SQLFetch will place the data in the desired variable (in fact there is a good chance that a General Protection Fault will result). A similar problem can occur with SQLBindParameter. There is a recommended circumvention for this problem, employing Windows memory allocation API functions, documented in the Microsoft Development Library Knowledge Base. However, this method involves some difficult programming that is not totally transportable between Windows 3.1 and Windows 95. Using SQLGetData rather than SQLBindCol, and SQLParamData and SQLPutData in conjunction with SQLBindParameter, produces software that is more in keeping with Visual Basic.


5.3.21 Coding the ODBC APIs
This section shows coding examples that demonstrate using the ODBC application programming interfaces.

Example Programs: Example programs used to demonstrate coding techniques in this redbook are available on the included PC media. Refer to Appendix A, “Example Programs” on page 393 for more information and a guided tour of the application code.

5.3.21.1 Establishing ODBC Connections

SQLAllocEnv
−   Allocates memory for an environment handle.
−   Identifies storage for global information:
    •   Valid connection handles
    •   Current active connection handles
−   Variable type HENV.
−   An application uses a single environment handle.
−   Initializes the ODBC call level interface for use by an application.
−   Must be called by the application prior to calling any other ODBC function.
−   Variable type HENV is defined by ODBC in the SQL.H header file provided by the "C" compiler or ODBC Software Development Kit (SDK). The header file contains a type definition for a far pointer:

        typedef void FAR * HENV;

−   In "C", this statement is coded:

        HENV henv;
        SQLAllocEnv(&henv);

−   In Visual Basic, this statement is coded:

        Dim henv As Long
        SQLAllocEnv(henv)

SQLAllocConnect
−   Allocates memory for a connection handle within the environment.
−   Identifies storage for information about a particular connection:
    •   Variable type HDBC.
    •   An application can have multiple connection handles.
−   Application must request a connection handle prior to connecting to the data source.
−   In "C", this statement is coded:

        HDBC hdbc;
        SQLAllocConnect(henv,&hdbc);

−   In Visual Basic, this statement is coded:

        Dim hdbc As Long
        SQLAllocConnect(henv,hdbc)

SQLConnect
−   Loads the driver and establishes a connection.
−   Connection handle references information about the connection.
−   Data source name is coded into the application.
−   In this example, user ID and password are blank because they are supplied by the Client Access/400 router.

        UCHAR source[] = "DBFIL";
        UCHAR uid[] = "";
        UCHAR pwd[] = "";
        SQLConnect(hdbc,source,SQL_NTS,uid,SQL_NTS,pwd,SQL_NTS);

    Note: SQL_NTS indicates that the argument is a null-terminated string.

SQLDriverConnect
−   Alternative to SQLConnect.
−   Allows the driver manager to obtain login information from the user.
−   Displays dialog boxes (optional).

5.3.21.2 Executing ODBC Functions

SQLAllocStmt
−   Allocates memory for information about an SQL statement.
−   Application must request a statement handle prior to submitting SQL statements.
−   Variable type HSTMT.

        HSTMT s_create;
        SQLAllocStmt(hdbc,&s_create);

SQLExecDirect
−   Executes a preparable statement.
−   Fastest way to submit an SQL string for one-time execution.
−   If rc is not equal to SQL_SUCCESS, SQLError is used to find the cause of the error condition.

        UCHAR stmt[] = "CREATE TABLE NAMEID (ID INTEGER, NAME VARCHAR(50))";
        rc = SQLExecDirect(s_create,stmt,SQL_NTS);

−   Return codes:
    -   SQL_SUCCESS
    -   SQL_SUCCESS_WITH_INFO
    -   SQL_ERROR
    -   SQL_INVALID_HANDLE

SQLError

        SQLError(henv,hdbc,s_create,szSqlState,&pfNativeError,
                 szErrorMsg,cbErrorMsg);

−   szSqlState - 5-character string:
    -   00000 = success
    -   01004 = data truncated
    -   07001 = wrong number of parameters
−   pfNativeError - specific to the data source
−   szErrorMsg - error message text


5.3.21.3 Executing Prepared Statements
If an SQL statement is used more than once, it is best to have the statement prepared once and then executed repeatedly. When a statement is prepared, variable information can be represented by question marks ("?"), also called parameter markers. When the statement is executed, the parameter markers are replaced with the real variable values. Preparing the statement is done at the server: the SQL statement is compiled and the access plan is built, which allows the statement to be executed much more efficiently. Although still dynamic SQL, the result is much closer to static SQL in performance. When the database server prepares the statements, it saves them in a special AS/400 object called a package (*SQLPKG). This approach is called extended dynamic SQL. The creation of packages is done automatically by the driver; an option is provided to turn off package support. This is discussed later in the chapter under 5.3.33, “Performance Tuning IBM's ODBC Driver” on page 171.

SQLPrepare
−   Prepares an SQL string for execution:

        HSTMT s_insert;
        SQLAllocStmt(hdbc,&s_insert);
        UCHAR sqlstr[] = "INSERT INTO NAMEID VALUES (?,?)";
        SQLPrepare(s_insert,sqlstr,SQL_NTS);

    Note: SQL_NTS indicates that the argument is a null-terminated string.

SQLBindParameter
−   Allows the application to specify storage, data type, and length associated with a parameter marker in an SQL statement. In the example, parameter 1 is found in a signed double-word field called id, while parameter 2 is found in an unsigned character array called name.

        SDWORD id;
        UCHAR name[51];
        SQLBindParameter(s_insert,1,SQL_PARAM_INPUT,
                         SQL_C_LONG,SQL_INTEGER,
                         0,0,&id,0,NULL);
        SQLBindParameter(s_insert,2,SQL_PARAM_INPUT,
                         SQL_C_CHAR,SQL_VARCHAR,
                         parmLength,0,
                         name,(UDWORD)sizeof(name),NULL);

SQLExecute
−   Executes a prepared statement, using current values of parameter markers:

        id = 500;
        strcpy(name,"TEST");
        SQLExecute(s_insert);
        id = 600;
        strcpy(name,"ABCD");
        SQLExecute(s_insert);

SQLParamData / SQLPutData


When using a language such as Visual Basic, which manages memory outside of the programmer′s control, it is better to supply parameters at execution time, rather than binding them to a memory location. This is done using SQLParamData and SQLPutData: − − − They work together to supply parameters. SQLParamData moves the pointer to the next parameter. SQLPutData then supplies the data for that parameter.

' s_parm is a character buffer to hold the parameters
' s_parm(1) contains the first parameter
Static s_parm(2) As String
s_parm(1) = "500"
s_parm(2) = "TEST"
SQLBindParameter(s_insert, 1, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_CHAR, 4, 0, 0, 0&, 4, cbvalue)
SQLBindParameter(s_insert, 2, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_CHAR, 4, 0, 0, 0&, 4, cbvalue)
cbvalue = SQL_DATA_AT_EXEC    ' the parms will be supplied at run time
SQLExecute(s_insert)
SQLParamData(s_insert, token)    ' Param 1
SQLPutData(s_insert, ByVal s_parm(1), Len(s_parm(1)))
SQLParamData(s_insert, token)    ' Param 2
SQLPutData(s_insert, ByVal s_parm(2), Len(s_parm(2)))
SQLParamData(s_insert, token)    ' No more params; the statement executes
Notes:
1.  These two statements operate together to supply unbound parameter values when the statement is executed.
2.  Each call to SQLParamData moves the internal pointer to the next parameter for SQLPutData to supply data to. After the last parameter is filled, SQLParamData must be called again for the statement to be executed.
3.  If data for parameter markers is to be supplied using SQLPutData, the parameter must be bound with the cbValue parameter set to a variable whose value is SQL_DATA_AT_EXEC when the statement is executed.

5.3.21.4 Transaction Control (Commit)
The DB2/400 ODBC driver supports the SQL_AUTOCOMMIT and SQL_TXN_ISOLATION options for controlling the use of commitment control. The SQLTransact API must be used for committing or rolling back a transaction.

SQLTransact − Requests a commit or rollback for update, insert, or delete transactions:

SQLTransact(hdbc, SQL_COMMIT);

5.3.21.5 Retrieving Results
When executing some SQL statements, results are returned to the application program. For example, if an SQL Select statement is executed, the selected rows are returned in a result set. An SQL Fetch statement is then used to sequentially retrieve the selected rows from the result set into the application program′s internal storage. In order to work with all of the rows in a result set, you execute the Fetch statement until no more rows are returned.


You may also issue a Select statement that does not specify which columns you want returned. For example, "SELECT * FROM RWM.DBFIL" selects all columns. In that case you may not know which columns, or how many, will be returned.

SQLNumResultCols
−   Returns the number of columns in a result set.
−   A storage buffer that receives the information is passed as a parameter.

        SQLNumResultCols(hstmt, &nresultcols);

SQLDescribeCol
−   Returns the result descriptor for one column in a result set:
    -   Column name
    -   Column type
    -   Column size

This is used along with SQLNumResultCols to retrieve information about the columns returned. Using this approach, as opposed to hard coding the information in the program, makes for more flexible programs. The programmer first uses SQLNumResultCols to find out how many columns were returned in the result set by a select statement. Then a loop is set up to use SQLDescribeCol to retrieve information about each column.

        SQLDescribeCol(hstmt, i + 1, colname, (SWORD)sizeof(colname),
                       &colnamelen, &coltype, &collen[i], &scale, &nullable);

SQLBindCol
−   Assigns the storage and data type for a column in a result set:
    -   Storage buffer that receives the information.
    -   Length of the storage buffer.
    -   Data type conversion.

        SQLBindCol(hstmt, 1, SQL_C_LONG, &id, 0, NULL);
        SQLBindCol(hstmt, 2, SQL_C_CHAR, name, (SDWORD)sizeof(name), &namelen);

This is not usually used with Visual Basic; SQLGetData would be used instead.

SQLFetch
−   Each time SQLFetch is called, the driver fetches the next row. Bound columns are stored in the locations specified. Data for unbound columns may be retrieved using SQLGetData.

        SQLFetch(hstmt);

Visual Basic manages memory, particularly string memory, outside of the programmer's control and may move variables in memory at any time. Because of this, binding columns to memory locations is very unsafe and usually results in application errors. SQLGetData should be used in place of bound columns in Visual Basic programs.

SQLGetData


−   Retrieves data for unbound columns after a fetch. In this example, four columns are returned and SQLGetData is used to move them to the correct storage locations.

        SQLFetch(s_Customer1)
        SQLGetData(s_Customer1, 1, SQL_C_CHAR, szName, 16, 0&)
        SQLGetData(s_Customer1, 2, SQL_C_FLOAT, &iDiscount, 0, 0&)
        SQLGetData(s_Customer1, 3, SQL_C_CHAR, szCredit, 2, 0&)
        SQLGetData(s_Customer1, 4, SQL_C_FLOAT, &iTax, 0, 0&)

5.3.21.6 Ending ODBC Functions
The last task before ending an ODBC application is to free the resources and memory allocated by the application, so that they are available the next time the application is run.

SQLFreeStmt − Stops processing associated with a specific statement handle.

SQLFreeStmt(hstmt,option);

SQLDisconnect − Closes the connection associated with a specific connection handle.

SQLDisconnect(hdbc);

SQLFreeConnect − Releases connection handle and frees all memory associated with a connection handle.

SQLFreeConnect(hdbc);

SQLFreeEnv − Frees environment handle and releases all memory associated with the environment handle.

SQLFreeEnv(henv);

5.3.22 ODBC Stored Procedures
Stored procedures can be used to improve the performance of an ODBC application. Using stored procedures can significantly reduce the communication I/O requests between the client and server programs. Calling a stored procedure is equivalent to calling an AS/400 program. The performance advantage is that the SQL calls can be placed inside the AS/400 program, where static SQL can be used or SQL can be replaced by native database I/O. Another advantage of stored procedures is that AS/400 developers can write all the database access logic of the application even if they have no experience with Visual Basic or client/server applications. This has a positive secondary effect: you can debug and change the most important part of the application (file I/O) on the AS/400 system, and you can change the behavior of the whole application without modifying the PC code. When using a stored procedure, input to the AS/400 program is passed as parameters. Output can be returned as parameters or result sets.


The steps involved are as follows:
1.  Declare or create the stored procedure.
2.  Prepare the call of the stored procedure using SQLPrepare.
3.  Bind the parameters for input and output.
4.  Execute the call to the stored procedure.

5.3.22.1 Stored Procedures Host Code (Parameter Passing)
This example calls an RPG program named NEWORDRPG. Values are passed as input parameters, and the results are to be returned as output parameters. The AS/400 program will accept the input as parameters and place the output in the defined parameters before returning control to the PC program. The code shown below demonstrates how the parameters are declared in the RPG program.

C           *ENTRY    PLIST
C* input parameters
C                     PARM           PRMIDS
C                     PARM           ILINES
C                     PARM           INVALUE
C* output parameters
C                     PARM           NONREPEAT
C                     PARM           OINAME
C                     PARM           OSTQTY
C                     PARM           OBORG
C                     PARM           OIPRICE
C                     PARM           OOLAMNT

For a complete listing of the AS/400 program, refer to Appendix A, “Example Programs” on page 393.

5.3.22.2 Stored Procedures Client Code
In this example, we show using a stored procedure with parameter passing in Visual Basic.
1.  Create the stored procedure. The creation of the stored procedure has to be done only once, and can be done on the AS/400 system. Unless the stored procedure is deleted, it does not have to be re-created to be run again. If it is done on the AS/400 system, it can be done using interactive SQL or a user-written program; in that case the 5763-ST1 licensed program (SQL Toolkit) is required. Shown here is the code to drop an existing stored procedure and create a new one from the client. You create a stored procedure named newordrpg, which calls an ILE RPG program also named newordrpg.

Dim ret As Integer
Dim Query As String
Dim szDropProc As String
Dim szCreatProc As String

ret = SQLAllocStmt(a_hdbc, s_StoredProc)
szDropProc = "drop procedure csdb.newordrpg"
szCreatProc = "Create procedure csdb.newordrpg(in p1 char(10), in p2 dec(3,0), in p3 char(195),"
szCreatProc = szCreatProc & " out p4 varchar(65), out p5 char(360), out p6 char(45),"
szCreatProc = szCreatProc & " out p7 char(15), out p8 char(75), out p9 char(105))"
szCreatProc = szCreatProc & " external name csdb.newordrpg language RPGLE general"
ret = SQLExecDirect(s_StoredProc, szDropProc, SQL_NTS)
ret = SQLExecDirect(s_StoredProc, szCreatProc, SQL_NTS)
If (ret = SQL_ERROR) Then
    Call GiveErrMsg(s_StoredProc, "Error on Create Stored Proc CALL.")
    SQLInit = False
End If
2. Prepare the stored procedure call.

Query = "CALL NEWORDRPG (?, ?, ?, ?, ?, ?, ?, ?, ?)"
' Prepare the stored procedure stmt.
ret = SQLPrepare(s_StoredProc, Query, SQL_NTS)
If (ret = SQL_ERROR) Then
    Call GiveErrMsg(s_StoredProc, "Error on SQLPrepare of Stored Proc CALL.")
    SQLInit = False
End If
3. Bind the parameters. The following code demonstrates the binding of parameters for the stored procedure. Note that the parameters will be passed in at execution time.
IcbValue = SQL_DATA_AT_EXEC
ret = SQLBindParameter(s_StoredProc, 1, SQL_PARAM_INPUT,  SQL_C_CHAR,  SQL_CHAR,    10,  0, 0&,       aLen(1), IcbValue)
ret = SQLBindParameter(s_StoredProc, 2, SQL_PARAM_INPUT,  SQL_C_SHORT, SQL_DECIMAL,  3,  0, 0&,       aLen(2), IcbValue)
ret = SQLBindParameter(s_StoredProc, 3, SQL_PARAM_INPUT,  SQL_C_CHAR,  SQL_CHAR,   195,  0, 0&,       aLen(3), IcbValue)
ret = SQLBindParameter(s_StoredProc, 4, SQL_PARAM_OUTPUT, SQL_C_CHAR,  SQL_CHAR,    61,  0, INFOBACK,  61, aLen(4))
ret = SQLBindParameter(s_StoredProc, 5, SQL_PARAM_OUTPUT, SQL_C_CHAR,  SQL_CHAR,   360,  0, INNAME,   360, aLen(5))
ret = SQLBindParameter(s_StoredProc, 6, SQL_PARAM_OUTPUT, SQL_C_CHAR,  SQL_CHAR,    45,  0, STQTY,     45, aLen(6))
ret = SQLBindParameter(s_StoredProc, 7, SQL_PARAM_OUTPUT, SQL_C_CHAR,  SQL_CHAR,    15,  0, BORG,      15, aLen(7))
ret = SQLBindParameter(s_StoredProc, 8, SQL_PARAM_OUTPUT, SQL_C_CHAR,  SQL_CHAR,    75,  0, IPRICE,    75, aLen(8))
ret = SQLBindParameter(s_StoredProc, 9, SQL_PARAM_OUTPUT, SQL_C_CHAR,  SQL_CHAR,   105,  0, OLAMT,    105, aLen(9))

4. Execute the stored procedure. The following code shows executing the stored procedure. The output from the stored procedure is returned to the client program by the host program in the output parameters. Since this example uses Visual Basic, we use SQLParamData/SQLPutData to pass in the three input parameters. The final SQLParamData causes the execution on the call to the stored procedure.

ret = SQLExecute(s_StoredProc)
If ret = SQL_ERROR Then
    Call GiveErrMsg(s_StoredProc, "Error on SQLExec of Stored Proc.")
ElseIf ret = SQL_SUCCESS Or ret = SQL_NEED_DATA Then
    ret = SQLParamData(s_StoredProc, aToken)    ' Parameter 1
    ret = SQLPutData(s_StoredProc, ByVal INWDC, 10)
    ret = SQLParamData(s_StoredProc, aToken)    ' Parameter 2
    ret = SQLPutData(s_StoredProc, OLINES, Len(OLINES))
    ret = SQLParamData(s_StoredProc, aToken)    ' Parameter 3
    ret = SQLPutData(s_StoredProc, ByVal INORDINF, 195)
    ret = SQLParamData(s_StoredProc, aToken)    ' Executes the call
End If


5.3.23 Calling Stored Procedures Using Result Sets
Using result sets allows the client program to retrieve the information returned by the stored procedure with a fetch command. This results in a more SQL-like interface; that is, using SQLExecute followed by one or more SQLFetch statements. The DB2/400 ODBC driver supports result sets, including multiple result sets through Client Access ODBC. As part of this support, three SQL statements, CREATE PROCEDURE, DROP PROCEDURE, and SET RESULT SETS, were added, along with two new catalog views, QSYS2/SYSPROCS and QSYS2/SYSPARMS, which store information about stored procedures. Prior to this support, the stored procedure had to be DECLARED.

The CREATE PROCEDURE statement is an alternative way to define a stored procedure and its attributes to the database. This is the preferred method for ODBC clients, because only procedures defined through CREATE PROCEDURE have their attributes returned on the ODBC SQLProcedures and SQLProcedureColumns API calls. For this reason, it is recommended that ODBC clients define all procedures using the CREATE PROCEDURE statement instead of the DECLARE PROCEDURE statement.

If a procedure is defined both by a DECLARE PROCEDURE statement in an extended dynamic package and by a CREATE PROCEDURE statement, the definition provided by the DECLARE PROCEDURE takes precedence over that provided by the CREATE PROCEDURE. For this reason, it is wise to delete the extended dynamic package and not perform the DECLARE PROCEDURE statement when the package is primed again. Note that the CREATE PROCEDURE needs to be done only once on the system, does not have to run through ODBC, and provides a definition that is available to all ODBC as well as native OS/400 applications.

The DROP PROCEDURE statement removes a procedure definition from the system catalog; it can remove a definition created with CREATE PROCEDURE, but not one created with the DECLARE PROCEDURE statement.
The SET RESULT SETS statement provides a means to selectively include SQL cursors opened by a stored procedure to be in the set of result sets returned to the client for each execution of the stored procedure. For instance, if a stored procedure opens two SQL cursors, C1 and C2, and the design is such that only result sets for cursor C2 are available to the client, the stored procedure contains the following statement:

SET RESULT SETS CURSOR C2
If the stored procedure did not contain the SET RESULT SETS statement, the client would have available result sets for both cursors C1 and C2. In the case where a SET RESULT SETS statement is not executed, result sets are returned in the order in which the corresponding cursors in the stored procedure were opened. When using an array result set, a SET RESULT SETS statement is required.


5.3.24 Examples Using Stored Procedure with Result Sets
This section shows examples of using ODBC with result sets. Two examples are shown:

•   Using an array result set
•   Using an SQL result set

5.3.24.1 Array Result Sets
1.  Create the stored procedure. This coding example demonstrates dropping and then creating a stored procedure. The creation of the stored procedure needs to be done only once. It can be done at the AS/400 system or, as demonstrated here, at the client personal computer. The stored procedure is named nordset and calls an ILE RPG program also named nordset. In this example, only the input data is passed as parameters; the output is returned as an array result set.

Dim szDropProc As String
Dim szCreatProc As String

ret = SQLAllocStmt(a_hdbc, s_StoredProc)
' drop old stored procedure
szDropProc = "drop procedure csdb.nordset"
szCreatProc = "Create procedure csdb.nordset(in p1 char(10), in p2 dec(3,0), in p3 char(195))"
szCreatProc = szCreatProc & " result sets 1 external name csdb.nordset language RPGLE general"
ret = SQLExecDirect(s_StoredProc, szDropProc, SQL_NTS)
ret = SQLExecDirect(s_StoredProc, szCreatProc, SQL_NTS)
If (ret = SQL_ERROR) Then
    Call GiveErrMsg(s_StoredProc, "Error on Create of Stored Proc")
    SqlInit = False
End If

2.  Prepare the stored procedure statement.

' Prepare the call.
Query = "CALL NORDSET(?, ?, ?)"
' Prepare the stored procedure stmt.
ret = SQLPrepare(s_StoredProc, Query, SQL_NTS)
If (ret = SQL_ERROR) Then
    Call GiveErrMsg(s_StoredProc, "Error on Prepare of Stored Proc")
    SqlInit = False
End If
3.  Bind the input parameters. Before the stored procedure can be called, the input parameters must be bound. In this case, use the option SQL_DATA_AT_EXEC to pass in the parameters at execution time, because of the way Visual Basic manages memory.


IcbValue = SQL_DATA_AT_EXEC
ret = SQLBindParameter(s_StoredProc, 1, SQL_PARAM_INPUT, SQL_C_CHAR,  SQL_CHAR,     10, 0, 0&, aLen(1), IcbValue)
ret = SQLBindParameter(s_StoredProc, 2, SQL_PARAM_INPUT, SQL_C_SHORT, SQL_DECIMAL,   3, 0, 0&, aLen(2), IcbValue)
ret = SQLBindParameter(s_StoredProc, 3, SQL_PARAM_INPUT, SQL_C_CHAR,  SQL_CHAR,    195, 0, 0&, aLen(3), IcbValue)
4. Execute the stored procedure. When you issue the SQLExecute statement, nothing happens until you pass in the parameters. SQLParamData and SQLPutData are used to pass in the parameters. The final SQLParamData causes the statement to be executed.

ret = SQLExecute(s_StoredProc)
If ret = SQL_ERROR Then
    Call GiveErrMsg(s_StoredProc, "Error on SQLExec of Stored Proc.")
Else
    If ret = SQL_SUCCESS Or ret = SQL_NEED_DATA Then
        ret = SQLParamData(s_StoredProc, aToken)    ' Parameter 1
        ret = SQLPutData(s_StoredProc, ByVal INWDC, 10)
        ret = SQLParamData(s_StoredProc, aToken)    ' Parameter 2
        ret = SQLPutData(s_StoredProc, OLINES, Len(OLINES))
        ret = SQLParamData(s_StoredProc, aToken)    ' Parameter 3
        ret = SQLPutData(s_StoredProc, ByVal INORDINF, 195)
        ret = SQLParamData(s_StoredProc, aToken)
    End If
End If
5. Retrieve the results. SQLFetch is used to retrieve the returned data from the result set that was created by the AS/400 stored procedure program. The result set is returned as an array result set. SQLGetdata is used to move the returned data to the client personal computer storage locations. Again you do not bind the output locations to the ODBC statement because of Visual Basic restrictions.

ret = SQLFetch(s_StoredProc)
If ret = SQL_NO_DATA_FOUND Then
    Call GiveErrMsg(s_StoredProc, "Error on SQLExec of Stored Proc.")
End If
ret = SQLGetData(s_StoredProc, 1, SQL_CHAR, ByVal outname,   361, IColLen)
ret = SQLGetData(s_StoredProc, 2, SQL_CHAR, ByVal outqty,     46, IColLen)
ret = SQLGetData(s_StoredProc, 3, SQL_CHAR, ByVal outorg,     16, IColLen)
ret = SQLGetData(s_StoredProc, 4, SQL_CHAR, ByVal outprice,   76, IColLen)
ret = SQLGetData(s_StoredProc, 5, SQL_CHAR, ByVal outamt,    106, IColLen)
ret = SQLGetData(s_StoredProc, 6, SQL_CHAR, ByVal outrepeat,  62, IColLen)

6. Host code - setting the result set. The AS/400 program places the information to be returned to the PC client program by executing the following statement:


C/EXEC SQL
C+ SET RESULT SETS ARRAY :OINAME FOR 1 ROWS
C/END-EXEC
For complete listings of the programs, refer to Appendix A, “Example Programs” on page 393.

5.3.24.2 SQL Result Sets
This example calls an RPG program named CUSTSRCH, which retrieves a result set containing the customer ID, last name, first name, and middle initials. The host program expects three parameters:
•  The warehouse ID, passed as a 4-character string field.
•  The district ID, passed as a Visual Basic Integer field (SQL SMALLINT).
•  The last name search pattern, passed as a 16-character string field.

The program searches a customer table of approximately 30,000 records using a logical file keyed by last name. The result set contains all customers with last names greater than or equal to the search criteria received as parameters. The program on the PC issues the DROP PROCEDURE and CREATE PROCEDURE statements before issuing the CALL. After returning from the call, the ODBC SQLFetch and SQLGetData functions are used to retrieve the data. In this example, the returned data is fetched as rows from the result set rather than being returned in an array. 1. Create the stored procedure. Note: The function mCreateProcedureCUSTSRCH creates the procedure using the CREATE PROCEDURE SQL statement. For more information on CREATE PROCEDURE, see 5.3.23, "Calling Stored Procedures Using Result Sets" on page 154.

Function mCreateProcedureCUSTSRCH (lHstmt As Long) As Integer
    Dim lSQl As String
    Dim lRc As Long

    lSQl = "Create Procedure CSDB.CUSTSRCH "
    lSQl = lSQl & "( In P1 Char(5), In P2 SmallInt, In P3 Char(17)) "
    lSQl = lSQl & "Result Sets 1 External Name CUSTSRCH "
    lSQl = lSQl & "Language RPG General"

    ' Allocate ODBC handle
    lRc = SQLAllocStmt(a_hdbc, lHstmt)
    ' Prepare SQL
    lRc = SQLPrepare(lHstmt, lSQl, Len(lSQl))
    ' Execute the statement
    If SQLExecute(lHstmt) <> SQL_SUCCESS Then
        mCreateProcedureCUSTSRCH = False
        Exit Function
    End If

    mCreateProcedureCUSTSRCH = True
End Function

Chapter 5. Client/Server Database Serving


2. Prepare and execute the stored procedure. This subroutine shows allocating an ODBC statement, preparing the statement to call a stored procedure, then executing the statement. The results are returned in a result set.

Sub cmdSearch_Click ()
    Dim lRc As Integer
    Dim aToken As Long
    Dim lCbValue As Long
    Dim lWHouseLen As Long
    Dim lLNameLen As Long
    Dim lHstmtCreateProc As Long
    Dim lWHouse As String
    Dim lLName As String
    Dim lDistrict As Integer

    If txtLastNameSearch = "" Then
        Exit Sub
    End If
    Screen.MousePointer = 11
    ' Prepare only the first time through
    If mSearchPrepared Then
        lRc = SQLFreeStmt(mHstmtSearch, SQL_CLOSE)
    Else
        ' Allocate ODBC handle
        lRc = SQLAllocStmt(a_hdbc, mHstmtSearch)
        lSQl = "Call CSDB.CUSTSRCH (?, ?, ?)"
        ' Prepare SQL
        lRc = SQLPrepare(mHstmtSearch, lSQl, Len(lSQl))
        mSearchPrepared = Not mSearchPrepared
    End If
    lCbValue = SQL_DATA_AT_EXEC
    lRc = SQLBindParameter(mHstmtSearch, 1, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_CHAR, lWHouseLen, 0, 0&, lWHouseLen, lCbValue)
    lRc = SQLBindParameter(mHstmtSearch, 2, SQL_PARAM_INPUT, SQL_C_DEFAULT, SQL_SMALLINT, 0, 0, 0&, 0, lCbValue)
    lRc = SQLBindParameter(mHstmtSearch, 3, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_CHAR, lLNameLen, 0, 0&, lLNameLen, lCbValue)
    ' Read input parameters from screen
    lWHouse = String$(lWHouseLen - 1 - Len(frmNewOrder!txtWarehouse), "0") & frmNewOrder!txtWarehouse
    lDistrict = Val(frmNewOrder!txtDistrict)
    lLName = txtLastNameSearch
    lRc = SQLExecute(mHstmtSearch)
    If lRc = SQL_SUCCESS Or lRc = SQL_NEED_DATA Then
        lRc = SQLParamData(mHstmtSearch, aToken)   ' Parameter 1
        lRc = SQLPutData(mHstmtSearch, ByVal lWHouse, Len(lWHouse))
        lRc = SQLParamData(mHstmtSearch, aToken)   ' Parameter 2
        lRc = SQLPutData(mHstmtSearch, lDistrict, 0&)
        lRc = SQLParamData(mHstmtSearch, aToken)   ' Parameter 3
        lRc = SQLPutData(mHstmtSearch, ByVal lLName, Len(lLName))
        lRc = SQLParamData(mHstmtSearch, aToken)   ' No more parameters
        Grid1.Rows = 1
        cmdGetMore_Click
    Else
        GiveErrMsg mHstmtSearch, "Error During Execute"
        Screen.MousePointer = 0
    End If
End Sub
3. Retrieve the information from the result set. This subroutine shows using SQLFetch to retrieve the selected rows from the result set. SQLFetch is executed until no data is returned. The ODBC statement is then freed, so it can be used again.

lRc = SQLFetch(mHstmtSearch)            ' Get 1st record
If lRc = SQL_NO_DATA_FOUND Then
    cmdGetMore.Enabled = False
    Exit Sub
End If
For lIx = 1 To lcPAGE_SIZE
    lRc = SQLGetData(mHstmtSearch, 1, SQL_CHAR, ByVal lCustomerId, 5, lColLen)
    lRc = SQLGetData(mHstmtSearch, 2, SQL_CHAR, ByVal lLastName, 17, lColLen)
    lRc = SQLGetData(mHstmtSearch, 3, SQL_CHAR, ByVal lFirstName, 17, lColLen)
    lRc = SQLGetData(mHstmtSearch, 4, SQL_CHAR, ByVal lMidInitials, 3, lColLen)
    ' Move data to screen buffer
    lRc = SQLFetch(mHstmtSearch)        ' Get next record
    If lRc = SQL_NO_DATA_FOUND Then
        cmdGetMore.Enabled = False
        Exit For
    End If
Next
lRc = SQLFreeStmt(mHstmtSearch, SQL_CLOSE)

5.3.24.3 Stored Procedure Using Result Sets Host Code
This RPG program is called by the previous Visual Basic example. It uses SQL to access the database, selects all records matching the search criteria, and returns the data in a result set using the SQL SET RESULT SETS statement.

 *
I*
I* Defines District ID As a Small Integer (Binary 2.0)
I*
I#DSTDS      DS
I                                        B   1   20#DISTR
 *
C           *ENTRY    PLIST
C                     PARM           $WHID   4


C                     PARM           #DSTDS
C                     PARM           $SRCH  16
C*
C* Copy District Id to an RPG native variable with the same attributes
C* as the field in the Customer Master File (3,0), for performance
C*
C                     Z-ADD#DISTR    DIST    30
C*
C/Exec Sql
C+ Declare C1 Cursor For
C+   Select
C+     CID,                 -- Customer ID
C+     CLAST,               -- Last Name
C+     CFIRST,              -- First Name
C+     CINIT                -- Middle Initials
C+   From CSTMR             -- From Customer Master File
C+   Where CWID = :$WHID
C+     And CDID = :DIST
C+     And CLAST >= :$SRCH
C+   Order By CLAST, CFIRST -- Sort by Last Name, First Name
C+   For Fetch Only         -- Read-only cursor
C/End-Exec
C*
C/Exec Sql
C+   Open C1
C/End-Exec
C*
C/Exec Sql
C+   Set Result Sets Cursor C1
C/End-Exec
C*
C                     RETRN

5.3.25 Using Parameters with AS/400 Languages
The format of the parameters is very important when sending data to and receiving data from the AS/400 system. Each language also has its own implementation of parameter handling. For example, in RPG, a Visual Basic Integer (SQL SMALLINT) field must be received as a data structure containing a single subfield. The subfield must be binary, with a length of 2 and zero decimal positions. The data structure name itself must be specified on the *ENTRY parameter list of the RPG program.

I*
I* Defines District ID As a Small Integer (Binary 2.0)
I*
I#DSTDS      DS
I                                        B   1   20#DISTR
 *
C           *ENTRY    PLIST
C                     PARM           $WHID   4
C                     PARM           #DSTDS
C                     PARM           $SRCH  16
C*


If you use a parameter received as a data structure as part of your selection criteria on a SELECT statement, you should copy the parameter into a host variable with the same attributes as the database field you are querying. Otherwise, the query optimizer will have to perform a data type conversion and poor performance may result. In the previous example, if the District Id is declared in the database as a (3,0) field, then you should include the following line just before the SELECT:

C*
C                     Z-ADD#DISTR    DIST    30
C*

You should then use DIST rather than #DISTR in your SELECT.

For a complete list of parameter formats for all languages, see the DB2 for OS/400 SQL Programming manual, SC41-3611.

5.3.26 Commitment Control Considerations
It is important to keep in mind the commitment control level being used when you execute stored procedures. When you compile a host program of type SQLxxx (such as SQLRPG), the compiler defaults to COMMIT(*CHG). If your files are not journaled, you will not be able to retrieve data unless your data source is also set to commitment control *CHG. If you are not using commitment control, change your compiler option to COMMIT(*NONE). If you use native code to perform your database changes, keep in mind the following:

•  Your files must be opened under commitment control. In RPG, you can accomplish this by using a continuation line (K line) after your F specification.
•  You should not issue any COMMIT or ROLLBACK operations in your host program unless your design requires it. Normally, the commits and rollbacks are performed from the PC using the SQLTransact API.
•  Your program should include a parameter to be used as a return code, to let the calling program know whether the operation was a success.

5.3.27 Using Optimistic Record Locking
A commonly used method to handle a multi-user environment is optimistic record locking. With this method, you use SQL SELECTs with the FOR FETCH ONLY clause, so all your cursors are read only. Your file design should include change date and change time fields. Every time a program changes a record, the current date and time are posted. When a program requests to update or delete a record, the change date and time held by the program are compared with the ones in the file. If they do not match, another program has changed the record since it was read. The program should then end, sending a return code to the calling program. Here are the steps to follow:

•  Use the "For Fetch Only" clause when selecting records. This also makes your reads faster, since the AS/400 system can use blocking on your file.

•  Include the changed date and changed time fields in your file design. Every time a record is changed, the current date and time should be updated.
•  When updating records, compare the changed date and changed time in the file with the changed date and changed time from when the record was read. If they are the same, proceed with the update. If they are different, end the program and send an error code back to the caller.
•  If you are using commitment control, do not forget to commit or roll back your changes, depending on your application design.
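The compare step above can be sketched in C. This is a hypothetical helper, not code from this redbook; the -77 value mirrors the "record changed" return code used by the sample delete procedure later in this chapter.

```c
/* Hypothetical helper, not from the redbook: the compare step of
 * optimistic record locking.  The change date/time captured when the
 * record was read are compared with the values currently in the file.
 */
#include <stddef.h>

#define LOCK_OK      1    /* record unchanged, update may proceed */
#define LOCK_STALE -77    /* record changed since it was read     */

static int check_unchanged(long read_date, long read_time,
                           long file_date, long file_time)
{
    if (read_date != file_date || read_time != file_time)
        return LOCK_STALE;      /* another job changed the record */
    return LOCK_OK;
}
```

If the check fails, the server program ends without updating and the client decides whether to re-read and retry.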

5.3.27.1 Example of Stored Procedure To Delete Records
The following stored procedure uses RPG native code to delete a record from a file:

•  Receives the parameters using the RPG parameter convention. The parameters include the changed date and time, received as SQL integers (Visual Basic Long fields), and the return code, received as a SQL SMALLINT (Visual Basic Integer field).
•  Opens the file.
•  Reads and locks the record.
•  If the record is not found, sends an error code and ends.
•  If some other error occurs (such as a record lock), sends an error code and ends.
•  If the date or time has changed, sends an error code and ends.
•  If all is OK, deletes the record and returns 1 (the number of records deleted) in the return code.

 * Delete
FM1FILEL0UF  E           K        DISK                    KCOMIT
 *
I*
I* Change Date
I@CDTDS      DS
I                                        B   1   40@CDT
I*
I* Change Time
I@CTMDS      DS
I                                        B   1   40@CTM
I*
I* Return Code
I@RCDDS      DS
I                                        B   1   20#RCD
I*
I* Constants
I*
I              -99                   C        KERR99
I              -88                   C        KERR88
I              -77                   C        KERR77
I              -66                   C        KERR66
 *
C*
C* Define the entry parms
C*
C           *ENTRY    PLIST
C                     PARM           @P1     2
C                     PARM           @P2     2
C                     PARM           @CDTDS              Change Date
C                     PARM           @CTMDS              Change Time
C                     PARM           @RCDDS              Return Code
C*
C* DEFN Area
C*
C           *LIKE     DEFN FLDATE    #CDT                Change Date
C           *LIKE     DEFN FLTIME    #CTM                Change Time
C*
C* FLL0K1. Partial access path to M1FILEL0
C*
C           FLL0K1    KLIST
C                     KFLD           @P1
C                     KFLD           @P2
C*
C                     EXSR B$MAIN
C                     RETRN
C*
C* +---------------------------------------------------------------+
C* + Routine B$MAIN. Program Main Control Loop                     +
C* +---------------------------------------------------------------+
C           B$MAIN    BEGSR
C*
C* Converts binary to native data types
C*
C                     Z-ADD@CDT      #CDT
C                     Z-ADD@CTM      #CTM
C*
C* If first time, execute routine to open the file
C*
C           *IN20     IFEQ *OFF
C                     EXSR B$1TIM
C                     MOVE *ON       *IN20
C                     ENDIF
C*
C* Initialize work variables
C*
C                     Z-ADD*ZERO     #RCD
C*
C* Reads and locks the record
C*
C           FLL0K1    CHAINFGKFILE              9091
C*
C* If record not found, put -88 in return code and leave
C*
C           *IN90     IFEQ *ON
C                     Z-ADDKERR88    #RCD
C                     RETRN
C                     ENDIF
C*
C* If record lock found, put -66 in return code and leave
C*
C           *IN91     IFEQ *ON
C                     Z-ADDKERR66    #RCD
C                     RETRN
C                     ENDIF
C*
C* If date or time has changed, put -77 in return code and leave
C*
C           FLDATE    IFNE #CDT
C           FLTIME    ORNE #CTM
C                     Z-ADDKERR77    #RCD
C                     RETRN
C                     ENDIF
C*
C* Deletes the record
C*
C                     DELETFGKFILE              90
C           *IN90     IFEQ *ON
C                     Z-ADDKERR99    #RCD
C                     ELSE
C                     Z-ADD1         #RCD
C                     ENDIF
C*
C           E$MAIN    ENDSR
C*
C* +---------------------------------------------------------------+
C* + Routine B$1TIM. Open Files.                                   +
C* +---------------------------------------------------------------+
C           B$1TIM    BEGSR
C*
C                     OPEN M1FILEL0             90
C           *IN90     IFEQ *ON
C                     Z-ADDKERR99    #RCD
C                     RETRN
C                     ENDIF
C*
C           E$1TIM    ENDSR

5.3.28 Using Stored Procedures to Run Commands
•  Any AS/400 program can be called as a stored procedure.
•  QCMDEXC can be called to execute any valid AS/400 command.

strcpy(Query,
       "CALL QSYS.QCMDEXC ('STRDBG UPDPROD(*YES)', 0000000020.00000)");
ret = SQLExecDirect(s_Command, Query, SQL_NTS);
Note that the length parameter must be passed to QCMDEXC as a fixed decimal (15,5) field; its value is the length of the command string. This example shows how to start DEBUG for an ODBC application. Debug writes application information to the JOBLOG, which can be examined for SQL Optimizer messages. These messages help you tune the performance of your application. For more information on the SQL Optimizer, see Chapter 1, "Application Design" on page 1.

For performance analysis and tuning:
−  STRDBG - SQL analysis in the JOBLOG.
−  CHGQRYA - Parallelism and the query processing time limit. An example of using CHGQRYA is included in 5.3.38, "Using the Predictive Query Governor from ODBC" on page 184.
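As a sketch (a hypothetical helper, not part of Client Access), the call string shown above can be built for any command by formatting the command length as the fixed decimal (15,5) value QCMDEXC expects: ten digits, a decimal point, and five decimal places.

```c
/* Hypothetical helper, not from the redbook: builds the QCMDEXC call
 * string for an arbitrary CL command.  The second QCMDEXC argument is
 * the command length, rendered as a (15,5) fixed decimal literal.
 * snprintf is C99; a 1996-era compiler would use sprintf instead. */
#include <stdio.h>
#include <string.h>

static void build_qcmdexc_call(const char *cmd, char *query, size_t size)
{
    snprintf(query, size,
             "CALL QSYS.QCMDEXC ('%s', %010lu.00000)",
             cmd, (unsigned long)strlen(cmd));
}
```

For example, the 20-character command STRDBG UPDPROD(*YES) produces the literal 0000000020.00000 seen earlier.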


5.3.29 Extended Fetch
Extended Fetch and Block Insert can be used to enhance the performance of an ODBC application. They allow you to retrieve or insert rows in blocks rather than individually, which reduces the data flows and line turnarounds between the client and the server.
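As a rough illustration (hypothetical arithmetic, not a figure from this redbook), blocking turns one line turnaround per row into one per block:

```c
/* Hypothetical sketch: with blocking, fetching n rows needs roughly
 * ceil(n / rows_per_block) client/server line turnarounds instead of
 * one turnaround per row. */
static unsigned int turnarounds(unsigned int rows, unsigned int rows_per_block)
{
    if (rows_per_block == 0)
        rows_per_block = 1;     /* no blocking: one row per flow */
    return (rows + rows_per_block - 1) / rows_per_block;
}
```

Fetching 100 rows one at a time costs 100 turnarounds; with a 15-row row set, it costs 7.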

Extended Fetch:
−  Returns a block of data (one row set) in the form of an array for each bound column.
−  Scrolls through the result set according to the setting of a scroll type argument: forward, backward, or by row number.
−  Works in conjunction with SQLSetStmtOption.
−  To fetch one row of data at a time in a forward direction, an application should call SQLFetch.

HSTMT s_Item1;
char Query[320];
char ItemID[15][8];        /* item return array  */
char ItemName[15][30];     /* name return array  */
float fItemPrice[15];      /* price return array */
char ItemData[15][60];     /* data return array  */

ret = SQLAllocStmt(hdbc, &s_Item1);
strcpy(Query, "Select IID, INAME, IPRICE, IDATA from ITEM ");
strcat(Query, "where IID in (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)");

// For extended fetch
ret = SQLSetStmtOption(s_Item1, SQL_CONCURRENCY, SQL_CONCUR_READ_ONLY);

Note: SQL_CONCUR_READ_ONLY is the default, so this statement could be left out. However, if update were required, you would use the SQL_CONCUR_LOCK value as the last parameter.

ret = SQLSetStmtOption(s_Item1, SQL_CURSOR_TYPE, SQL_CURSOR_FORWARD_ONLY);

Note: SQL_CURSOR_FORWARD_ONLY is the default, so this statement could be left out. However, if you wanted to scroll other than forward, you could use this statement to set the option that you require.

ret = SQLSetStmtOption(s_Item1, SQL_ROWSET_SIZE, 15);

Note: We will retrieve 15 rows at a time.

// Prepare s_Item1 for use.
ret = SQLPrepare(s_Item1, (unsigned char *)Query, SQL_NTS);
// Set the input parameters (the items to fetch).
// Note: s_parm is an array holding the items to be fetched.
ret = SQLBindParameter(s_Item1,  1, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_CHAR, 6, 0, s_parm[ 1], 6, NULL);
ret = SQLBindParameter(s_Item1,  2, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_CHAR, 6, 0, s_parm[ 2], 6, NULL);
 .
 .
ret = SQLBindParameter(s_Item1, 15, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_CHAR, 6, 0, s_parm[15], 6, NULL);
// Bind the columns.
// Binding is dependent on extended fetch support.
ret = SQLBindCol(s_Item1, 1, SQL_C_CHAR,  ItemID,     8, CheckID);
ret = SQLBindCol(s_Item1, 2, SQL_C_CHAR,  ItemName,  30, CheckName);
ret = SQLBindCol(s_Item1, 3, SQL_C_FLOAT, fItemPrice, sizeof(float), CheckPrice);
ret = SQLBindCol(s_Item1, 4, SQL_C_CHAR,  ItemData,  60, CheckData);
// crow will show the number of rows actually fetched
ret = SQLExtendedFetch(s_Item1, SQL_FETCH_FIRST, Ordercount, &crow,

Chapter 5. Client/Server Database Serving

165

This soft copy for use by IBM employees only.

                       fgrRowStatus);

The other scroll type values are SQL_FETCH_NEXT, SQL_FETCH_PREV, and SQL_FETCH_LAST.

5.3.30 Block Insert
•  Inserts blocks of records with one SQL call.
•  Reduces the flows between the client and the server.

// Make ready for ORDLIN insert
strcpy(tmpbfr, "Insert into ORDLIN (OLOID, OLDID, OLWID, OLNBR, OLSPWH, "
               "OLIID, OLQTY, OLAMNT, OLDLVD, OLDLVT, OLDSTI) "
               "VALUES (?,?,?,?,?,?,?,?,?,?,?)");
ret = SQLPrepare(s_Ordlin1, (unsigned char far *)tmpbfr, SQL_NTS);
// Set the parameters.
// The storage areas you are binding are arrays (lOrderNum, uiDistrict);
// you fill these arrays with values before you execute.
ret = SQLBindParameter(s_Ordlin1, 1, SQL_PARAM_INPUT, SQL_C_LONG,
                       SQL_DECIMAL, 9, 0, &(lOrderNum), 9, NULL);
ret = SQLBindParameter(s_Ordlin1, 2, SQL_PARAM_INPUT, SQL_C_SHORT,
                       SQL_DECIMAL, 3, 0, &(uiDistrict), 3, NULL);
ret = SQLBindParameter(s_Ordlin1, 3, SQL_PARAM_INPUT, SQL_C_CHAR,
                       SQL_CHAR, 4, 0, szWid[0], 4, NULL);
// Insert into ORDLIN.
// m_OrderCount will contain the number of rows to insert;
// rowcnt is a pointer used to keep track of the current row number.
ret = SQLParamOptions(s_Ordlin1, m_OrderCount, &rowcnt);
// A loop is used to fill in the arrays that contain the data
// for the rows to be inserted.
for (ol_ctr = 1; ol_ctr <= NewOrd_IO->m_OrderCount; ol_ctr++) {
    // Fill parameters for block insert
    lOrderNum[ol_ctr-1] = NewOrd_IO->m_OrderNum;
    uiDistrict[ol_ctr-1] = NewOrd_IO->m_District;
    strcpy(szWid[ol_ctr-1], s_parm[3]);
    .
    .                       // point to next order
}
ret = SQLExecute(s_Ordlin1);   // Execute the Order Line insert.

5.3.31 Visual Basic Controls and Database Objects
Visual Basic allows you to access ODBC data sources directly through:
•  Data controls
•  Database objects

Both of these methods create RecordSet objects to allow you to read and update data. Data controls and database objects can simplify writing ODBC applications because the programmer is shielded from the ODBC calls. However, performance is adversely affected; you cannot achieve performance as good as with the ODBC APIs. These interfaces use the Microsoft Jet database engine to access the ODBC data source. The "Jet engine" lies between the application and the physical database; when it is used to access a database through ODBC, it transforms high-level statements into ODBC calls. For a comparison of the performance results achieved when implementing an application using several different


techniques, see section 5.3.41, “Comparison of ODBC Techniques Using Windows 3.1” on page 189. A complete listing of this example can be found in the SPEEDJET directory of the included PC media.

5.3.31.1 Connect
If the first parameter (database name) of the OpenDatabase function call is an empty string, and the last parameter (connect) is ″ODBC″, then a connection to an ODBC data source is made. In this example, the data source name is held in DataSource.

Dim aDB As Database

Sub mConnect_Click ()
    On Error GoTo ConnectError
    Connect = "ODBC;DSN=" & DataSource & ";UID=" & UserName & ";PWD=" & Password
    Set aDB = OpenDatabase("", False, False, Connect)
    Exit Sub
ConnectError:
    Beep
    MsgBox Error$(Err), 48, "Error"
    Exit Sub
End Sub

5.3.31.2 Accessing Database Tables
Visual Basic uses two different types of database objects. For input-only queries, you can use Snapshots. For tables that may be updated, you can use Dynasets. These objects can be used to access and update AS/400 tables.

. . Dim t_Stock as Recordset Dim t_Customer as Recordset

Sub Proc_NO ()
In this example, a database object named t_Customer is created as a Snapshot, while t_Stock is created as a Dynaset. When using a Snapshot, the dbSQLPassThrough option can be used. This option will bypass the ″jet engine″ and interface directly to the AS/400 ODBC driver. This will result in improved performance. The dbSQLPassThrough option is not available for Dynasets.

Query = "Select CLAST, CDCT, CCREDT, WTAX from CSTMR, WRHS where CWID='" & s_parm(1) & "' and CDID=" & s_parm(2) & " and CID='" & s_parm(3) & "' and WID='" & s_parm(4) & "'"
Set t_Customer = aDB.OpenRecordset(Query, dbOpenSnapshot, dbSQLPassThrough)
Query = "Select STQTY, STYTD, STORDRS, STREMORD, STDATA from STOCK where (STWID='" &

        s_parm(1) & "' and STIID='" & s_parm(2) & "')"
Set t_Stock = aDB.OpenRecordset(Query, dbOpenDynaset)
End Sub

5.3.31.3 Update
A dynaset can be used to update a table. In this example, the columns STQTY, STYTD, and STREMORD are being updated with new information.

Sub Proc_NO ()
    t_Stock.Edit
    t_Stock("STQTY") = aQty
    t_Stock("STYTD") = aYTD
    t_Stock("STREMORD") = aRemOrd
    t_Stock.Update
End Sub

5.3.31.4 Close
Visual Basic can close the database objects when it has completed processing them.

Sub Form_Unload (Cancel As Integer)
    If a_bConnected = True Then
        ' Free and drop all statements.
        t_Customer.Close
        t_Stock.Close
        aDB.Close
    End If
End Sub

5.3.32 Configuring an ODBC Data Source for Windows 3.1
This section covers configuring an ODBC data source for Windows 3.1. For information on configuring a data source for Windows 95, refer to section 5.3.36, "Configuring an ODBC Data Source for Windows 95" on page 173. Before you can use ODBC, you must configure the data source you want to connect to. To configure the data source:
1. Open the Client Access/400 for Windows group.
2. Double-click the ODBC Driver icon to configure a data source. The following window is shown:


Figure 40. Configuring an ODBC Data Source

3. Click the Add... button.

Figure 41. Adding an ODBC Data Source

4. Select the Client Access/400 ODBC Driver. Click the OK button.


Figure 42. ODBC Driver Setup

5. Supply the following parameters:

•  Data Source Name: REQUIRED. This is the name used by the application to connect.
•  Description: AS/400 ODBC driver.
•  System: REQUIRED. This is the name of your AS/400 system.
•  Commit Mode: *NONE.
•  Default library: QGPL,QTEMP.
•  Naming convention: Many tools (for example, Visual Basic) only know about the *SQL naming convention. These tools may not work if you select *SYS as the naming convention.
•  Commit Mode: The implementation of commitment control depends upon many things in the ODBC environment, including:
   −  The SQL_AUTOCOMMIT connection option.
   −  Whether the files are journaled.
   −  The tool you are using.
   −  The Commit Mode set in the ODBC Administrator.

Click OK when done.


You can configure more than one data source that refers to the same AS/400 system. Each data source can have different characteristics for the different tasks you want to perform.

5.3.33 Performance Tuning IBM's ODBC Driver
This section explains the performance options available for the CA/400 for Windows ODBC driver in the ODBC.INI file. There are two ways to tune your ODBC parameters:

Use the ODBC Administrator (the ODBC Driver Icon) in the refreshed version of CA/400 for Windows 3.1 or the Windows 95 client. Edit the ODBC.INI file. Windows 95 client The Windows 95 client does not use the ODBC.INI file to store the ODBC information. It uses the Windows 95 system registry to store the ODBC configuration. You cannot edit registry information directly, but must use a special registry edit tool. If you decide to do this, proceed with caution because you could damage key Windows 95 settings and cause Windows 95 to no longer function. Please refer to 5.3.36, “Configuring an ODBC Data Source for Windows 95” on page 173 for details on the Windows 95 ODBC configurator.

Using the ODBC Administrator is the recommended method because no typing is required. ODBC parameter selections and changes are automatically written into the ODBC.INI file. The important ODBC performance parameters are:
•  DefaultLibraries
•  RecordBlocking
•  BlocksizeKB
•  LazyClose
•  ExtendedDynamic
•  PackageAPPLICATIONNAME

Note: If you do not have the V3R1M1 version of CA/400 for Windows 3.1, your only choice is to edit the ODBC.INI file. Do so carefully; mistakes are hard to find.

5.3.34 Windows 3.1 ODBC Administrator
The ODBC Administrator is a graphical interface that updates the ODBC.INI file automatically. CA/400 for Windows 3.1 (V3R1M1) has an updated version of the Administrator. To access the performance options, select the Performance Options button on the Client Access/400 ODBC Driver Setup screen.


Figure 43. ODBC Administrator with Performance Options

Note: All of the important ODBC parameters are controlled through this graphical interface. Each of the ODBC performance parameters is discussed in the following sections.

5.3.35 ODBC.INI
•  Is a Windows text file used to store ODBC information.
•  Contains a section for each ODBC data source.
•  Can be edited to change ODBC performance parameters.

The following is a sample ODBC.INI file section for a data source named SYSASM01.

[ODBC Data Sources]
SYSASM01=Client Access/400 ODBC Driver
GUPTA=Client Access/400 ODBC Driver
SPEED=Client Access/400 ODBC Driver

[SYSASM01]
Driver=c:\cawin\ehnodbc3.dll
Description=Client Access/400 ODBC driver
System=SYSASM01
UserID=
CommitMode=0             ; *NONE (commit immediate)
DefaultLibraries=CSDB
Naming=0                 ; *SQL (SQL naming convention)
DateFormat=5             ; *ISO (date format yyyy-mm-dd)
DateSeparator=1          ; date separator - (dash)
TimeFormat=0             ; *HMS (time format hh:mm:ss)
TimeSeparator=0          ; time separator : (colon)
Decimal=0                ; decimal format . (period)
AlwaysScrollable=0       ; no scrollable cursor if row set is 1
BlocksizeKB=32           ; blocking size from 1 to 512 KB
ExtendedDynamic=1        ; extended dynamic enabled
ForceTranslation=0       ; no translation for CCSID 65535
LazyClose=1              ; LazyClose enabled
LibraryView=0            ; library list
ODBCRemarks=0            ; OS/400 object description
RecordBlocking=2         ; block unless FOR UPDATE OF specified
PackageSQLIBM=QGPL/SQLIBM(FBA),2,0,1
PackageSPEED=QGPL/SPEED(FBA),2,0,1
The Windows 3.1 INI file is a standard file format that Windows uses to store configuration information. Each Windows installation has several .INI files, and an application can also create its own. ODBC drivers store information in the ODBC.INI file, which is in the Windows directory. ODBC.INI consists of sections, each beginning with a name in square brackets. After each section name there is a list of labels and their values. When you add a CA/400 ODBC data source using the ODBC Administrator program, a section for that data source is added to the ODBC.INI file. You can use a text editor such as Notepad to edit an ODBC.INI file. Each of the ODBC performance parameters is discussed in section 5.3.37, "ODBC Parameters" on page 178.
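The label/value layout described above can be sketched in C. This is a hypothetical helper for illustration, not Client Access code: it splits one ODBC.INI line of the form `Label=value ; comment` into its label and value, dropping the trailing comment.

```c
/* Hypothetical sketch, not Client Access code: splits one ODBC.INI
 * label line ("Label=value ; comment") into label and value, trimming
 * trailing blanks and discarding the comment after ';'. */
#include <string.h>
#include <ctype.h>

static int parse_ini_line(const char *line, char *label, char *value)
{
    const char *eq = strchr(line, '=');
    const char *semi;
    size_t n;

    if (eq == NULL)
        return 0;                       /* not a label=value line */

    n = (size_t)(eq - line);
    memcpy(label, line, n);             /* text before '=' is the label */
    label[n] = '\0';

    semi = strchr(eq + 1, ';');         /* comment starts at ';' */
    n = semi ? (size_t)(semi - (eq + 1)) : strlen(eq + 1);
    memcpy(value, eq + 1, n);
    value[n] = '\0';

    while (n > 0 && isspace((unsigned char)value[n - 1]))
        value[--n] = '\0';              /* trim trailing blanks */
    return 1;
}
```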

5.3.36 Configuring an ODBC Data Source for Windows 95
The following shows how to configure a Windows 95 data source.

5.3.36.1 The ODBC Administrator

Figure 44. Microsoft ODBC Administrator Dialog Box


From the Microsoft ODBC Administrator Dialog Box, as shown in Figure 44, you can install ODBC drivers and configure data sources. In this example, assume you are configuring a data source for the Client Access ODBC driver.

5.3.36.2 Client Access ODBC Setup

Figure 45. General Tab

This, and the other property sheet pages (shown in Figure 45), are displayed by the Client Access ODBC driver. You will see different screens with other drivers.

Data source name
    Provides a space for you to type the data source name. You must type the data source name before you can use the Client Access ODBC driver to access AS/400 data. Data source names can be changed or deleted at any time. The name can be up to 32 characters, must start with an alphabetic character, and cannot include the following characters: [ ] { } ( ) ? * = ! @ ;

System
    Specifies the configured AS/400 system that contains the data source. Click on the down arrow to select another system.

The following information is optional for setting up a Client Access ODBC data source:

Description
    Provides a space for you to type a description of the data in the data source. You can type up to 80 characters.

User ID
    Specifies the user ID for connecting to the AS/400 system with the SQLDriverConnect prompt. This is shown in Figure 46 on page 175.
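The data source name rules above can be checked mechanically. The sketch below is a hypothetical validation helper, not driver code; it assumes the forbidden-character list reads exactly as stated.

```c
/* Hypothetical sketch: validates a data source name against the rules
 * stated above: at most 32 characters, leading alphabetic character,
 * and none of the forbidden characters [ ] { } ( ) ? * = ! @ ; */
#include <ctype.h>
#include <string.h>

static int valid_dsn(const char *name)
{
    size_t len = strlen(name);

    if (len == 0 || len > 32)
        return 0;
    if (!isalpha((unsigned char)name[0]))
        return 0;
    /* strcspn stops at the first forbidden character, if any */
    return strcspn(name, "[]{}()?*=!@;") == len;
}
```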


Figure 46.

Default libraries
    Provides a space for you to type the AS/400 libraries to be used during connections to this data source. The library names can be separated by commas or spaces. The libraries can either be added to your library list or replace it entirely. To replace the list, specify a list of library names. To add to the existing user library list, include *USRLIBL in the list of libraries: all libraries listed before *USRLIBL are added to the front of the user library list, and all libraries listed after *USRLIBL are added to the end of the user library list.

Commit immediate (*NONE)
    Specifies that commitment control is not used. COMMIT and ROLLBACK statements are not allowed. Uncommitted changes in other jobs can be seen. If the SQL DROP COLLECTION statement is included in the program, *NONE must be used. If a relational database is specified on the RDB parameter and the relational database is on a system that is not an AS/400, *NONE cannot be specified.

Read committed (*CS)
    Specifies that the objects referred to in SQL COMMENT ON, CREATE, DROP, GRANT, LABEL ON, and REVOKE statements and the rows updated, deleted, and inserted are locked until the end of the unit of work (transaction). A row that is selected, but not updated, is locked until the next row is selected. Uncommitted changes in other jobs cannot be seen.

Read uncommitted (*CHG)
    Specifies that the objects referred to in SQL COMMENT ON, CREATE, DROP, GRANT, LABEL ON, and REVOKE statements and the rows updated, deleted, and inserted are locked until the end of the unit of work (transaction). Uncommitted changes in other jobs can be seen.

Repeatable read (*ALL)
    Specifies that the objects referred to in SQL COMMENT ON, CREATE, DROP, GRANT, LABEL ON, and REVOKE statements and the rows updated, deleted, and inserted are locked until the end of the unit of work (transaction). Uncommitted changes in other jobs cannot be seen.
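The *USRLIBL placement rules can be sketched as follows. This is a hypothetical helper for illustration, not driver code; it assumes a space- or comma-separated "Default libraries" value.

```c
/* Hypothetical sketch: builds the effective library list from a
 * "Default libraries" value.  Names before *USRLIBL go ahead of the
 * current user library list, names after it go behind; if *USRLIBL is
 * absent, the value replaces the user list entirely. */
#include <stdio.h>
#include <string.h>

static void effective_liblist(const char *dsn_libs, const char *user_libs,
                              char *out, size_t size)
{
    char buf[256];
    const char *tok;

    strncpy(buf, dsn_libs, sizeof buf - 1);   /* strtok needs a writable copy */
    buf[sizeof buf - 1] = '\0';
    out[0] = '\0';

    for (tok = strtok(buf, " ,"); tok != NULL; tok = strtok(NULL, " ,")) {
        if (out[0] != '\0')
            strncat(out, " ", size - strlen(out) - 1);
        if (strcmp(tok, "*USRLIBL") == 0)
            strncat(out, user_libs, size - strlen(out) - 1); /* splice user list */
        else
            strncat(out, tok, size - strlen(out) - 1);
    }
}
```

For example, the value LIBA,*USRLIBL,LIBB with a user list of QGPL QTEMP yields LIBA QGPL QTEMP LIBB.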

5.3.36.3 Client Access ODBC Setup - Format Tab

Figure 47. Format Tab

The naming convention specifies one of the following conventions. Click on the down arrow to select a new setting. This is shown in Figure 47.

SQL: The SQL naming convention uses a period (.) between the collection and table names. This is the default.

SYS: The SYS naming convention uses a forward slash (/) between the library and file names.

For ease of porting, it is better to use the SQL naming convention if possible.

5.3.36.4 Client Access ODBC Setup - Performance Tab


Figure 48. Performance Tab

Enable extended dynamic support: Specifies whether extended dynamic support is enabled. Extended dynamic support provides a way to cache dynamic SQL statements on the AS/400 server.

Lazy close support: Controls the way SQLFreeStmt with the SQLClose option is handled by the Client Access ODBC driver. If it is enabled, SQLFreeStmt with the SQLClose option is not sent to the AS/400 system until the next request is sent. If it is not enabled, SQLFreeStmt with the SQLClose option causes an explicit flow to the AS/400 system to close the statement.

Record blocking: Allows you to specify the type and size of blocking under which the driver retrieves multiple rows of data from the AS/400 system.

OS/400 library view: Specifies the set of libraries to be searched when returning a list of table owners through the SQLTables function. The All libraries on the system option is effective only when SQLTables is called with specific parameters. In most cases, use the Default library list option, because returning table owners for all libraries on the system takes a long time. To change the default library list, use the Default libraries option on the Server tab. This is shown in Figure 49 on page 178.

5.3.36.5 Client Access ODBC Setup - Other Tab


Figure 49. Other Tab

Translation: Specifies whether translation of ASCII data stored in columns on the AS/400 system with an explicit CCSID of 65535 is disabled or enabled.

Object description type: Specifies the types of values returned by ODBC catalog APIs in the REMARKS column.

Scrollable cursor: Specifies whether cursors are scrollable when the row set size is 1.

5.3.37 ODBC Parameters
The following ODBC parameters can affect performance.

5.3.37.1 DefaultLibraries
Selects the default libraries you want to see. For example, to see libraries QGPL and QPLS, enter QGPL, QPLS for this parameter. The more default libraries you select, the longer it takes ODBC to read all of them. Many ODBC applications use the ODBC catalog functions to retrieve lists of available owners (libraries), tables (files), and columns (fields). The libraries you enter for the DefaultLibraries setting make up the catalog view; that is, the catalog functions work only against these libraries.

This text box specifies the AS/400 library or libraries that the Client Access/400 ODBC driver searches. You can enter multiple libraries separated by commas or spaces. To add to the existing user library list, add the special entry *USRLIBL to the list of libraries. All of the libraries listed before *USRLIBL are added to the front of the library list, and all of the libraries listed after *USRLIBL are added to the end of the list. The first library in the list is used as the default library; if you do not want a default library, start the list with a comma. If tables are specified in SQL statements without a library name, the default library is used. If the application does not specify a library name with the DBQ parameter in the SQLBrowseConnect or SQLDriverConnect API, the ODBC driver uses the first library in this text box.

Specifying every library that you own or may ever want to access is not the best idea as far as performance is concerned. Each library you add to the list increases the number of search paths and the amount of data the host creates and transfers to your PC every time an application requests catalog information. If you must access a large number of libraries, consider creating multiple data sources, each with a limited number of libraries.
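As a sketch, a data source entry in ODBC.INI might list its libraries like this; the section name and library MYLIB are hypothetical:

```ini
[AS400 Orders]
; MYLIB becomes the default library and is searched ahead of the
; user library list; QGPL is searched after the user library list
DefaultLibraries=MYLIB,*USRLIBL,QGPL
```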

5.3.37.2 RecordBlocking
ODBC can fetch (retrieve) rows of data from an AS/400 file one at a time or in blocks, which reduces communications and other overhead. The RecordBlocking setting lets you control how the ODBC driver fetches rows of data. There are three possible values:

• 0 indicates that the driver fetches only one row at a time.
• 1 indicates that the driver fetches a block of rows if the SQL SELECT statement ends with a FOR FETCH ONLY clause; otherwise, the driver fetches one row at a time.
• 2, the default, indicates that the driver fetches a block of rows unless the SQL SELECT statement contains a FOR UPDATE OF clause.

The default value usually gives the best performance. However, if an application assumes it has a lock on any row it has fetched and does not use the FOR UPDATE OF clause in the SQL SELECT statement, the application cannot update the row when using the default setting. One result of reading data in blocks is that the SQL SELECT statement is opened for read only. If an application assumes that any SELECT statement can be used for updating, you must change this setting to 0 or 1 for the application to work properly. When updating data with the default setting, you must use the FOR UPDATE OF clause in your SQL SELECT statement and set the SQL_CONCURRENCY option using the SQLSetStmtOption interface.
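For example, with the default RecordBlocking setting of 2, an application that intends to update through a cursor must say so explicitly; the table and column names here are hypothetical:

```sql
-- The FOR UPDATE OF clause disables blocking, so the fetched row
-- remains available for a positioned update
SELECT ORDNUM, ORDSTS FROM ORDLIB.ORDERS FOR UPDATE OF ORDSTS
```

The application must also set the SQL_CONCURRENCY statement option through SQLSetStmtOption before opening the cursor, as described above.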

5.3.37.3 BlocksizeKB
The BlocksizeKB setting lets you specify the maximum amount of data (in kilobytes) that the ODBC driver retrieves from the AS/400 system in one block. Valid values are from 1 to 512 kilobytes, with a default of 32. Larger block sizes transfer the most data in the shortest time: if you need to retrieve large amounts of data from the AS/400 system, you can reduce overhead by using a larger block size. Smaller block sizes may make certain applications appear faster.


However, in some cases, a smaller block size can make an ODBC application's performance appear faster to a user even though the data retrieval is actually slower than if ODBC were using a larger block size. Some applications (Microsoft Access, for example) start displaying or operating on the first row or rows of data as soon as they are received. If you specify a very large block size, you delay the arrival of the first row while the data block is assembled on the AS/400 system and sent to the PC. Another case in which you might want to use a smaller block size is an ODBC application that selects large amounts of data but generally uses only the first few rows.

5.3.37.4 LazyClose
LazyClose improves performance by eliminating communication flows between the PC and the AS/400 system. It is enabled by default and should almost always be left enabled; when using LazyClose, review how your programs use locks.

Each instance of the communications flow changing direction costs time and impedes performance, so reducing the number of "flows" of information between the PC and the AS/400 system improves performance. LazyClose helps in most situations by reducing the time spent in communications handshaking. With LazyClose, when the ODBC application closes a statement, the driver sends data to indicate that close to the AS/400 system, but the communications buffer is not flushed. The close does not actually happen until the next operation that forces the communications buffer to be flushed.

The LazyClose setting has two possible values: 0 disables LazyClose, and 1, the default, enables it. If LazyClose is enabled, a SQLFreeStmt with the SQLClose option is not sent to the AS/400 system until the next request is sent. If disabled, a SQLFreeStmt with the SQLClose option causes an explicit flow to the AS/400 system to close the statement.

Why would you want to disable LazyClose? Consider an application that waits for user input, locks and potentially updates a row of data, and then waits for more input. If the application counts on the close to unlock the locked row, LazyClose could cause rows in the database to remain locked for extended periods of time (until the application receives more input from the user).

5.3.37.5 ExtendedDynamic
ExtendedDynamic support improves performance by:

• Preparing the SQL statement.
• Optimizing the SQL statement.
• Saving the statement for reuse.

ExtendedDynamic support is enabled by default. Always use ExtendedDynamic support except:

• During development.
• If you rarely use the same SQL statement more than once.

Extended Dynamic SQL support, also known as package support, offers perhaps the biggest potential performance boost to users of both off-the-shelf and custom applications. Package support lets you store prepared SQL statements in a package file on the AS/400 system so you can use the statements in the future without the overhead of preparing and optimizing the statement again.


With extended dynamic support, the first time a statement is run, information about the SQL statement is saved in an SQL package object on the AS/400 server. The next time the statement is run, it uses the information saved on the AS/400 server, saving significant processing time. Statements that are cached include SELECT, positioned UPDATE and DELETE, INSERT with subselect, DECLARE PROCEDURE, and all other statements that include parameter markers. A system without package support for SQL statements is like a system that stores programs only in source form and has to compile a program every time a user needs to run it.

The ExtendedDynamic setting has two possible values: 0 disables extended dynamic support; 1, the default, enables package support. The setting works in conjunction with the PackageAPPLICATIONNAME setting.

Package support as implemented by the CA/400 ODBC driver is transparent to ODBC applications (and ODBC application programmers). The ODBC driver creates the SQL packages, and no special programming is required from the ODBC application to add statements to and use them from the package. You can use package support with an application right out of the box or with your own custom applications. You can also control package usage on a per-application basis, as you see in the PackageAPPLICATIONNAME section.

Most applications benefit from using SQL package support. However, an application that rarely or never uses the same SQL statement more than once does not benefit much from using a package. In fact, storing information in a package that is never used may cause slight performance degradation; you might want to disable package support with a 0 setting for such an application. You might also want to disable package support for an ODBC application in development: the many changes a developing application goes through can fill a package with statements that are never reused. Once the application is complete and ready to be shipped, the package should be primed with the SQL statements that will be used.

5.3.37.6 PackageAPPLICATIONNAME
Use this parameter to customize ExtendedDynamic support. You can customize the following:

• In which AS/400 library the package information is stored.
• The package name.
• How a package is created and used:
  − 0 - Never created.
  − 1 - Used if already created.
  − 2 - Created and used (the default).
• What to do if the package is full:
  − 0 - Made read-only (the default).
  − 1 - Cleared.
• What to do if the package is unusable:
  − 0 - Give an error message.
  − 1 - Give a warning message (the default).
  − 2 - Give no indication.

When you run an ODBC application with extended dynamic support enabled, the CA/400 ODBC driver checks the ODBC.INI file for an entry for that application. If
the driver finds no entry (if this is the first time you have run the application, the driver does not find an entry), it creates one using default values. If the driver finds an entry (usually because the application has been run before), it uses the information in the entry to control extended dynamic support for the application. Thus, you can use extended dynamic support on an application-by-application basis. For example, the Windows program name for the Database Access application that comes with CA/400 is SQLIBM, so the ODBC driver looks for an entry with the label PackageSQLIBM= in the appropriate data source section of the ODBC.INI file. The entry looks similar to the following:

PackageSQLIBM=QGPL/SQLIBM(FBA),2,0,1
The entries for application names are in the form:

PackageAPPLICATIONNAME=LIBL/PACKAGE(SFX),USAGE,FULL,ERROR
• LIBL is the AS/400 library where the package resides or is created.
• PACKAGE is the package name (if the name is longer than seven characters, the first seven are used).
• SFX is a three-character suffix that completes the package name.

As was mentioned previously, the ODBC driver creates the application entry using default values for all of the entry's parameters the first time you run an ODBC application with extended dynamic support enabled. You can subsequently change the library and package names; however, you cannot change the suffix. The ability to change the package and library names offers flexibility in using package support. For an ODBC application with fixed SQL statements accessing one library, it may make sense for all users to use the same package. For an interactive query application that accesses a variety of databases in many libraries, it may be valuable to create a separate package or even multiple packages for each user.

When the ODBC driver creates an SQL package, several settings for the job creating the package are stored in or associated with the package. These include the current naming convention, commitment control mode, date and time formats and separators, decimal separator, and the default collection. With the exception of the default collection (the first library in the DefaultLibraries setting), the ODBC driver uses these settings to determine the three-character suffix that is part of the package name. This is why you are not allowed to change the package name suffix; you may change the first seven characters of the package name and the library where it is created. Because of the way SQL packages work, the current default collection (the first library in the DefaultLibraries setting) must be the same as when the package was created or the package is not usable.

USAGE is the first numeric value after the suffix. The USAGE parameter has three possible values:
− 0 indicates that the ODBC driver does not use a package for this application.
− 1 indicates that the driver uses an existing package in read-only mode (that is, the driver does not add new statements to the package).
− 2, the default, indicates that the driver creates a new package if one does not exist and adds new statements to the package.

The USAGE value of 1 deserves a little more explanation. It is used when there is a limited number of SQL statements that are always the same for all users. The developer of such an application could "prime" the package with
the statements by running the application once to add the statements to the package. The developer can then set the USAGE parameter to 1, read-only mode, keeping the package contents constant.

A package holds a finite number of SQL statements (up to 512). FULL lets you decide whether the package is made read-only when it is full (0, the default) or is cleared when it is full (1). FULL = 0 lets you continue to use any statements you already have in the package but does not let you add any new ones. FULL = 1 clears all statements out of your package and lets you start over again.

ERROR lets you decide whether the ODBC driver returns an error when a package cannot be used. Situations where the package cannot be used include:

− The default library name in the ODBC.INI file is different from the one the package was created with. For example, if you use the ODBC Administrator to change the default library name (the first library in the Administrator's Default libraries list) and then run the application again, the old package is unusable (and a new package is not created).
− The job CCSID does not match the job CCSID under which the package was created.

ERROR is set as follows:

− 0 indicates that the driver returns an error if the package is unusable.
− 1, the default, indicates that the driver returns a warning message if the package is unusable.
− 2 indicates that the driver gives no indication if the package is unusable.

Package usability is determined on the first SQLPrepare or SQLExecDirect call that tries to use the package. Some ODBC applications may not let you proceed even if only a warning is returned; in this case, you probably want to use a value of 2 for the ERROR parameter so that your application can continue.

5.3.37.7 Summary - ODBC Settings
Default settings usually perform best. There are several ways to control and tune the performance of the DB2/400 ODBC drivers. For the Windows 3.1 client, the controlling and tuning are mostly done through the ODBC.INI file. For the Windows 95 client, the tuning information is stored in the Windows 95 system registry. The parameter that most often improves ODBC performance is ExtendedDynamic support. Other control and tuning parameters include:

• DefaultLibraries
• RecordBlocking
• BlocksizeKB
• LazyClose
• PackageAPPLICATIONNAME
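Pulling these together, a Windows 3.1 data source section in ODBC.INI might look like the following sketch. The section name is hypothetical, the values shown are the defaults discussed above, and the PackageSQLIBM line is the example entry from 5.3.37.6:

```ini
[AS400 Orders]
DefaultLibraries=QGPL,*USRLIBL
RecordBlocking=2
BlocksizeKB=32
LazyClose=1
ExtendedDynamic=1
PackageSQLIBM=QGPL/SQLIBM(FBA),2,0,1
```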

As you can see from this tour of the DB2/400 ODBC driver settings, the default settings provide good performance to most users. But some applications may benefit from analysis and tuning for optimal performance. The biggest performance gains are probably made by adjusting the SQL package support setting to suit your situation. The text for the section on ″ODBC Parameters″ was supplied by Bob Nelson and reprinted with permission from his article in the July 1995 issue of NEWS 3X/400 magazine.

5.3.38 Using the Predictive Query Governor from ODBC
This section explains how client applications or users can obtain access to the V3R1 DB2/400 predictive query governor or I/O parallelism. The key to this access is using either the stored procedure capability of DB2/400 or an exit program specific to the client/server interface used. Methods to invoke stored procedures through Client Access/400 ODBC and Remote SQL are discussed; most of the techniques can also be applied to other client/server interfaces.

Two new features of DB2/400 were introduced in V3R1: the predictive query governor and parallel I/O processing. The governor can stop the initiation of a query if the query's estimated or predicted runtime (elapsed execution time) is excessive; if the estimated runtime exceeds the user-defined time limit, the initiation of the query is stopped. The time limit is specified as a value in seconds using the Query Time Limit (QRYTIMLMT) parameter on the Change Query Attributes (CHGQRYA) CL command.

DB2/400 can also use parallel I/O processing to shorten the processing time required for long-running I/O-bound queries. This method is effective for queries that either scan all of the rows in an entire table or perform complex queries using tables and indexes that can fit in the available memory in the shared pool. The application must enable parallel I/O processing for queries in the job by specifying DEGREE(*ANY) on the Change Query Attributes (CHGQRYA) CL command.

This leads to the question, "How do I invoke the CHGQRYA CL command from a client?" The answer is stored procedures or exit programs, which are described in the following sections.

5.3.38.1 Using Stored Procedures
The easiest and most flexible way to get a CHGQRYA CL command to run is to use a stored procedure. Stored procedures are a new function of V3R1 DB2/400 SQL. SQL stored procedure support provides a means for an SQL application to define and then invoke a procedure through SQL statements. The procedure invoked can be any high-level language program (except System/36 programs and procedures) or REXX procedure. Conveniently, the system program QCMDEXC can be invoked directly as a called procedure; QCMDEXC runs any CL command passed to it. All that needs to be done is to format and run the following SQL statement through an EXECUTE IMMEDIATE interface:

CALL QSYS.QCMDEXC('CHGQRYA QRYTIMLMT(5)', 0000000020.00000)

The preceding statement invokes QSYS.QCMDEXC with two parameters. The first is a character string containing the CL command to be executed; the second designates the number of bytes in the first parameter. The second parameter must be a number with 15 digits, five of which are to the right of the decimal point. If your client interface has an interactive SQL type of application, you can try this statement by entering it into that application. Once the statement is entered, any query whose estimated runtime exceeds five seconds ends with an SQLCODE of -666. If you want to put this statement into an application, you need to format the CALL statement text into a variable and use an EXECUTE IMMEDIATE API. The following code example shows how to format this CALL statement into a C NUL-terminated string variable called 'stmt':

_fstrcpy(stmt,"CALL QSYS.QCMDEXC('CHGQRYA QRYTIMLMT(5)', ");
_fstrcat(stmt,"0000000020.00000)");
This statement is then passed in through the ODBC API for executing a stand-alone statement (SQLExecDirect).

SQLExecDirect(hstmt,stmt,SQL_NTS);
For Remote SQL, use the EHNRQ_EXEC API. For other client application platforms, use whatever EXECUTE IMMEDIATE API is provided.
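As a self-contained sketch of the length-formatting rule, the 15-digit second parameter can be generated rather than hand-counted. This uses standard C string handling in place of the far-pointer _fstr functions shown above, and the helper function name is ours:

```c
#include <stdio.h>
#include <string.h>

/* Build the statement text for CALL QSYS.QCMDEXC.  The second
 * parameter must be a 15-digit decimal number -- ten digits, a
 * decimal point, then five digits -- giving the byte length of
 * the CL command string passed as the first parameter. */
static void build_qcmdexc_call(char *stmt, size_t size, const char *clcmd)
{
    snprintf(stmt, size, "CALL QSYS.QCMDEXC('%s', %010lu.00000)",
             clcmd, (unsigned long)strlen(clcmd));
}
```

For CHGQRYA QRYTIMLMT(5), a 20-byte command, this produces the statement shown earlier, which is then passed to SQLExecDirect.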

5.3.38.2 Using User Exit Programs
If you want to enable the governor or parallel I/O processing for all users that connect through Remote SQL or Client Access/400 ODBC, you can do so by creating exit programs. For Remote SQL, an exit program can be registered at exit point QIBM_QRQ_SQL. This exit program is called once for each request from the client. One of the parameters passed to the exit program is the statement type of the Remote SQL request being processed; it is recommended to look at the statement type and run the CHGQRYA command only if the statement type is CONNECT. In the Remote SQL environment, the stored procedure approach has an advantage over the exit program in that the overhead of the SQL CALL statement is incurred only once for the application, whereas the exit program is run for every request processed. Client Access/400 ODBC has four different exit points at which programs can be registered. For this purpose, use the QIBM_QZDA_INIT exit point, whose program gets called once per job at connect time.

5.3.38.3 Summary - Predictive Query Governor
Stored procedures enable client server applications to use the full power of a DB2/400 server including the predictive query governor and parallel I/O. The exit programs provided by Client Access/400 and Remote SQL allow the administrator to configure characteristics of the DB2/400 server for client/server connections. We want to give special thanks to Randy Eagan, who wrote the Predictive Query Governor text.


5.3.39 Exit Programs
This section describes the support for exit programs with Client Access/400 servers. An exit program allows you to control the functions a client can perform with a server. For example, you might want to restrict the payroll department from the file transfer download function.

With PC Support/400, you were able to define an exit program through the network attribute PCSACC. You could only specify one program, which was used by all PC Support/400 servers. This was a problem because all client requests would use the program, even if you were, for example, only trying to restrict file transfer requests.

Client Access/400 servers now use the OS/400 Registration Facility. This allows you to define exit points for applications and to define programs that run at those exit points. Each Client Access/400 server has at least one exit point, so you can register a program that is used just for file transfer; performance of other Client Access/400 requests is not affected. Some servers have more than one exit point; the database server has exit points for both SQL requests and native database requests.

The QZDAINIT/QZDASOINIT prestarted jobs cannot open files "ahead" of the connection/conversation to the data source. On the AS/400 system, opening files ahead of the user transactions can improve performance for any 5250 interactive job or client/server job. CA/400 does provide an exit point for a user program that could do some preprocessing, but this program still gets called after the evoke is received. The CA/400 exit point to consider is QIBM_QZDA_INIT. CA/400 exit points can be viewed with the Work with Registration Information (WRKREGINF) command and are described in the OS/400 Server Concepts and Administration manual (SC41-3740-00).

5.3.39.1 Registration Facility
The following screen shows the OS/400 Registration Facility. The command WRKREGINF is used to display this screen.

 Work with Registration Information

 Type options, press Enter.
   5=Display exit point   8=Work with exit programs

                            Exit Point
 Opt  Exit Point            Format     Registered  Text
      QIBM_A1A_TAPE_INF     MEDI0100   *YES        BRM Services/400 media inform
      QIBM_A1A_TAPE_MOVE    MEDM0100   *YES        BRM Services/400 media moveme
      QIBM_QCQ_AGENT        ENDE0100   *YES
      QIBM_QCQ_AGENT        STRE0100   *YES
      QIBM_QGW_NJEOUTBOUND  NJEO0100   *YES        Network Job Entry outbound ex
      QIBM_QHQ_DTAQ         DTAQ0100   *YES        Original Data Queue Server
      QIBM_QLZP_LICENSE     LICM0100   *YES        Original License Mgmt Server
      QIBM_QMF_MESSAGE      MESS0100   *YES        Original Message Server
      QIBM_QNPS_ENTRY       ENTR0100   *YES        Network Print Server - entry
      QIBM_QNPS_SPLF        SPLF0100   *YES        Network Print Server - spool
      QIBM_QNS_CRADDACT     ADDA0100   *YES        Add CRQ description activity
  8   QIBM_QZDA_INIT        ZDAI0100   *YES        Database Server - entry

 Command
 ===>
 F3=Exit   F4=Prompt   F9=Retrieve   F12=Cancel

186

AS/400 Client/Server Performance

This soft copy for use by IBM employees only.

5.3.39.2 QIBM_QZDA_INIT Example
Selecting option 8, Work with exit programs on the previous screen will result in the Work with Exit Programs screen being displayed. Option 1 on this screen allows adding an exit program to the exit point.

 Work with Exit Programs

 Exit point:  QIBM_QZDA_INIT     Format:  ZDAI0100

 Type options, press Enter.
   1=Add   4=Remove   5=Display   10=Replace

      Exit                       Program
 Opt  Program     Library        Number
  1
 (No exit programs found.)

The following screen shows adding the exit program USEREXIT in library RWM to the exit point QIBM_QZDA_INIT.

                       Add Exit Program (ADDEXITPGM)

 Type choices, press Enter.

 Exit point . . . . . . . . . . > QIBM_QZDA_INIT
 Exit point format  . . . . . . > ZDAI0100       Name
 Program number . . . . . . . . > 1              1-2147483647, *LOW, *HI
 Program  . . . . . . . . . . . > USEREXIT       Name
   Library  . . . . . . . . . . >   RWM          Name, *CURLIB
 Text 'description' . . . . . .   *BLANK

                          Additional Parameters

 Replace existing entry . . . . > *NO            *YES, *NO

5.3.39.3 USEREXIT Example (QIBM_QZDA_INIT)
The source code for the program USEREXIT is shown below. It is an RPG program that is executed whenever an ODBC job is initiated. The program checks for the user ID JONES; if that user ID is the one being used, QCMDEXC is called to change the time limit on any queries run by this user to two seconds.

I* HEADER INFORMATION
IPCSDTA      DS                                4171
I                                          1  10 USERID
I                                         11  20 APPLID
I                                         21  30 FUNCID
I*------------------------------------------------------
I              'CHGQRYA QRYTIMLMT(2)'    C        CHGQRY
I              'JONES     '              C        JONES
C*------------------------------------------------------
C* MAIN PROGRAM
C*------------------------------------------------------
C           *ENTRY    PLIST
C                     PARM           RTNCD   1
C                     PARM           PCSDTA
C                     MOVE '1'       RTNCD
C           USERID    IFEQ JONES
C                     Z-ADD20        FLDDL  155
C                     MOVE CHGQRY    QCMD   20
C                     CALL 'QCMDEXC'
C                     PARM           QCMD
C                     PARM           FLDDL
C                     END
C                     SETON                     LR
C                     RETRN
The original servers still support the PCSACC network attribute, but a special value allows them to use the registration facility to determine which exit programs to run: for the original servers to use exit programs, the network attribute PCSACC must be set to *REGFAC. The following values are valid for the PCSACC network attribute:

• *OBJAUT - The servers verify normal object authority.
• *REGFAC - The servers use the OS/400 registration facility to determine the exit program to call.
• *REJECT - The servers reject all requests from the PC.
• LIBRARY NAME/PROGRAM NAME - The supplied program is called.
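For instance, switching the original servers to the registration facility is done by changing the network attribute with the Change Network Attributes command (a sketch; run from a suitably authorized session):

```
CHGNETA PCSACC(*REGFAC)
```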

The following original servers check the PCSACC network attribute to determine how to handle exit programs:

• Transfer function server
• Message function server
• Original data queue server
• Remote SQL server
• License management server

5.3.40 Running 16 Bit ODBC Applications under Windows 95
If you want to run ODBC applications that were written to use the 16-bit ODBC support provided by Client Access for Windows 3.1, you must do three things:

• Define a 32-bit data source for the AS/400 system.
• Update the PATH statement in the AUTOEXEC.BAT file to include the following entry, where C:\Progra~1\IBM\Client~1 is the directory where Client Access for Windows 95 is installed:

  C:\Progra~1\IBM\Client~1\shared

• Copy the following files from the Windows 3.1 system directory to the Windows 95 system directory:


  ODBC.DLL
  ODBCINST.DLL
  CTL3DV2.DLL

After properly setting up the Windows 95 environment to support 16-bit ODBC applications, you can run ODBC applications that were created under Windows 3.1.

5.3.41 Comparison of ODBC Techniques Using Windows 3.1
All tests were done using the following set of references:

Transaction type: New Order transaction with ten items in the order.
Client configuration: IBM ThinkPad (Intel DX4 100/33 MHz) with 40MB RAM, running Windows 3.1, DOS 7.0, and the Client Access/400 Windows 3.1 client.
Server configuration: AS/400 model 510 running OS/400 Version 3 Release 6, PTF C6220360.
Communications link: 4 Mbps token-ring LAN.

The following utilities were used to derive the results:

• Client response time, measured by the client program
• Performance Tools/400 (5763-PT1 LPP) Component Report
  − Number of database I/Os
  − Number of communications I/Os

Example Programs

All the example programs used for the performance tests are available with this redbook on the included PC media. Please refer to Appendix A, “Example Programs” on page 393 for more information and a guided tour of the application code.

Using the different techniques discussed can result in very different performance for your application. The identical order entry application was written using several different methods: the first uses Visual Basic database objects, the second uses ODBC APIs, the third uses blocked inserts, and the fourth uses a combination of ODBC APIs and stored procedures. The following table shows the number of communication I/O operations and response times for a "sample" order entry style operation using each method in the Windows 3.1 environment. As the table shows, reducing communication I/O requests between the client and server can dramatically affect response time.


Table 5. I/O and Response Times (Windows 3.1)

Method                                                Logical     Communication  Response time
                                                      I/O count   I/O count      (secs)
Database Objects (Visual Basic)                       80          511            8.46
ODBC APIs (Visual Basic)                              46          85             1.82
ODBC APIs with blocked insert (C++)                   41          73             1.62
ODBC APIs calling a stored procedure (Visual Basic)   46          5              .77

5.3.42 Comparison of ODBC Techniques Using Windows 95
All tests were done using the following set of references:

Transaction type        New Order transaction with ten items in the order.
Client Configuration    IBM ThinkPad (Intel DX4 100/33 MHz) with 40MB RAM,
                        Windows 95, Client Access/400 for Windows 95.
Server Configuration    AS/400 model 510 running OS/400 Version 3.6,
                        PTF C6220360.
Communications link     4 Mbps Token-ring LAN.

The following utilities were used to derive the results:
•   Client Response Time measured by the client program
•   Performance Tools/400 (5763-PT1 LPP)
    −   Component Report
        -   Number of Database I/Os
        -   Number of Communications I/Os

Example Programs

All the example programs used for the performance tests are available with this redbook on the included PC media. Please refer to Appendix A, “Example Programs” on page 393 for more information and a guided tour of the application code.

This section presents the performance measurements recorded when running the ODBC application under Windows 95. All the applications used were created using Visual Basic. We implemented the same application that was used in the Windows 3.1 environment using the following methods:
•   32-bit database objects using SNA
•   32-bit ODBC APIs using SNA
•   32-bit stored procedures using SNA
•   32-bit database objects using TCP/IP
•   32-bit ODBC APIs using TCP/IP
•   32-bit stored procedures using TCP/IP


We found that we could achieve the best response time with Windows 95 using a TCP/IP connection. Again we found that reducing communication I/Os has a dramatic impact on the response time.
Table 6. ODBC I/O and Response Times (Windows 95)

Method                                                 Logical     Communication  Response time
                                                       I/O Count   I/O Count      (secs)
Visual Basic Database Objects (SNA)                    80          512            51.24
ODBC APIs (SNA)                                        46          87             2.8
ODBC APIs calling a stored procedure (SNA)             46          5              1.21
Visual Basic Database Objects (TCP/IP)                 80          N/A            46.79
ODBC APIs (TCP/IP)                                     46          N/A            2.19
ODBC APIs calling a stored procedure (TCP/IP)          46          N/A            1.04

5.4 OLTP Serving
The following figure summarizes performance results seen from various implementation methods of AS/400 client/server online transaction processing (OLTP). The same application was implemented using a number of different techniques.


Figure 50. OLTP Implementation

To summarize:
•   Improvement in remote SQL response (V3R1 over V2R3).
•   ODBC packages better than remote SQL.
•   Writing to ODBC APIs improves response over 4GL/SQL packages.
•   Block insert better than row-at-a-time.
•   Application serving (stored procedures) has the shortest response.

5.5 Client/Server 4GL and Middleware
Many users build client/server applications using client toolkits such as C/S 4GLs or CASE tools. Most of the new 4GL tools use "middleware", or interface code, to connect to a server. In Windows or OS/2, this middleware usually consists of one or more DLLs used to connect to a given server. The middleware converts the client's request into commands and data that the server can understand. Often the middleware is written by the toolkit provider to interface to a specific server or to a standard server API set. Because the user is often isolated from the APIs and the middleware manages the database access method, it is important to build applications using tools that optimize for performance. In many cases, tools that are built for "openness" across many servers tend to be the worst performers because they are built to the least common denominator.


Ensure that your AS/400 toolkit has support for functions like stored procedures and blocked insert. If not, ensure that there is a mechanism to write directly to the CA/400 API set for the best performance. Many performance problems with client development toolkits are due to the client tool creating inefficient database access requests. For example, a simple database transaction that should result in minimal interaction with the server can generate hundreds of unnecessary ODBC requests and responses. By choosing high performance toolkits, and with planning and tuning, these problems can be avoided. Choose a server access method that provides high performance database serving. If your 4GL supports multiple access methods to the AS/400 server, consider the following:

•   Use ODBC for the best SQL access performance. ODBC can improve performance over other SQL access methods.
•   Distributed Relational Database Architecture (DRDA) provides acceptable performance in most cases. When possible, use static SQL statements for the best performance.
•   Distributed Data Management (DDM) does not have the flexibility of an SQL interface but, in most cases, provides good record-level file access performance.

Use client tools to assist in tuning the client application and middleware. Tools such as ODBCSpy and ODBC Trace (available through the ODBC Driver Manager) are useful in understanding what ODBC calls are being made and what activity is taking place as a result. Client application profilers may also be useful in tuning client applications and are often available with application development toolkits. Refer to the ″Client Access/400 Windows 3.1 Client for OS/400 ODBC User′s Guide″, SC41-3533, for more information on specific client tools.
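As a sketch of how such trace output might be summarized, the following tallies ODBC call names. The trace format assumed here (one call per line, beginning with the function name) is a simplification for illustration; real ODBC Trace output is more verbose.

```python
from collections import Counter

def count_odbc_calls(trace_lines):
    """Tally ODBC function names from trace output.

    Assumes a simplified format in which each call line begins with the
    ODBC function name (an assumption; actual ODBC Trace logs differ).
    """
    counts = Counter()
    for line in trace_lines:
        name = line.strip().split("(")[0].strip()
        if name.startswith("SQL"):
            counts[name] += 1
    return counts

# Hypothetical trace excerpt for a two-row transaction.
sample = [
    "SQLPrepare(hstmt, ...)",
    "SQLExecute(hstmt)",
    "SQLExecute(hstmt)",
    "SQLFetch(hstmt)",
]
print(count_odbc_calls(sample))
```

A tally like this makes it easy to spot a toolkit issuing hundreds of calls where a handful should suffice.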

5.6 Query Download/Upload (Database File Transfer)
Download/Upload queries represent a set of queries that either fetch a significant number of records from DB2/400 tables (or files) or insert a significant number of rows into DB2/400 tables or files. Because of the number of rows processed, there is a significant amount of processing that occurs on the client. Many times, significant performance gains may be realized by running these types of queries on a faster client processor.

5.6.1 Query Download
Most of the time spent for large record download operations is in the client or for communications. ODBC query download rates can be comparable to IFS file transfer rates. For fastest retrieval times for an entire large database table, do not immediately format and display all the data retrieved. Instead, use client tools to manipulate and display the data after it has been entirely downloaded to the client. When retrieving the entire database table, the recommended ODBC Record Blocking setting is 512 KB. Decreasing this size may cause slower performance.
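The effect of the Record Blocking setting on the number of server round trips can be sketched as follows. The table size and row width are hypothetical illustration values, and per-row overhead inside the block buffer is ignored.

```python
import math

def fetch_flows(total_rows, row_bytes, block_kb):
    """Round trips needed to fetch a table when the driver blocks rows
    into buffers of block_kb kilobytes (simplified model)."""
    rows_per_block = max(1, (block_kb * 1024) // row_bytes)
    return math.ceil(total_rows / rows_per_block)

# Hypothetical table: 100,000 rows of 200 bytes each.
for kb in (32, 128, 512):
    print(f"{kb:3d} KB blocking -> {fetch_flows(100_000, 200, kb)} flows")
```

Larger blocks mean fewer flows for a full-table download, which is why the 512 KB setting is recommended for retrieving an entire large table.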


When using client tools to browse through the data, limit the query to display only the first screen of data or, if the client operating options permit, write the first received block of rows to the screen while receiving more rows from the server. Fetch the next set of data when needed. Set the Record Blocking to 32K or less for fast retrieval of only a small number of rows from a large table. As the number of columns to be retrieved increases, the retrieval rate decreases and response time increases. Large frame size settings may improve performance.
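The "first screen now, remaining rows on demand" pattern can be sketched with any SQL interface that supports block fetches. Here the standard Python sqlite3 module stands in for the ODBC driver, an assumption made purely so the sketch is self-contained.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, item TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, f"item-{i}") for i in range(1000)])

SCREEN_ROWS = 20  # display one screen of data at a time
cur = conn.execute("SELECT id, item FROM orders ORDER BY id")

first_screen = cur.fetchmany(SCREEN_ROWS)   # show this immediately
print(f"displayed {len(first_screen)} rows; more fetched on demand")

next_screen = cur.fetchmany(SCREEN_ROWS)    # fetched only when the user pages down
```

The user sees the first screen while the remaining 980 rows are never transferred unless actually requested.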

5.6.2 Query Upload
Client applications that perform inserts, updates or deletes will generally perform these SQL commands one at a time to the CA/400 data access server. However, for inserts, there is an opportunity to use the blocked INSERT SQL statement, which can be used to send a set of rows to the server in a single communications flow. Measurements have demonstrated that this form of insert can be over 20 times faster than doing inserts one at a time. See topic 5.3.30, “Block Insert” on page 166.
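The client-side preparation for a blocked INSERT, grouping rows so that each group travels in a single communications flow, can be sketched as follows; the block size and row layout are illustrative.

```python
def block_rows(rows, block_size):
    """Yield rows in fixed-size blocks; with a blocked INSERT each block
    costs one communications flow instead of one flow per row."""
    for i in range(0, len(rows), block_size):
        yield rows[i:i + block_size]

# Hypothetical order with ten line items.
order_lines = [(n, f"part-{n}", 1) for n in range(10)]

flows_row_at_a_time = len(order_lines)                  # one flow per row
flows_blocked = len(list(block_rows(order_lines, 10)))  # one flow per block
print(flows_row_at_a_time, flows_blocked)
```

Ten flows collapse into one for this order; for larger uploads the ratio, and the measured 20-times speedup, grows accordingly.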

5.7 Summary - Database Serving
In order to achieve the best performance when implementing a database serving application:

•   Visual Basic controls (database objects or data controls) may cause poor performance.
•   When executing a statement multiple times:
    −   Prepare the statement using parameter markers. Then execute it, passing in the variable information.
    −   Enable extended dynamic support so that you use SQL package support.
•   Use Blocked Inserts when inserting large numbers of rows.
•   Use Stored Procedures for optimum database serving performance.
    −   If using SQL, AS/400 SQL Performance techniques apply. Refer to the DB2/400 SQL Programmers Guide for details.
    −   Avoid data type conversions.
    −   If joining tables, consider using native joins.
    −   Consider using native database operations rather than SQL for Updates, Inserts, and Deletes.
    −   Create the stored procedures rather than declaring them.
    −   Stored procedures need to be created only once; they can be re-used.
    −   Consider creating the stored procedure on the AS/400 system so you do not have the overhead of creating it from the client.
•   Use Application serving techniques (Distributed Logic) for high performance, customized programmed implementations.
•   Tune the ODBC data source as appropriate.
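The first two recommendations, prepare once with parameter markers and send many rows in one call, can be sketched with the standard Python sqlite3 module standing in for the ODBC driver (an assumption for illustration; the table and statement are hypothetical).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE detail (order_id INTEGER, part TEXT, qty INTEGER)")

# One parameterized statement reused for every row, instead of building
# (and forcing the server to re-parse) a new SQL string per insert.
stmt = "INSERT INTO detail VALUES (?, ?, ?)"
rows = [(1, f"part-{n}", n) for n in range(10)]
conn.executemany(stmt, rows)   # blocked-insert analogue: one call, many rows

count = conn.execute("SELECT COUNT(*) FROM detail").fetchone()[0]
print(count)
```

With a real ODBC driver the same shape (prepare once, execute with parameters, block the rows) is what lets extended dynamic and SQL package support take effect.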


Chapter 6. Client/Server File Serving
This chapter covers AS/400 file serving performance. It will first distinguish file serving from database and application serving:

•   Database serving:
    −   The database resides on the server.
    −   The client sends database requests to the server.
    −   The server handles the requests.
    −   The results are returned to the client.
•   Application serving:
    −   The application is split between the client and the server.
    −   The client sends a message (request) to the server.
    −   The server application processes the message.
    −   The results are returned to the client.
•   File serving:
    −   The files reside on the server.
    −   The client sends file I/O requests to the server (for example: Read, Write, Open, and Close).
    −   The server handles the requests.
    −   The results are returned to the client.

6.1 AS/400 Client/Server Options
Note: The File Server Input/Output Processor (FSIOP) has been renamed the AS/400 Integrated PC Server. In this chapter, the terms FSIOP and AS/400 Integrated PC Server are used interchangeably.

The AS/400 system provides file server capability through the Integrated File System or through the FSIOP:

•   Client Access/400 and Integrated File System (IFS):
    −   Network drives
    −   QDLS (previously known as shared folders)
•   LAN Server/400 and File Server IOP (FSIOP)
•   OS/400 Integration for Novell NetWare
•   OS/400 Integration of Lotus Notes

File serving on the AS/400 system has been greatly improved by the AS/400 Integrated PC Server (FSIOP). The FSIOP supports running LAN Server/400, Novell NetWare servers, and Lotus Notes servers. This allows a client using the proper requester software to access the AS/400 disk directly across the FSIOP. This chapter first covers using the IFS for file serving and then covers using the FSIOP.


6.2 Integrated File System (IFS) Overview
The IFS (Integrated File System) provides a common interface that allows applications to access many different file structures including stream files (large blocks of data stored sequentially), database files, documents, and other objects stored on the AS/400 system. Client Access/400 takes advantage of the IFS to present all of the file systems to the Client Access/400 user as a network drive. The different file systems appear as subdirectories under the root directory.

Figure 51. Integrated File System

6.2.1 IFS File Server
The IFS file server provides optimized performance over the old shared folders server. Subsystem QXFPCS has been replaced, and all jobs now run in subsystem QSERVER. Beginning with V3R1:

•   CA/400 server functions are incorporated into OS/400:
    −   No longer part of Client Access/400.
•   IFS File Server replaces the Shared Folders Type 2 Server.
•   Access to the entire Integrated File System (IFS) is provided.
•   Supports all clients:
    −   Original clients only access documents in folders (QDLS file system).
    −   New clients can access the root, QDLS, QOpenSys, and QLANSrv file systems.
•   IFS provides an integrated structure over all information stored in the AS/400 system:
    −   Supports stream input/output.

6.2.2 AS/400 Integrated PC Server (FSIOP) Concepts
The File Server Input/Output Processor is an IOP implementation designed to provide AS/400 users with superior file serving performance. It comes with a network card, either token-ring or Ethernet, that is equivalent in performance to the high performance 2617 or 2619 adapters. You can order the FSIOP with either one or two ports, and these ports can be either Ethernet, token-ring, or a mix of the two. The FSIOP also acts as just a LAN adapter in a communications network. On the main card, there are two CPUs. The Intel i960 processor is used for communication between the AS/400 bus and the FSIOP memory and CPU. The Intel i486 is the CPU of the FSIOP, used for processing data and instructions. How much data is processed, and how quickly, depends on the amount of cache memory available on the FSIOP. A maximum of 64MB is currently available.

Figure 52. FSIOP Card

The FSIOP requires the LAN Server/400 program product to be activated. Once activated, data stored in the QLANSrv file system is accessible through the FSIOP to LAN Requester clients and also to Client Access/400 clients using the IFS. When configuring storage spaces through the FSIOP, the storage spaces are allocated in chunks of up to 8GB with a maximum of 16 storage spaces per FSIOP. This makes a maximum of 128GB of AS/400 disk usable through an FSIOP. A network server description has to be created on the AS/400 system to link the FSIOP to the storage spaces for administration purposes. Note: IBM has announced its intention to upgrade the FSIOP processor speed and memory capacity.

6.2.3 LAN IOP Response Time


Figure 53. LAN IOP Response Time

Conclusions:

•   The 6506 FSIOP used with LAN Server/400 has similar response time performance characteristics to the 2619 and 2617 LAN IOPs.
•   The 6506 FSIOP can be used simultaneously for LAN Requester traffic flowing to LAN Server/400 and "normal" LAN traffic flowing to the AS/400 system.
•   This data is based on internal IBM tests in February 1995 and is not representative of a specific customer environment. Results in other environments may differ significantly.


6.2.4 LAN IOP Throughput

Figure 54. LAN IOP Throughput

Conclusions:

•   A 16 Mbps LAN was used for the TRLAN environments, and a 10 Mbps LAN was used for the Ethernet environments.
•   The 6506 FSIOP used with LAN Server/400 can achieve throughput rates higher than the 2619 token-ring LAN IOP.
•   The 6506 FSIOP used with LAN Server/400 has similar throughput characteristics to the 2617 Ethernet LAN IOP.
•   This data is based on internal IBM tests in February 1995 and is not representative of a specific customer environment. Results in other environments may differ significantly.

6.3 AS/400 File Serving Performance
In general, V3R6 file serving performance for Client Access/400, and for FSIOP file serving is equivalent to V3R2 when comparing systems of equal configurations. V3R7 file serving performance is generally equivalent to file serving in V3R6. All numbers in this section were collected in V3R6 unless otherwise noted. This section will focus on the following topics.
•   File Serving Performance Positioning
•   Client Access/400 File Serving with the Integrated File System (IFS) File Server/400
•   FSIOP and LAN Server/400 File Serving
•   Multimedia File Serving
•   OS/400 Integration for Novell NetWare (V3R1)

Note: Additional performance information is available in the ″AS/400 Performance Capabilities Reference″ book. This book is an IBM internal document that is frequently updated. IBM publishes new versions at major announcement and general availability dates. Your IBM representative can access this information by referring to MKTTOOLS.


6.4 File Serving Performance Positioning
With the unique requirements in the area of PC file serving, it is important to provide an environment to meet the needs of the customer. For PC file serving on the AS/400 there are basically two options: file serving with the FSIOP, and Client Access/400.

If your requirement is high performance file serving for client workstations running DOS, Microsoft Windows**, or OS/2* attached via LAN to the AS/400, then the FSIOP is the best solution. The FSIOP running LAN Server/400 or Novell NetWare provides PC file serving performance competitive with the leading PC servers, and requires dramatically less AS/400 CPU resource than using Client Access/400 and the Integrated File System for file serving.

Where performance is not a key requirement in PC file serving, such as in the areas of casual file serving or client administration, the file serving provided by Client Access/400 is sufficient. Client Access/400's strength is in providing seamless integration of the PC desktop, not only in the area of file serving, but also in areas such as database serving, print serving, AS/400 application access, and 5250 emulation.

When you require both high performance file serving and access to all the PC desktop services provided by Client Access/400, the FSIOP and Client Access/400 can be used concurrently on the client. The LAN Requester provides access to all data maintained by the FSIOP server. Client Access/400 provides access to other AS/400 resources, such as the AS/400 database, byte stream data in the Integrated File System (IFS), and printers.

6.4.1 File Serving Workloads and Configurations
The following workload environments and client/server configurations will be used in this chapter to compare file serving performance.

BAPCo5 Workload Description

The BAPCo5 (Business Application Performance Corporation) workload represents a client/server environment in which PC users run the following five commercially available applications and access programs, batch files, and data files residing on the server. Some of the functions utilized by each application are listed:

−   Harvard Graphics: macros, create org chart, 3-D pie chart, import Lotus and Excel spreadsheets, plot, create text chart, slide show, print
−   Paradox: define databases in the file system, selective retrieval, report generation, print
−   WordPerfect: small and large documents, scroll, help feature, tables, headings, search and replace, edit
−   Excel: bivariant normal distribution, print product forecasting, create tables, cut/paste, data entry, 2-D and 3-D graphs, format, print tax forms, load forms, enter data, scroll, links, print preview, print
−   CCMail: electronic mail, selective retrieval, edit, send response, print

The preceding applications and end-user functions were selected based on research on which applications and functions are most prevalent. The applications and the files are maintained on the networked server and are accessed from client machines running Win-OS/2. Each client machine runs an automated script (beginning with a different application) and runs


one application at a time until the entire set of applications has been run. The automated scripts run with zero think times and thus generate file serving requests that would be representative of many clients. Print output is held in a print queue on each client.

Low Level PC Primitives Workload Description

The following file operations are executed from a client "C" program to files that reside on a network drive.

Test case      Description
---------      -----------
Open100        Open 100 files 2 dir deep for Read (i:\dir1\dir2\test)
Close100       Close 100 files 2 dir deep
Read1MB        Read (i:\dir1\dir2\test.001) 1048576 bytes / 16384 bytes per read
Write1MB       Write (i:\dir1\dir2\test.001) 1048576 bytes / 16384 bytes per write
Create100      Create 100 files 2 dir deep (i:\dir1\dir2\test)
Delete100      Delete 100 files 2 dir deep (i:\dir1\dir2\test)
GetFilAtt100   Get Normal File Attributes by Name 100 times
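A minimal sketch of how primitives such as Create100 and GetFilAtt100 can be timed is shown below. It runs against a local temporary directory rather than a network drive, and borrows the primitive names from the workload above only for illustration.

```python
import os
import tempfile
import time

def time_create(n, directory):
    """Time creating n empty files, mimicking the Create100 primitive."""
    start = time.perf_counter()
    for i in range(n):
        with open(os.path.join(directory, f"test.{i:03d}"), "w"):
            pass
    return time.perf_counter() - start

def time_getattr(n, path):
    """Time n attribute lookups, mimicking GetFilAtt100."""
    start = time.perf_counter()
    for _ in range(n):
        os.stat(path)
    return time.perf_counter() - start

with tempfile.TemporaryDirectory() as d:
    create_secs = time_create(100, d)
    attr_secs = time_getattr(100, os.path.join(d, "test.000"))
    print(f"Create100: {create_secs:.4f}s  GetFilAtt100: {attr_secs:.4f}s")
```

Pointing the same loops at a mounted network drive letter is what turns this sketch into a crude version of the workload measured in this chapter.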

Client and Server Configurations

The following configurations were used to collect the data for the file serving measurements in this chapter.

Servers:

−   AS/400 model 400-2131: 96 MB memory, 33000K for Machine Pool, 8 - 1GB disk drives; 16 Mbps Token Ring LAN, 2619 for Client Access; 16 Mbps Token Ring LAN, 64MB FSIOP for LAN Server/400
−   PS/2 Server 95 running NetWare 4.1 for DOS: 64MB RAM, 4 - 350 MB SCSI disk drives; Auto LAN Streamer MC 32 Adapter; 16 Mbps Token Ring LAN

Clients:

−   ValuePoint 486-66Mhz clients running LAN Requester from OS/2 or NetWare Requester from DOS
−   ValuePoint 486-66Mhz clients running Client Access/400: 16 MB RAM memory, 320 KB PC cache
    -   CA/400 for DOS Extended (original client)
    -   CA/400 for OS/2 (original client)
    -   V3R1.1 CA/400 for Windows 3.1 (premier client)
    -   V3R1.1 CA/400 for Optimized OS/2 (premier client)

6.5 Client Access/400 File Serving Performance
With Version 3 of OS/400, the Windows 3.1* client and the Optimized for OS/2 client were introduced. In June 1996, the Windows 95 client was introduced. These clients are designed to provide integration with their respective desktop operating systems, and also to take full advantage of the AS/400 Integrated File System. In addition, the Windows 3.1 client provides native Windows communication support. Only adapter drivers, such as the LAN Support Program, run as DOS TSR (terminate and stay resident) programs. Less real memory is required as a result of this change, which eliminates most TSR programs.


For these new clients, the Root file system offers the best file serving performance. Therefore, as users migrate to using these clients from the existing DOS clients and the original OS/2 client (16 bit), it is recommended to also move their data into the new Root file system from the QDLS file system to get the best performance. However, this is not always possible. In environments where new clients, such as Windows 3.1, and original clients, such as DOS extended, need to access the same data, the data will need to be stored in the QDLS (shared folders) file system.

6.5.1 Performance Data
The following performance information was collected using the low level PC primitives workload.

Figure 55. PC File Serving Primitives Response Time Comparison

6.5.2 Conclusions and Recommendations

•   Performance data for the Optimized for OS/2 client to the Root file system is not shown, but it has performance characteristics comparable to Windows 3.1 to Root.
•   The Root file system provides faster response times than QDLS for the Windows 3.1 client, especially for the Create100, Delete100, Open100, and Close100 test cases shown above. Similar results for Root versus QDLS can be expected for the Optimized for OS/2 client.
•   Both the Windows 3.1 and Optimized for OS/2 clients provide AnyNet support for running APPC-based applications over TCP/IP. Using AnyNet support will add to the response times shown above.
•   FSIOP file serving provides the fastest response times compared with any of the Client Access configurations.


6.5.3 CA/400 for Windows 95 File Serving
Measurement Results: This section provides measurement results in terms of end user response time.
Table 7. CA/400 for Windows 95

Configuration: CA/400 for Windows 95 V3R1M0; AS/400 400-2131, V3R7; PC 486-66Mhz, 32MB memory.

Response time in milliseconds (for primitive IDs, see the intro section):

Primitive           TCP/IP Connection   APPC Connection   IPX Connection
                    Root                Root              Root
Open (OPNR2)        3190                4880              3350
Close (CLS6)        1980                4010              1820
Read (RD1)          11640               7740              27300
Write (WRT1)        3900                3850              5160
GetFileAtt (GNAN)   3080                5330              3400
Create (CRT3)       15220               16090             14670
Delete (DLT3)       8190                11260             8240

Note: TCP/IP and APPC numbers are for CA/400 for Windows 95 V3R1M0. The V3R1M1 numbers for these two protocols are very similar to V3R1M0. IPX numbers are for CA/400 for Windows 95 V3R1M1.

Conclusions/Explanations:

1.  Windows 95 clients should use TCP/IP for the fastest response time. This is due to the client contribution to response time being faster with TCP/IP.
2.  AS/400 server capacity is best with APPC. This is due to the AS/400 server communications pathlength difference between TCP/IP and APPC.
3.  CA/400 for Windows 3.1 clients using AnyNet should experience an improvement in response time and capacity when migrating to CA/400 for Windows 95 using native TCP/IP. Client and server contributions to response time are better for TCP/IP than AnyNet for Windows 3.1.
4.  CA/400 for Windows 3.1 clients using APPC may experience a degradation in response time when migrating to CA/400 for Windows 95. This is due to client performance differences between Windows 3.1 and Windows 95. AS/400 server performance is unchanged between Windows 3.1 and Windows 95 clients; therefore, AS/400 server capacity is unchanged.
5.  Clients directly connected to the AS/400 server will generally achieve the fastest response times. Gateways tend to add response time to client/server transactions. Performance-sensitive workloads usually perform best when directly attached to the AS/400 server, since the number of processing layers is minimized.


6.  For optimum performance on the Windows 95 V3R1M0 client, the client should be updated to at least Service Pack 2. Improvements were made in that Service Pack for SNA, TCP/IP, and IPX communications.
7.  Refer to ASKQ item RTA000097671 for more details on CA/400 for Windows 95 performance.

6.5.4 Performance Tips/Techniques for Client Access/400 File Serving
Throughout this section, IFS File Serving refers to the functions formerly known as Shared Folders.

6.5.4.1 Setup and Configuration Tips & Techniques
Following are setup and configuration tips for all clients, for DOS clients, and for OS/2 clients. All Clients:

•   The following should be considered when choosing the size of the cache for CA/400 clients:
    −   Use a small cache (256 kilobytes - 1MB) for applications accessing data sequentially in small amounts (or if you do not have much available PC memory).
    −   Use a medium to large cache (500KB or larger) for applications accessing data randomly.
    −   Use a medium to large cache (500KB or larger) for applications accessing data both randomly and sequentially.
    −   The required cache size varies with the application. Creating a cache larger than is necessary may not further improve the performance of that application. However, if the caching is effectively reducing communication interaction with the AS/400 system, then the load on the AS/400 system processor is reduced.
    −   If you typically run most of your applications once a day and use the data associated with those applications only once, the cache size to choose is the maximum cache size determined for those applications.
    −   If you typically run some or all of your applications more than once and use the data associated with those applications more than once, the cache size to choose is the sum of the cache sizes determined for those applications.

    Note: Increased cache sizes will increase the amount of memory used on the PC. If there are already memory usage problems on the PC, increasing the cache size for CA/400 will only worsen the problem.

•   Use a large frame size if possible. Significant performance improvement may be achieved for Windows 3.1 clients when using a large frame size. See 6.7.1, “Conclusions and Recommendations” on page 215 for more information.
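The run-once versus run-repeatedly sizing rule above can be encoded directly; the per-application cache sizes used here are hypothetical.

```python
def recommended_cache_kb(app_cache_kb, data_reused=False):
    """Apply the sizing rule: take the maximum of the per-application
    cache sizes when each application's data is used only once per day,
    and the sum when applications and their data are used repeatedly."""
    sizes = list(app_cache_kb)
    return sum(sizes) if data_reused else max(sizes)

# Hypothetical per-application cache requirements in KB.
apps = {"word processor": 256, "spreadsheet": 512, "mail": 128}

print(recommended_cache_kb(apps.values()))                    # run once: max
print(recommended_cache_kb(apps.values(), data_reused=True))  # reused: sum
```

The run-once case keeps memory modest (one application's working set at a time); the reuse case trades PC memory for fewer round trips to the AS/400 system.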

DOS Clients:

•   The following should be considered when choosing the size of the cache for DOS clients:
    −   Client Access/400 supplies a tool called GETSTAT that can help you tune your PC memory cache size. The tool is in the AS/400 folder QIWSTOOL. Run the program IWSTOOL, which is also in the QIWSTOOL folder, and select GETSTAT to download it to your fixed disk or a diskette. GETSTAT will tell you the memory used by your IFS File Serving programs and buffers. It also gives you an idea of how effectively your cache is working to limit data transfer with the AS/400 system.

        Note: GETSTAT works only with the DOS Extended client.

    −   If using DOS Extended, use a minimum cache of 128 kilobytes.
    −   The value specified on the MCAC, MCAE, and MCAX identifiers is used to create both the IFS File Serving cache and a cache table to keep track of things within the cache.
    −   If you are using DOS Extended (XMS), space for the cache as specified on the MCAX parameter is allocated from extended memory. Using XMS does not permit the allocation of the cache and the associated cache table in conventional memory.
    −   If you are using DOS 5.0 and have the following statement in your CONFIG.SYS file, it is possible for the DOS Extended IFS File Serving code to use only 96 bytes of your conventional memory:

        dos=high,umb

•   There are two connectivity types that do support the placement of the communications buffer in extended memory: SDLC and ASYNC. In these cases, the FEMU will be ignored and the buffer placed into extended memory. When possible, for a performance advantage, specify this buffer in conventional memory.
•   The size of the communications buffer can have an effect on the performance of IFS file serving. The size is specified with the CBSZ identifier in the configuration file (CONFIG.PCS). Note that the size of the communications buffer can be changed only if it is located in conventional memory, with the exception noted earlier for the two connectivity types that support extended memory (ASYNC and SDLC).
    −   If enough memory is available on the PC, increasing the size of the communications buffer can improve response times.
    −   If the PC is having problems with memory usage, use the default communications buffer size (8K).

OS/2 Clients:

•   The LAN Server/400 and the FSIOP, the IBM OS/2 LAN Server, or the PC LAN Program may be better file-serving alternatives if heavy file I/O is the only function to be performed. Client Access/400 should be considered when many of the required functions are not available with other file servers. These functions include host data transfer, host integration, 5250 emulation, and remote system access.
•   Because Client Access/400 can reside simultaneously with a PC file server (that is, LAN Server/400, OS/2 LAN Server, or the PC LAN Program) in the same workstation, it may be desirable to install more than one server on the LAN. For example, the PC LAN Program could be used to provide the performance needed for program loading from a PC server, while the IFS File Serving function could be used for storage of data for remote PCs and for data exchange between PCs and the host system.
•   Another alternative is the NetWare(*) for SAA product, which supports NetWare serving from a PC server. It also supports all the Client Access/400 products from an AS/400 attached to the NetWare PC server.

6.5.4.2 Application Tips & Techniques
PC hard disks typically provide better performance than the IFS File Serving function. However, there are techniques to lessen the performance differences. The following techniques may be used:

•   When appropriate, store some PC programs/files on the PC hard disk.
•   Copy PC files to a PC RAM disk or hard disk, use the files from the RAM disk/hard disk, and copy the changed files back to the AS/400 system when the work is complete.
•   Minimize the number of times a file is uploaded or downloaded.
•   A PC application should use an appropriate create or open operation instead of searching for the existence of a file. Searching for a file requires significant time and resources compared with other operations.
•   PC applications should be designed to open a file once, perform all necessary operations, and then close the file.
•   Use "write with verification" only when absolutely necessary. When verification is used, write operations are not buffered by the File Server/400.
•   When reading data from a file, read from beginning to end and not randomly. The File Server/400 buffers are used more efficiently when files are accessed sequentially from beginning to end.
•   Design your PC applications so that small files are read into memory. Memory accesses are always faster than accessing the disk or data on the AS/400 system.
•   Increase the LANMAXOUT value in the AS/400 controller description for clients attached by a TRLAN. Changing the default value from 2 to 7 provided significant performance improvement during testing in a lab environment. Changing the default value to 6 may yield similar improvement.
•   Backing up files to a folder can take significant AS/400 resources. To avoid performance impacts to other AS/400 applications, consider the following options:
    −   It is possible to use a packaging utility such as PKZIP to create a single large file that contains images of multiple files. Then the single large file can be saved for back-up purposes. This can eliminate hundreds of creations of files, and can literally make the time to back up hundreds of times faster.
    −   Back up large numbers of files to folders when system activity is low.
    −   Use the CHGJOB (Change Job) command to decrease the run-time priority of the job doing the backup so that system resources are not tied up.
•   For the new clients, the Root or QOpenSys file systems offer the best file serving performance.
•   Avoid doing many open/delete operations from the same Root or QOpenSys directory. Delete causes the name cache for the specified directory to be invalidated and results in poor performance. Try to keep directories that have frequent deletions (for example, temp files) separate from directories containing files that are not frequently deleted.
•   Avoid any administration (create, delete, update operations) of authorization lists during peak IFS usage. This will improve the efficiency of internal caches.


•   Avoid using supplemental group profiles. Use the default value of *NONE for the SUPGPPRF parameter in the user profile. Best performance is achieved for users that do not have supplemental group profiles.
•   The Root or QOpenSys file system supports hard links and symbolic links. Hard links can improve performance by reducing the number of directory lookups; however, having many hard links to the same file can result in poor performance. Best results are achieved when you use a small number of hard links for key files that are commonly used.
•   For Optimized OS/2 clients, use a ″current working directory″. This can be done by assigning a drive to the working directory or by using the CD command. Using a current working directory can reduce the number of directory lookups and improve performance.
•   Avoid doing many mkdir and rmvdir operations when using the Root or QOpenSys file system.
•   Avoid doing many create and delete operations when using the QDLS file system.
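One of the backup tips above suggests using a packaging utility such as PKZIP to combine many small files into a single large file before saving it. As an illustrative sketch only (in Python rather than PKZIP, and with a hypothetical function name), packaging a directory of small files into one archive means the backup handles a single object instead of hundreds of file creations:

```python
import zipfile
from pathlib import Path

def package_for_backup(src_dir: str, archive: str) -> int:
    """Pack every file under src_dir into one archive so a backup
    saves a single large object instead of many small ones.
    Returns the number of files packaged."""
    count = 0
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in Path(src_dir).rglob("*"):
            if path.is_file():
                # store with a path relative to src_dir, as PKZIP would
                zf.write(path, path.relative_to(src_dir))
                count += 1
    return count
```

The single archive can then be saved (and the priority of the saving job lowered with CHGJOB, as noted above).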

6.5.4.3 File Server/400 Tuning
The following tips can be used to tune File Server/400 (IFS File Server on the AS/400). Client Access/400 clients use this server when accessing network drives. There is always the risk of giving too much of any one resource to a particular job, so consider the other jobs on the AS/400 and their importance before shifting resources to the File Server/400.

As with other jobs, run priority and timeslice can be adjusted for File Server/400 jobs. For File Server/400, this can be accomplished by changing job class QPWFSERVER. Performance of File Server/400 also benefits from having more memory available. One option for providing File Server/400 jobs with adequate memory is to associate the QSERVER subsystem with its own pool and set aside the appropriate amount of memory for that pool.

Multiple clients executing the BAPCo workload were used to determine the optimal amount of memory needed per client. In the range of 1.5MB to 2.0MB per user, client response times ″leveled off″ such that more memory did not significantly improve response times. As the response times leveled off, the paging in this pool was still higher than recommended, but because the DASD arms were at relatively low utilization, the additional I/O did not significantly affect response time. However, as additional memory was added to the pool beyond 2.0MB per user, the paging and faulting for that pool continued to decrease significantly until about 4.0MB was available for each client (each QPWFSERV file server job, to be more exact).
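The memory figures above can be turned into a rough pool-sizing rule of thumb. The sketch below is an illustration only (the function name is hypothetical, not an IBM-supplied formula): roughly 2.0MB per user is where response times leveled off in the lab tests, and roughly 4.0MB per user is where pool paging stopped improving.

```python
def qserver_pool_estimate_mb(clients: int, generous: bool = False) -> int:
    """Estimate memory (MB) to set aside for a private QSERVER pool.

    ~2.0MB/user: response times leveled off in lab tests.
    ~4.0MB/user: paging and faulting stopped decreasing.
    """
    per_user = 4.0 if generous else 2.0
    return int(clients * per_user)
```

For example, 50 active File Server/400 clients would suggest a pool of roughly 100MB, or about 200MB if paging is also to be minimized.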

6.6 LAN Server/400 and FSIOP File Serving Performance
V3R7 will provide similar performance characteristics to V3R6 for the LAN Server/400 environment. Most of the performance data provided in this section is from V3R6 systems. LAN Server/400 was a new AS/400 licensed program in V3R1. It works with the AS/400 File Server IOP (FSIOP) to provide high performance file serving to PC LAN clients.

Chapter 6. Client/Server File Serving


LAN Server/400 implements OS/2 LAN Server technology on the AS/400 and allows the AS/400 to provide services similar to those of an OS/2 LAN Server. PC clients can access files stored on the AS/400 by using the IBM LAN Requester program, which is the same software used to request service from an OS/2 LAN Server. From the perspective of PC clients, the AS/400 and the OS/2 LAN Server are both servers that can store data and programs. PC LAN clients running DOS, OS/2, or Microsoft Windows are all able to request services.

LAN Server/400 and the FSIOP provide high performance file serving at a level not previously possible with the AS/400 system. File serving performance with the FSIOP and LAN Server/400 is comparable with best-of-breed PC LAN file servers. Response time and throughput characteristics of file serving with an FSIOP are consistently comparable with PC LAN file servers such as OS/2 LAN Server and Novell NetWare running on equivalent hardware. This is true for both small and large numbers of users.

The remainder of this section gives performance information for several scenarios using LAN Server/400 and the FSIOP. For a thorough understanding of LAN Server/400 features and function, the following IBM redbook is recommended: LAN Server/400 - A Guide to Using the AS/400 as a File Server, GG24-4378. Another recommended document is LAN Server for OS/400 Administration, SC41-3423.

QLANSrv is a file system present on systems that have the LAN Server/400 licensed program installed. QLANSrv provides a mechanism for host applications and the OS/400 Integrated File System (IFS) file server (File Server/400) to access data in the LAN Server/400 file system.

Note: Host applications or PC clients (using File Server/400) that access files through QLANSrv will not achieve the same level of performance that PC clients using LAN Requester applications can achieve.
For information on saving and restoring LAN Server/400 files and storage spaces, refer to 6.11, “Save/Restore Considerations” on page 222.

LAN Server/400 and the FSIOP provide performance equivalent to a 2617 or 2619 IOP when the FSIOP is used as a LAN communications controller. The FSIOP has greater capacity for large-transfer scenarios, achieving rates over 14Mbps. Refer to 6.2.3, “LAN IOP Response Time” on page 197 for additional information comparing the FSIOP (6506 IOP) with the 2617 and 2619 IOPs.

6.6.1 LAN Server/400 and FSIOP Sizing Guidelines
The following sections contain sizing guidelines.

6.6.1.1 Estimating the Number of Clients Supported by an FSIOP
The number of clients that an FSIOP can support when used as a file server depends on the amount of memory available for the HPFS cache on the FSIOP, the rate of requests from the clients, client hardware configurations, LAN utilization, and other factors. It is difficult to generalize with so many variables. The following estimates for the number of users supported are only guidelines. For casual file serving, the number of supported users will be larger. For very heavy file serving (e.g. multimedia continuous medium-quality video that delivers data to each client at a rate of 150Kbytes per second), the number of supported users will be lower, as resources such as the LAN become a bottleneck.
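The ″very heavy file serving″ example above implies a hard LAN ceiling: at 150KB per second per client, a 16Mbps token ring saturates with a small number of clients. The following back-of-the-envelope sketch illustrates this (the function name and the assumed 70% usable share of nominal LAN bandwidth are illustrative assumptions, not measured values):

```python
def max_video_clients(lan_mbps: float, kbytes_per_sec_per_client: float,
                      usable_fraction: float = 0.7) -> int:
    """Estimate how many continuous-video clients a LAN can feed
    before the LAN itself becomes the bottleneck.

    usable_fraction is an assumed share of the nominal LAN bit rate
    actually achievable for payload data.
    """
    usable_kbytes_per_sec = lan_mbps * 1_000_000 * usable_fraction / 8 / 1000
    return int(usable_kbytes_per_sec // kbytes_per_sec_per_client)
```

Under these assumptions, a 16Mbps token ring feeding 150KB/sec per client supports on the order of ten clients before the LAN saturates, which is why the supported-user guidelines drop sharply for this type of workload.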


Table 8. AS/400 FSIOP - Getting Started
Guidelines for Getting Started

 FSIOP Memory Size    Number of Users
 16 MB                1 - 20
 32 MB                20 - 50
 48 MB                50 - 100
 64 MB                100 - 250
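Table 8 can be read as a simple lookup. As an illustrative sketch (hypothetical helper name; the values are exactly the guideline ranges from the table, nothing more), the smallest FSIOP memory size covering a planned number of users can be selected programmatically:

```python
# (memory MB, guideline maximum users) pairs from Table 8
FSIOP_GUIDELINES = [(16, 20), (32, 50), (48, 100), (64, 250)]

def fsiop_memory_for(users: int) -> int:
    """Smallest FSIOP memory size (MB) whose guideline covers `users`."""
    for mem_mb, max_users in FSIOP_GUIDELINES:
        if users <= max_users:
            return mem_mb
    raise ValueError("more users than a single FSIOP guideline covers")
```

Remember that these are starting-point guidelines: casual file serving supports more users per memory size, and heavy file serving fewer.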

6.6.1.2 AS/400 CPU Requirements
The FSIOP is supported by all AS/400 Advanced System and Advanced Server models. Because relatively little AS/400 system CPU is consumed on behalf of file serving requests generated by LAN Requester clients, the various AS/400 models typically yield similar file serving performance characteristics. LAN Server/400 provides more file serving throughput at significantly less CPU than Client Access file serving. To minimize the use of AS/400 CPU resource when doing read operations from FSIOP-attached clients, large file system caches should be used on the FSIOP; the section below on DASD I/O requirements discusses HPFS cache sizes for the FSIOP. The data in Table 9 on page 210 shows the effect on AS/400 CPU and disk resources when running a fixed workload using FSIOPs of three different memory sizes.


Table 9. Effects of FSIOP Memory Size on AS/400 Resources for a Fixed Workload
LAN Server/400-FSIOP   AS/400 400-2131 V3R6

 FSIOP Configuration                 16MB    32MB    64MB
 AS/400 CPU Utilization (%)          8.0     4.5     3.3
 AS/400 I/Os Per Second              19.5    10.9    7.5
 BAPCo5 Avg Completion Time (sec)    719     691     680

Note: Average utilization of the 16Mbps LAN was 9% for all tests. Average utilization of the FSIOP ranged from 28% to 32%.

6.6.1.3 AS/400 System DASD I/O Requirements
To minimize the impact on AS/400 disk I/Os, it is best to make maximum use of the HPFS cache on the FSIOP by ensuring that sufficient FSIOP memory is available for the HPFS cache. The data in Table 9 shows fewer disk I/Os occurring with larger FSIOP memory because the HPFS cache is larger. The HPFS cache is used by the High Performance File System for data being read from and written to disk. The IBM Redbook LAN Server/400 - A Guide to Using the AS/400 as a File Server, GG24-4378, gives a detailed description of how the memory on the FSIOP is allocated. HPFS memory allocations for various FSIOP configurations are shown below:
Table 10. AS/400 FSIOP HPFS Cache Sizes
 FSIOP Memory         HPFS Cache Size (MB)
 Size (MB)            1 Port    2 Ports
 16                   4.8       2.8
 32                   16.3      13.8
 48                   28.6      25.6
 64                   43.6      40.6

Notice that for the 16MB FSIOP with two ports, less than 3MB is available for HPFS cache. HPFS cache statistics and other performance counters are available from the FSIOP. This data can be collected using the STRPFRMON command. IBM AS/400 Work Management, SC41-3306, provides information on the data collected by the STRPFRMON command.

6.6.1.4 AS/400 System Memory Requirements (Machine Pool)
Using an FSIOP with one port requires approximately 2700K bytes from AS/400 system pool #1 (the machine pool) exclusively for the FSIOP and LAN Server/400. Using an FSIOP with two ports requires approximately 4200K bytes from system pool #1 exclusively for the FSIOP and LAN Server/400. These memory requirements are separate from the memory on the FSIOP card itself.
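The machine-pool figures above are per FSIOP, so the total reservation scales with the number of installed FSIOPs. A small sketch of the arithmetic (hypothetical helper name; the 2700K and 4200K values are the ones quoted above):

```python
def machine_pool_kb(one_port_fsiops: int, two_port_fsiops: int) -> int:
    """KB of AS/400 machine pool (system pool #1) storage reserved
    for FSIOPs and LAN Server/400: ~2700K per one-port FSIOP and
    ~4200K per two-port FSIOP."""
    return one_port_fsiops * 2700 + two_port_fsiops * 4200
```

For example, a system with two one-port FSIOPs and one two-port FSIOP should plan on roughly 9600K bytes of machine-pool storage for this purpose.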


6.6.2 BAPCo5 Workload File Serving Comparisons
The data for the following chart was collected using the BAPCo5 workload, which is described at the beginning of this chapter. Because the clients run with zero think-time scripts, the load generated on the AS/400 server by these clients is representative of a load which would typically be generated by a larger number of clients. The following chart shows the relative positioning of file serving performance using the following client --> server configurations:
•   Client Access Optimized for OS/2 client --> IFS root file system
•   OS/2 LAN Requester 4.0 --> LAN Server/400 and FSIOP
•   NetWare requester 4.1 --> NetWare 4.1 on PC Server95 running DOS

Figure 56. PC File Serving Using Five PC Applications

6.6.3 Conclusions and Recommendations

LAN Server/400 and the FSIOP provide high performance file serving competitive with PC network servers. Data was estimated for LAN Server/400 running at a rate of 1.1 BAPCo5 applications per minute for purposes of comparison with Client Access running to the Integrated File System. While end-user response times are fairly close at this point, the AS/400 CPU utilization is much lower for LAN Server/400 (3% versus 40%) because the AS/400 CPU is used primarily when the FSIOP accesses files on AS/400 DASD.

A wide range of AS/400 CPU models will yield similar response time results for the FSIOP workload. This is because the CPU and disk resources on the AS/400 system are used in a fashion that is generally asynchronous to individual client requests. Of course, the overall capacity (how many FSIOPs can be supported) will vary based on the AS/400 CPU.


Data for NetWare is provided as a comparison. For four of the five applications in the BAPCo5 workload, response times were very similar for LAN Server/400 and NetWare. The fifth application was limited (by NetWare and/or the NetWare requester) to sending very small frames of information, which contributed to the overall response times being slightly longer for the NetWare configuration than for the LAN Server/400 configuration.

It is estimated that at 4 BAPCo5 applications per minute on the chart, a load approximately equivalent to 100 file serving clients is being generated. For more casual file serving clients, this rate of throughput could represent as many as 200 clients. Near 4 BAPCo5 applications per minute, the FSIOP utilization was 46%. This utilization is shown in the AS/400 Performance Tools LPP component report.

Other FSIOP performance data collected by STRPFRMON can be queried in the file QAPMIOPD. Other than IOP utilization, data from the QAPMIOPD file is not available through any of the Performance Tools reports and must be queried directly. The QAPMIOPD file contains a large number of statistics associated with the FSIOP and the software operating within it: LAN Server/400, the High Performance File System, OS/2 utilizations, and FSIOP CPU utilizations. For more information on QAPMIOPD file field descriptions, refer to Appendix A of the Version 3 Work Management Guide, SC41-3306-00.

Client Configuration

The speed and configuration of LAN Requester clients can have a large impact on file serving capacity. The more processing that can be done on the client, the more perceived capacity the file server has. Faster clients are generally able to generate a higher volume of file serving requests, depending on the PC application being used. For the BAPCo5 workload, part of the average completion time is spent on the client, so the configuration of the client (e.g. processor speed, amount of memory) affects the results. A slower client or one with less memory will have longer completion times than shown in this chart.

These measurements were performed using PS/ValuePoint 486-66MHz clients. Each PS/ValuePoint client executed the BAPCo5 workload with zero think-times, which has the effect of simulating many PCs per client. Based on tests using a subset of the BAPCo5 workload, the OS/2 LAN Requester provided slightly faster response times than the DOS LAN Requester.


The following table provides additional information collected for the BAPCo5 file serving comparisons.
Table 11. File Serving Performance using 5 PC Applications
Client Access/400, LAN Server/400-FSIOP, NetWare   Servers: AS/400 400-2131 or PS/2 Server95

 Environment        BAPCo5 Apps   AS/400 CPU   16Mbps LAN   FSIOP      BAPCo5 Avg Completion
                    Per Minute    Util (%)     Util (%)     Util (%)   Time (sec)
 CA/400 Opt OS/2    1.1           40           03           -          910
 LAN Server/400     1.1(1)        03(1)        03(1)        18(1)      690(1)
 LAN Server/400     4.0           05           17           46         710
 LAN Server/400     6.2           08           26(2)        64         730

Note: (1) This data estimated for comparison with CA/400 Opt OS/2
      (2) Data not available and is estimated

6.7 Multimedia File Serving
This section provides performance information for multimedia file serving using V3R6 and Ultimedia System Facilities (USF). USF is a feature of OS/400 that brings multimedia function to the AS/400. Image, audio, and motion video can easily be added to new or existing applications executing in either the client or the AS/400 host/server using the provided APIs.

Repository and object services support provides the capability to capture, register, and play back multimedia objects such as video clips, audio files, and still images. A repository is provided to catalog and track objects. Additional capabilities to edit, transform, and sequence objects give a complete support platform on which to build multimedia-enabled applications. USF also extends or supersedes the capabilities covered by the Ultimedia Video Delivery System/400.

The USF product requires a PC client running either OS/2 2.1 with Multimedia Presentation Manager/2, or DOS 5.0 and MS Windows 3.1 with Multimedia Extensions. AS/400 host-based applications or PC-based C language applications can use the provided APIs. Client Access/400 is required, and additional multimedia hardware and software may also be required depending upon the type of multimedia data to be used.

The following workloads were used to evaluate multimedia file serving performance.

Workload Descriptions

Heavy Image: A USF sequence object displays a bitmap image (150KB average size) every 15 seconds.

Heavy Image with Audio: A USF sequence object displays a bitmap image every 15 seconds; an audio clip (16-bit, 22KHz) plays continuously in the background while the images (150KB average size) are displaying.

Video (high) - Continuous High Quality DVI Video: A sequence object repeatedly invokes a production-level DVI video file, which requires a 180KB/sec average transfer rate (this constitutes a high quality DVI file). Files are varied to ensure file accesses are from disk.

Video (medium) - Continuous Medium Quality DVI Video: A sequence object repeatedly invokes a DVI video file, which requires a 150KB/sec average transfer rate (this constitutes a medium quality DVI file). Files are varied to ensure file accesses are from disk.

The following configurations were used for the multimedia tests described in this section.

Configurations

Server: 9406 Model 510-2143 running V3R6
    −   256MB memory, 2-6606 (1967MB) and 1-6602 (1031MB) DASD
    −   2619 Token Ring LAN IOP, used at 16Mbps
    −   Frame size = 16393, RU size = 16384

Client: PS/2 Model 95 - 50MHz
    −   Tested with both WARP and Windows 3.1
    −   32MB memory
    −   4/16Mbps Token Ring adapter, used at 16Mbps
    −   Action Media II adapter
Measurement Results

The Optimized for OS/2 Client Access/400 client was used to collect the following information. Measurement data shown below was reported by the OS/400 performance monitor.
Table 12. V3R6 Multimedia File Serving Performance Data
Ultimedia System Facilities   Server: AS/400 510-2143 V3R6
Client: Optimized for OS/2 CA/400 running USF

 Scenario                  AS/400 CPU   AS/400 Disk    AS/400 Disk   AS/400 TRLAN 2619   16Mbps TRLAN
                           Util (%)     IOP Util (%)   Util (%)      IOP Util (%)        Util (%)
 Heavy Image               1.0          0.9            1.7           4.9                 <1
 Heavy Image with Audio    2.3          1.7            2.7           7.2                 2.0
 Video (high)              2.3          3.3            5.4           8.4                 8.0
 Video (medium)            2.1          3.6            4.9           7.8                 7.0

Note: The utilizations above were generated using a single client.


6.7.1 Conclusions and Recommendations

The results in Table 12 on page 214 give an indication of how AS/400 system resources are used for a single client executing the various scenarios described. Performance when using the CA/400 for Windows 3.1 client was not as good for these and other scenarios. Video playback was noticeably worse than when using the OS/2 client, and may not be acceptable for software motion video using a similar client configuration. Using the Action Media II adapter improved the video quality. This adapter may no longer be available, but other digital video cards can be used to provide hardware assist for off-loading decompression.

The USF product is built on the QDLS file system and uses QDLS to store object and attribute information for all USF objects. The Integrated File System introduced in V3R1 provides improved performance to the new Client Access/400 clients, and USF allows files to be served from the Root file system as well as QDLS. Performance improvements can be realized by serving video files from the Root file system. Please refer to 6.5, “Client Access/400 File Serving Performance” on page 201 for additional information.

Using the largest frame and RU sizes possible is a necessity when running multimedia applications. 16K frame and RU sizes were used for all measurements. The following steps can be used to adjust the settings for the indicated client:

    −   Optimized for OS/2 client: Click on the AS/400 icon on the desktop, click on the CA/400 Connections icon, select the AS/400 server, click on SNA Configuration, click on LAN, click on Advanced Options, and set the maximum I field size to 16393. Then click on Mode, select QPCSUPP and click on change, click on Max RU size and enter 16384, click on OK, then close and save the settings.
    −   Windows 3.1 client: Click on CA configuration, click on System, select the AS/400 server, click on Global Options, and set the maximum frame size to 16393.

For client hardware, ensure that the reference configuration (for Micro Channel) or switch settings (for non-Micro Channel) specify at least 16KB of RAM for the token-ring card. Older cards sometimes come configured for 8KB, which limits data throughput and can cause jerky video.

6.8 FSIOP Performance Monitor Query - Cache
Figure 59 on page 217 shows the query definition used to select ″HPFS data″ from the Performance Monitor database file QAPMIOPD. Figure 57 on page 216 shows the query output for a specific collection of FSIOP ″HPFS data″ from that file. Refer to the Work Management Guide for field definitions. A value of ″3″ in field XIDTYP identifies HPFS data. Internal lab performance tests indicate that top performance is associated with a cache hit ratio of 90% or higher. In general, the FSIOP CPU utilization is not the bottleneck; in a heavy file serving environment, the critical bottleneck is either a cache hit ratio below 90% or utilization of the LAN itself.


8 Users LAN Server/400 - QAPMIOPD(HPFS) Read, Write Cache, File Open/Close
QUERY NAME . . . : HPFSCOOK02      LIBRARY NAME . . : DJOHNSON
FILE: QAPMIOPD   LIBRARY: FSCITY2   MEMBER: CPUTEST2   FORMAT: QAPMIOXR
DATE . . : 03/13/95   TIME . . : 16:53:37
HPFS386 Statistics - Cook 02                                                                  PAGE 1

LAN Server/400: HPFS Cache Statistics and File Open/Close

 Int  Int  IOP  Data  Total #   CACHE HIT  Total #  CACHE HIT  Files   Files   Read Reqs  Read Reqs  Write Reqs  Write Reqs
 #    Sec  Bus  Type  READS     % READS    Writes   % WRITE    Opened  Closed  frm CACHE  from DISK  from DISK   LAZY Written
 1    301  2    3     95,347    87.41      6,933    100.00     995     975     83,351     11,996     0           6,933
 2    298  2    3     232,309   95.12      3,136    100.00     3,969   3,974   220,981    11,328     0           3,136
 3    298  2    3     222,045   93.72      7,537    99.98      3,980   3,995   208,117    13,928     1           7,536
 FINAL TOTALS
 TOTAL                549,701              17,606                              512,449     37,252     1           17,605
 AVG                            92.08               99.99      2,981   2,981

Figure 57. FSIOP Performance Monitor Query - Cache

If you can configure an equivalent LAN configuration and workload with an OS/2 LAN Server, you can use the ″CACHE386 /stats″ command, which shows cache statistics for reads and writes, as a base metric to compare with the FSIOP cache read and write statistics. The OS/400 Performance Monitor records FSIOP CPU, disk, and HPFS statistics in file QAPMIOPD. A cache hit ratio of 90% or higher is recommended. You must create your own query to obtain the FSIOP cache statistics. The preceding example shows a query definition for FSIOP HPFS cache data (field XIDTYP=3) and the output of the query.
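The cache-hit columns in the query output come directly from the QAPMIOPD counters named in the query definition: XICT01 (read requests from cache) and XICT02 (read requests from disk) for reads, and XICT03/XICT04 for writes. As a sketch of the same arithmetic the Query/400 result fields perform (the Python function name is illustrative only):

```python
def cache_hit_pct(from_cache: int, from_disk: int) -> float:
    """Cache hit percentage, as in the query result field
    CACHEHITRD = 100 * XICT01 / (XICT01 + XICT02)."""
    total = from_cache + from_disk
    return 100.0 * from_cache / total if total else 0.0
```

For interval 1 in Figure 57 (83,351 reads from cache, 11,996 from disk) this yields about 87.4%, in line with the 87.41 reported.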

6.9 FSIOP Performance Monitor Query - CPU
Figure 60 on page 219 shows the query definition when selecting ″CPU data″ from Performance Monitor database file QAPMIOPD. Figure 58 shows the query output for a specific collection of FSIOP ″CPU data″ from Performance Monitor database file QAPMIOPD. Refer to the Work Management Guide for field definitions. Field XIDTYP containing a ″2″ identifies FSIOP CPU data. A CPU utilization below 80% is recommended.
8 Users Interconnected - FSIOP CPU
QUERY NAME . . . : CPU486COOK      LIBRARY NAME . . : DJOHNSON
FILE: QAPMIOPD   LIBRARY: FSCITY2   MEMBER: BAP8WS64MB   FORMAT: QAPMIOXR
DATE . . : 03/22/95   TIME . . : 13:59:07
FSIOP 486 CPU Utilization Statistics

8 Interconnected LAN Server/400 Clients - FSIOP CPU

 Int  IOP  Data Type  IOP    Interval  486 CPU  486 CPU
 #    Bus  (OS/2)     Type   Seconds   Seconds  Utilization (%)
 1    2    2          6506   298       93.8     31.4
 2    2    2          6506   296       131.4    44.4
 3    2    2          6506   299       120.6    40.3
 4    2    2          6506   197       47.1     23.9
 FINAL TOTALS
 TOTAL                       1,090     392.9
 AVG                                            35.0

Figure 58. Query Output - CPU
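The utilization column in Figure 58 is derived from the QAPMIOPD counters exactly as the CPU486INT result field in the query definition computes it: XICT01 holds 486 CPU milliseconds, so utilization is ((XICT01 / 1000) / INTSEC) * 100. As a sketch (illustrative Python function name):

```python
def fsiop_cpu_util_pct(xict01_ms: int, interval_sec: int) -> float:
    """FSIOP 486 CPU utilization %, as in the query result field
    CPU486INT = ((XICT01 / 1000) / INTSEC) * 100."""
    return (xict01_ms / 1000) / interval_sec * 100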

216

AS/400 Client/Server Performance

This soft copy for use by IBM employees only.

5763QU1 V3R1M0 940909 IBM Query/400 SYSASLA6 Query . . . . . . . . . . . . . . . . . HPFSCOOK02 Library . . . . . . . . . . . . . . . DJOHNSON Query text . . . . . . . . . . . . . . HPFS386 Statistics- Cook 02 Query CCSID . . . . . . . . . . . . . . 65535 Query language id . . . . . . . . . . . ENU Query country id . . . . . . . . . . . US *** . is the decimal separator character for this query *** Collating sequence . . . . . . . . . . Hexadecimal Processing options Use rounding . . . . . . . . . Ignore decimal data errors . . Ignore substitution warnings . Use collating for all compares Selected files ID File Library T01 QAPMIOPD FSCITY2 Result fields Name Expression TOTALREADS XICT01 + XICT02 CACHEHITRD

3/13/95

16:53:36

Page

1

. . . .

. . . .

. . . .

No No (default) Yes Yes

Member CPUTEST2

Record Format QAPMIOXR Column Heading Total # READS CACHE HIT % READS Total # Writes CACHE HIT WRITE Len 11 5 Dec 0 2

100 * XICT01 / (XICT01 + XICT02)

TOTALWRITE CACHEHITWR

XICT03 + XICT04 100 * XICT04 / (XICT03 + XICT04)

7 7

0 2

Select record tests AND/OR Field XIDTYP AND XICT01 AND XICT04

Test EQ GT GT

Value (Field, Numbers, or ′ Characters′ ) ′3′ 0 0

IBM Query/400 3/13/95 16:53:36 Ordering of selected fields Field Sort Ascending/ Break Field Name Priority Descending Level Text INTNUM Interval Number INTSEC Elapsed Interval Seconds XIIOPA IOP Bus Address XIDTYP Type of data in record TOTALREADS CACHEHITRD TOTALWRITE CACHEHITWR XICT10 Counter 10 XICT11 Counter 11 XICT01 Counter 01 XICT02 Counter 02 XICT03 Counter 03 XICT04 Counter 04 Report column formatting and summary functions Summary functions: 1-Total, 2-Average, 3-Minimum, 4-Maximum, 5-Count Overrides Field Summary Column Dec Null Dec Numeric Name Functions Spacing Column Headings Len Pos Cap Len Pos Editing INTNUM 0 Int 5 0 1 0 # INTSEC 1 Int 7 0 5 0 Sec XIIOPA 1 IOP 3 0 Bus XIDTYP 1 1 Data Type

Page

2

Figure 59 (Part 1 of 2). QAPMIOPD File Query Definition - CACHE

Chapter 6. Client/Server File Serving

217

This soft copy for use by IBM employees only.

TOTALREADS CACHEHITRD

1 2

0 0

TOTALWRITE CACHEHITWR

1 2

0 0

XICT10 XICT11 XICT01

2 2 1

0 0 0

Total # READS CACHE HIT % READS Total # Writes CACHE HIT % WRITE Files Opened Files Closed Read Reqs frm CACHE

11 5

0 2 5 2

7 7

0 2 7 2

11 11 11

0 0 0

6 6 11

0 0 0

IBM Query/400 3/13/95 16:53:36 Report column formatting and summary functions (continued) Summary functions: 1-Total, 2-Average, 3-Minimum, 4-Maximum, 5-Count Overrides Field Summary Column Dec Null Dec Numeric Name Functions Spacing Column Headings Len Pos Cap Len Pos Editing XICT02 1 0 Read Reqs 11 0 from DISK XICT03 1 0 Write Reqs 11 0 from DISK XICT04 1 0 Write Reqs 11 0 LAZY Written Selected output attributes Output type . . . . . . . . . . . . . . Printer Form of output . . . . . . . . . . . . Detail Line wrapping . . . . . . . . . . . . . Yes Wrapping width . . . . . . . . . . . 168 Record on one page . . . . . . . . . No Printer Output Printer device . . Report size Length . . . . . Width . . . . . . Report start line . Report end line . . Report line spacing Print definition .

Page

3

. . . . . . . . . . PRT03 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 . 166 . 6 . 60 . Double space . Yes

Printer Spooled Output Spool the output . . Form type . . . . . . Copies . . . . . . . Hold . . . . . . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. Yes . (Defaults to value in print file, QPQUPRFIL) . 1 . Yes

Cover Page Print cover page . . . . . . . . . . . Yes Cover page title 8 Users LAN Server/400 - QAPMIOPD(HPFS) Read, Write Cache, File Open/Close

IBM Query/400 Page headings and footings Print standard page heading . . . . . . Yes Page heading LAN Server/400: HPFS Cache Statistics and File Open/Close Page footing

3/13/95

16:53:36

Page

4

Figure 59 (Part 2 of 2). QAPMIOPD File Query Definition - CACHE

218

AS/400 Client/Server Performance

This soft copy for use by IBM employees only.

5763QU1 V3R1M0 940909 IBM Query/400 SYSASLA6 3/22/95 Query . . . . . . . . . . . . . . . . . CPU486COOK Library . . . . . . . . . . . . . . . DJOHNSON Query text . . . . . . . . . . . . . . FSIOP 486 CPU Utilizationt Statistics Query CCSID . . . . . . . . . . . . . . 65535 Query language id . . . . . . . . . . . ENU Query country id . . . . . . . . . . . US *** . is the decimal separator character for this query *** Collating sequence . . . . . . . . . . Hexadecimal Processing options Use rounding . . . . . . . . . Ignore decimal data errors . . Ignore substitution warnings . Use collating for all compares

13:59:07

Page

1

. . . .

. . . .

. . . .

No No (default) Yes Yes

Selected files ID File Library Member Record Format T01 QAPMIOPD FSCITY2 BAP8WS64MB QAPMIOXR Result fields Name Expression Column Heading Len Dec CPU486INT ((XICT01 / 1000) / INTSEC) * 100 486 CPU % 7 1 Utilization XICT01SECS XICT01 / 1000 486 CPU 7 1 Seconds Select record tests AND/OR Field Test Value (Field, Numbers, or ′ Characters′ ) XIDTYP EQ ′2′ AND XICT01 NE 0 Ordering of selected fields Field Sort Ascending/ Break Field Name Priority Descending Level Text INTNUM Interval Number

IBM Query/400 3/22/95 13:59:07 Ordering of selected fields (continued) Field Sort Ascending/ Break Field Name Priority Descending Level Text XIIOPA IOP Bus Address XIDTYP Type of data in record XITYPE IOP Type INTSEC Elapsed Interval Seconds XICT01SECS CPU486INT Report column formatting and summary functions Summary functions: 1-Total, 2-Average, 3-Minimum, 4-Maximum, 5-Count Overrides Field Summary Column Dec Null Dec Numeric Name Functions Spacing Column Headings Len Pos Cap Len Pos Editing INTNUM 0 Int 5 0 1 0 # XIIOPA 1 IOP 3 0 Bus XIDTYP 1 Data 1 Type (OS/2) XITYPE 2 IOP 4 Type INTSEC 1 1 Interval 7 0 5 0 Seconds XICT01SECS 1 2 486 CPU 7 1 Seconds CPU486INT 2 2 486 CPU 7 1 4 1 Utilization (%)

Page

2

Figure 60 (Part 1 of 2). QAPMIOPD File Query Definition - CPU

Chapter 6. Client/Server File Serving

219

This soft copy for use by IBM employees only.

Selected output attributes Output type . . . . . . . . . . . . . . Printer Form of output . . . . . . . . . . . . Detail Line wrapping . . . . . . . . . . . . . No

IBM Query/400 Printer Output Printer device . . Report size Length . . . . . Width . . . . . . Report start line . Report end line . . Report line spacing Print definition . . . . . . . . . . . *PRINT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 . 132 . 6 . 60 . Triple space . Yes

3/22/95

13:59:07

Page

3

Printer Spooled Output Spool the output . . Form type . . . . . . Copies . . . . . . . Hold . . . . . . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. Yes . (Defaults to value in print file, QPQUPRFIL) . 1 . Yes

Cover Page Print cover page . . . . . . . . . . . Yes Cover page title 8 Users Interconnected - FSIOP CPU Page headings and footings Print standard page heading . . . . . . Yes Page heading 8 Interconnected LAN Server/400 Clients - FSIOP CPU Page footing

Figure 60 (Part 2 of 2). QAPMIOPD File Query Definition - CPU

6.10 FSIOP Recommendations

AS/400 machine pool storage for an FSIOP: For a single LAN FSIOP, you should allocate 2700K bytes to the machine pool. For a two LAN FSIOP, you should allocate 4200K bytes of storage to the machine pool.

FSIOP CPU utilization 80% or less guideline: Internal lab performance tests indicate top performance is achieved when FSIOP cache hit percentages are above 90% and the FSIOP CPU utilization is 80% or less. A cache hit percentage below 90% may indicate that more memory is required for the FSIOP. A high CPU utilization may indicate more memory is required or an additional FSIOP is required. You must create your own query to provide the FSIOP CPU statistics. The preceding example is of a query definition for FSIOP CPU data (field XIDTYP=2) and output of the query.

AS/400 CPU utilization and disk busy: As the cache hit ratio decreases below 90%, AS/400 CPU utilization and disk busy percentage increase because the FSIOP must retrieve data from the AS/400 system. In situations where the cache hit ratio is in the 90% range, AS/400 CPU utilization should be 3-5% or less.

220

AS/400 Client/Server Performance


Note that the FSIOP contains a "pipe task CPU" dedicated to accessing AS/400 storage management through a special FSIOP-only interface to AS/400 disks.

LAN Server/400 NetBIOS parameters: LAN Server/400 uses NetBIOS communications over the LAN. The CRTNTBD (Create NetBIOS Description) command contains various NetBIOS LAN protocol timers, retries, buffer size, and transmit or receive values. Take the defaults unless you are a NetBIOS expert.

LAN Server/400 storage link parameters: LAN Server/400 provides some HPFS buffering parameters that affect performance. The defaults on the ADDNWSSTGL (Add Server Storage Link) command are recommended. Do not change them unless you are an OS/2 LAN Server performance expert. Using the default NetBIOS and LAN adapter parameters is also recommended, although these can be changed. Change these parameters only if you must tune LAN Server/400 to special requirements on your network:
− Bridges and routers
− Wide area links
− Requester′s adapter configuration
− Requester′s configuration


6.11 Save/Restore Considerations
Two methods of save and restore exist.

At the network drive level (dramatically faster!):
− The FSIOP must be varied off.
− The entire drive must be saved or restored.

At the file or directory level on a network drive:
− The FSIOP can be varied on.
− It is possible to selectively save and restore files as needed.

Recommendations:

• Partition volatile data (changing data) and static data (programs or archived data) on separate network drives.
• Use option 1 to save network drives containing static data. Use option 2 for drives with volatile data if you need to be able to restore at the file or directory level, or you need 100% availability of the FSIOP.

If you need to be able to restore at the file or directory level, but find that option 2 does not provide the required performance, you should consider the following:
• Perform the save using option 1.
• To restore, create a temporary network drive in QLANSrv, and restore into the temporary network drive.
• Selectively restore required files from the temporary network drive.
• Delete the temporary drive when complete.
(Consult the reference material indicated later in this section.)

6.12 OS/400 Integration for Novell NetWare
Benchmark testing results indicate comparable (within 5%) performance of OS/400 Integration for Novell NetWare relative to the following configurations/environments:
• Novell NetWare 4.1 on PC hardware (with DOS)
• LAN Server/400 (FSIOP on the AS/400)

The intent of this section is to provide data that compares the NetWare FSIOP to the platforms that users are most familiar with or migrating from. Many of the concepts, terminology, and workloads are common to and covered by the other sections (CA/400, IFS, LAN Server/400, ...); please refer to them for more information as appropriate. The following paragraph contains introductory excerpts from the publication OS/400 Integration for Novell NetWare (SC41-3124):

"The FSIOP is a dual purpose adapter, providing standard AS/400 communications over SNA, TCP/IP, and IPX, in addition to providing file serving. It has an Intel 80486 DX2 (66MHz) used for file serving. Performance is enhanced with the use of an Intel i960 processor and, to a lesser extent, the AS/400 main CPU. The FSIOP can have between 16 and 64MB of memory in increments of 16MB; a minimum of 32MB is required when running Novell NetWare 4.1. It can have a maximum of two LAN adapters or ports. Approximately 11MB of memory when configuring one adapter and 14MB of memory when configuring two adapters is not available for NetWare 4.1 to use. To help calculate your memory requirements, see Appendix A: Calculate RAM Requirements, in the NetWare 4.1 Installation and Upgrade document (Novell No: 100-002068). For example, in a FSIOP with 64MB of memory configured with one adapter, NetWare has approximately 53MB available for its use."
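The memory arithmetic quoted above (roughly 11MB of overhead with one adapter, 14MB with two) can be sketched as a small helper. This is an illustrative sketch of the figures stated in the text, not an official sizing tool; the function name is hypothetical.

```python
# Illustrative sketch of the FSIOP memory arithmetic quoted above.
# Overhead figures (11MB for one LAN adapter, 14MB for two) are taken
# from the text; consult the NetWare 4.1 Installation and Upgrade
# document for real memory planning.

OVERHEAD_MB = {1: 11, 2: 14}  # memory unavailable to NetWare, per adapter count

def netware_available_mb(fsiop_memory_mb, adapters):
    """Approximate memory left for NetWare 4.1 on an FSIOP."""
    if fsiop_memory_mb not in (16, 32, 48, 64):
        raise ValueError("FSIOP memory comes in 16MB increments, 16-64MB")
    if fsiop_memory_mb < 32:
        raise ValueError("NetWare 4.1 requires a minimum of 32MB")
    return fsiop_memory_mb - OVERHEAD_MB[adapters]

# Matches the example in the text: 64MB with one adapter -> about 53MB for NetWare.
print(netware_available_mb(64, 1))  # 53
```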

6.12.1 Configurations
Measurements were made on the following platforms/configurations. The Server hardware platforms (FSIOP, PCs) were selected/configured to be as equivalent as possible in terms of CPU type, speed, memory size, and amount of DASD. This is to allow reasonably valid performance comparisons across the different platforms.
• NW/FSIOP (NetWare on the FSIOP)
• NW/PC-DOS (native NetWare on the PC with DOS)
• LS/FSIOP (LAN Server/400 on the FSIOP)

The server hardware (FSIOP and the PC) is basically equivalent:
• i486DX-2 66MHz, 64MB RAM
• MCA (Micro Channel bus)
• Single 16/4Mbps token-ring port (all measurements were run at 16Mbps)
• 4 DASD devices/volumes (note that the FSIOP uses the AS/400 system DASD)

AS/400 system on which the FSIOP was installed:
• Model E25, 64MB memory, OS/400 V3R1
• 5 - 6105 (DASD, 320MB)
• 8 - 6109 (DASD, 988MB) (about 25% of DASD capacity was used)

The PC client workstation(s) are PS/2 ValuePoint 6384:
• 486DX-2 66MHz, 16MB of RAM
• ISA bus
• 320MB IDE DASD
• 16/4Mb token-ring adapter (all measurements were run at 16Mbps)
• OS/2 Warp

6.12.2 Workload Descriptions
Two types of workloads were used (refer to 6.4.1, “File Serving Workloads and Configurations” on page 200 for a more detailed description):

• File serving primitives; single PC client workstation (file Opens, Closes, Reads, Writes, ...)
• Interactive PC applications suite (BAPCo subset); multiple PC workstations (ccMail, WordPerfect, Excel, Paradox)

6.12.3 Measurement Results
Performance values listed below are ratios of measured response times. The NW/PC-DOS platform is used as a reference/base value (that is, a ratio of 1.00 or less indicates equivalent or better response time than the reference; a 1.05 would indicate 5% longer response time). These particular primitives have been found to be useful values for projecting file serving performance:


             Write   Open   Read   Close   GNAN
NW/PC-DOS    1.00    1.00   1.00   1.00    1.00
NW/FSIOP     1.01    1.12   0.41   1.47    1.00
LS/FSIOP     0.89    1.40   0.42   1.47    0.90

(GNAN = get normal attribute by name)

The interactive workload with multiple workstations consisted of application scripts running user sequences on real PCs (as listed above). The following resulted from a measurement with 150 equivalent users. The reader is cautioned that the workload and client configuration here are not the same as those used for the previously published LAN Server/400 (V3R1) results.

            Relative to NW/PC-DOS
NW/FSIOP    1.04   (that is, NW/FSIOP is within 4% of NW/PC-DOS)
LS/FSIOP    1.03   (that is, LAN Server/400 was within 3%)

Note: Due to current data collection limitations, we were unable to measure directly the FSIOP′s CPU utilization (it is always reported as 95%-99% utilized). However, based on the relative response times, we believe that it would be within the same range as that observed on the NW/PC-DOS system, which was about 20%. Here are some utilizations as reported by the AS/400 Performance Monitor.

Resource                            Average   High
CPU      (Model E25)                  5%
DASD I/O                              3%       7%
LAN      (16 Mbps)                   15%      21%
DASD IOP (model 2624, 4 units)        2%       4%
FSIOP                                (see Note above)

Note: Activating one FSIOP port will cause 2.7MB of Machine Pool memory to be allocated to the NW/FSIOP. Likewise, 4.2MB will be allocated when two ports are active. These memory allocations may change the performance of other jobs on the system if the 2.7MB and 4.2MB represent a non-trivial proportion of the machine pool. In the model E25 test machine, the pool size was 14.2MB. You should monitor system paging activity after adding the FSIOP.
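The note above suggests checking whether the FSIOP's machine pool allocation is a non-trivial share of the pool. A minimal sketch of that check, using the figures stated in the note (function name is hypothetical):

```python
# Rough check of how much of the machine pool an active FSIOP port
# configuration consumes, using the figures from the note above.

PORT_ALLOCATION_MB = {1: 2.7, 2: 4.2}  # machine pool memory per active port count

def machine_pool_share(ports, machine_pool_mb):
    """Fraction of the machine pool allocated to the NW/FSIOP."""
    return PORT_ALLOCATION_MB[ports] / machine_pool_mb

# On the model E25 test machine (14.2MB machine pool), one active port
# takes roughly 19% of the pool -- non-trivial, so monitor paging.
print(round(machine_pool_share(1, 14.2) * 100, 1))  # 19.0
```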

6.12.4 Conclusions
For a particular file serving workload or user application, the result will vary depending on the mix or distribution of functions (such as represented by the file serving primitives). Performance of an application that consists mostly of file Reads will be much better than that of an application with a high incidence of file Opens and Closes.

The results of the interactive suites with multiple PC workstations indicate that the NW/FSIOP performed to within 5% of the NW/PC-DOS platform; users in this environment probably cannot perceive the small difference. As for the low utilizations observed in these tests, one might correctly project that each FSIOP has the capability for higher loads and that the AS/400 (host) has the capacity to support many more FSIOPs. Of course, each installation must be tailored to its requirements and machine configuration.


Disclaimer: Our test and measurement environments are necessarily artificial due to testing requirements; thus, actual users may experience different performance results in their environments.

6.13 OS/400 Integration of Lotus Notes
Most of the tables in this section contain measurement data from V3R2 systems. Because the Integrated PC Server plays the key processing role for this environment, a comparable AS/400 model running V3R7 with an Integrated PC Server will provide similar results. V3R7 data will be provided in the next update to this document and are expected to be very similar to the V3R2 numbers provided here. Performance information for OS/400 Integration of Lotus Notes is included in this section. The information will be divided into two general areas:
• Number of Lotus Notes clients that can be supported
• Performance guidelines for the DB2 Integration function

For a complete overview and understanding of OS/400 Integration of Lotus Notes, please refer to the following resources:
• OS/400 Integration of Lotus Notes (SC41-3431)
• Using Lotus Notes on the Integrated PC Server (FSIOP) for AS/400 (SG24-4779)

The File Serving IOP (FSIOP) has been renamed the Integrated PC Server. These terms are used interchangeably in this section and refer to the FSIOP with an Intel 486 CPU.

6.13.1 Number of Notes Clients Supported
The Notes server on an FSIOP can support different numbers of users depending on the type and number of requests from the Notes clients. Three workload scenarios are described below, and measurement results using these scenarios are shown in the tables following the descriptions.

6.13.2 Workload Scenario Descriptions
The following sections contain descriptions of the workload scenarios.

6.13.2.1 Mail
The mail workload scenario was driven by an automated environment that executed a script similar to the mail workload from Lotus NotesBench. Lotus NotesBench is a collection of benchmarks, or workloads, for evaluating the performance of Notes servers. The results shown here are not official NotesBench measurements or results. The numbers discussed here may not be used officially or publicly to compare to NotesBench results published for other Notes server environments. Each user completes the following actions an average of every 15 minutes:
• Open mail database
• Open the current view
• Open 5 documents in the mail file
• Categorize 2 of the documents
• Compose 2 new mail memos/replies
• Mark several documents for deletion
• Delete documents marked for deletion
• Close the view

6.13.2.2 Mail and Discussion
The mail and discussion workload scenario was driven by a tool called TESTNSF which can be used to simulate a load of multiple Notes clients on a server. Using multiple PCs, automated scripts simulating individual users caused requests to flow to the Notes server on an FSIOP, which executed actions against the user′s mail database and a discussion database. Each user completes the following actions an average of every 15 minutes:
• Open mail database
• Make sure 50 documents exist
• Open the current view
• Open 5 documents in the mail file
• Categorize 2 of the documents
• Compose 2 new mail memos/replies
• Mark several documents for deletion
• Delete documents marked for deletion
• Open a discussion database
• Make sure 200 documents exist
• Open the current view
• Page down the view 2 times
• Set the unread list to a randomly selected 30 documents
• Open the next 3 unread documents
• Close the view

6.13.2.3 Mail and Discussion with Import
For the mail and discussion with import workload scenario, three import requests (importing data from DB2/400 to a Notes database) were performed during the measurement. The imports executed sequentially and an import was active during the entire measurement. Please see 6.14, “Lotus Notes DB2 Integration Performance” on page 229 for additional information on the performance of import function. The mail and discussion portion of this workload scenario was the same as the mail and discussion workload scenario described above. The following tables provide performance information and guidelines using the three workload scenarios described above:
Table 13. Memory Guidelines for Mail Workload
Memory guidelines for number of Notes Mail users supported on Integrated PC Servers

Memory Size   Maximum Number of Users
32MB          100
48MB          150
64MB          200


Table 14. Users on Integrated PC Server Running Notes
Lotus Notes on Integrated PC Server
Various AS/400 models used, V3R2; 64MB Integrated PC Server

Workload Scenario               Number    AS/400   AS/400     AS/400   Int PC Srvr
                                of Users  CPU(%)   Model      IO/Sec   CPU(%)
Mail                            100        4.5%    20S-2010    37.8    33
Mail                            200        9.0%    20S-2010    68.4    66
Mail and Discussion              60        3.6%    E25         17.7    33
Mail and Discussion w/ Import    60       13.4%    E25         70.2    70

Note: Average utilization of 16Mbps LAN was <1% for all scenarios

Table 15. Multiple Integrated PC Servers Running Notes
6 Integrated PC Servers running Lotus Notes on an AS/400 Model F90, V3R2
6 64MB Integrated PC Servers, each on a separate 16Mbps LAN

Workload Scenario:    Mail
Number of Users:      1200 (200 per FSIOP)
AS/400 CPU(%):        7.1% (Model F90)
AS/400 IO/Sec:        403.5
Int PC Srvr CPU(%):   59.0, 61.0, 65.0, 65.0, 65.0, 68.0 (one value per server)

Note: Average utilization of each 16Mbps LAN was <1%

6.13.3 Conclusions and Recommendations

Mail workload scenario The guidelines provided in Table 13 on page 226 are for a specific workload described earlier in this section. The AS/400 server model 20S-2010 shown in Table 14 has a 5.9 Relative Processor Rating (RPR) for non-interactive work. Using the data for the 100 user mail workload, we can predict that this same workload would require approximately 6.6% CPU on an AS/400 model E25 which has an RPR rating of 4.0. All of the AS/400 CPU required to support both the Notes server and DB2 Integration function runs in non-interactive mode on server models. Server models have a better price performance than traditional models for this environment.
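The projection described above scales measured CPU utilization by the ratio of the models' Relative Processor Ratings. A minimal sketch of that calculation, using the RPR values stated in the text (the function name is hypothetical):

```python
# Sketch of the RPR-based CPU projection described above: scale a measured
# CPU% by the ratio of the measured model's RPR to the target model's RPR.
# RPR values (20S-2010 = 5.9, E25 = 4.0) are taken from the text.

def project_cpu(measured_cpu_pct, measured_rpr, target_rpr):
    """Estimate CPU% for the same workload on a differently rated model."""
    return measured_cpu_pct * measured_rpr / target_rpr

# 100-user mail workload: 4.5% on the 20S-2010 (RPR 5.9) projects to
# about 6.6% on an E25 (RPR 4.0), as stated in the text.
print(round(project_cpu(4.5, 5.9, 4.0), 1))  # 6.6
```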

Mail and Discussion workload scenario The data in Table 14 for the Mail and Discussion workload shows a lower number of users than the mail workload. This is because each of these users performs significantly more work than the users in the mail workload. Additional measurements were performed with a higher number of users and response times increased more rapidly beyond 60 users.

Mail and Discussion with Import workload scenario

This scenario performed significantly more I/O writes per second than the mail and discussion scenario without import. The high number of writes in this scenario occurred as a result of importing the data from DB2/400 and creating new documents in the Notes database. Having an import active used significantly more AS/400 and FSIOP resources and impacted response times for the 60 attached users. See 6.14, “Lotus Notes DB2 Integration Performance” on page 229 for additional information and recommendations regarding the import function.

Multiple Integrated PC Servers on a system Data from Table 15 on page 227 shows the effect of running multiple Integrated PC Servers on a single system. The amount of AS/400 CPU and IO/Sec is very close to what would have been predicted if the data from the 200 user mail workload from Table 14 on page 227 had been used to project the resulting utilization of 1200 mail users spread over six (6) Integrated PC Servers on an F90 AS/400 model. Additional measurements were run with eight (8) Integrated PC Servers and yielded results which scaled similarly.

AS/400 tasks and jobs For CISC-based processor systems, the following types of tasks are used to process the DASD I/O requests for an FSIOP:
− ROUTxx - handles communications across the bus to the FSIOP
− #O00yy - DASD I/O server task to map HPFS space to AS/400 DASD space
− SM00yy - storage management task to perform physical I/O to DASD

There will be one of each of these tasks processing DASD I/O for each FSIOP. There are additional tasks with similar names that are used for IPL and administrative processing. For RISC-based processor systems, the following types of tasks are used to process the DASD I/O requests for an FSIOP:
− FPHA-NWSDname - performs function similar to ROUTxx and #O00yy above
− SMDSTASKaa - multiple tasks (2 x number of DASD arms) are used to process DASD I/O for all FSIOPs on the system

There will be an FPHA-NWSDname task for each FSIOP on the system, and the SMDSTASKaa tasks are shared by all FSIOPs on the system. Additional tasks, FPHI-NWSDname and FPN-NWSDname, also exist and are used for IPL and administrative processing. For the scenario involving import, additional AS/400 jobs are used to retrieve data from DB2/400 for import to the Notes database. Jobs used for this function will have names similar to QZDASOINIT and QZRCSRVS.

Performance Data Please refer to 6.8, “FSIOP Performance Monitor Query - Cache” on page 215 for information about FSIOP performance information collected by the AS/400 performance monitor (STRPFRMON). The AS/400 Performance Tools can be used to create reports from data collected by the performance monitor. The Component report shows FSIOP utilizations in the "IOP Utilizations" section, and shows job and task related data in the "Job Workload Activity" section.


6.13.4 Client and Server Configurations

AS/400 System and FSIOP The AS/400 systems and FSIOP used for these measurements were dedicated while executing the workload scenarios described above. Utilization data is provided in the tables for key system resources utilized during the measurements. A 64MB FSIOP was used for all measurements.

Notes clients 100MHz Pentium clients running OS/2 and OS/2 Notes client were used for the above workloads. Notes 3.1.1 clients were used for the two workload scenarios involving mail and discussion, and Notes 4.0 clients were used for the mail workload scenario.

6.14 Lotus Notes DB2 Integration Performance
This section contains performance information for three functions provided by Lotus Notes DB2 Integration:

Import Import refers to the capability to create a Lotus Notes database from existing data stored in a DB2/400 database.

Shadowing While performing an import, the shadowing option can be specified, which will initiate automatic updates to the imported Lotus Notes database based on on-going changes occurring to the DB2/400 files from which the data was imported.

Exit Program Exit program refers to the ability to create and register an exit program that periodically updates DB2/400 data based on changes made to a Lotus Notes database.

6.14.1 Importing DB2/400 Data To a Lotus Notes Database
In addition to the considerations and recommendations provided in this section, Lotus Notes recommendations for Notes databases should also be reviewed. For example, Lotus Notes recommends that Notes databases be less than 100MB in size. As Notes databases grow to this size and contain large numbers of documents, some functions, such as initially opening a Notes database, can take quite a long time. The following table provides measurement data that was collected for imports with varying numbers of rows, sizes of rows, and numbers of fields per row. In the examples in this section, each imported row from DB2/400 becomes a Notes document. The data was imported from the same AS/400 system on which the FSIOP Notes server resides. Very little client activity was present during the import measurements described in this section, and the AS/400 system was not dedicated but had low resource utilization from other work. The numbers provided in Table 16 on page 230 can be used as guidelines to help estimate the time to import data into a Notes database from DB2/400.


Table 16. Importing DB2/400 Data to a Notes Database
Importing DB2/400 data to a Notes database on an Integrated PC Server
(background task AMgr set to be 0% active during imports)
AS/400 D60, 64MB Integrated PC Server, MTU on NWSD = 15400

Number of       Bytes     Columns   Import     Resulting .NSF
Imported Rows   Per Row   Per Row   Time       File Size
 1000           2150      10          5 min      3MB
 2000           2150      10          6 min      5MB
 4000           2150      10         10 min     10MB
16000           2150      10         95 min     41MB
20000           2150      10        100 min     51MB
32000           2150      10        175 min     88MB
 1000           6233      21          6 min      7MB
 2000           6233      21         13 min     15MB
 4000           6233      21         25 min     29MB
16000           6233      21        105 min    117MB
24000           6233      21        171 min    175MB

(Background task AMgr set to be 10% active during the following imports)
16000           6233      21        128 min    117MB
20000           6233      21        182 min    146MB
24000           6233      21        244 min    175MB

Maximum Transfer Unit It is recommended to set the Maximum Transfer Unit (MTU) size for the internal LAN to 15400 in order to provide maximum data throughput when using the DB2 Integration functions. The MTU parameter is found on the Network Server Description (NWSD).

AMgr settings The AMgr (Agent Manager) background task, which controls who can run agents and when they can run on each server, affects the performance of the import function when it is active. It was observed that with the default settings of 50% active during the daytime and 70% at night, the time to import can vary greatly. These settings can be adjusted; for optimum import performance, set the percent active time near 0 for the time of day when the imports will occur. It is also possible to not start this task at all by editing the NOTES.INI file and removing it from the list of tasks that get started on the Notes server. Please refer to the Lotus Notes 4.0 Administrator's Guide for detailed information. From the data in Table 16, it can be seen that with the AMgr task 10% active, import times were greater than when AMgr was set to 0% active.

Starting an import Import requests are queued and processed one at a time in the order they are requested. If a combination of large and small imports is to be requested, it may be appropriate to submit requests for the smaller imports first (for example, if Notes clients are waiting to use them), rather than delaying them for the length of the larger imports. An agent program wakes up every 2 minutes on the Integrated PC Server to check for import requests, so import requests may take up to 2 minutes before they begin processing.

Opening a database After importing a database, the default view shows all of the columns. Choosing fewer columns through a view can significantly improve the time to do the first open. Notes databases can take minutes to open the first time if they contain tens of thousands of documents. Refer to Notes online help documentation for suggestions on improving view display times.

Import impacts to Notes clients While an import is active, the performance of Notes clients can be impacted. When possible it is recommended to perform imports when the level of Notes client activity is low.

Import versus Import with Shadowing consideration Importing a given number of rows will take somewhat longer if the import with shadow option is specified. Please refer to data in the next section on shadowing for additional information.

6.14.2 Shadowing DB2/400 Data To a Lotus Notes Database
When initiating an import, the user is given an option to start the shadowing function for the DB2/400 database file(s) indicated in the import request. To shadow DB2/400 files to a Notes database, Data Propagator Relational/400 is used, and the files to be imported from and shadowed must have journaling active. For further information on shadowing, please refer to the resources indicated at the beginning of this section as well as DataPropagator Relational Capture and Apply/400 (SC41-3346). Other parameters that can be specified when initiating an import with shadowing include the time of day and/or frequency at which the user desires the shadowing activity to occur. Considerations for these settings are discussed later in this section. The data in the following table provides examples of the time required to shadow various types of changes (inserts, deletes, updates) from a DB2/400 file to a Notes database.


Table 17. Shadowing DB2/400 Data to Notes Database
Shadowing DB2/400 data to a Notes database on an Integrated PC Server
(background task set to be 0% active during measurements)
Documents in Notes database were 2150 bytes
AS/400 D60 V3R2, 64MB Integ PC Server, MTU on NWSD = 15400

Shadowing changes made to 10,000 document Notes database   Time to Shadow Changes
200 Inserts                                                20 minutes
200 Deletes (spread throughout Notes DB)                   12 minutes
200 Updates (spread throughout Notes DB)                   11 minutes
100 Inserts, 100 Deletes, 200 Updates                      27 minutes

Shadowing changes made to 20,000 document Notes database   Time to Shadow Changes
200 Inserts                                                49 minutes
100 Deletes (spread throughout Notes DB)                   14 minutes
200 Updates (spread throughout Notes DB)                   24 minutes
100 Inserts, 100 Deletes, 200 Updates                      54 minutes

Note: The ″time to shadow changes″ includes only the time to shadow the changes to the Notes database; the DB2/400 file changes had already occurred

6.14.3 Conclusions and Recommendations

Inserts Shadowing inserts to Notes databases occurs faster for smaller databases and proportionately longer for larger Notes databases. From the data in Table 17 it can be derived that inserts into the 10,000 document Notes database occurred at a rate of approximately 600 per hour, while inserts into the 20,000 document database occurred at a rate of approximately 250 per hour.

Deletes and Updates Data from Table 17 for shadowing deletes and updates indicates that for the 10,000 document Notes database the deletes and updates occurred at a rate of approximately 1000 per hour, and at a rate of approximately 400 to 500 per hour for the 20,000 document Notes database. As with inserts, the size of the Notes database will typically impact the rate at which shadowed deletes and updates can occur. For these examples, the documents to be deleted and updated were spread throughout the Notes database. If the documents had been found at the top of the Notes databases in both cases, the rates for shadowing deletes and updates would have been similar for the 10,000 and 20,000 document databases.

Shadowing vs Importing From the data in Table 17 above, it can be seen that the various types of changes can be shadowed at a rate of hundreds of changes per hour. The entire 10,000 document database that was being updated in the first part of the table was imported in 32 minutes. Many issues need to be considered regarding the use of the Notes database, but if the changed data is not required by applications or users in real time, it may be worth considering repeatedly importing the entire database if thousands of changes are anticipated for the DB2/400 files that would be shadowed.

Shadowing impact on Notes clients Like import, shadowing activity can significantly impact the performance of active Notes clients. When initiating the import with shadowing request, consider the settings for frequency of shadowing interval. This interval determines how often the Notes server checks to see whether shadowing changes are queued up and then begins making the changes to the Notes database. To avoid impact to Notes clients, attempt to schedule shadowing activity during times when client activity is low.

Shadowing impact on Import From data in Table 16 on page 230, a 20,000 row (2150 bytes per row) import took 100 minutes. Importing the same 20,000 rows with the shadowing option specified took 132 minutes (data for time to import with shadowing is not shown).

Estimating shadowing rates Data from Table 17 on page 232 can be used to estimate time to complete various combinations of shadowing activity. Using the individual rates for inserts, updates, and deletes of a 10,000 document database to estimate the time to perform 100 inserts, 100 deletes, and 200 updates would yield an estimate of 27 minutes. This is in fact how long it actually took when that specific combination of changes was measured as indicated in the table.
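The estimation approach above (combine per-type rates, then sum the times) can be sketched as follows. The rates are the approximate per-hour figures derived earlier from Table 17 for the 10,000 document database; the function name is hypothetical, and the model simply assumes the per-type times add.

```python
# Sketch of the shadowing-time estimate described above: divide each
# change count by its approximate rate (derived from Table 17 for the
# 10,000 document database) and sum the times.

RATES_PER_HOUR = {     # approximate rates from Table 17, 10,000-doc database
    "insert": 600,
    "delete": 1000,
    "update": 1000,
}

def estimate_shadow_minutes(inserts=0, deletes=0, updates=0):
    """Estimated minutes to shadow a mix of changes, assuming times add."""
    hours = (inserts / RATES_PER_HOUR["insert"]
             + deletes / RATES_PER_HOUR["delete"]
             + updates / RATES_PER_HOUR["update"])
    return hours * 60

# 100 inserts, 100 deletes, 200 updates -> about 28 minutes with these
# rounded rates, close to the 27 minutes measured in Table 17.
print(round(estimate_shadow_minutes(100, 100, 200)))  # 28
```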

6.14.4 Exit Program: Data from a Notes Database to DB2/400
It is up to the user to create and register the exit program to process the data once it is stored on the AS/400. The data in Table 18 provides examples of the time it took to send the changes from the Notes database to DB2/400 for update. This environment used a 10,000 document Notes database, and the rates of changes for inserts, updates, and deletes are very similar to those shown in Table 17 on page 232 for shadowing changes.
Table 18. Exit Program Function
Exit Program: changed data from Notes database to DB2/400
(does not include processing to apply changes to DB2/400)
(Background task set to be 0% active during measurements)
Documents in Notes database were 2150 bytes
AS/400 D60 V3R2, 64MB Integrated PC Server, MTU on NWSD = 15400

Description of changes sent to DB2/400          Time to Shadow Changes
200 Inserts                                     22 minutes
200 Deletes (spread throughout Notes DB)        12 minutes
200 Updates (spread throughout Notes DB)        13 minutes
100 Inserts, 100 Deletes, 200 Updates           29 minutes


Chapter 7. Client/Server Performance Tuning
AS/400 terminology and tips for designing client/server applications on the AS/400 system have been discussed previously, as have SNA and token-ring LAN parameters, OS/400 work management considerations, and SQL debugging techniques. This chapter reviews that material and gives some additional insights as a prelude to performance problem analysis. Hardware plays a key role in performance: if your hardware is overutilized, you can expect degraded performance. Some of the hardware components that affect performance are:

Server CPU:
− Utilization
− CPU speed
− Efficiency of frame size and Request Unit (RU) size
− Amount of error recovery processing

Server disk:
− Number of disk accesses
− Disk performance
− Disk storage capacity
− Total number of disks

Server memory:
− Amount of paging
− Cache

Server Input/Output Processor (IOP):
− Utilization
− IOP capacity
− Efficiency of frame size
− Amount of error recovery necessary

The server CPU handles requests from a client. If CPU utilization is exceptionally high, incoming requests queue until enough CPU cycles are free to process the next request, which means that much of the response time is spent waiting for the CPU. Frame size also affects the CPU: the bigger the frame size, the fewer frames are sent, and many smaller frames use more CPU cycles than fewer larger frames.

Server main memory determines the number of pages that can be processed. The more memory you have, the more pages can be brought into memory for processing, and the better the CPU is utilized. If you have a memory constraint, a fast CPU may be doing only a fraction of the workload it is capable of. Memory also plays an important role in caching. Caching is the process of bringing data into memory before it is needed, thereby reducing the time required to process the data.

Disk access is crucial to response time. DASD performance is controlled by the disk subsystem, the actuator arms, and the capacity of the disks. The disk actuator arms should never exceed 40% busy, and the more actuator arms in the disk array, the better the performance.

© Copyright IBM Corp. 1996

235

This soft copy for use by IBM employees only.

The hardware issues for the client are basically the same as for the server. Some additional issues for the PC client are:

Client:
− Processor capacity (CPU and adapters)
− Memory size and speed
− Client operating system
− Disk accesses

Communications media:
− Utilization
− Media speed
− Number of flows
− Efficiency of frame size
− Amount of traffic due to error recovery

Application:
− Data placement
− Application characteristics
− Number of communications invocations
− Design efficiency
− APIs used

This section begins with AS/400 utilization guidelines, then discusses tuning the AS/400 server and concludes with tuning the client.

7.1 AS/400 Utilization Guidelines
Here are some performance threshold guidelines for AS/400 resource utilization:

Communications media:
− Interactive environments: 30-50%
− Large transfer: 80-95%

IOP:
− Interactive environments: 60%
− Large transfer: >60%

CPU:
− Interactive environments: 70%
− Large transfer and batch: >70%

DASD:
− Interactive environments: 40%
− Batch, ASP journal: >40%

Generally, the faster the resource, the higher the utilization threshold it can tolerate; the slower the resource, the lower the threshold. Each major system resource that is shared between different users should be considered for queueing effects. Suggested utilization thresholds differ for each system resource based on resource type and workload type.
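The queueing effect behind these thresholds can be sketched with a simple single-server model. This is an illustration only; the guideline percentages above come from measurement, not from this formula:

```python
# Illustrative M/M/1 approximation: average time in the system grows as
# service_time / (1 - utilization), which is why slower resources need
# lower utilization thresholds than fast ones.
def expected_response_time(service_time_ms, utilization):
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time_ms / (1.0 - utilization)

# A disk access that takes 20 ms on an idle actuator arm:
for busy in (0.40, 0.70, 0.90):
    print(f"{busy:.0%} busy -> {expected_response_time(20, busy):.0f} ms")
```

At the 40% DASD guideline the access takes about 33 ms; at 90% busy the same access takes 200 ms, which is why the interactive thresholds are set well below saturation.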


AS/400 Client/Server Performance


7.2 AS/400 Server Tuning
This section looks at the various aspects of tuning the AS/400 server for better performance.

7.2.1 Workload and Memory
Workload management: It is important to manage the jobs running on the system to minimize peak workloads. Low priority batch jobs may be scheduled to run during periods of low utilization. Unnecessary creation of job logs is a frequent cause of increased workload on a system.
The guideline for CPU usage by high priority jobs is 70%. By default the database server program has a run priority of 20 in keeping with its interaction with the client PC.

Manage concurrently executing jobs:
− Prioritize jobs (maximum 70% CPU for high priority).
− Run batch jobs in separate pool.
− Schedule batch jobs appropriately.
− Minimize job logs (print on failure only):

    Level . . . . . :   0
    Severity  . . . :   0
    Text  . . . . . :   *NOLIST

Memory: The default memory pool for the database server jobs (QZDAINIT/QZDASOINIT) is *BASE. This can result in memory contention and increased page faulting, particularly in environments with many client/server applications.

− Ensure that the machine pool is appropriately sized. See Chapter 14 in the V3R1 AS/400 Work Management Guide, SC41-3306. Page faults should be kept in the range of zero to five.
− A separate memory pool is recommended for running the server jobs on the AS/400 system, to minimize contention with other jobs.
− The following options allow AS/400 main storage to be used as a read cache, and in some cases may provide additional performance improvement:
  - Expert cache
  - SETOBJACC command

All applications are different, but it is recommended to set the pool size to 1.5MB per concurrently active ODBC client. 9MB is a good starting point for the pool size with an activity level of six. Monitor the page rate in this pool to make sure that faulting stays within the acceptable range, as defined in the AS/400 Work Management Guide, SC41-3306.
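The 1.5MB-per-client rule of thumb above can be expressed as a quick calculation. The helper name and the KB units are ours, chosen to match what the CHGSBSD storage size parameter expects; this is a sizing sketch, not an AS/400 interface:

```python
# Rough server pool sizing from the rule of thumb quoted above:
# about 1.5MB (1536KB) per concurrently active ODBC client.
def server_pool_size_kb(active_odbc_clients, kb_per_client=1536):
    return active_odbc_clients * kb_per_client

print(server_pool_size_kb(6))  # six clients -> 9216KB, close to the 9MB start
```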

Memory management:
− System-wide:
  - Adequate memory and activity levels.
  - Control page faulting within guidelines.
− Run QZDAINIT/QZDASOINIT in separate pool.
− Select expert cache:

Chapter 7. Client/Server Performance Tuning


  - System managed memory caching.
− Use SETOBJACC:
  - User managed memory caching.
  - Specify pool size.
  - Select object to be loaded.

7.2.2 Assigning a Storage Pool
This section shows an example for assigning a separate storage pool to the AS/400 database server job, QZDAINIT.

Separate pool for QZDAINIT:
1. Create pool.
2. Set activity level.
3. CHGPJE in subsystem.

                    Change Subsystem Description (CHGSBSD)

 Type choices, press Enter.

 Subsystem description . . . . . >   QSERVER      Name
   Library . . . . . . . . . . .     *LIBL        Name, *LIBL, *CURLIB
 Storage pools:
   Pool identifier . . . . . . .     2            1-10, *SAME
   Storage size  . . . . . . . .     9000         Number, *BASE, *NOSTG
   Activity level  . . . . . . .     6            Number
                + for more values
 Maximum jobs  . . . . . . . . .     *SAME        0-1000, *SAME, *NOMAX
 Text 'description'  . . . . . .     *SAME

                      Change Prestart Job Entry (CHGPJE)

 Type choices, press Enter.

 Subsystem description . . . . :     QSERVER      Name
   Library . . . . . . . . . . :     QSYS         Name, *LIBL, *CURLIB
 Program . . . . . . . . . . . :     QZDAINIT     Name
   Library . . . . . . . . . . :     QIWS         Name, *LIBL, *CURLIB
 User profile  . . . . . . . . .     *SAME        Name, *SAME
 Start jobs  . . . . . . . . . .     *SAME        *SAME, *YES, *NO
 Initial number of jobs  . . . .     *SAME        1-1000, *SAME
 Threshold . . . . . . . . . . .     *SAME        1-1000, *SAME
 Additional number of jobs . . .     *SAME        0-999, *SAME
 Maximum number of jobs  . . . .     *SAME        1-1000, *SAME, *NOMAX
 Pool identifier . . . . . . . .     2
 Class:
   Class . . . . . . . . . . . .     QPWFSERVER   Name
   Library . . . . . . . . . . .     QSYS         Name, *LIBL, *CURLIB

The database server job QZDAINIT runs in the QSERVER subsystem and uses the *BASE memory pool by default. It also uses job class QSYS/QPWFSERVER, which has the following values when shipped:


• Run priority = 20
• Time slice = 3000 msec

Depending on the number of server jobs running concurrently:

− Create an additional memory pool in the QSERVER subsystem with the appropriate activity level, using the CHGSBSD command.
− Change the prestart job entry, using the CHGPJE command, to select the additional pool.
− Create and assign an alternative job class to adjust the run priority, time slice, and so on.

7.2.3 Expert Cache
If it is difficult to select specific files for loading into memory, expert cache can be used. It allows the system paging algorithms to evaluate which program and file pages are to be retained in memory. Expert cache is invoked by changing the paging option for the pool to *CALC; it can be used only for shared pools. You do not need to determine which files of the application should be "cached" on the AS/400 system. If you enable expert cache and performance does not improve, simply turn it off.

Use WRKSHRPOOL to select expert cache:
− Change Paging Option to *CALC.
− It can be used for shared pools only.

                            Work with Shared Pools
                                                              System:
 Main storage size (K)  . :   81920

 Type changes (if allowed), press Enter.

              Defined    Max      Allocated   Pool   -Paging Option-
 Pool         Size (K)   Active   Size (K)    ID     Defined  Current
 *MACHINE        20056   +++         20056     1     *FIXED   *FIXED
 *BASE           19113    21         19113     2     *FIXED   *FIXED
 *INTERACT       38751    15         38751     4     *FIXED   *FIXED
 *SPOOL            400     9          4000     3     *FIXED   *FIXED
 *SHRPOOL1        3600     3                   5     *CALC    *FIXED
 *SHRPOOL2           0     0


7.2.4 Set Object Access (SETOBJACC) Command
This section shows an example of effectively using the SETOBJACC command.

                            Clear Pool (CLRPOOL)

 Type choices, press Enter.

 Storage pool:
   Shared pool or subsystem name     QSERVER      Name, *JOB, *SHRPOOL1...
   Pool identifier . . . . . . .     3            1-10


                         Set Object Access (SETOBJACC)

 Type choices, press Enter.

 Object  . . . . . . . . . . . .     STOCK        Name
   Library . . . . . . . . . . .     CSDB         Name, *LIBL
 Object type . . . . . . . . . .     *FILE        *FILE, *PGM
 Storage pool:
   Shared pool or subsystem name     QSERVER      Name, *JOB, *SHRPOOL1...
   Pool identifier . . . . . . .     3            1-10

SETOBJACC is a command that allows selected database or program objects to be loaded into memory, reducing the number of disk accesses. For database objects, you can load just the data (data space) or the index (access path). In some cases, you can only guess which files are candidates for SETOBJACC by knowing the application. You can also use the STRDSKCOL (Start Disk Collection), ENDDSKCOL (End Disk Collection), and PRTDSKRPT (Print Disk Report) commands to print reports that include disk I/O counts for each disk file member. Reducing the number of disk accesses normally improves performance if disk I/O is impacting response times. A sample PRTDSKRPT is included at the end of this section. In Figure 62 on page 242, the column Rqs shows the number of I/O requests for a file; the files with high numbers of requests are good candidates for SETOBJACC.

DSKCOL commands: The STRDSKCOL, ENDDSKCOL, and PRTDSKRPT commands are available only for CISC based systems. The Rochester lab is currently analyzing how to provide comparable support for RISC based systems.

A separate pool should be created; in the example, we use pool 3, which was defined in subsystem QSERVER. The size of the pool should be large enough to hold the entire file or index that you want to load into memory. You can use the Display File Description (DSPFD) command or the Display Object Description (DSPOBJD) command to obtain the object size, and add 10% to determine the pool size required. Prior to using the SETOBJACC command, the CLRPOOL command should be used to write all memory pages to disk. The database objects loaded into memory must be selected with a good understanding of the application and database.
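The pool-sizing rule above (object size plus 10%) can be sketched as a quick calculation. The function is illustrative only; the object size in KB comes from DSPFD or DSPOBJD output:

```python
# Pool size needed for SETOBJACC: object size (from DSPFD/DSPOBJD)
# plus a 10% allowance, rounded up using integer arithmetic.
def setobjacc_pool_kb(object_size_kb):
    return -(-object_size_kb * 11 // 10)  # ceiling of size * 1.10

print(setobjacc_pool_kb(10500))  # 10.5MB item file -> 11550KB pool
```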


When loading an object into the specified pool, you must ensure the entire object fits into the pool. If only part of the object fits, you may experience poorer performance than before using SETOBJACC. A frequently accessed file loaded into memory improves overall performance by reducing the number of physical disk I/Os. The following table demonstrates the reduction of synchronous disk I/O in a test application where the customer master file is much larger in size than the item file (20.9MB versus 10.5 MB). During the running of a specific application, the item file is accessed more frequently than the customer file. Thus, with a lower investment in memory for SETOBJACC, a greater return is realized through a reduction in synchronous disk I/O. Note: This does not reflect any specific customer environment. Results can vary depending on the situation.
• Customer file - 30,000 records (20.9MB)
• Item file - 100,000 records (10.5MB)

┌─────┬─────────────────────────┬──────────┬─────────┐
│Test │ SETOBJACC filename      │ Sync     │ Logical │
│     │                         │ I/O      │ I/O     │
├─────┼─────────────────────────┼──────────┼─────────┤
│ 1.  │ CUSTOMER                │ 7404     │ 470     │
├─────┼─────────────────────────┼──────────┼─────────┤
│ 2.  │ ITEM                    │ 3733     │ 470     │
├─────┼─────────────────────────┼──────────┼─────────┤
│ 3.  │ ITEM access path        │ 3646     │ 470     │
└─────┴─────────────────────────┴──────────┴─────────┘
Figure 61. SETOBJACC Disk I/O Comparison

In this test, the same application scenario was run three times. In test one, the Customer file was loaded into memory. In test two, the ITEM file was loaded into memory and in test three, the ITEM access path was loaded. The Synchronous I/O counts shown in the table were captured from the Component Report, ″Job Workload Activity″ section. The test shows that the greatest reduction in disk I/O activity was achieved by loading the most frequently used file into memory.
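As a quick check on the numbers in Figure 61 (simple arithmetic on the table values, nothing more):

```python
# Percentage reduction in synchronous disk I/O between two test runs.
def pct_reduction(before, after):
    return round(100.0 * (before - after) / before, 1)

# Loading ITEM (test 2) versus loading CUSTOMER (test 1):
print(pct_reduction(7404, 3733))  # -> 49.6, roughly half the sync I/O
```

So the smaller but more frequently accessed ITEM file cut synchronous I/O by about half relative to loading the larger CUSTOMER file.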


The following example shows sample PRTDSKRPT output. See files CSTFIL, ITMFIL, and CSTMSTP. Sum the ″Rqs″ values for all disks that contain the file.
Run: 09/07/95  09:32:34          cook dskcol summary                 Page 1
Data File: QPFRDATA/QAPTDSKD  QADSKDTA
Units: *ALL     Objects: *ALL     Object Class: *DB

                          Object Activity Summary

Unit   Rqs   Length   Object   Library   Member   Object Type   Record Number

(Per-unit detail lines are not reproduced here; in this sample, the files
CSTFIL, ITMFIL, and CSTMSTP account for the highest Rqs values across the
disk units.)

Figure 62. Disk Collection Report

7.2.5 Prestart Jobs
This section discusses some of the prestart job parameters.

Prestart database server (QZDAINIT/QZDASOINIT):
− Reduces initiation time.
− Runs in QSERVER subsystem.
− Set job parameters.


                      Change Prestart Job Entry (CHGPJE)

 Type choices, press Enter.

 Subsystem description . . . . :     QSERVER      Name
   Library . . . . . . . . . . :     QSYS         Name, *LIBL, *CURLIB
 Program . . . . . . . . . . . :     QZDAINIT     Name
   Library . . . . . . . . . . :     QIWS         Name, *LIBL, *CURLIB
 User profile  . . . . . . . . .     *SAME        Name, *SAME
 Start jobs  . . . . . . . . . .     *SAME        *SAME, *YES, *NO
 Initial number of jobs  . . . .     5            1-1000, *SAME
 Threshold . . . . . . . . . . .     2            1-1000, *SAME
 Additional number of jobs . . .     3            0-999, *SAME
 Maximum number of jobs  . . . .     *NOMAX       1-1000, *SAME, *NOMAX
 Pool identifier . . . . . . . .     2

Prestart jobs ensure that the database server jobs are running in QSERVER subsystem, and not in QCMN. When the server runs in QSERVER subsystem, it uses more efficient interfaces to send and receive APPC data.
Change the prestart job entry using the command CHGPJE to set the following information on prestart jobs based on a knowledge of the number of database server jobs that run most frequently (initial and threshold values):
• Initial number of jobs - when the subsystem starts.
• Threshold - at which additional jobs are started.
• Additional number of jobs - started when the threshold is reached.
• Maximum number of jobs - allowed to be started (*NOMAX).
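A toy model of how these four values interact (our simplification, not the actual prestart-job machinery): jobs start at the initial count, and whenever the number of unused jobs falls below the threshold, the additional count is started, up to the maximum.

```python
# Simplified prestart-job growth: "initial" jobs at subsystem start, plus
# "additional" jobs whenever the free-job count drops below "threshold".
def jobs_started(clients, initial=5, threshold=2, additional=3, maximum=None):
    started, in_use = initial, 0
    for _ in range(clients):
        in_use += 1
        if started - in_use < threshold:
            started += additional
            if maximum is not None:
                started = min(started, maximum)
    return started

print(jobs_started(0))  # 5 jobs already waiting when the subsystem starts
print(jobs_started(4))  # 8: one batch of 3 was added when free jobs ran low
```

With the screen's values (5/2/3), the first few clients connect to jobs that already exist, which is the point: the connection cost is paid before the user asks for it.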

As previously discussed, the QZDAINIT/QZDASOINIT prestarted job cannot open files ahead of the connection or conversation with the data source. On the AS/400 system, opening files ahead of the user transactions can improve performance for both 5250 interactive jobs and client/server jobs. Client Access/400 does provide an exit point for a user program that could do some preprocessing, but this program still gets called after the evoke is received. The CA/400 exit point to consider is QIBM_QZDA_INIT. CA/400 exit points are viewed with the Work with Registration Information (WRKREGINF) command and are described in the OS/400 Server Concepts and Administration manual, SC41-3740-00. For an example of using the QIBM_QZDA_INIT exit point, please refer to Chapter 5, “Client/Server Database Serving” on page 119.

7.2.6 Parallel Pre-Fetch
Parallel pre-fetching of data can be invoked through the CHGQRYA command. Pre-fetching involves the reading of data from multiple disk arms in parallel and reduces any performance impact resulting from disk arm contention. However, there is a memory overhead of approximately 1MB per actuator, and selecting expert cache for the memory pool is a prerequisite. Also, the increased availability of data compresses the processing into a shortened elapsed time, and increases the percentage of CPU utilization during execution.

Parallel pre-fetch:


− Expert cache - prerequisite
− CHGQRYA DEGREE(*IO)
− Resource intensive

                       Change Query Attributes (CHGQRYA)

 Type choices, press Enter.

 Query processing time limit . .     *NOMAX       0-2147352578, *NOMAX, *SAME
 Parallel processing degree  . .     *IO          *NONE, *IO, *ANY, *SAME

7.2.7 Communications - SNA
This section discusses some LAN communications parameters that affect performance.
• • •

Frame size (MAXFRAME). Maximum outstanding frames (LANMAXOUT). Frame acknowledgement frequency (LANACKFRQ).

                    Change Line Desc (Token-Ring) (CHGLINTRN)

 Type choices, press Enter.

 Line description  . . . . . . .  LIND        > ITSCTRN
 Resource name . . . . . . . . .  RSRCNAME      LIN051
 Online at IPL . . . . . . . . .  ONLINE        *YES
 Vary on wait  . . . . . . . . .  VRYWAIT       *NOWAIT
 Maximum controllers . . . . . .  MAXCTL        40
 Line speed  . . . . . . . . . .  LINESPEED     4M
 Maximum frame size  . . . . . .  MAXFRAME      1994          <<<<
 TRLAN manager logging level . .  TRNLOGLVL     *OFF
 TRLAN manager mode  . . . . . .  TRNMGRMODE    *OBSERVING
 Log configuration changes . . .  LOGCFGCHG     *NOLOG
 Token-ring inform of beacon . .  TRNINFBCN     *YES
 Local adapter address . . . . .  ADPTADR       400000001001
 Functional address  . . . . . .  FCNADR        *SAME
                + for more values
                                                              More...

                       Display Controller Description
                                               SYSASM01  09/07/95  10:48:09
 Controller description  . . . . :   COOKPC
 Option  . . . . . . . . . . . . :   *TMRRTY
 Category of controller  . . . . :   *APPC
 LAN frame retry . . . . . . . . :   10
 LAN connection retry  . . . . . :   10
 LAN response timer  . . . . . . :   10
 LAN connection timer  . . . . . :   70
 LAN acknowledgement timer . . . :   1
 LAN inactivity timer  . . . . . :   100
 LAN acknowledgement frequency . :   1        <<< LANACKFRQ
 LAN max outstanding frames  . . :   2        <<< LANMAXOUT
 LAN access priority . . . . . . :   0
 LAN window step . . . . . . . . :   *NONE
                                                              More...


These displays show examples of LAN token-ring parameters affecting performance.

MAXFRAME: The default is 1994. For large data transfers, consider making the frame size larger if supported by the remote system or client. If you make the frame size larger than 1994, be aware that a single large file transfer conversation may negatively impact performance for other remote system and client applications. Note that the line description specifies the maximum frame size supported after vary on of the line. A controller MAXFRAME parameter can either default to the line description value or specify a frame size larger or smaller than the line description MAXFRAME size. If the controller specifies a smaller frame size, that smaller frame size is used; if the controller specifies a larger value, the AS/400 system uses the line description frame size. Additionally, consider that the remote system or client may have different frame sizes specified. The AS/400 system always uses the smaller of the AS/400 frame size and the remote system or client frame size. This is negotiated with XID frames at initial vary on of the controller, and the agreed-to size can be viewed in the help text for AS/400 message CPF5908:
Message . . . . :   Controller VP033 contacted on line ITSCTRN.
Cause . . . . . :   If controller VP033 is on a local area network, the
                    negotiated size is 1929.
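The negotiation described above amounts to taking minimums. The following is an illustrative sketch of the outcome, not the XID protocol itself:

```python
# Frame-size negotiation sketch: the controller can lower, but not raise,
# the line's MAXFRAME, and the two stations then settle on the smaller of
# their local values during the XID exchange.
def negotiated_frame_size(line_maxframe, ctl_maxframe, remote_maxframe):
    local = min(line_maxframe, ctl_maxframe)   # controller can only shrink
    return min(local, remote_maxframe)         # smaller side wins

print(negotiated_frame_size(1994, 1994, 1929))   # -> 1929, as in CPF5908
print(negotiated_frame_size(1994, 4060, 16388))  # controller cannot raise: 1994
```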

LANACKFRQ: This parameter on token-ring controls how many frames are received before a response is sent to the session partner.

LANMAXOUT: This parameter on token-ring controls how many frames are sent before a response is expected from the session partner.

The V3R1 Performance Capabilities Reference dated February 1995 suggests using the following values:
• LANACKFRQ = 1
• LANMAXOUT = 2

7.2.7.1 Communications: MODE
A mode description is used to establish the characteristics of a session. The mode description parameters that affect performance are discussed here.


                      Change Mode Description (CHGMODD)

 Type choices, press Enter.

 Mode description  . . . . . . . >   QSERVER      Name
 Class-of-service  . . . . . . .     #CONNECT     Name, *SAME, #CONNECT...
 Maximum sessions  . . . . . . .     64           1-512, *SAME
 Maximum conversations . . . . .     64           1-512, *SAME
 Locally controlled sessions . .     0            0-512, *SAME
 Pre-established sessions  . . .     0            0-512, *SAME
 Maximum inbound pacing value  .     *CALC        1-32768, *SAME, *CALC
 Inbound pacing value  . . . . .     7            0-63, *SAME
 Outbound pacing value . . . . .     7            0-63, *SAME
 Maximum length of request unit      *CALC        241-32768, *SAME, *CALC
 Data compression  . . . . . . .     *ALLOW       1-2147483647, *SAME...
 Inbound data compression  . . .     *LZ10        *SAME, *RLE, *LZ9, *LZ10
 Outbound data compression . . .     *LZ10        *SAME, *RLE, *LZ9, *LZ10
 Text 'description'  . . . . . .     *BLANK

This display shows an example of the SNA mode QSERVER used by CA/400. These mode values apply to every SNA conversation using this mode.

INPACING: This specifies how many SNA RUs the AS/400 system is willing to receive before it sends a pacing response. The actual inpacing value is negotiated with the remote system or client according to its outpacing value for the same mode name.

OUTPACING: This specifies how many SNA RUs the AS/400 system wants to send before it expects a pacing response. The actual outpacing value is negotiated with the remote system or client according to its inpacing value for the same mode name.

RU size: The AS/400 system provides a single value even though the SNA bind actually has a send RU and a receive RU length. These values are also negotiated with the remote system or client.

There is no display command on the AS/400 system that shows the negotiated pacing or RU size values.

7.2.8 Communications - TCP/IP
For TCP/IP communications, key performance parameters are the frame size, the maximum transmission unit size, and the window size. The maximum transmission unit (MTU) size is used similarly to the SNA maximum length RU size. The window size is used similarly to SNA pacing, but is the number of characters that can be sent before an acknowledgement is required. SNA pacing is the number of RUs that can be sent before an acknowledgement is required. The following provides additional TCP/IP considerations for frame size, MTU size, and window size:

Frame size: TCP/IP can use the line description frame size (MAXFRAME parameter) to determine the number of data characters to send in a single physical transmission. We recommend using the largest frame size supported by your network.


You can specify that TCP/IP use the line description frame size by specifying ″Maximum transmission unit = *LIND″ as shown on the AS/400 Add TCP/IP Interface (ADDTCPIFC) command example below:
                       Add TCP/IP Interface (ADDTCPIFC)

 Type choices, press Enter.

 Internet address  . . . . . . . >   n.n.nn.nnn_
 Line description  . . . . . . .     ITSCTRN      Name, *LOOPBACK
 Subnet mask . . . . . . . . . .     _______
 Type of service . . . . . . . .     *NORMAL      *MINDELAY, *MAXTHRPUT...
 Maximum transmission unit . . .     *LIND        576-16388, *LIND
 Autostart . . . . . . . . . . .     *YES         *YES, *NO
 PVC logical channel identifier      ___          001-FFF
                + for more values    ___
 X.25 idle circuit timeout . . .     60           1-600
 X.25 maximum virtual circuits .     64           0-64
 X.25 DDN interface  . . . . . .     *NO          *YES, *NO
 TRLAN bit sequencing  . . . . .     *MSB         *MSB, *LSB

However, like the SNA mode MAXLENRU parameter, a TCP/IP parameter may override the MAXFRAME size value and cause a smaller frame to be sent. For TCP/IP support this is the MTU (Maximum Transmission Unit) parameter on the Add TCP/IP Route (ADDTCPRTE) command.

MTU size: Whereas the SNA mode MAXLENRU value defaults to *CALC, which fits nicely into the maximum frame size, the MTU parameter defaults to 576 bytes. 576 is the default because it is the minimum size supported in already existing TCP/IP networks; it ensures that TCP/IP data can be successfully routed worldwide. However, if you have control of your TCP/IP network, you should make your MTU value as large as possible. Rochester lab performance testing recommends an MTU value of up to 16,388. Note that the line description MAXFRAME value must be equal to or greater than the MTU value for successful operation. The following screen is an example of the AS/400 ADDTCPRTE command help text, showing possible values for the MTU:
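The MAXFRAME/MTU constraint can be sketched as a simple validation. The helper is hypothetical, purely to restate the rule; it is not an OS/400 API:

```python
# The route MTU must not exceed the line MAXFRAME; TCP/IP then sends
# IP datagrams no larger than the MTU.
def validate_mtu(line_maxframe, route_mtu):
    if route_mtu > line_maxframe:
        raise ValueError("MTU exceeds line MAXFRAME; lower the MTU "
                         "or raise MAXFRAME")
    return route_mtu

print(validate_mtu(16388, 16388))  # 16Mb token ring can carry the full MTU
```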


Add TCP/IP Route (ADDTCPRTE) Type choices, press Enter. Route destination . . . . . . . > ′ TEST′__ Subnet mask . . . . . . . . . . > ′ ′_____ Type of service . . . . . . . . *NORMAL__ *MINDELAY, *MAXTHRPUT.. Next hop . . . . . . . . . . . . > ′ ′______ Maximum transmission unit . . . 576______ 576-16388, *IFC .......................................................... : Maximum transmission unit (MTU) - Help : : Specifies the maximum size (in bytes) of IP datagrams that : can be transmitted through this route. A datagram is a : basic unit of information passed over an internet network. : The minimum size of any maximum transmission unit value is : 576 bytes. : : The possible values are: : 576 : The maximum transmission unit (MTU) is 576 bytes. : : *IFC : The maximum transmission unit (MTU) is the MTU of the : interface that is associated with this route. : : maximum value (limited by transmission protocol) as: : : o X.25 4096 : o Token ring (4 meg) 4060 : o Token ring (16 meg) 16388 : o Ethernet 802.3 1492 : o Ethernet Version 2 1500 : o DDI 4352 : o Frame relay 8177 : o Wireless 802.3 1492 : o Wireless Version 2 1500

Window size: TCP/IP window size can be specified for sending (TCP send buffer size) and receiving (TCP receive buffer size), as shown in the AS/400 Change TCP/IP Attributes (CHGTCPA) command help text below:
Change TCP/IP Attributes (CHGTCPA) Type choices, press Enter. TCP keep alive . . . . . . . . . 120 1-40320, *SAME, *DFT TCP urgent pointer . . . . . . . *BSD *SAME, *BSD, *RFC TCP receive buffer size . . . . 8192 512-8388608, *SAME, *DFT TCP send buff ........................................................... UDP checksum : TCP receive buffer size (TCPRCVBUF) - Help IP datagram f : IP reassembly : 1. User Datagram Protocol (UDP) does not have a IP time to li : configurable receive buffer size. ARP cache tim : Log protocol : 2. This value is also used as the default receive : buffer size by IP over SNA processing. : : 3. Setting this parameter does not guarantee the size : of the TCP receive buffer. This is the default : buffer size that is used for initial TCP connection : negotiations. An individual application can : override this value by using the SO_RCVBUF socket : option. For more information see the Sockets : Programming book, SC41-4422.

Specify minimum TCP send and receive buffer sizes of 8192. AS/400 performance improves with values up to 64,386.
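The effect of window size on bulk transfer can be illustrated with rough arithmetic. This is an approximation for intuition only; real TCP slides the window continuously rather than stalling once per window:

```python
# Approximate count of acknowledgement-bound pauses for a bulk transfer:
# a sender that may have at most one window outstanding stalls roughly
# once per window of data.
def window_stalls(total_bytes, window_bytes):
    return -(-total_bytes // window_bytes)  # ceiling division

print(window_stalls(1000000, 8192))   # ~123 pauses with an 8KB window
print(window_stalls(1000000, 65536))  # ~16 pauses with a 64KB window
```

Fewer pauses per megabyte is why the larger buffer sizes matter mainly for large transfers, not for short OLTP exchanges.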


Although you can specify values, the system/workstation you are communicating with may support smaller frame size, MTU size, and window size values than you specify. The only way to determine exactly what is being used is to review a communication line trace. Note that if only OLTP (″fast response″) applications are being used, the larger frame, MTU, and window sizes may not actually be necessary. The larger values are more important when large amounts of data are being exchanged. This concludes the ″tuning the server system″ section. We now discuss tuning the client.

7.3 Client Tuning
The processor used by the client PC makes a noticeable difference to client performance. The 486 processor, by its design, is able to process more instructions in a single CPU cycle than the 386 processor, which greatly enhances performance. The recommendation made here is to use a 486 processor or better wherever possible. The type of bus used in the PC can also make a major difference: for example, the industry standard ISA bus has a data transfer rate of 5MB per second, while the newer PCI bus has a transfer rate of up to 132MB per second.

Memory on the client has the same considerations as the server, but in the DOS environment, it is important to note that a lot of main storage is used up by device drivers. In the DOS and Windows environment, it is vital to load as many of the device drivers as possible into the upper memory region (above the 640KB area). You need a memory manager to be able to access the memory above 640KB; for example, the DOS memory managers HIMEM.SYS and EMM386.EXE.

What you see on your PC display is controlled by the graphics adapter card in your PC. The faster the adapter and the more memory on the adapter card, the better the performance of the adapter. In other words, you could have a situation where the PC may seem to be processing very slowly when, in fact, it is just the graphics adapter that is slow to build the display image.

Communications adapters such as LAN adapters also play a significant role in performance. A high performance token-ring adapter is more efficient for processing high volumes than a standard token-ring adapter. This is true for the AS/400 system as well.

7.3.1 The SmartDrive (SMARTDRV) Command
In the DOS environment, disk access can be improved by using buffering programs such as SMARTDRV. SMARTDRV is provided by both DOS and Windows. In order to install SMARTDRV:

Add the SMARTDRV command to the AUTOEXEC.BAT file. Do not load SMARTDRV high; it uses extended memory and UMB space as appropriate:

C:\DOS\SMARTDRV 2048 1024


SmartDrive supports both a read and a write cache. It shortens the wait time for disk access. The parameters are used to control the amount of memory used for the disk cache. The first parameter is the cache size under DOS, the second is the cache size under Windows. We have a cache size of 2048K under DOS and 1024K under Windows. You can start SmartDrive without parameters and it will use default cache sizes that are based on the size of physical memory installed on your system. Extended memory is used for the cache.

Add the SMARTDRV device driver to the CONFIG.SYS file. Do not load SMARTDRV high; it uses extended memory and UMB space as appropriate:

DEVICE=C:\DOS\SMARTDRV.EXE /double_buffer

Determining if double-buffering is required:

1. Re-boot your system and check the status of SmartDrive by entering

SMARTDRV
at the DOS prompt. This displays the amount of cache being provided under both DOS and Windows. You also see the disk caching status. Note that hard disks are cached for both reads and writes, while floppy drives are cached for read only.

2. Note the column that indicates buffering. Double-buffering is a technique used to ensure the reliability of data being written to the disk: the data is first placed in conventional memory and then in the cache buffer. This support is not required for all hardware, and it can slow down performance. If you have a small computer system interface (SCSI) hard disk, you may need to use double-buffering. Double-buffering provides compatibility for hard-disk controllers that cannot work with virtual memory.

Important Information: The buffering column indicates whether or not the drive requires double-buffering. "No" indicates that it does not, while "Yes" indicates that double-buffering is required. If double-buffering is not required, you should remove the SMARTDRV entry from the CONFIG.SYS:

DEVICE=C:\DOS\SMARTDRV.EXE /double_buffer

7.3.2 Microsoft Software Diagnostics (MSD)
If you want information about the client's configuration and you have Windows installed, Microsoft Software Diagnostics (MSD) is useful. To use it, run the MSD program from the WINDOWS subdirectory and follow the menus.

C:\WINDOWS\MSD

Memory -- this option shows the client memory configuration.


7.3.3 The Defragment (DEFRAG) Program
If you are using the DOS or OS/2 File Allocation Table (FAT) file system on your client, you should check for fragmentation of the disk. Basically, fragmentation means that files or programs on your disk are not stored in contiguous areas, but are broken into smaller pieces and spread across the disk. While this may be good in an AS/400 environment, it is not good when using the FAT file system; it results in slow program loads and file access. DOS 6.0 and later provide a DEFRAG program that is used to defragment the disk.
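For example, assuming DOS is installed in the C:\DOS directory, you can run a full optimization of drive C (the /F switch consolidates both files and free space) by entering:

C:\DOS\DEFRAG C: /F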

7.3.4 Data Placement
For data placement, it is important to consider the following:
•   Data space requirements
•   Frequency of data access
•   Frequency of data modification
•   Data conversion
•   Communications costs
•   Data integrity
•   Data sharing guidelines:
    −   Place the data on the client if the data is:
        -   Not shared.
        -   Shared, frequently accessed, and not modified.
    −   Place the data on the AS/400 system if the data is:
        -   Highly modified shared data.
        -   Infrequently accessed shared data.

Putting the data in the right place can have a big performance impact. By placing data on the client, you can reduce usage of the communications environment to retrieve data.

7.3.5 Application Design
The application design can make a big difference. It is important to pick the right interface and to implement it correctly. For example, if you are implementing a database serving application using ODBC, a Visual Basic API implementation performs much better than a Visual Basic Database Object implementation. It is also important to minimize line traffic and turn-around between the client and server. ODBC techniques such as stored procedures or block inserts are used to reduce line traffic and turn-around. See Chapter 4, “Client/Server Application Serving” on page 89 and Chapter 5, “Client/Server Database Serving” on page 119 for a more detailed discussion of application design and implementation.

7.3.6 Client Hardware Performance Comparison


Figure 63. Query Download Implementation

The contribution of the client to the overall performance of an application depends on the type of application being implemented. The preceding chart summarizes this impact. Application type:

•   Complex query
    −   The major system requirement is on the server.
•   Query download
    −   The major system requirement is on the client.
•   On Line Transaction Processing (OLTP)
    −   The system requirements are equal on both the client and server.
•   OLTP (Stored Procedures)
    −   The major system requirement is on the server.

The test summarized in the bar chart shows the impact of the hardware when doing a Query Download. In this case, the client is responsible for the major contribution of response time. The test shows that by changing the client hardware, relative performance is dramatically impacted. For another type of application, for example, a Complex Query, more powerful client hardware has a much smaller impact on performance.

7.3.7 Client Check List
Following is a check list of items that can affect performance. Many of the items listed are outside the scope of this book, but are presented as a thought provoker to be used when dealing with client performance.
•   What type of processor? X86 (SX, DX, DX2, DX4, SLC), Pentium, other?
•   How much memory (RAM)? What type of memory? EDO, 70ns, 85ns?


•   Is there a separate cache on the system? How big? 8K, 32K, 256K, other?
•   How is the memory being used (use MEM /C /P)?
•   Disk drives:
    −   How much hard disk?
    −   What kind of hard drive? Conner, Maxtor... IDE or SCSI drives? How fast are the drives?
    −   If SCSI, how many devices are chained together with the hard drive?
    −   How full is the hard disk drive? Over 90% full?
    −   Are the drives fragmented?
•   What type of bus? AT, EISA, VESA, PCI, MCA, other?
•   Is a video accelerator being used? How much VRAM, DRAM?
•   What video resolution is being used? 640x480, 1024x768, other?
•   Are there a lot of graphics being displayed? A bitmapped background?
•   Are there other adapter cards that compete for software interrupts?
•   What kind or brand of adapter cards are used?
•   If DOS or Windows 3.1:
    −   Check CONFIG.SYS, AUTOEXEC.BAT.
    −   Is a memory manager being used? QEMM, EMM386, others? What version?
•   If Windows 3.1:
    −   Is 32-bit disk access being used?
    −   What is the minimum TIMESLICE setting?
    −   How many TIMESLICES are being shared in foreground or background?
    −   Are Windows applications exclusive in the foreground?
    −   How big a SWAPFILE is being used? Is it permanent or temporary?
•   Is RAMDISK or SMARTDRIVE being used? Double buffers?
•   What network router is being used? CA/400 Win-16, CA/400 Ext DOS, non-IBM router?
•   If OS/2 (2.1, 2.11, or Warp):
    −   How large is the SWAPFILE?
    −   Where is the SWAPFILE located?
    −   Is the HPFS file system being used?
    −   If DOS/WINOS2 is being used, have memory settings been set?
•   If Windows 95:
    −   What communications protocol is being used? TCP/IP generally performs better than SNA.
    −   Where is the SWAPFILE located?
•   Is CA/400 being used?
    −   Which client? DOS, Windows 3.1, Win 95, OS/2, OS/2 Optimized?
    −   Which version? V3R1M0, V3R1M1?
    −   What is the MODE setting? QSERVER, QPCSUPP, other?
•   What communications protocols are installed and active?

•   How many concurrent sessions are active per client?
•   What other connectivity products coexist on the client?
•   Is there a significant amount of static client data?
•   Are there multiple servers accessed?
•   How much data is being retrieved from the server? Is all of it being displayed?
•   Is the data manipulated prior to displaying it?
    −   Client or server?
•   Object conversion, ASCII <-> EBCDIC, data type, other conversions?
•   Does the application write to the client's disk during the transaction?
    −   Transaction logging?
    −   Response time gathering?
    −   Error logging?


Chapter 8. Client/Server Performance Analysis
This chapter outlines a methodology for reviewing performance information for a client/server application to determine whether any performance bottlenecks exist. This chapter also discusses the performance tools available for doing this analysis with an AS/400 server and an ODBC client. Sample output is presented in some cases. In general, the tools and methodology presented here may be used with most AS/400 client/server application environments. However, there may be situations where different or additional tools are required.

8.1 A Methodology Overview
This is only one of many possible approaches to analyzing a performance concern in a client/server environment. There are many reports and tools to assist in performance reviews on the AS/400 system. However, tools for resource utilization measurements on a PC are less readily available. We focus on AS/400 tools and the Client Access/400 for Windows 3.1 client ODBC trace tool. Figure 64 on page 256 provides the methodology flowchart. Note that understanding the performance problem in customer terms is key to minimizing unnecessary analysis of low priority problems or unrealistic customer expectations, such as expecting an AS/400 20S server model to handle 200 busy ODBC client workstations.

© Copyright IBM Corp. 1996


┌──────────────────────┐
│ Understand Problem   │
│ versus               │   Step #1 - Interview
│ Performance Goals    │
└──────────┬───────────┘
           │
┌──────────┴───────────┐
│ Review overall       │   Step #2 - Interactive Commands
│ System Performance   │           - Summary Reports
└──────────┬───────────┘
           │ Performance
           │ Guidelines ?
┌──────────┴───────────┐
│ Resolve System-level │   Step #3 - Tuning
│ Constraints/Problems │           - Upgrade
└──────────┬───────────┘
           │ Application
           │ Problem ?
┌──────────┴───────────┐
│ Application-level    │   Step #4 - Performance Trace
│ Analysis             │           - Transaction Report
└──────────────────────┘           - Transition Report
                                   - Locks/Seizes
                                   - SQLPKGINF
                                   - Job Trace Report
                                   - Communications Trace
                                   - ODBC Trace
Figure 64. Performance Analysis Methodology

The methodology for investigation includes:

1. Understanding the perceived problem, including:
   •   Application function
   •   Design overview
   •   Expected response or throughput
   •   Perceived response or throughput

It is important to understand the problem as perceived by the users. The expected response time goals need to be established and validated against the application architecture to ensure that they are achievable. Is the problem system-wide, or just for a specific application? If it is for a specific application, is the problem just for certain transactions and not others? Does the problem occur only during some specific time periods during the work day? The user's locality to the server is also a consideration in this regard, as line time is significant if the user is accessing the server through a low bandwidth or heavily utilized line.


2. Initially review overall system performance, considering:
   •   Hardware error log
   •   System values
   •   Page faulting
   •   Activity levels
   •   Disk arm activity
   •   Disk space usage

Initially, examine the system to ensure that the system parameters affecting performance are acceptable, and that memory pool allocations and activity levels are adjusted to suit the system. This preliminary review is done by checking the system interactively. The purpose is to avoid getting deeply involved in an application performance investigation when the problem may be with the setting of system parameters - that is, a system resource problem rather than an application design or implementation problem.

3. Carry out a detailed system performance or capacity study based on OS/400 Performance Monitor summary data or repetitive use of system commands, such as Work with System Status (WRKSYSSTS) and Work with Disk Status (WRKDSKSTS). If the service offering PM/400 (Performance Manager/400) is installed, you may also use its Work with History commands. Use of the commands, though showing important information, requires significant human intervention to gather sufficient data over a meaningful period of time. Collection of Performance Monitor data and analysis of that data through the Performance Tools/400 Advisor function and the individual printed reports provides a more comprehensive set of information. Once the review of summary level data is complete, you may have determined that system-level tuning can resolve the performance problem. Using the following guidelines, you may determine that a faster CPU, more main storage, or faster or additional disk hardware is required. The general guideline values shown in the following list can be used for a system resource level review:

•   Communications media:
    −   Interactive environments: 30-50%. In client/server application environments where response time is critical, use this guideline.
    −   Large file transfer or query download: 80% or higher. In large data transfer application environments, high utilization indicates that the speed of the line is being used effectively. If both interactive and batch applications are active on the same line at the same time, performance problems can be anticipated unless manipulating SNA pacing values can achieve acceptable performance.
    −   The Performance Tools/400 Advisor, System Report "Communications Summary," Component Report "IOP Utilization," and Resource Interval Report "Communications Line Detail" provide communications performance data. The Advisor assists in identifying resources utilized at the guideline level.


•   IOP:
    −   Interactive environments: 60%
    −   Large transfer: > 60%
    −   The Performance Tools/400 Advisor, System Report "Communications Summary," Component Report "IOP Utilization," and Resource Interval Report "Communications Line Detail" provide IOP performance data. The Advisor assists in identifying resources utilized at the guideline level.

•   CPU:
    −   Interactive environments: 70%. This guideline can reach as high as approximately 76% on the 4-way processors. Interactive environments are typically thought of as 5250 or 3270 display applications. However, client/server applications where response time is critical - OLTP applications - can also be thought of as an interactive environment.
    −   Large transfer and batch: > 70%. Similarly to the communications line utilization, high CPU utilization for a batch (or query download) application is usually a good sign, provided the mixture of interactive and batch work on the system is meeting a customer's realistic performance expectations.
    −   The Performance Tools/400 Advisor, System Report "Workload" and "Resource Utilization" sections, and Component Report "Job Workload Activity" provide CPU utilization level performance data. The Advisor assists in identifying interactive jobs with higher than expected CPU utilizations.

•   Memory:
    −   The Work Management Guide, SC41-3306, contains page fault guidelines for the system machine pool and the total for all other storage pools defined on the system. These guidelines are based on system processor speeds and are listed by model feature number. Refer to this manual for the guideline values.
    −   The Performance Tools/400 Advisor, System Report "Storage Pool", and Component Report "Storage Pool Activity" provide storage pool utilization (page faults per second) information. The Advisor assists in identifying storage pools that approach the guideline values.

•   DASD:
    −   Interactive environments: 40%
    −   Batch, ASP Journal: > 40%
    −   The Performance Tools/400 Advisor, System Report "Disk Utilization", and Disk Arm percent busy provide disk busy performance information. The Advisor assists in identifying disk arms that approach the guideline values.
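For the repetitive interactive checks mentioned in step 3, the review can be as simple as entering the following commands while the workload is running; the first shows storage pool faulting and activity levels, and the second shows disk arm utilization:

WRKSYSSTS
WRKDSKSTS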


4. Analyze the client/server application in detail, collecting detailed information while running the job in question. Generally, the faster the resource, the higher the utilization threshold tolerance; the slower the resource, the lower the threshold tolerance. If the utilization guidelines for the various resources are exceeded, you need to resolve these issues first. If performance problems exist and all resource measurements are significantly below the guidelines, then application transaction performance data must be collected and analyzed by a performance specialist. Note that the Performance Monitor collects, and the Performance Tools/400 Manager feature reports, 5250-based interactive response times. Even though OLTP client/server environments are considered "interactive" from the end user's viewpoint, the AS/400 summary level performance data does not identify client/server transactions and thus cannot calculate and report "non-interactive response time." As shown later, although trace-level detail performance data does not show summary level non-interactive response time and transaction counts, a printed Transaction Report for a specific job does indicate active and wait state transitions that correlate well with actual communication line receives and sends. Neither the Component Report nor the Transaction Summary Report is able to show client/server application transactions or response time per transaction averages in their summary level reports, as this information is not collected by the Performance Monitor. Note: A detailed application review should be undertaken only after all of the system level issues have been resolved.

8.2 Data Collection Tools
Many of the AS/400 system's performance measurement tools are used to study client/server performance involving the AS/400 system as a database server. Client Access/400 (CA/400) for Windows 3.1 also includes tools to assist in the analysis of performance; the primary tool we used is the ODBC API trace. A very important prerequisite to analyzing a client/server application is to have a good understanding of the application logic as well as the functions from a user point of view. A complete performance analysis of a client/server application is beyond the scope of this chapter. However, this chapter and Chapter 10, “Case Study” on page 351 include examples of performance information and communications line trace data for a client/server application. The application is described in Chapter 10, “Case Study” on page 351 and is an order entry application written on the client in Visual Basic. This chapter takes you through the methodology for collecting and analyzing client/server performance data, while Chapter 10, “Case Study” on page 351 provides a more in-depth analysis of the application.


The Visual Basic program uses Client Access/400 ODBC APIs. The source code for the programs is on the PC media included with this redbook. You can also refer to Chapter 5, “Client/Server Database Serving” on page 119 for a discussion of the ODBC APIs.

System-level performance analysis and performance management considerations are outside the scope of this document. However, this chapter includes selected examples of Performance Tools/400 reports for OS/400 Performance Monitor data collected while running a client/server application using Client Access/400 for Windows 3.1 ODBC interfaces. An example of the Client Access/400 for Windows 3.1 client ODBC API trace output is also included in this chapter. Review of the ODBC API trace can be used to identify inefficient uses of these APIs. In all client application performance analysis situations, you need to examine the server resource utilization, the client processor utilization, and the communication link between the server and the client. For thorough education on the use of Performance Tools/400 (CISC licensed program 5763-PT1 and RISC licensed program 5716-PT1), we refer you to the AS/400 Performance Analysis and Capacity Planning course conducted by IBM. In the US, the course code is S6027; for non-US countries, the course code is OL95V1. Also see the AS/400 Performance Management redbook, GG24-3723-02, for more information on CISC systems performance. The RISC version of the Performance Management redbook is SG24-4735 and is scheduled to be available in March 1997. Collect all of the necessary performance data while running the client/server application being investigated. A suggested checklist is included later in this chapter. For the approach suggested in this document, the following information is collected:

•   AS/400 server:
    −   Performance data (with trace information)
    −   Communications trace
    −   Job trace
    −   SQL package information
    −   SQL debug messages in the job log
    Note that Chapter 1, “Application Design” on page 1 discusses SQL packages and SQL debug messages in the job log.

•   PC client:
    −   ODBC API trace. In most cases, collection of the ODBC API trace data results in a significant performance degradation on the client workstation. We recommend collecting ODBC API trace information at a time separate from collecting the AS/400 server performance information.
    −   Measure response time on the PC. We did not find automatic tools to collect this response time information from the client workstation; the application had to add program instructions to record this data. This is not always possible, as the application developer may not be available.


However, later in this chapter we show some sample coding for a time recording program.

Important Performance Analysis Notes

1. Client/server performance analysis may require the use and review of all of the performance information previously listed. For ODBC applications, we believe that enabling the OS/400 Query Optimizer to select the most efficient data access method and minimizing unnecessary ODBC requests from the client are the most important factors in achieving the best performance possible. Therefore, the client ODBC API trace, the AS/400 job log containing OS/400 Query Optimizer debug messages, and an associated communications line trace should be the first sets of information analyzed, after ensuring that there are no AS/400 system hardware resource constraints.

2. It is important to understand the application execution sequence to ensure that the data collections are initiated and ended correctly to include all of the required data.

The next topics show how to collect the data and then print the appropriate reports. The data gathering tools should run only as long as the performance test is carried out, which should be only as long as it takes to demonstrate the problem. The user of the client/server application should keep a record of the transactions entered during the test. The following publications are used as reference material for more information on the topics outlined here:
•   CL Reference, SC41-3722
•   Work Management Guide, SC41-3306
•   Performance Tools/400, SC41-3340
•   Best/1 Capacity Planning, SC41-3341
•   AS/400 Performance Management V3R1 redbook, GG24-3723-02

8.2.1 AS/400 Server Performance Data
The AS/400 system has a standard performance data collection facility that is initiated by using the STRPFRMON command, which is part of OS/400. The files created by this command are described in the Work Management Guide, and users can write their own queries to analyze the data. Performance Monitor files all start with the QAPMxxxx prefix. However, we recommend using the Performance Tools/400 Manager feature to print the reports and run the Advisor. In most situations, the Performance Monitor is started with TRACE(*NONE) - no job level trace data collected. This is acceptable, as the first step is to analyze performance from a system level. However, we suggest initially using the trace option (TRACE(*ALL)), as it saves having to re-run the application if system level performance information indicates no bottlenecks and application analysis is required. The following topics show how to collect Performance Monitor data. After the data is collected, you should run the Performance Tools Advisor against the collected data. Reviewing the Advisor messages may direct you to specific system tuning actions. Assuming you need


to further analyze the collected Performance Monitor data, you can print the various Performance Tools/400 reports. Remember that the Performance Monitor data may be collected on any system, but you need the Performance Tools/400 Manager feature on the system in order to print these reports. The performance data library may be placed in a save file or saved to tape and restored onto the system with the Manager feature. This chapter provides examples of how to print specific Performance Tools/400 reports, along with some report examples. Performance data collected with the trace option provides detailed information about activity on the AS/400 system at the application level. In general, the System Report and Component Report use system level or "sample" data for their system level information. The Transaction and Transition Reports use trace data and are intended for application level review.
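For example, the Manager feature provides print commands for the major reports; the member and library names shown here (CSPERF1 in library PERFDATA) are illustrative only:

PRTSYSRPT MBR(CSPERF1) LIB(PERFDATA)
PRTCPTRPT MBR(CSPERF1) LIB(PERFDATA)
PRTTNSRPT MBR(CSPERF1) LIB(PERFDATA)

PRTSYSRPT, PRTCPTRPT, and PRTTNSRPT print the System, Component, and Transaction Reports, respectively; PRTTNSRPT requires that trace data was collected and dumped.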

8.2.1.1 Start Performance Data
If the licensed program Performance Tools/400 (5763-PT1 or 5716-PT1) is installed, you can start the performance monitor using the PERFORM menu which is accessed by typing GO PERFORM on a command line.

PERFORM                   IBM Performance Tools/400
                                                       System:   SYSASM01
Select one of the following:

     1. Select type of status
     2. Collect performance data
     3. Print performance report
     4. Capacity planning/modeling
     5. Programmer performance utilities
     6. Configure and manage tools
     7. Display performance data
     8. System activity
     9. Performance graphics
    10. Advisor

    70. Related commands

Selection or command
===> 2

F3=Exit   F4=Prompt   F9=Retrieve   F12=Cancel
F13=Information Assistant   F16=System main menu

Two additional displays are presented following the initial menu selection:


SYSASM01                 Collect Performance Data             08/07/95 15:34:27

Performance monitor status:
  Status . . . . . . . . . . . . :   Not running

Select one of the following:

  1. Start collecting data
  2. Stop collecting data
  3. Work with performance collection

Selection or command
===> 1

F3=Exit   F4=Prompt   F5=Refresh   F9=Retrieve   F12=Cancel

The next display is shown.

Start Collecting Data

Select one of the following:

  1. Collect data with defaults
  2. Collect data with menus
  3. Collect data with command

Selection
  3

F3=Exit   F12=Cancel

Alternatively, you can type the OS/400 STRPFRMON command directly on the command line and press the F4 key to display the prompts that need to be responded to. The Control Language Programming Guide or your performance specialist can give you guidelines on how to respond to the prompts. The following display shows the most significant parameters of the STRPFRMON command.


Start Performance Monitor (STRPFRMON)

Type choices, press Enter.

Member . . . . . . . . . . . . .   a_name        Name, *GEN
  Library  . . . . . . . . . . .   Customer Name
Text 'description' . . . . . . .   'Customer-Name,date,time'
Time interval (in minutes) . . .   5             5, 10, 15, 20, 25, 30, 35...
Stops data collection  . . . . .   *ELAPSED      *ELAPSED, *TIME, *NOMAX
Days from current day  . . . . .   0             0-9
Hour . . . . . . . . . . . . . .   2             0-999
Minutes  . . . . . . . . . . . .   0             0-99
Data type  . . . . . . . . . . .   *ALL          *ALL, *SYS
Trace type . . . . . . . . . . .   *ALL          *NONE, *ALL
Dump the trace . . . . . . . . .   *NO           *YES, *NO
Job trace interval . . . . . . .   .5            .5 - 9.9 seconds
Job types  . . . . . . . . . . .   *DFT          *NONE, *DFT, *ASJ, *BCH...
             + for more values
Start database monitor . . . . .   *NO           *YES, *NO
                                                                     More...
F3=Exit   F4=Prompt   F5=Refresh   F12=Cancel   F13=How to use this display
F24=More keys

Enter a meaningful abbreviated name for the Member name. Similarly, enter a meaningful title for the text description, as this is used to print headings for the performance reports. If you collect several sets of Performance Monitor data, a significant member name and text description can help you remember when and what applications were running for this set of performance data. You may have to create a new library for your data. Ensure that the Trace type parameter is *ALL and Dump the Trace is *NO. Dumping the trace data defaults to occurring when ENDPFRMON is issued. On a very busy system, dumping the trace data during production mode can degrade normal system performance. Therefore, we recommend dumping the trace data at a later time with the DMPTRC command.
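For example, instead of using the prompt display, the command can be entered directly; the member name, library, and text shown here are illustrative only:

STRPFRMON MBR(CSPERF1) LIB(PERFDATA) TEXT('CS order entry test') TRACE(*ALL) DMPTRC(*NO)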

8.2.1.2 End Performance Data
If you are using the menu approach with Performance Tools/400 installed, enter GO PERFORM on a command line to display the main Performance menu and then select option 2.


PERFORM                   IBM Performance Tools/400
                                                       System:   SYSASM01
Select one of the following:

     1. Select type of status
     2. Collect performance data
     3. Print performance report
     4. Capacity planning/modeling
     5. Programmer performance utilities
     6. Configure and manage tools
     7. Display performance data
     8. System activity
     9. Performance graphics
    10. Advisor

    70. Related commands

Selection or command
===> 2

F3=Exit   F4=Prompt   F9=Retrieve   F12=Cancel
F13=Information Assistant   F16=System main menu

The following display allows you to select option 2 to stop collecting data.

SYSASM01                 Collect Performance Data             08/07/95 15:34:27

Performance monitor status:
  Status . . . . . . . . . . . . :   Running
  Submitter  . . . . . . . . . . :   CS15
  Job  . . . . . . . . . . . . . :   138997/QPGMR/QPFRMON
  Library  . . . . . . . . . . . :   PFRRES95
  Member . . . . . . . . . . . . :   Q952431610
  Started  . . . . . . . . . . . :   07/30/95 16:10:01
  End scheduled  . . . . . . . . :   07/30/95 18:10:00

Select one of the following:

  1. Start collecting data
  2. Stop collecting data
  3. Work with performance collection

Selection or command
===> 2

F3=Exit   F4=Prompt   F5=Refresh   F9=Retrieve   F12=Cancel

Alternatively, you can use the OS/400 ENDPFRMON command to stop performance data collection and take the default parameters.


End Performance Monitor (ENDPFRMON)

Type choices, press Enter.

Dump the trace . . . . . . . . .   DMPTRC    *SAME
User exit program  . . . . . . .   EXITPGM   *SAME
  Library  . . . . . . . . . . .
                                                                     Bottom
F3=Exit   F4=Prompt   F5=Refresh   F12=Cancel   F13=How to use this display
F24=More keys

The End Performance Monitor (ENDPFRMON) command stops the collection of performance data. After the collection of performance data stops, the database files are closed, and the performance monitor job (runs in subsystem QCTL as job nnnnnn/QPGMR/QPFRMON) ends. In an ″automated environment″ you can start and end the Performance Monitor automatically. You can do this with job scheduler entries or the Performance Tools/400 Add Performance Collection command. In this environment, you may use the ENDPFRMON EXITPGM parameter to call a program that automatically prints performance reports or some other automated function.
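For example, to end the collection on a busy system and defer dumping the trace data until later, the command can be entered directly:

ENDPFRMON DMPTRC(*NO)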

8.2.1.3 Dump Trace Data
After the data collection has completed, you must run the DMPTRC command at a time of low system activity if you collected TRACE(*ALL) data with STRPFRMON. However, this step must be completed before another performance data collection with trace is performed. If not, the trace data is overwritten.

Dump Trace (DMPTRC)

Type choices, press Enter.

Member . . . . . . . . . . . . .   a_name          Name
  Library  . . . . . . . . . . .   Customer Name   Name
Job queue  . . . . . . . . . . .   QCTL            Name, *NONE
  Library  . . . . . . . . . . .   QSYS            Name, *LIBL, *CURLIB
Text 'description' . . . . . . .   'Customer-Name,date,time'

F3=Exit   F4=Prompt   F5=Refresh   F12=Cancel   F13=How to use this display
F24=More keys

Ensure that you type the data corresponding to the data entered in the STRPFRMON command display at the beginning of the data collection.
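For example, if the performance data was collected into member CSPERF1 in library PERFDATA (illustrative names only), the trace data can be dumped with:

DMPTRC MBR(CSPERF1) LIB(PERFDATA)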


8.2.2 AS/400 Communications Trace
OS/400 provides a communications trace facility that shows details of the information exchanged between the AS/400 system and a remote station/control unit. The station or control unit can be a 3270 or 5250 remote workstation controller, or an APPC controller such as a Client Access/400 Windows 3.1 client workstation. A very large volume of information is gathered, even for a rather short period of time, so it is recommended that the trace be activated for the shortest possible period. Users with the correct levels of authority can invoke the function either through System Service Tools (the STRSST command) or by using the STRCMNTRC command. Output formatting can include identifying SNA protocol indications such as Definite Response or Exception Response mode, chaining, SNA pacing, and so on. If you select the non-SNA formatted output, the SNA indications are not represented by acronyms, but you get an IOP time stamp in tenths of seconds. This IOP time stamp is separate from, and not correlated with, the system time-of-day clock. However, the IOP time stamp can be used to determine the exact AS/400 response time to a request received from a remote station.

8.2.2.1 Start Communication Trace
Similar to performance data collection, you can use two different approaches to start the communication trace:
• Use the STRSST command menu option.
• Use the STRCMNTRC command.

The communications trace continues until the End Communications Trace (ENDCMNTRC) command ends the trace that is running on the specified line description. If you use the menu approach, type the STRSST command on an AS/400 command line. You must have the necessary authority to run this command.

                          System Service Tools (SST)

Select one of the following:

  1. Start a service tool
  2. Work with active service tools
  3. Work with disk units
  4. Work with diskette data recovery

Selection
  1

F3=Exit   F10=Command entry   F12=Cancel

Selecting option 1 (Start a service tool) displays the following menu:

Chapter 8. Client/Server Performance Analysis


                           Start a Service Tool

Warning: Incorrect use of this service tool can cause damage to data
in this system. Contact your service representative for assistance.

Select one of the following:

  1. Product activity log
  2. Trace Licensed Internal Code
  3. Work with communications trace
  4. Display/Alter/Dump
  5. Licensed Internal Code log
  6. Main storage dump manager
  7. Hardware service manager

Selection
  3

F3=Exit   F12=Cancel   F16=SST menu

The status of communication traces is shown in the following display:

                       Work with Communications Traces

Type options, press Enter.
  2=Stop trace   4=Delete trace   6=Format and print trace
  7=Display message   8=Restart trace

     Configuration              Trace                         Trace
Opt  Object          Type       Description       Protocol    Status

(No active traces)

F3=Exit   F5=Refresh   F6=Start trace   F10=Change size
F11=Display buffer size   F12=Cancel

Press F6 to display the prompts to start collecting communications trace data:


                                Start Trace

Type choices, press Enter.

Configuration object . . . . . .   Line_name
Type . . . . . . . . . . . . . .   1             1=Line, 2=Network interface,
                                                 3=Network server
Trace description  . . . . . . .   'Meaningful_text'
Buffer size  . . . . . . . . . .   3             1=128K, 2=256K, 3=2048K,
                                                 4=4096K, 5=6144K, 6=8192K
Stop on buffer full  . . . . . .   Y             Y=Yes, N=No
Data direction . . . . . . . . .   3             1=Sent, 2=Received, 3=Both
Number of bytes to trace:
  Beginning bytes  . . . . . . .   *CALC         Value, *CALC
  Ending bytes . . . . . . . . .   *CALC         Value, *CALC

F3=Exit   F5=Refresh   F12=Cancel

Ensure that the buffer size selected is adequate. Typically, use a size of no less than 2048K.
The Work with Communications Traces display appears again with Trace Status of Active for the specified line.

                       Work with Communications Traces

Type options, press Enter.
  2=Stop trace   4=Delete trace   6=Format and print trace
  7=Display message   8=Restart trace

     Configuration              Trace                         Trace
Opt  Object          Type       Description       Protocol    Status
     Line_name       LINE       Meaningful_text   TRN         ACTIVE

F3=Exit   F5=Refresh   F6=Start trace   F10=Change size
F11=Display buffer size   F12=Cancel

Alternatively, you may choose to use the STRCMNTRC command from a command line and press F4 for the parameters.


                  Start Communications Trace (STRCMNTRC)

Type choices, press Enter.

Configuration object . . . . . .   CFGOBJ        Line_name
Type . . . . . . . . . . . . . .   CFGTYPE       *LIN
Buffer size  . . . . . . . . . .   MAXSTG        *MAX
Data direction . . . . . . . . .   DTADIR        *BOTH
Trace full . . . . . . . . . . .   TRCFULL       *STOPTRC
Number of user bytes to trace:     USRDTA
  Beginning bytes  . . . . . . .                 *CALC
  Ending bytes . . . . . . . . .                 *ALLDTA
DDI trace options  . . . . . . .   DDITRCOPTS
Trace description  . . . . . . .   TEXT          Meaningful_text
                                                                      Bottom
F3=Exit   F4=Prompt   F5=Refresh   F12=Cancel   F13=How to use this display
F24=More keys

8.2.2.2 End Communication Trace
If you used the menu-driven option to start the communications trace by entering the STRSST command, return to the Work with Communications Traces display:

                       Work with Communications Traces

Type options, press Enter.
  2=Stop trace   4=Delete trace   6=Format and print trace
  7=Display message   8=Restart trace

     Configuration              Trace                         Trace
Opt  Object          Type       Description       Protocol    Status
 2   Line_name       LINE       Customer_name     TRN         ACTIVE

F3=Exit   F5=Refresh   F6=Start trace   F10=Change size
F11=Display buffer size   F12=Cancel

Select option 2 to stop the trace on completion of the test. If you used the STRCMNTRC command, you can end the trace by entering the ENDCMNTRC command:

                   End Communications Trace (ENDCMNTRC)

Type choices, press Enter.

Configuration object . . . . . .   CFGOBJ        Line_name
Type . . . . . . . . . . . . . .   CFGTYPE       *LIN
                                                                      Bottom
F3=Exit   F4=Prompt   F5=Refresh   F12=Cancel   F13=How to use this display
F24=More keys


We later show how to format and print the output.

8.2.3 AS/400 Job Trace
OS/400 provides a Job Trace facility. The printed trace output is based on program/module/procedure call and return sequences. The information present in the printed report includes:
• Time stamp (based on the AS/400 clock - system time).
• Library/program - indicates the program and library involved. Examples of IBM program names you may see for a job performing SQL functions at the request of an ODBC client include:
  o QQQIMPLE   - implements optimized access
  o QQQITEMP   - creates temporary indexes and temporary copies of files when processing a UNION
  o QQQOPTIM   - query optimization
  o QQQQEXIT   - cleans up after query by closing files
  o QQQQUERY   - mainline query
  o QSQCRTI    - creates index
  o QSQINS     - insert
  o QSQOPEN    - open
  o QSQROUTE   - first routine called for embedded SQL
  o QSQRPARS   - parser
  o QSQRPSTAB  - machine dictionary services - precompiler or parser
  o QSQUPDAT   - update
  o QQQSETUP   - creates workspace for query processing
  o QQQSQCMP   - called when subqueries are specified
  o QQQTSORT   - performs all sort processing
  o QQQVFMT    - processes fields in a SELECT clause
  o QZDAINIT   - SNA mainline database server program
  o QZDASOINIT - TCP/IP mainline database server program
  o QZDACMDP   - command processor
  o QZDANDB    - processes native database requests
  o QZDAROI    - processes object information requests and SQL catalog functions
  o QZDASQL    - processes SQL requests
  o QICGET     - ICF GET operation
  o QICPUT     - ICF PUT operation
• Resource utilization by program - measures the following:
  − CPU utilization
  − DB/non-DB reads
• OS/400 program "debug" information unique to the OS/400 component, such as workstation data management.

8.2.4 Detailed Job Information - Server
It is very important to collect all of the necessary information for a complete analysis of the application program. Recall that for ODBC database serving, Client Access/400 prestarts one or more jobs with the job name prefix QZDAINIT. To collect job trace information, we recommend a test environment where the performance problem application is run from a known APPC control unit or TCP client IP address, or under a specific AS/400 user profile. You can determine the correct fully qualified QZDAINIT job name either by using the WRKCFGSTS or WRKDEVSTS command that names the client workstation, or by using the WRKOBJLCK command specifying the client workstation's "sign on" user profile name and object type *USRPRF (user profile). Examples of this are discussed in Chapter 3, “Work Management” on page 67.


Note that although the QZDAINIT job continues to show the user id as QUSER in system displays of the job, the job log contains a message indicating the actual user profile the job is running under while connected to a client.

8.2.4.1 Start Trace Job/Debug Mode/Job Log
The following data provides detailed information on the database server job (QZDAINIT):
• Job Trace
• Job Log with DEBUG information

The DEBUG function enables the Query Optimizer component to record its decisions on keyed or sequential processing of the data and what index (access path) was used if keyed processing was selected.

The job trace provides information on the program modules called during the running of a job. It also indicates CPU usage and disk reads for each step. The function collects performance statistics for the specified job. A trace record is generated for every external program call and return, exception, message, and workstation wait in the job. At least two, and usually more, trace records are generated for every I/O statement (open, close, read, and write) in a high-level language program.

Tracing has a significant effect on the performance of the current job. Time stamps shown may indicate longer processing within a program than actually occurred, but you can use the time stamps (system clock) to compare relative execution time with other programs or modules listed in the trace. Job trace also affects the performance of the system in general, but to a lesser extent. The job details of the specific AS/400 job (QZDAINIT) serving the particular run of the client/server application must be determined before attempting to start a job trace.
The data setup includes four steps:

1. Start Service Job (STRSRVJOB). Enter the STRSRVJOB command. The following display is shown:

                        Start Service Job (STRSRVJOB)

Type choices, press Enter.

Job name . . . . . . . . . . . .   QZDAINIT      Name
  User . . . . . . . . . . . . .   QUSER         Name
  Number . . . . . . . . . . . .   152492        000000-999999

2. Start Debug (STRDBG). Enter the STRDBG command and press F4 for the following display to appear:


                            Start Debug (STRDBG)

Type choices, press Enter.

Program  . . . . . . . . . . . .   *NONE         Name, *NONE
  Library  . . . . . . . . . . .                 Name, *LIBL, *CURLIB
             + for more values
Default program  . . . . . . . .   *PGM          Name, *PGM, *NONE
Maximum trace statements . . . .   200           Number
Trace full . . . . . . . . . . .   *STOPTRC      *STOPTRC, *WRAP
Update production files  . . . .   *YES          *NO, *YES

3. Start Trace (TRCJOB). Enter the TRCJOB command and press F4 to see the following display:
                              Trace Job (TRCJOB)

Type choices, press Enter.

Trace option setting . . . . . .   *ON           *ON, *OFF, *END
Trace type . . . . . . . . . . .   *ALL          *ALL, *FLOW, *DATA
Maximum storage to use . . . . .   16000         1-16000 K
Trace full . . . . . . . . . . .   *STOPTRC      *WRAP, *STOPTRC
Program to call before trace . .   *NONE         Name, *NONE
  Library  . . . . . . . . . . .                 Name, *LIBL, *CURLIB
Select procedures to trace:
  Program  . . . . . . . . . . .   *ALL          Name, *ALL, *NONE
    Library  . . . . . . . . . .                 Name, *LIBL, *CURLIB
  Type . . . . . . . . . . . . .                 *PGM, *SRVPGM
             + for more values

4. Change Job (CHGJOB) to produce a joblog.

CHGJOB JOB(nnnnnn/QUSER/QZDAINIT) LOG(4 00 *SECLVL)

8.2.4.2 End Trace Job/Debug Mode/Job Log
Enter the following commands to end the trace function and the service job:
• TRCJOB SET(*OFF) to end the trace and print a report.
• ENDDBG to exit debug mode.
• ENDSRVJOB to end the service job.


8.2.4.3 Performance Tools Job Trace
An alternative approach is to use the Start Job Trace function available with the Performance Tools/400 program product.

Start Job Trace: The Job Trace can be started through menu options from the main Performance Tools menu, or by executing the STRJOBTRC command.

PERFORM                    IBM Performance Tools/400
                                                          System: SYSASM01
Select one of the following:

  1. Select type of status
  2. Collect performance data
  3. Print performance report
  4. Capacity planning/modeling
  5. Programmer performance utilities
  6. Configure and manage tools
  7. Display performance data
  8. System activity
  9. Performance graphics
  10. Advisor

  70. Related commands

Selection or command
===> 5

F3=Exit   F4=Prompt   F9=Retrieve   F12=Cancel
F13=Information Assistant   F16=System main menu

Select option 5 from the PERFORM menu and the following display is shown:

                     Programmer Performance Utilities

Select one of the following:

  1. Work with job traces
  2. Work with program run statistics
  3. Select file and access group utilities
  4. Analyze disk activity

Selection or command
===> 1

F3=Exit   F4=Prompt   F9=Retrieve   F12=Cancel

The Job Trace menu appears:


                           Work with Job Traces

Select one of the following:

  1. Start job trace
  2. Stop job trace
  3. Print job trace reports

Selection or command
===> 1

F3=Exit   F4=Prompt   F9=Retrieve   F12=Cancel

Selecting option 1 shows the STRJOBTRC command prompt display. You can reach this same display by typing the STRJOBTRC command on a command line:

                        Start Job Trace (STRJOBTRC)

Type choices, press Enter.

Maximum storage  . . . . . . . .   1024          1-16000 K
Job name . . . . . . . . . . . .   QZDAINIT      Name, *
  User . . . . . . . . . . . . .   QUSER         Name
  Number . . . . . . . . . . . .   152492        000000-999999
                                                                      Bottom
F3=Exit   F4=Prompt   F5=Refresh   F12=Cancel   F13=How to use this display
F24=More keys

End Job Trace: After the target programs have been run, tracing must be turned off, the collected information recorded in a database file, and optional reports printed for analysis. The Print Job Trace (PRTJOBTRC) command can also be used to print the same report after tracing has stopped.
If you are using menus to perform the job tracing activity, return to the Work with Job Traces menu and select option 2.

                           Work with Job Traces

Select one of the following:

  1. Start job trace
  2. Stop job trace
  3. Print job trace reports

Selection or command
===> 2

F3=Exit   F4=Prompt   F9=Retrieve   F12=Cancel


Alternatively, you may enter the ENDJOBTRC command on any command line:

                         End Job Trace (ENDJOBTRC)

Type choices, press Enter.

Output file member . . . . . . .   QAJOBTRC      Name
Output file library  . . . . . .   QPFRDATA      Name
Report type  . . . . . . . . . .   *NONE         *NONE, *DETAIL, *SUMMARY
Report title . . . . . . . . . .   Meaningful_name
Starting sequence number . . . .   *FIRST        1-999999, *FIRST
Ending sequence number . . . . .   *LAST         Number, *LAST
Transaction ending program . . .   QT3REQIO      Name, QT3REQIO, *BATCH
Transaction starting program . .   QWSGET        Name
Job name . . . . . . . . . . . .   ENDJOBTRC     Name, ENDJOBTRC, *MBR
Job description  . . . . . . . .   QPFRJOBD      Name, *NONE
  Library  . . . . . . . . . . .   *LIBL         Name, *LIBL, *CURLIB

F3=Exit   F4=Prompt   F5=Refresh   F12=Cancel   F13=How to use this display
F24=More keys

It is possible to print the Job Trace report at the time the trace is completed by entering a Report type of *DETAIL or *SUMMARY and a Report title. Please refer to Appendix F of DB2/400 SQL Programming, SC41-3611, for more information on analyzing job log DEBUG information.

8.2.5 Client Access/400 Client Tools - ODBC Trace
An invaluable tool is available with the Client Access/400 Windows 3.1 client that traces the ODBC API calls generated during the execution of the program. The ODBC trace is invoked on the client PC. A knowledge of the programming language used to develop the client application is advantageous. The ODBC trace function creates a log of all ODBC calls made during an application run.

8.2.5.1 Start/End ODBC Trace
Follow these steps to set the ODBC trace option. The trace option is set to end automatically after the job is complete.

1. Open the ODBC Administrator window.
   • Click on the ODBC icon in the CA/400 Window.
2. Select the Data Source.
3. Click on the Options button.
4. In the ODBC Options window:
   • Check the Trace ODBC Calls check box.
   • Check the Stop tracing automatically check box.
5. Click OK.
6. Click Close.
7. The trace is written into SQL.LOG by default.
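When a trace covers more than a handful of transactions, a quick tally of which ODBC APIs dominate the log can focus the analysis before you read it line by line. The following is only an illustrative sketch, not part of Client Access/400: it assumes each logged call starts a line with its ODBC function name, which may not match the exact SQL.LOG layout your driver produces, so adjust the parsing accordingly.

```python
from collections import Counter

def tally_odbc_calls(log_text):
    """Count how often each ODBC API call appears in a trace log.

    Assumes each call is logged on its own line beginning with the
    function name (for example, "SQLExecDirect(...)"); real SQL.LOG
    layouts differ, so adapt the parsing to your driver's output.
    """
    counts = Counter()
    for line in log_text.splitlines():
        line = line.strip()
        if line.startswith("SQL"):
            # The function name runs up to the first '(', ':' or space.
            name = line.split("(")[0].split(":")[0].split()[0]
            counts[name] += 1
    return counts

# Fabricated three-line trace for illustration only:
sample = """SQLAllocStmt(hdbc, hstmt)
SQLExecDirect(hstmt, "SELECT * FROM ORDERS")
SQLExecDirect(hstmt, "SELECT * FROM LINES")
"""
print(tally_odbc_calls(sample))
```

A disproportionate count for a single call (for example, one SQLExecDirect per row inserted) is often the first hint of the serial-processing problems discussed in Chapter 1.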


8.2.6 Client Access/400 Client Tools - Start Debug
AS/400 Debug of the ODBC server job QZDAINIT can be started from the client program by calling a stored procedure. You can use debug to trace your job and write the trace information to the joblog. An example using Visual Basic is shown. See 5.3.28, “Using Stored Procedures to Run Commands” on page 164 for a complete discussion of calling AS/400 commands through stored procedures. 1. Add the following code to your program.

Dim aCommand As String
Dim a_hdbc As Long
Dim s_StoredProc As Long

aCommand = "call qsys.qcmdexc('STRDBG UPDPROD(*YES)',0000000020.00000)"
' Length must be passed as fixed decimal (15,5).
' Note: a_hdbc is the connection handle used in SQLAllocConnect.
rc = SQLAllocStmt(a_hdbc, s_StoredProc)
rc = SQLExecDirect(s_StoredProc, aCommand, SQL_NTS)
rc = SQLFreeStmt(s_StoredProc, SQL_DROP)
2. Run the program.
3. When the program is active, switch over to an AS/400 emulation session.
4. Enter the following command:

WRKOBJLCK TEAMxx *USRPRF
5. You are shown a list of jobs holding locks; select the QZDAINIT job. There may be several QZDAINIT jobs; you must try each one until you find the currently active job.
6. Enter option 5 for the QZDAINIT job.
7. Enter option 10 from the Work with Job display to view the job log. The job log shows you information about access paths, ODPs, and database operations. This may be useful for debugging performance problems.

8.2.7 Client Response Time Log
The performance analyst's work is aided by the availability of actual response times experienced at the client PC. If these are not available in the application, lines of code are added to the client application to provide a time stamp when the transaction is initiated, and another when the response is received. This gives the analyst a more accurate indication of client/server response time compared to user perceptions of response.

8.2.7.1 PC Response Logging Code
The following example shows code in Visual Basic that you can include in a program to determine the response time experienced by the client. Please note that this code is an example only, and should be used with caution. You should determine the applicability and impact of including code such as this into your application.


' ****************************************************************
' * General purpose timer module                                 *
' ****************************************************************
Option Explicit

Dim m_ccStart As Single     ' Start time in milliseconds.
Dim m_ccEnd As Single       ' End time in milliseconds.
Dim m_start As Single       ' Start time in seconds.
Dim m_end As Single         ' End time in seconds.
Dim m_DeltaTime As Single   ' Delta time in seconds.
Dim m_TotalTime As Single   ' Total time in seconds.

Sub starttimer (TotalTime As Single)
    If (TotalTime <> -1) Then
        m_TotalTime = TotalTime
    End If
    m_ccStart = Timer * 1000                  ' Start the clock.
End Sub

Sub EndTimer ()
    m_ccEnd = (Timer * 1000)                  ' Stop the clock.
    m_DeltaTime = (m_ccEnd - m_ccStart) / 1000
    m_TotalTime = m_TotalTime + m_DeltaTime   ' Set Total Time.
    m_start = m_ccStart / 1000
    m_end = m_ccEnd / 1000
End Sub

Function GetDeltaTime () As Single
    GetDeltaTime = m_DeltaTime
End Function

Function GetTotalTime () As Single
    GetTotalTime = m_TotalTime
End Function

' ***************************************************************
' Segment of code to output a transaction log
' ***************************************************************
Call starttimer(0)                       ' Start the timer.
aTimeStamp = Now
Call ***************                     ' DO THE WORK.
Call EndTimer                            ' End the timer.

If ilog <> 0 Then
    logpath = m_log
    Open logpath For Append As #ilog     ' Open the file.
    Print #ilog, frmNewOrder.txtOrder, Format$(GetDeltaTime(), "Standard"), _
        "Start Time = " & aTimeStamp
    Close #ilog                          ' Close the file.
End If
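Once the client has written such a transaction log, it can be summarized offline to turn individual timings into count, average, and worst-case figures. The following is a rough sketch, not part of the original application: it assumes each log line carries a transaction identifier first and the elapsed seconds second, separated by commas or whitespace; adjust the parsing to the layout your Print # statement actually produces.

```python
def summarize_response_log(lines):
    """Compute count, average, and maximum response time from a client log.

    Assumes the elapsed seconds appear as the second field on each line,
    delimited by commas or whitespace; headers and malformed lines are
    skipped rather than treated as errors.
    """
    times = []
    for line in lines:
        parts = line.replace(",", " ").split()
        if len(parts) >= 2:
            try:
                times.append(float(parts[1]))
            except ValueError:
                continue  # Not a data line; skip it.
    if not times:
        return None
    return {"count": len(times),
            "average": sum(times) / len(times),
            "maximum": max(times)}

# Fabricated log lines in the assumed layout:
sample = ["1001  0.42  Start Time = 09/14/95 08:01:12",
          "1002  1.87  Start Time = 09/14/95 08:01:25"]
print(summarize_response_log(sample))
```

Comparing such client-side maxima against the server-side Transaction Report for the same run helps separate network and client delays from AS/400 processing time.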

8.3 Print Reports
It is recommended that the following reports be printed to assist in the analysis:

• AS/400 Server Tools
  − Performance Tools
    - Advisor (display or printed output)
    - System
    - Component
    - Transaction Summary
    - Transaction Detail (specific job only)
    - Transition Detail (specific job only)
  − Communications Line Trace
  − Job Trace (Server)
  − Server (QZDAINIT/QZDASOINIT) JOBLOG (STRDBG Query Optimizer messages)
  − SQL Package
• Client Access/400 Client Application Tools
  − ODBC Trace
  − PC log (if available)

8.3.1 Performance Reports
The following topics discuss how to print the various reports on collected data. Examples are provided for some of the reports. A case study of the application is provided in Chapter 10, “Case Study” on page 351. However, the examples shown here should assist you in a client/server performance problem analysis.

8.3.1.1 Performance Tools/400, Advisor Function
Enter the command GO PERFORM and select the Advisor option (option 10).


PERFORM                    IBM Performance Tools/400
                                                          System: SYSASM01
Select one of the following:

  1. Select type of status
  2. Collect performance data
  3. Print performance report
  4. Capacity planning/modeling
  5. Programmer performance utilities
  6. Configure and manage tools
  7. Display performance data
  8. System activity
  9. Performance graphics
  10. Advisor

  70. Related commands

Selection or command
===> 10

F3=Exit   F4=Prompt   F9=Retrieve   F12=Cancel
F13=Information Assistant   F16=System main menu

Select the Advisor option, which presents a list of Performance Monitor data collections in the specified library as shown in the next display example.

                        Select Member for Analysis

Library  . . . . . .   Customer

Type option, press Enter.
  1=Select   5=Display

Option  Member       Text                        Date       Time
  1     CSPERFBOOK   Client/Server Performance   09/14/95   08:00:51
        SPEED73101   1st test on Jul 31          07/31/95   07:21:51
        SPD2602      Speed trace server          07/26/95   19:16:54
                                                                    More...
F3=Exit   F12=Cancel   F15=Sort by name   F16=Sort by text
F19=Sort by date/time

After selecting the performance member, you are presented with a list of summary performance statistics for time intervals. You may select all intervals to be analyzed or a subset of time intervals. The Advisor analyzes the data collected and produces a set of messages under Recommendations and Conclusions headings. An example of the first-level message output is shown in the following display.


                          Display Recommendations
                                                          System: SYSASM01
Member . . . . . . . :  CSPERFBOOK    Library . . . . . :  CUSTOMER
System . . . . . . . :  SYSASM01      Version/Release . :  3/ 1.0
Start date . . . . . :  07/19/95      Model . . . . . . :  D60
Start time . . . . . :  08:11:16      Serial number . . :  10-15181

Type options, press Enter.
  5=Display details

Option  Recommendations and conclusions
        Recommendations
  __    Decrease pool size for listed pools.
  __    Increase pool size for listed pools.
  __    ASP space capacity exceeded guideline of 80.0%.
  __    Separate batch from interactive jobs.
        Conclusions
  __    Pools may possibly be removed.
  __    Pool fault rates exceeded guideline.
  __    Pool fault rates below guideline.
  __    Pool fault rate zero in all intervals.
                                                                    More...
F3=Exit   F6=Print   F9=Tune system   F12=Cancel   F21=Command line

We recommend using the Advisor as an initial view of overall system performance data. Review the message details to determine what additional performance analysis should be performed. You may also invoke the Advisor function and output by using the ANZPFRDTA command. The System Report and Component Report contain much more overall system performance information, but review of Advisor output can often speed up further analysis of the Performance Tools reports. The Component report assists in identifying relative CPU utilizations as a percentage of the total run time of the job, and can be used to compare the CPU usage and disk I/Os with that of other interactive jobs. However, this comparison is valid only if the client/server application was actively processing for the entire duration of the test.

8.3.1.2 Performance Tools/400, Printed Reports
Enter the command GO PERFORM and select the option to print performance reports.


PERFORM                    IBM Performance Tools/400
                                                          System: SYSASM01
Select one of the following:

  1. Select type of status
  2. Collect performance data
  3. Print performance report
  4. Capacity planning/modeling
  5. Programmer performance utilities
  6. Configure and manage tools
  7. Display performance data
  8. System activity
  9. Performance graphics
  10. Advisor

  70. Related commands

Selection or command
===> 3

F3=Exit   F4=Prompt   F9=Retrieve   F12=Cancel
F13=Information Assistant   F16=System main menu

When the Print Performance Report display appears, type the library name you collected the data in, and press Enter. The data from the different performance collections is shown:

                         Print Performance Report

Library  . . . . . .   Customer

Type option, press Enter.
  1=System report       2=Component report   3=Transaction report
  4=Lock report         5=Job report         6=Pool report
  7=Resource report     8=Batch job trace report

Option  Member       Text                        Date       Time
  3     CSPERFBOOK   Client/Server Performance   09/14/95   08:00:51
        SPEED73101   1st test on Jul 31          07/31/95   07:21:51
        SPD2602      Speed trace server          07/26/95   19:16:54
                                                                    More...
F3=Exit   F5=Refresh   F11=Work with your spooled output files
F12=Cancel   F15=Sort by member   F16=Sort by text

Select the required data collection and print the following reports by selecting the appropriate option value:
• System Report (all jobs)
• Component Report (all jobs)
• Job Summary Report (all jobs)
• Transaction Detail Report (selected jobs ONLY)
• Transition Report (selected few jobs only)

This redbook shows examples of the Component Report, Job Summary Report, Transaction Detail Report, and Transition Report. Other reports such as the System Report and Resource (Interval) Report may also be useful in specific cases. For example, the Resource Report can be used to assess line utilization during the different Performance Monitor time intervals.


The System Report provides a good overview of overall system resource utilization. Figure 65 on page 284 shows a page out of the Component Report. This page is part of the "Job Workload Activity" section and includes Client Access/400 database server jobs QZDAINIT/QUSER/nnnnnn. Job 152492 is highlighted because it did significant work, as indicated by its CPU utilization and disk I/O (Sync, Async, and Logical) counts. Logical I/Os are the I/Os issued by the AS/400 program itself. They are called logical because they may be satisfied from data already in main storage or may cause a physical I/O to a disk device. In general, running the same application with the same number of records/rows repetitively should result in the same logical I/O count each run. Physical I/Os mean data was actually read from or written to a disk device. Physical I/Os may be synchronous or asynchronous. High synchronous disk I/O counts can be an indicator of poor performance because the application has to wait for the synchronous disk I/O to complete before continuing processing. High synchronous disk I/Os can be an indication of poor application coding or poor use of indexes by SQL. Significantly reducing synchronous disk I/Os may often improve performance. High asynchronous disk I/O counts are normally not responsible for poor performance because they do not usually cause the job to wait for the disk I/O completion. Remember that the Component Report shows the number of disk I/Os over the time the Performance Monitor was collecting data, so a large disk I/O count may not indicate a disk bottleneck if the monitor was active for 30 minutes or more. The Transaction Report (based on trace data) for a specific job shows the disk I/Os per transaction, which can be a more accurate indication of synchronous disk I/O impact on performance.
Refer to Chapter 7, “Client/Server Performance Tuning” on page 235 for possible ways to reduce synchronous disk I/Os by use of Expert Cache or the Set Object Access (SETOBJACC) command. You also see another QZDAINIT job, 152568, that did very little work. In an actual customer environment, you may see many QZDAINIT jobs doing significant work and others doing almost no work. Because subsystem QSERVER is defined to support prestarted jobs for database serving, you see all of the currently active QZDAINIT jobs on the Component Report, even if some of them are just waiting for a remote client to perform ODBC functions with the AS/400 system. When doing capacity planning with BEST/1, you need to ensure you include only the database server jobs that actually did significant work in any workload definitions. Note also that the busy QZDAINIT job 152492 had no transactions ("Tns") counted. This is because the system does not identify transactions for non-interactive jobs in the Performance Monitor summary level data, which is used by the Component Report. This is a known restriction with no known removal date. It is important to understand this because BEST/1 uses the same summary level data for its modeling. See Chapter 9, “Client/Server Capacity Planning” on page 313 for additional capacity planning information.
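As a back-of-the-envelope check of whether a raw synchronous I/O count signals a bottleneck, divide it by the length of the collection interval. The figures below are taken from the Component Report example (job 152492: 3683 synchronous I/Os over a collection of roughly 11 minutes); the helper function name is our own, purely for illustration.

```python
def sync_io_rate(sync_ios, interval_minutes):
    """Average synchronous disk I/Os per second over a collection interval.

    A large raw count can look alarming, but spread over a long
    Performance Monitor interval the steady-state rate may be modest.
    """
    return sync_ios / (interval_minutes * 60.0)

# Job 152492 from the Component Report example: 3683 synchronous I/Os
# collected over roughly 11 minutes (08:01:49 to 08:12:37).
print(round(sync_io_rate(3683, 11), 1))  # about 5.6 I/Os per second
```

A sustained rate of only a few synchronous I/Os per second is rarely a disk bottleneck by itself, which is why the per-transaction figures in the Transaction Report are the better indicator.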

284
9/14/95 8:15:37 Page 9 . . : . . : EAO PAG Arith Perm Excp Fault Ovrflw Write ----- ----- ------ ----0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 3 0 0 0 3 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 5 0 0 0 3 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 5 0 0 0 0 0 0 0 109 0 11 09/14/95 08:01:49 09/14/95 08:12:37 .3 .0 .0 .0 .0 .0 .0 .0 .0 .0 .0 .0 .0 .0 .0 .0 .0 SYSASM01 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 .00 .00 .00 .00 .00 .00 .00 .00 .00 .00 .00 .00 .00 .00 .00 .00 .00 352 6 1 1 1 4 1 1 1 3 2 2 2 2 2 2 2 134 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 965 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 126 19 0 1 1 1 4 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 165 5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

AS/400 Client/Server Performance

Component Report
Job Workload Activity
Client/Server Performance Redbook
Member . . . : CSPERFBOOK   Model/Serial . : D60 /10-15181   Main storage . . : 80.0 MB   Started . .
Library . . : CSPERF95      System name . . : SYSASM01       Version/Release : 3/ 1.0     Stopped . .

Job         User       Job     Typ Pl Pty  CPU    Tns  Tns    Rsp  ------- Disk I/O -------  Cmn
Name        Name       Number             Util        /Hour        Sync  Async  Logical      I/O
----------  ---------  ------  --- -- ---  ----   ---  -----  ---  ----  -----  -------      ---
QTFTP01312  QTCP       152180  B   02  25   .0      0      0  .00     1      0        0        0
QTFTP02261  QTCP       151912  B   02  25   .0      0      0  .00     1      0        0        0
QTFTP02457  QTCP       151913  B   02  25   .0      0      0  .00     1      0        0        0
QTFTP12852  QTCP       151909  B   02  25   .0      0      0  .00     1      0        0        0
QTGTELNETS  QTCP       151907  B   02  20   .0      0      0  .00     4      0        0        0
QTGTELNETS  QTCP       152017  B   02  20   .0      0      0  .00     4      0        0        0
QTHTT00910  QTMHHTTP   152232  B   02  25   .0      0      0  .00     1      0        0        0
QTHTT01609  QTMHHTTP   152229  B   02  25   .0      0      0  .00     4      0        0        0
QTHTT01861  QTMHHTTP   152231  B   02  25   .0      0      0  .00     1      0        0        0
QTLPD18608  QTCP       151915  B   02  25   .0      0      0  .00     4      0        0        0
QTLPD18734  QTCP       151916  B   02  25   .0      0      0  .00     4      0        0        0
QTMSNMP     QTCP       151906  B   02  35   .5      0      0  .00    12      0        0        0
QTMSNMPRCV  QTCP       151914  B   02  50   .0      0      0  .00     8      0        0        0
QTQVT00898  QTMT5250   152243  B   02  25   .0      0      0  .00     1      0        0        0
QTQVT03683  QTMT5250   152240  B   02  25   .0      0      0  .00     3      0        0        0
QTQVT03811  QTMT5250   152241  B   02  25   .0      0      0  .00     1      0        0        0
QTSMTPBRCL  QTCP       151919  B   02  50   .0      0      0  .00     1      0        0        0
QTSMTPBRSR  QTCP       151920  B   02  50   .0      0      0  .00     1      0        0        0
QTSMTPCLNT  QTCP       151918  B   02  50   .0      0      0  .00     1      0        0        0
QTSMTPSRVR  QTCP       151911  B   02  50   .0      0      0  .00     8      0        0        0
QVARRCV     QSVMSS     151852  B   02  20   .0      0      0  .00     0      0        0        0
QVATTMGR    QSVMSS     151900  B   02  35  2.9      0      0  .00   274     62      506        0
QZDAINIT    QUSER      152492  C   05  20  5.0      0      0  .00  3683    633      288        0

(The report continues with rows for QZDAINIT, QZDSTART, QZRCSRVR, QZSCSRVR, the Q1C* AutoMon jobs, and the SNADS/QGATE jobs; their column detail was scattered in this copy and is not reproduced.)

This soft copy for use by IBM employees only.

Figure 65. Component Report Example


There is other system level information on the component report not shown in this example. The Transaction Report, *SUMMARY option uses Performance Monitor trace data. Typically, you use the transaction report information when system-wide tuning indicates that a performance problem still exists, or is localized to one or two application types running on the system. We recommend including all jobs on the *SUMMARY option and comparing the transaction summary information with the Component Report Job Workload Activity information when focusing on one or more applications. The following example displays appear after selecting option 3 (Transaction Report) on the Print Performance Report menu (shown earlier in this chapter).

                     Print Transaction Report (PRTTNSRPT)

Type choices, press Enter.

Member . . . . . . . . . . . . . > a_name        Name
Report title . . . . . . . . . . > 'Customer_name,date,time'
Report type  . . . . . . . . . .   *SUMMARY      *SUMMARY, *TNSACT, *TRSIT...
               + for more values
Time period for report:
  Starting time  . . . . . . . .   *FIRST        Time, *FIRST
  Ending time  . . . . . . . . .   *LAST         Time, *LAST

                        Additional Parameters

Library  . . . . . . . . . . . . > Customer Name

F3=Exit   F4=Prompt   F5=Refresh   F10=Additional parameters   F12=Cancel
F13=How to use this display   F24=More keys

After specifying *SUMMARY and a meaningful Report title description, press Enter to produce the Job Summary Report for all active jobs. Select Option 3 again to print the Transaction and Transition Reports for the specific QZDAINIT server job that demonstrates the performance problem being analyzed.

Do not press the Enter key until you have read all of the following instructions, or you might have very large reports to print and manage.

When the following display is shown, type *TNSACT (for the Transaction Report) and *TRSIT (for the Transition Report), and press F10 for additional parameters.

Chapter 8. Client/Server Performance Analysis


                     Print Transaction Report (PRTTNSRPT)

Type choices, press Enter.

Member . . . . . . . . . . . . . > a_name        Name
Report title . . . . . . . . . . > 'Customer_name,date,time'
Report type  . . . . . . . . . .   *TNSACT       *SUMMARY, *TNSACT, *TRSIT...
               + for more values   *TRSIT
Time period for report:
  Starting time  . . . . . . . .   *FIRST        Time, *FIRST
  Ending time  . . . . . . . . .   *LAST         Time, *LAST

                        Additional Parameters

Library  . . . . . . . . . . . . > Customer Name

F3=Exit   F4=Prompt   F5=Refresh   F10=Additional parameters   F12=Cancel
F13=How to use this display   F24=More keys

When the following display appears, enter the selected server job number next to the "Select jobs" prompt.

                     Print Transaction Report (PRTTNSRPT)

Type choices, press Enter.

Report option  . . . . . . . . .   *SS           *SS, *SI, *OZ, *EV, *HV
               + for more values
Select jobs  . . . . . . . . . .   152492        Character value, *ALL
               + for more values
Omit jobs  . . . . . . . . . . .   *NONE         Character value, *NONE
               + for more values
Select users . . . . . . . . . .   *ALL          Name, generic*, *ALL
               + for more values
Omit users . . . . . . . . . . .   *NONE         Name, generic*, *NONE
               + for more values
Select pools . . . . . . . . . .   *ALL          1-16, *ALL
               + for more values
Omit pools . . . . . . . . . . .   *NONE         1-16, *NONE
               + for more values
Select functional areas  . . . .   *ALL
               + for more values
                                                                     More...
F3=Exit   F4=Prompt   F5=Refresh   F12=Cancel   F13=How to use this display
F24=More keys

You can select more than one job, but when the clients all run the same application, one job is typically enough.

Press Enter now.

Figure 66 on page 287 is an example of the Transaction Report Job Summary section that includes the QZDAINIT job (152492) previously shown on the Component Report. Note that the system considers job 152492 a BJ (batch prestarted) job and, although it did significant work, records no transactions (Tot Nbr Tns) for it.


(Figure 66, not reproduced in full: the Transaction Report Job Summary section for member CSPERFBOOK on system SYSASM01, collected 09/14/95 08:03:05 to 08:12:45. For each job it lists pool, type, priority, transaction counts, response and CPU seconds, and average disk I/O per transaction. The job list includes QZDAINIT/QUSER/152492 (pool 05, type BJ, priority 20), which shows maximum per-transaction counts of 3640 synchronous and 631 asynchronous disk I/Os. The remaining column detail was scattered in this copy and is not reproduced.)


Figure 66. Transaction Report - Job Summary Example


In our test environment, we know that job 152492 was run from the client workstation named VP856. Job VP856/CS15/152370 is the APPC BE (batch evoke) router job from the Windows 3.1 client, and job VP856S1/CS15/152394 is the RUMBA/400 5250 workstation emulation I (interactive) job. Note the average number of synchronous disk I/Os per transaction for job 152492 (ODBC server) and job 152394. Counts are shown for the 5250 interactive job 152394, but not for the prestarted database serving job 152492. This is a known restriction in the data collected by the Performance Monitor. However, notice that the "Max" (per transaction) counts are shown for the ODBC database server job 152492. The abnormally high counts of 3640 synchronous and 631 asynchronous disk I/Os indicate that further investigation of this job is required: you need to look at the Transaction Report and the Transition Report for job 152492.

Although not shown in this example, the Job Summary contains other useful information. We recommend that you review at least one other section, "Longest Seize/Lock Conflict," to see if any other jobs are holding locks on objects such as the SQL tables/files used by the ODBC jobs. Seize/lock times consistently greater than .2 seconds can significantly degrade performance. If you see this, consider whether the application is locking rows/records for update inappropriately, or whether another application is making unanticipated use of the table/file. Higher levels of database commitment control can also be responsible for high or frequent seize/lock time values. See Chapter 1, “Application Design” on page 1 for more database information.

The Batch Job Analysis section provides information on the utilization of some of the main AS/400 resources, such as CPU and disk accesses. These values are used to evaluate the workload introduced by the QZDAINIT server jobs.
Provided the number of client transactions completed during the test is known, you can calculate the average AS/400 CPU and disk resource usage for each client transaction and compare it to guidelines for normal interactive transactions. Also, if the client/server application was actively processing for the entire duration of the test, the average CPU percentage used can be compared to that of other interactive users.

Figure 67 on page 289 is an example of the Transaction Report - Job Transaction report for QZDAINIT job 152492. Selected pages of the actual report are shown in this figure. This report identifies APPC transactions and the associated CPU seconds (CPU Sec Per Tns), physical disk I/Os per transaction, and transaction response time.
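As an illustration of that calculation, the following sketch simply divides the measured totals by the transaction count. The numbers used here are hypothetical and are not taken from the reports in this chapter.

```python
def per_transaction_usage(cpu_seconds, sync_io, async_io, transactions):
    """Average AS/400 resource usage per client transaction.

    cpu_seconds  -- total CPU seconds the server job consumed during the test
    sync_io      -- total synchronous disk I/Os charged to the job
    async_io     -- total asynchronous disk I/Os charged to the job
    transactions -- client transactions completed in the test window
    """
    return {
        "cpu_sec_per_tns": cpu_seconds / transactions,
        "sync_io_per_tns": sync_io / transactions,
        "async_io_per_tns": async_io / transactions,
    }

# Hypothetical test: 200 client transactions, 48.4 CPU seconds,
# 3640 synchronous and 631 asynchronous disk I/Os.
usage = per_transaction_usage(48.4, 3640, 631, 200)
print(round(usage["cpu_sec_per_tns"], 3))  # 0.242
```

The resulting per-transaction figures can then be set against the interactive transaction guidelines mentioned above.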


(Figure 67, not reproduced in full: the Transaction Report - Job Transaction report for job QZDAINIT/QUSER/152492, collected 09/14/95 08:03:05 to 08:12:45 on system SYSASM01. Each line shows, for one APPC transaction, the time stamp, program name (QZDAINIT), CPU seconds per transaction, physical disk I/O counts, and transaction response time. The entries run from 08.03.30 through 08.06.03 and include the long transaction at time stamp 08.05.07, with a 17.306-second response time. The column detail was scattered in this copy and is not reproduced.)

Figure 67. Transaction Report - Job Transaction Report Example


The Performance Monitor started collecting data for job 152492 at 08:03:30. Program QZDAINIT is the OS/400 Client Access/400 SNA database server mainline program that the "transactions" are charged to. A Trace Job report shows that program/module QZDACMDP is the primary database server program (the "command processor") that runs just below QZDAINIT. QZDACMDP acts as a "router" of incoming requests to OS/400 modules and of outgoing responses to the client: it receives all incoming ODBC function requests and calls the appropriate Client Access/400 program, OS/400 SQL program, or other OS/400 program to perform the specific function. All data to be sent back to the requesting ODBC client is sent by QZDACMDP.

The leftmost column under the heading Transaction Response Time (Sec/Tns) indicates the transaction response time within the AS/400 system. In our test results, this response time corresponds within .1 seconds to the IOP time stamps listed in the communications trace output, which show when the ODBC request was received by the AS/400 system and when the response was sent from it. For each APPC transaction, you can observe the number of disk I/Os and the time spent in the activity level. In almost all cases, the Transaction Response Time and Active time should be identical. Differences between these two values can be attributed to conditions such as a high-priority LIC task taking over the CPU, or abnormally long waits, such as waiting for a lock on a needed object to be released.

Note that there may be more APPC transactions than what the end user considers a transaction. How an APPC transaction correlates with an end user transaction depends on the client application implementation and, in some cases, the OS/400 implementation. For example, when the client application connects to the AS/400 system, multiple APPC transactions may be necessary to complete the connection.
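The command-processor role described above can be pictured as a dispatch table that maps incoming function requests to handlers and returns each reply to the client. This is a conceptual sketch only: the request format and handler names are invented for illustration, and the real QZDACMDP interfaces are internal OS/400 code not documented here.

```python
# Conceptual model of a "command processor" routing client requests,
# in the way QZDACMDP routes incoming ODBC function requests to the
# appropriate OS/400 program and sends the response back to the client.

def handle_prepare(payload):
    # A real server would build an access plan and store it in the SQL package.
    return {"index": payload["statement_id"], "status": "prepared"}

def handle_execute(payload):
    # A real server would run the prepared statement named by its index.
    return {"index": payload["statement_id"], "status": "executed"}

HANDLERS = {
    "PREPARE": handle_prepare,
    "EXECUTE": handle_execute,
}

def route(request):
    """Dispatch one incoming request to its handler and return the reply."""
    handler = HANDLERS.get(request["function"])
    if handler is None:
        return {"status": "unsupported function"}
    return handler(request["payload"])

reply = route({"function": "PREPARE", "payload": {"statement_id": "0002"}})
print(reply["status"])  # prepared
```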
Another example is a client ODBC implementation that sends each line of an order to the AS/400 server, compared to an implementation that sends a complete screen page of line items to the AS/400 system in a single APPC transaction.

At time stamp 08.05.07, you can see a long-running transaction on the AS/400 system: many disk I/Os and a 17.306-second response time. Our job trace and communications trace showed that this included creation of the SQL package and insertion of the first SQL Prepare statement into the package. From time stamps 08.05.24 through 08.05.52, you can observe AS/400 response times of one, two, and, in one case, four seconds. This covers the processing of multiple SQL Prepare statements received from the client. After the Prepares have been processed, you see very good average response times and minimal disk I/Os in time stamps 08.05.53 through 08.05.58. Through other trace information, we determined that time stamps 08.05.58 through 08.06.02 were the result of the client application sending another SQL Prepare request.

Figure 68 on page 292 is an example of the Transition Report for QZDAINIT job 152492. A page of the actual report has been selected and shown in this figure. This report also identifies APPC transactions and correlates well with the time stamps in the Transaction Report for this job.


Note: Both the Transition Report and the ODBC Trace examples shown later in this chapter represent data collected for an ODBC order entry application. A description of the application and communications line trace examples are shown in Chapter 10, “Case Study” on page 351. A more complete communications line trace example is contained in Appendix B, “Communications Trace Examples” on page 429. The ODBC topic 8.3.2, “ODBC API Trace Example” on page 293, shows the full SQL statement syntax (SELECT, INSERT, and so on) represented by "index" values such as 0002 and 0004, shown in the Transition Report and ODBC API trace examples in this chapter. Additional ODBC Trace examples are contained in Chapter 10, “Case Study” on page 351 and Appendix C, “ODBC Trace Example” on page 467.

The Transition Report time stamps 08.05.07.503 and 08.05.24.808 indicate the AS/400 system's response to the first SQL Prepare statement, represented by 0002: 17.306 seconds. Time stamps 08.05.24.873 and 08.05.26.906 indicate the AS/400 system's response to the SQL Prepare statement represented by 0004. Similarly, time stamps 08.05.26.968 and 08.05.29.139 show the AS/400 response to the SQL Prepare statement represented by 0005.

You can see that APPC program QZDAINIT is charged with the transaction by the ---------- QZDAINIT entries in the left portion of the report. You can also see that the program interfacing to the Licensed Internal Code is the command processing program QZDACMDP, listed under the "Last" sub-heading of the heading "Last 4 Programs in Invocation Stack" on the right side of the report. Though not shown in this figure, subsequent Transition Report entries show AS/400 response times for the other SQL Prepare statements and for the actual execution of the prepared SQL statements. Refer to 8.3.2, “ODBC API Trace Example” on page 293 for a definition of the actual SQL statements. The 000n "indexes" are also correlated in the line trace shown in Appendix B, “Communications Trace Examples” on page 429.
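The response times above are simply the differences between adjacent transition time stamps. As a quick check, this sketch parses two time stamps in the report's HH.MM.SS.mmm form and computes the elapsed seconds:

```python
def elapsed_seconds(start, end):
    """Elapsed time between two Transition Report time stamps
    written as HH.MM.SS.mmm (for example, 08.05.07.503)."""
    def to_seconds(stamp):
        h, m, s, ms = (int(part) for part in stamp.split("."))
        return h * 3600 + m * 60 + s + ms / 1000.0
    return to_seconds(end) - to_seconds(start)

# 08.05.07.503 -> 08.05.24.808: the long Prepare shown above.
# (The report's own 17.306 figure differs by a millisecond of rounding.)
print(round(elapsed_seconds("08.05.07.503", "08.05.24.808"), 3))  # 17.305
```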


(Figure 68, not reproduced in full: the Transition Report for job QZDAINIT/QUSER/152492, job type BJ, collected 09/14/95 08:03:05 to 08:12:45 on system SYSASM01. Each entry shows a wait/active state transition with its time stamp, elapsed time, CPU seconds, and physical disk I/O counts, and lists the last four programs in the invocation stack (QZDACMDP under QZDAINIT) at the right. Recoverable entries include *TRACE ON at 08.03.04.824, Start of SQL Statement Preparation at 08:05:07:479, End of Prepare #0002 at 08:05:24:706 (the 17.306-second transaction), End of Prepare #0004 at 08:05:26:831 (2.033 seconds), and End of Prepare #0005 at 08:05:29:118 (2.170 seconds). The remaining column detail was scattered in this copy and is not reproduced.)

Figure 68. Transaction Report - Job Transaction Transition Example


This "transaction analysis through the transaction reports" provides a valuable understanding of client/server application performance (ODBC database serving, in this case). However, in most ODBC applications, enabling the OS/400 Query Optimizer to select the most efficient data access method and minimizing unnecessary ODBC requests from the client remain the most important factors in achieving the best performance possible.

8.3.2 ODBC API Trace Example
The ODBC trace data is logged in a PC file (SQL.LOG on the C drive by default). This file can be printed directly on an attached PC printer (using COPY SQL.LOG LPT1, for example), or printed on an AS/400 system attached printer using the virtual print facility of CA/400. In the latter case, consider using the QSYSPRT printer file in QSYS to set up the printer connection on the AS/400 system.

The ODBC trace provides a good basis for the review because it lists the ODBC API calls made during execution of the client/server application, from the view of the ODBC API programmer. However, the trace lacks time stamps, and note that not all ODBC calls are sent over the communications line to the AS/400 system. Use the ODBC trace and a knowledge of the application to determine the key processing steps; the application program code can also assist in the analysis of the trace. Examples of key transactions are:
•  Connection to the AS/400 system
•  SQL statement preparation (if any)
•  SQL statements for execution on the server
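Because the trace is a plain text log of API calls, even a small script can summarize it, for example by counting how often each ODBC API appears. The log lines below are a simplified stand-in for real SQL.LOG contents, not actual trace output.

```python
from collections import Counter

def summarize_trace(lines):
    """Tally ODBC API calls in an SQL.LOG-style trace.

    Each trace line is assumed to start with the API name followed by
    its argument list, e.g. 'SQLPrepare(hstmt65770000, ...)'.
    A high SQLExecDirect count relative to SQLPrepare/SQLExecute hints
    that parameter markers are not being used.
    """
    counts = Counter()
    for line in lines:
        name = line.strip().split("(", 1)[0]
        if name.startswith("SQL"):
            counts[name] += 1
    return counts

sample = [
    "SQLAllocStmt(hdbc559F0000, phstmt65770000);",
    "SQLPrepare(hstmt65770000, \"Select ... where STWID=? and STIID=?\");",
    "SQLExecute(hstmt65770000);",
    "SQLExecute(hstmt65770000);",
]
print(summarize_trace(sample)["SQLExecute"])  # 2
```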

The corresponding SQL cursor names (for example, CSR0002) have been added to the ODBC Trace example to assist in reconciling this report with the Communications Trace and the Transition Report for job 152492 shown elsewhere in this chapter. These values do not appear on the actual ODBC trace output. Refer to Chapter 5, “Client/Server Database Serving” on page 119 for additional details on the ODBC de facto standard and the Visual Basic APIs used in the trace example shown in Figure 69. The following list discusses key ODBC APIs shown in Figure 69 on page 297.

The allocate APIs, SQLAllocEnv(phenv68970000) through SQLAllocStmt(hdbc559F0000, phstmt48970000), allocate working areas for:
−  The ODBC environment (phenv68970000)
−  The specific connection (phdbc559F0000) to the AS/400 system (SYSASM01) within the ODBC environment
−  Each specific SQL statement (phstmt65770000 through phstmt48970000) within the connection (hdbc559F0000)

These allocations are required by the ODBC "standard".

SQLConnect(hdbc559F0000, "SYSASM01", ...) initiates the connection to the AS/400 data source SYSASM01. In AS/400 terminology, the AS/400 system receives the program start request.

SQLSetCursorName sets the SQL cursor (pointer) for the SQL statement contained in statement handle phstmt65770000.


SQLPrepare for SQL statement hstmt65770000 is defined to SELECT columns from table/file CSDB/STOCK FOR UPDATE OF. Good performance technique is indicated by using parameter markers for all WHERE values: STWID=? and STIID=?. This SQL Prepare statement is sent directly to the AS/400 server. In Appendix B, “Communications Trace Examples” on page 429, this corresponds to LAN frame number 3340.

SQLPrepare for SQL statement hstmt699F0000 is defined to SELECT columns from table/file CSDB.ITEM. Good performance technique is indicated by using parameter markers for all WHERE values: IID IN (?, ?, ...). When analyzing an ODBC trace, one of the most important things to look for is the use of parameter markers. Any statement that is executed multiple times should be prepared at the beginning of the program and then executed using SQLExecute. If you find a large number of SQLExecDirect calls that pass the variables in as literals, it is time to talk to the application programmer. If an application generator was used to build the ODBC application, then the tool provider must address this issue. Use of SQLExecDirect is fine for one-time execution of a statement. This SQL Prepare statement is sent directly to the AS/400 server. In Appendix B, “Communications Trace Examples” on page 429, this corresponds to LAN frame number 3552.
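The prepare-once/execute-many pattern is not AS/400-specific. The sketch below shows the same idea using Python's built-in sqlite3 module as a stand-in for an ODBC connection; the table and column names are invented for illustration. The '?' placeholders play the role of the parameter markers discussed above.

```python
import sqlite3

# In-memory database standing in for the AS/400 data source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stock (stwid INTEGER, stiid INTEGER, stqty INTEGER)")
conn.executemany("INSERT INTO stock VALUES (?, ?, ?)",
                 [(1, 10, 500), (1, 11, 250), (2, 10, 75)])

# Good: one statement with parameter markers, executed repeatedly.
# The SQL text (and, on the AS/400, its access plan) is reused;
# only the parameter values change on each execution.
query = "SELECT stqty FROM stock WHERE stwid = ? AND stiid = ?"
for wid, iid in [(1, 10), (1, 11), (2, 10)]:
    qty = conn.execute(query, (wid, iid)).fetchone()[0]
    print(wid, iid, qty)

# Poor: building a new literal statement for each execution, the
# equivalent of repeated SQLExecDirect calls with literals instead
# of parameter markers.
qty = conn.execute("SELECT stqty FROM stock WHERE stwid = 1 AND stiid = 10").fetchone()[0]
```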

SQLPrepare for SQL statements hstmt439F0000 through hstmt65970000 define the Prepares for the SELECTs and INSERTs to the other tables/files used in the order entry application. Good performance technique is indicated by using parameter markers for all WHERE values and INSERT values. These SQL Prepare statements are sent directly to the AS/400 server; they are not shown in Appendix B, “Communications Trace Examples” on page 429.

All of these SQL Prepares are sent before the actual order processing functions are enabled for the client workstation user. They follow the guideline that Prepares should be used for SQL statements that are to be used repetitively throughout the application. This is similar to the familiar 5250 workstation application technique of opening files before the workstation user begins the application. Note that this application always sends the Prepares. If the SQL statement is not already in the SQL package, system overhead is incurred to insert an access plan for the SQL statement into the package. If the SQL statement is already in the package, the Prepare function completes quickly.

SQLSetCursorName for SQL statement hstmt65670000, and SQLGetCursorName for hstmt65770000 and hstmt65670000, set up cursor positioning for the specified SQL statements and tables/files.

The four SQLBindParam statements for hstmt439F0000 (SQL statement CRSR0005: Select CLAST, CDCT, CCREDT, WTAX from CSDB.CSTMR, CSDB.WRHS where CWID=? and CDID=? and CID=? and WID=?) set up the execution-time values for the parameter markers.


Note: Visual Basic moves information around in memory. We recommend not binding the parameters to static storage locations, but passing in the parameters at execution time. In a language such as C or C++, it is acceptable to bind the parameters to storage locations. In this case, the pcbValue on the bind statements contains the value SQL_DATA_AT_EXEC. Refer to 5.3.21.3, “Executing Prepared Statements” on page 148 for more details on this technique.

The SQLExecute for hstmt439F0000 (SQL statement 0005) is ready to be sent to the AS/400 system. Because we are using Visual Basic and passing parameters at execution, we use the SQLParamData and SQLPutData combination for each of the four parameter marker variables after the SQLExecute statement to pass in the parameters. The fifth SQLParamData call (after all parameters have been passed in) causes the execution of the statement on the AS/400 system.

SQLParamData(hstmt439F0000, prgbValue) causes the execution of the SQL statement represented by 0005 to be sent to the AS/400 server.

The 0005 is the actual "index" value returned from the AS/400 server after successfully processing the Prepare SQL statement represented by CRSR0005 in this example. So, for actual execution of the SQL request, this index is shipped to the AS/400 system (along with any necessary data) rather than the complete SQL statement. This corresponds to communications trace frame 3827.

1  SQLFetch and four SQLGetData statements retrieve the four fields/columns received from the AS/400 server into program variables. Only one record/row was received.

Note: Again, because we are using Visual Basic, we do not bind the output columns to static storage locations. We instead use SQLGetData after the SQLFetch to move the columns to storage. If we used a language such as C or C++, we would use SQLBindCol to cause the SQLFetch statement to place the column data in the proper storage locations automatically. See 5.3.21.5, “Retrieving Results” on page 149 for a more detailed discussion of these techniques.

2  The 15 SQLBindParam statements for hstmt699F0000 (Select IID, INAME, IPRICE, IDATA from CSDB.ITEM where IID in (?, ...)) set up the parameter marker values. The SQLExecute for hstmt699F0000 (SQL statement 0004) is ready to be sent to the AS/400 system. Again, with Visual Basic, we use the SQLParamData and SQLPutData combination for each of the 15 parameter marker variables.

3  The last SQLParamData(hstmt699F0000, prgbValue) causes the execution of the SQL statement represented by 0004 to be sent to the AS/400 server. The 0004 is the actual "index" value returned from the AS/400 server after successfully processing the Prepare SQL statement represented by CRSR0004 in this example. So, for actual execution of the SQL request, this index is shipped to the AS/400 system (along with any necessary data) rather than the complete SQL statement. This Execute corresponds to communications trace frame 3850.


4  SQLFetch and four SQLGetData statements retrieve the four fields/columns received from the AS/400 server into program variables. The SQLFetch and four SQLGetData statements are repeated until all rows of the result set have been retrieved by the client ODBC program. In this case, 10 rows are retrieved; the eleventh SQLFetch receives a "no more data" indication.

5  The two SQLBindParam statements for hstmt65770000 (SQL statement CRSR0002: Select STDI01, STDI02, ... STDATA from CSDB.STOCK where (STWID=? and STIID=?) for update of ... STREMORD) set up the parameter markers for the SQLExecute. 0002 is the "index" for the Prepared SQL statement actually sent to the AS/400 system after the two SQLParamData/SQLPutData statement combinations required by Visual Basic. The line trace in Appendix B, “Communications Trace Examples” on page 429 does not include the communications data showing the execution of SQL statement 0002. The SQLFetch/SQLGetData sequence shows that only one row was returned.

6  The SQLPrepare for hstmt69BF0000 shows that the application did not send all the Prepare SQL statements before beginning the first order. The Update CSDB.STOCK set STQTY=?, ... CURRENT OF C1 Prepare was not sent earlier because the application designer determined that the update function was optional. In the general case, we recommend that all Prepares be done before the application processing is started.

This ends the ODBC API trace example. Completion of an order includes inserting rows into AS/400 tables; for the complete ODBC trace, see Appendix C, “ODBC Trace Example” on page 467. As coded, the application sends each ODBC insert function in a separate request to the AS/400 server, as you can see in the complete ODBC trace. Changing the code to use "blocked inserts" improves this area of performance. To test the performance impact of the blocked insert, the application was recoded to send all 10 records as one blocked insert. The result was a 20% improvement in response time. See 5.3.30, “Block Insert” on page 166 for a discussion of the blocked insert technique, and the sample program diskette for a coding example.
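The effect of batching inserts can be sketched with Python's built-in sqlite3 module standing in for the ODBC connection (the table and columns are invented for illustration). One executemany call plays the role of the blocked insert, replacing ten separate requests with a single one.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ordlin (oloid INTEGER, olnbr INTEGER, olqty INTEGER)")

order_lines = [(1, n, 5) for n in range(1, 11)]  # ten line items for order 1

# Row-at-a-time: one request per line item (ten round trips over ODBC).
for line in order_lines:
    conn.execute("INSERT INTO ordlin VALUES (?, ?, ?)", line)

# Blocked insert: all ten line items for order 2 in a single request.
conn.executemany("INSERT INTO ordlin VALUES (?, ?, ?)",
                 [(2, n, 5) for n in range(1, 11)])

count = conn.execute("SELECT COUNT(*) FROM ordlin WHERE oloid = 2").fetchone()[0]
print(count)  # 10
```

With a real communications line between client and server, the saving comes from eliminating the per-request round trips, which is where the 20% response-time improvement above was observed.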


SQLAllocEnv(phenv68970000); SQLAllocConnect(henv68970000, phdbc559F0000); SQLConnect (hdbc559F0000, ″ SYSASM01″ , -3, ″ ″ , -3, ″ ″ , -3); SQLAllocStmt(hdbc559F0000, phstmt65770000); SQLAllocStmt(hdbc559F0000, phstmt69BF0000); SQLAllocStmt(hdbc559F0000, phstmt699F0000); SQLAllocStmt(hdbc559F0000, phstmt439F0000); SQLAllocStmt(hdbc559F0000, phstmt65670000); SQLAllocStmt(hdbc559F0000, phstmt65C70000); SQLAllocStmt(hdbc559F0000, phstmt65970000); SQLAllocStmt(hdbc559F0000, phstmt65B70000); SQLAllocStmt(hdbc559F0000, phstmt48970000); SQLSetCursorName(phstmt65770000, ″ C1″ , -3); SQLPrepare (hstmt65770000, ″ Select STDI01, STDI02, STDI03, STDI04, STDI05, STDI06, STDI07, STDI08, STDI09, STDI10, STQTY, STYTD, STORDRS, STREMORD, STDATA from CSDB.STOCK where (STWID=? and STIID=?) for update of STQTY, STYTD, STORDRS, STREMORD″ , -3); CRSR0002 SQLPrepare (hstmt699F0000, ″ Select IID, INAME, IPRICE, IDATA from CSDB.ITEM where IID in ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ? )″ , -3); CRSR0004 SQLPrepare (hstmt439F0000, ″ Select CLAST, CDCT, CCREDT, WTAX from CSDB.CSTMR, CSDB.WRHS where CWID=? and CDID=? and CID=? and WID=?″ , -3); CRSR0005 SQLSetCursorName(hstmt65670000, ″ C2″ , -3); SQLPrepare (hstmt65670000, ″ Select DTAX, DNXTOR from CSDB.DSTRCT where (DWID=? and DID=?) 
for update of DNXTOR″ , -3); CRSR0006 SQLPrepare (hstmt65B70000, ″ Insert into CSDB.ORDERS (OWID, ODID, OCID, OID, OENTDT, OENTTM, OCARID, OLINES, OLOCAL) values (?,?,?,?,?, ?,?,?,?)″ , -3); CRSR0009 SQLPrepare (hstmt48970000, ″ Insert into CSDB.NEWORD (NOOID, NODID, NOWID) values (?, ?, ?)″ , -3); CRSR0010 SQLGetCursorName(hstmt65770000, szCursor, 50, pcbCursor); SQLGetCursorName(hstmt65670000, szCursor, 50, pcbCursor); SQLPrepare (hstmt65970000, ″ Insert into CSDB.ORDLIN (OLOID, OLDID, OLWID, OLNBR, OLSPWH, OLIID, OLQTY, OLAMNT, OLDLVD, OLDLVT, OLDSTI) VALUES (?,?,?,?,?,?,?,?,?,?,?)″ , -3); CRSR0008 SQLBindParam(hstmt439F0000, 1, 1, 1, 1, 4, 0, rgbValue, 4, pcbValue); SQLBindParam(hstmt439F0000, 2, 1, 1, 3, 3, 0, rgbValue, 0, pcbValue); SQLBindParam(hstmt439F0000, 3, 1, 1, 1, 4, 0, rgbValue, 4, pcbValue); SQLBindParam(hstmt439F0000, 4, 1, 1, 1, 4, 0, rgbValue, 4, pcbValue); SQLExecute (hstmt439F0000); #0005 SQLParamData(hstmt439F0000, prgbValue); SQLPutData(hstmt439F0000, rgbValue, 4); SQLParamData(hstmt439F0000, prgbValue); SQLPutData(hstmt439F0000, rgbValue, 4); SQLParamData(hstmt439F0000, prgbValue); SQLPutData(hstmt439F0000, rgbValue, 4); SQLParamData(hstmt439F0000, prgbValue); SQLPutData(hstmt439F0000, rgbValue, 4); SQLParamData(hstmt439F0000, prgbValue);


Figure 69 (Part 1 of 4). Trace of ODBC SQL SELECT Prepare and Execute


SQLFetch(hstmt439F0000); 1 SQLGetData(hstmt439F0000, 1, 1, rgbValue, 16, pcbValue); SQLGetData(hstmt439F0000, 2, 7, rgbValue, 0, pcbValue); SQLGetData(hstmt439F0000, 3, 1, rgbValue, 2, pcbValue); SQLGetData(hstmt439F0000, 4, 7, rgbValue, 0, pcbValue); SQLBindParam(hstmt699F0000, 1, 1, 1, 1, 6, 0, rgbValue, 6, pcbValue); 2 SQLBindParam(hstmt699F0000, 2, 1, 1, 1, 6, 0, rgbValue, 6, pcbValue); SQLBindParam(hstmt699F0000, 3, 1, 1, 1, 6, 0, rgbValue, 6, pcbValue); SQLBindParam(hstmt699F0000, 4, 1, 1, 1, 6, 0, rgbValue, 6, pcbValue); SQLBindParam(hstmt699F0000, 5, 1, 1, 1, 6, 0, rgbValue, 6, pcbValue); SQLBindParam(hstmt699F0000, 6, 1, 1, 1, 6, 0, rgbValue, 6, pcbValue); SQLBindParam(hstmt699F0000, 7, 1, 1, 1, 6, 0, rgbValue, 6, pcbValue); SQLBindParam(hstmt699F0000, 8, 1, 1, 1, 6, 0, rgbValue, 6, pcbValue); SQLBindParam(hstmt699F0000, 9, 1, 1, 1, 6, 0, rgbValue, 6, pcbValue); SQLBindParam(hstmt699F0000, 10, 1, 1, 1, 6, 0, rgbValue, 6, pcbValue); SQLBindParam(hstmt699F0000, 11, 1, 1, 1, 6, 0, rgbValue, 6, pcbValue); SQLBindParam(hstmt699F0000, 12, 1, 1, 1, 6, 0, rgbValue, 6, pcbValue); SQLBindParam(hstmt699F0000, 13, 1, 1, 1, 6, 0, rgbValue, 6, pcbValue); SQLBindParam(hstmt699F0000, 14, 1, 1, 1, 6, 0, rgbValue, 6, pcbValue); SQLBindParam(hstmt699F0000, 15, 1, 1, 1, 6, 0, rgbValue, 6, pcbValue); SQLExecute (hstmt699F0000); #0004 SQLParamData(hstmt699F0000, prgbValue); SQLPutData(hstmt699F0000, rgbValue, 6); SQLParamData(hstmt699F0000, prgbValue); SQLPutData(hstmt699F0000, rgbValue, 6); SQLParamData(hstmt699F0000, prgbValue); SQLPutData(hstmt699F0000, rgbValue, 6); SQLParamData(hstmt699F0000, prgbValue); SQLPutData(hstmt699F0000, rgbValue, 6); SQLParamData(hstmt699F0000, prgbValue); SQLPutData(hstmt699F0000, rgbValue, 6); SQLParamData(hstmt699F0000, prgbValue); SQLPutData(hstmt699F0000, rgbValue, 6); SQLParamData(hstmt699F0000, prgbValue); SQLPutData(hstmt699F0000, rgbValue, 6); SQLParamData(hstmt699F0000, prgbValue); SQLPutData(hstmt699F0000, rgbValue, 6); 
SQLParamData(hstmt699F0000, prgbValue); SQLPutData(hstmt699F0000, rgbValue, 6); SQLParamData(hstmt699F0000, prgbValue); SQLPutData(hstmt699F0000, rgbValue, 6); SQLParamData(hstmt699F0000, prgbValue);


Figure 69 (Part 2 of 4). Trace of ODBC SQL SELECT Prepare and Execute

SQLPutData(hstmt699F0000, rgbValue, 6); SQLParamData(hstmt699F0000, prgbValue); SQLPutData(hstmt699F0000, rgbValue, 6); SQLParamData(hstmt699F0000, prgbValue); SQLPutData(hstmt699F0000, rgbValue, 6); SQLParamData(hstmt699F0000, prgbValue); SQLPutData(hstmt699F0000, rgbValue, 6); SQLParamData(hstmt699F0000, prgbValue); SQLPutData(hstmt699F0000, rgbValue, 6); SQLParamData(hstmt699F0000, prgbValue); 3
SQLFetch(hstmt699F0000); 4
SQLGetData(hstmt699F0000, 1, 1, rgbValue, 7, pcbValue);
SQLGetData(hstmt699F0000, 2, 1, rgbValue, 30, pcbValue);
SQLGetData(hstmt699F0000, 3, 7, rgbValue, 0, pcbValue);
SQLGetData(hstmt699F0000, 4, 1, rgbValue, 60, pcbValue);
SQLFetch(hstmt699F0000);
SQLGetData(hstmt699F0000, 1, 1, rgbValue, 7, pcbValue);
SQLGetData(hstmt699F0000, 2, 1, rgbValue, 30, pcbValue);
SQLGetData(hstmt699F0000, 3, 7, rgbValue, 0, pcbValue);
SQLGetData(hstmt699F0000, 4, 1, rgbValue, 60, pcbValue);
SQLFetch(hstmt699F0000);
SQLGetData(hstmt699F0000, 1, 1, rgbValue, 7, pcbValue);
SQLGetData(hstmt699F0000, 2, 1, rgbValue, 30, pcbValue);
SQLGetData(hstmt699F0000, 3, 7, rgbValue, 0, pcbValue);
SQLGetData(hstmt699F0000, 4, 1, rgbValue, 60, pcbValue);
SQLFetch(hstmt699F0000);
SQLGetData(hstmt699F0000, 1, 1, rgbValue, 7, pcbValue);
SQLGetData(hstmt699F0000, 2, 1, rgbValue, 30, pcbValue);
SQLGetData(hstmt699F0000, 3, 7, rgbValue, 0, pcbValue);
SQLGetData(hstmt699F0000, 4, 1, rgbValue, 60, pcbValue);
SQLFetch(hstmt699F0000);
SQLGetData(hstmt699F0000, 1, 1, rgbValue, 7, pcbValue);
SQLGetData(hstmt699F0000, 2, 1, rgbValue, 30, pcbValue);
SQLGetData(hstmt699F0000, 3, 7, rgbValue, 0, pcbValue);
SQLGetData(hstmt699F0000, 4, 1, rgbValue, 60, pcbValue);
SQLFetch(hstmt699F0000);
SQLGetData(hstmt699F0000, 1, 1, rgbValue, 7, pcbValue);


Figure 69 (Part 3 of 4). Trace of ODBC SQL SELECT Prepare and Execute


SQLGetData(hstmt699F0000, 2, 1, rgbValue, 30, pcbValue); SQLGetData(hstmt699F0000, 3, 7, rgbValue, 0, pcbValue); SQLGetData(hstmt699F0000, 4, 1, rgbValue, 60, pcbValue); SQLFetch(hstmt699F0000); SQLGetData(hstmt699F0000, 1, 1, rgbValue, 7, pcbValue); SQLGetData(hstmt699F0000, 2, 1, rgbValue, 30, pcbValue); SQLGetData(hstmt699F0000, 3, 7, rgbValue, 0, pcbValue); SQLGetData(hstmt699F0000, 4, 1, rgbValue, 60, pcbValue); SQLFetch(hstmt699F0000); SQLGetData(hstmt699F0000, 1, 1, rgbValue, 7, pcbValue); SQLGetData(hstmt699F0000, 2, 1, rgbValue, 30, pcbValue); SQLGetData(hstmt699F0000, 3, 7, rgbValue, 0, pcbValue); SQLGetData(hstmt699F0000, 4, 1, rgbValue, 60, pcbValue); SQLFetch(hstmt699F0000); SQLGetData(hstmt699F0000, 1, 1, rgbValue, 7, pcbValue); SQLGetData(hstmt699F0000, 2, 1, rgbValue, 30, pcbValue); SQLGetData(hstmt699F0000, 3, 7, rgbValue, 0, pcbValue); SQLGetData(hstmt699F0000, 4, 1, rgbValue, 60, pcbValue); SQLFetch(hstmt699F0000); SQLGetData(hstmt699F0000, 1, 1, rgbValue, 7, pcbValue); SQLGetData(hstmt699F0000, 2, 1, rgbValue, 30, pcbValue); SQLGetData(hstmt699F0000, 3, 7, rgbValue, 0, pcbValue); SQLGetData(hstmt699F0000, 4, 1, rgbValue, 60, pcbValue); SQLFetch(hstmt699F0000); SQLBindParam(hstmt65770000, 1, 1, 1, 1, 4, 0, rgbValue, 4, pcbValue); 5 SQLBindParam(hstmt65770000, 2, 1, 1, 1, 6, 0, rgbValue, 6, pcbValue); SQLExecute (hstmt65770000); #0002 SQLParamData(hstmt65770000, prgbValue); SQLPutData(hstmt65770000, rgbValue, 4); SQLParamData(hstmt65770000, prgbValue); SQLPutData(hstmt65770000, rgbValue, 6); SQLParamData(hstmt65770000, prgbValue); SQLFetch(hstmt65770000); SQLGetData(hstmt65770000, 11, 5, rgbValue, 0, pcbValue); SQLGetData(hstmt65770000, 12, 5, rgbValue, 0, pcbValue); SQLGetData(hstmt65770000, 13, 5, rgbValue, 0, pcbValue); SQLGetData(hstmt65770000, 14, 5, rgbValue, 0, pcbValue); SQLGetData(hstmt65770000, 15, 1, rgbValue, 55, pcbValue); SQLPrepare (hstmt69BF0000, ″ Update CSDB.STOCK set STQTY=?, STYTD=?, STORDRS=?, STREMORD=? 
WHERE CURRENT OF C1″ , -3); CRSR0003 6 SQLBindParam(hstmt69BF0000, 1, 1, 5, 3, 5, 0, rgbValue, 0, pcbValue); SQLBindParam(hstmt69BF0000, 2, 1, 5, 3, 9, 0, rgbValue, 0, pcbValue); SQLBindParam(hstmt69BF0000, 3, 1, 5, 3, 5, 0, rgbValue, 0, pcbValue);


Figure 69 (Part 4 of 4). Trace of ODBC SQL SELECT Prepare and Execute


8.3.3 SQL Package
When a client/server application runs, it selects a data source that contains the location of the AS/400 database. The data source is defined in the ODBC.INI file, and the SQL package name used by the application is described in the PACKAGE specification. The AS/400 object type for SQL packages is *SQLPKG. For a complete discussion of the naming convention for SQL packages and the other parameters in the ODBC.INI file, see 5.3.33, “Performance Tuning IBM′s ODBC Driver” on page 171. Use the PRTSQLINF command with object type *SQLPKG to print the package.
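As an illustration only, a data source entry in ODBC.INI might look like the following sketch. Everything here except the PACKAGE keyword (which the text above names) is hypothetical - the section name, driver path, and other keyword names depend on the driver level, so check 5.3.33 for the actual parameters.

```ini
; Hypothetical ODBC.INI data source entry (illustrative only).
; Only PACKAGE is taken from the discussion above; the remaining
; keyword names and values are placeholders.
[SYSASM01]
Driver=...path to the Client Access/400 ODBC driver...
Description=Order entry data source (example)
PACKAGE=QGPL/SPEEDFBA
```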

                      Print SQL Information (PRTSQLINF)

 Type choices, press Enter.

 Object . . . . . . . . . . . . .   SPEEDFBA     Name
   Library  . . . . . . . . . . .     QGPL       Name, *LIBL, *CURLIB
 Object type  . . . . . . . . . .   *SQLPKG      *PGM, *SQLPKG, *SRVPGM

Checking the description of the SQL package indicates when the package was created. In a production client/server application, the creation date of the package should reflect when the application went ″live″. The time stamp against the access plan represents when the plan was last updated. In a stable environment, this should correspond to the time the application was first run. In an environment where a significant number of records/rows have been added or deleted, or index fields have been updated, the time stamp changes when the file is next opened if the Query Optimizer determines the access plan is no longer efficient. The access method identified in the access plan may help identify the need to create new logical views, for example, to improve performance. If the access method chosen at program creation time is already known not to be appropriate, you can create the appropriate index/logical view before you go into production mode. Figure 70 shows a sample SQL package for the order entry application.


5763SS1 V3R1M0 940909          Print SQL information          SYSASM01   09/14/95 08:14:19   Page 1
 Object name...............VBLIB/SPEEDFBA
 Object type...............*SQLPKG
 SQL package VBLIB/SPEEDFBA
 CRTSQL*** PGM(VBLIB/SPEEDFBA) SRCFILE( / ) SRCMBR( ) COMMIT(*NONE) OPTION(*SQL *PERIOD)
   TGTRLS(*PRV) ALWCPYDTA(*OPTIMIZE) CLOSQLCSR(*ENDPGM)
 #0002 Select STDI01, STDI02, STDI03, STDI04, STDI05, STDI06, STDI07, STDI08, STDI09, STDI10,
   STQTY, STYTD, STORDRS, STREMORD, STDATA from CSDB.STOCK where (STWID=? and STIID=?)
   for update of STQTY, STYTD, STORDRS, STREMORD
 SQL4021  Access plan last saved on 09/14/95 at 08:05:50.
 SQL4020  Estimated query run time is 1 seconds.
 SQL4017  Host variables implemented as reusable ODP.
 SQL4005  Query optimizer timed out for file 1.
 SQL4008  Access path STOCK used for file 1.
 SQL4011  Key row positioning used on file 1.
 #0004 Select IID, INAME, IPRICE, IDATA from CSDB.ITEM where IID in
   ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
 SQL4021  Access plan last saved on 09/14/95 at 08:05:45.
 SQL4020  Estimated query run time is 1 seconds.
 SQL4017  Host variables implemented as reusable ODP.
 SQL4008  Access path ITEM used for file 1.
 SQL4011  Key row positioning used on file 1.
 #0005 Select CLAST, CDCT, CCREDT, WTAX from CSDB.CSTMR, CSDB.WRHS where CWID=? and CDID=?
   and CID=? and WID=?
 SQL4021  Access plan last saved on 09/14/95 at 08:05:42.
 SQL4020  Estimated query run time is 1 seconds.
 SQL4017  Host variables implemented as reusable ODP.
 SQL4007  Query implementation for join position 1 file 2.
 SQL4006  All access paths considered for file 2.
 SQL4010  Arrival sequence access for file 2.
 SQL4007  Query implementation for join position 2 file 1.
 SQL4008  Access path CSTMR used for file 1.
 SQL4014  0 join field pair(s) are used for this join position.
 SQL4011  Key row positioning used on file 1.


Figure 70 (Part 1 of 2). Print SQL Package Example


5763SS1 V3R1M0 940909          Print SQL information          SYSASM01   09/14/95 08:14:19   Page 2
 SQL package VBLIB/SPEEDFBA
 #0006 Select DTAX, DNXTOR from CSDB.DSTRCT where (DWID=? and DID=?) for update of DNXTOR
 SQL4021  Access plan last saved on 09/14/95 at 08:05:58.
 SQL4020  Estimated query run time is 1 seconds.
 SQL4017  Host variables implemented as reusable ODP.
 SQL4008  Access path DSTRCT used for file 1.
 SQL4011  Key row positioning used on file 1.
 #0009 Insert into CSDB.ORDERS (OWID, ODID, OCID, OID, OENTDT, OENTTM, OCARID, OLINES, OLOCAL)
   values (?,?,?,?,?,?,?,?,?)
 SQL4021  Access plan last saved on 09/14/95 at 08:06:04.
 SQL4020  Estimated query run time is 1 seconds.
 SQL4010  Arrival sequence access for file 1.
 #0010 Insert into CSDB.NEWORD (NOOID, NODID, NOWID) values (?, ?, ?)
 SQL4021  Access plan last saved on 09/14/95 at 08:06:05.
 SQL4020  Estimated query run time is 1 seconds.
 SQL4010  Arrival sequence access for file 1.
 #0008 Insert into CSDB.ORDLIN (OLOID, OLDID, OLWID, OLNBR, OLSPWH, OLIID, OLQTY, OLAMNT,
   OLDLVD, OLDLVT, OLDSTI) VALUES (?,?,?,?,?,?,?,?,?,?,?)
 SQL4021  Access plan last saved on 09/14/95 at 08:06:01.
 SQL4020  Estimated query run time is 1 seconds.
 SQL4010  Arrival sequence access for file 1.
 #0003 Update CSDB.STOCK set STQTY=?, STYTD=?, STORDRS=?, STREMORD=? WHERE CURRENT OF C1
 #0007 Update CSDB.DSTRCT set DNXTOR=? where current of C2
                    * * * * *   E N D   O F   L I S T I N G   * * * * *


Figure 70 (Part 2 of 2). Print SQL Package Example


Remember, when using AS/400 Extended Dynamic SQL, the order of the SQL statements within the package is the order in which they were received by the AS/400 Query Optimizer. The time stamps represent the last time the access plan for the corresponding statement was updated. This update occurs during file/table open processing.

8.3.4 Communications Trace Reports
If you used the menu option to start the communications trace, return to the Work with Communications Traces display and select option 2 to stop the trace:

                      Work with Communications Traces

 Type options, press Enter.
   2=Stop trace   4=Delete trace   6=Format and print trace
   7=Display message   8=Restart trace

      Configuration           Trace                        Trace
 Opt  Object        Type      Description      Protocol    Status
  2   Line_name     LINE      Customer name    TRN         ACTIVE

 F3=Exit   F5=Refresh   F6=Start trace   F10=Change size
 F11=Display buffer size   F12=Cancel

When the trace has stopped (press F5 to refresh the display), select option 6 to format and print the trace data:

                      Work with Communications Traces

 Type options, press Enter.
   2=Stop trace   4=Delete trace   6=Format and print trace
   7=Display message   8=Restart trace

      Configuration           Trace                        Trace
 Opt  Object        Type      Description      Protocol    Status
  6   Line_name     LINE      Customer name    TRN         STOPPED

 F3=Exit   F5=Refresh   F6=Start trace   F10=Change size
 F11=Display buffer size   F12=Cancel

The following display is shown. For an SNA connection, enter the controller name of the client PC where the application ran, enter N for the ″Format SNA data only″ prompt (this prints the time stamp), and press Enter:


                           Format Trace Data

 Configuration object . . . . :   Line_name
 Type . . . . . . . . . . . . :   LINE

 Type choices, press Enter.

 Controller . . . . . . . . . .   PC_name     *ALL, name
 Data representation  . . . . .   3           1=ASCII, 2=EBCDIC, 3=*CALC

 Format SNA data only . . . . .   N           Y=Yes, N=No
 Format RR, RNR commands  . . .   N           Y=Yes, N=No

 Format TCP/IP data only  . . .   N           Y=Yes, N=No

 Format UI data only  . . . . .   N           Y=Yes, N=No

 Format MAC or SMT data only  .   N           Y=Yes, N=No
 Format Broadcast data  . . . .   Y           Y=Yes, N=No

 F3=Exit   F5=Refresh   F12=Cancel

Alternatively, you can use the PRTCMNTRC command and press F4 for the parameter options:

                  Print Communications Trace (PRTCMNTRC)

 Type choices, press Enter.

 Configuration object . . . . . .   Line_name    Name
 Type . . . . . . . . . . . . . .   *LIN         *LIN, *NWI
 Output . . . . . . . . . . . . .   *PRINT       *PRINT, *OUTFILE

 F3=Exit   F4=Prompt   F5=Refresh   F12=Cancel
 F13=How to use this display   F24=More keys

When you press Enter, the following additional prompts appear:

                  Print Communications Trace (PRTCMNTRC)

 Type choices, press Enter.

 Configuration object . . . . . .   Line_name    Name
 Type . . . . . . . . . . . . . .   *LIN         *LIN, *NWI
 Output . . . . . . . . . . . . .   *PRINT       *PRINT, *OUTFILE
 Character code . . . . . . . . .   *EBCDIC      *EBCDIC, *ASCII, *CALC
 Controller description . . . . .   *ALL         Name, *ALL
 Format SNA data only . . . . . .   *NO          *NO, *YES
 Format RR, RNR commands  . . . .   *NO          *NO, *YES
 Format TCP/IP data only  . . . .   *NO          *NO, *YES

Chapter 8. Client/Server Performance Analysis

305

This soft copy for use by IBM employees only.

For a TCP/IP connection, enter the IP address of the client PC where the application ran and the IP address of the AS/400 system, enter Y for the ″Format TCP/IP data only″ prompt, and press Enter:

                           Format Trace Data

 Configuration object . . . . :   Line_name
 Type . . . . . . . . . . . . :   LINE

 Type choices, press Enter.

 Controller . . . . . . . . . .   *ALL             *ALL, name
 Data representation  . . . . .   3                1=ASCII, 2=EBCDIC, 3=*CALC

 Format SNA data only . . . . .   N                Y=Yes, N=No
 Format RR, RNR commands  . . .   N                Y=Yes, N=No

 Format TCP/IP data only  . . .   Y                Y=Yes, N=No
   IP address . . . . . . . . .   AS/400_address
   IP address . . . . . . . . .   PC_address

 Format UI data only  . . . . .   N                Y=Yes, N=No

 Format MAC or SMT data only  .   N                Y=Yes, N=No
 Format Broadcast data  . . . .   Y                Y=Yes, N=No

 F3=Exit   F5=Refresh   F12=Cancel

Alternatively, you can use the PRTCMNTRC command and press F4 for the parameter options:

                  Print Communications Trace (PRTCMNTRC)

 Type choices, press Enter.

 Configuration object . . . . . .   Line_name    Name
 Type . . . . . . . . . . . . . .   *LIN         *LIN, *NWI
 Output . . . . . . . . . . . . .   *PRINT       *PRINT, *OUTFILE

 F3=Exit   F4=Prompt   F5=Refresh   F12=Cancel
 F13=How to use this display   F24=More keys

When you press Enter, the following additional prompts appear:


                  Print Communications Trace (PRTCMNTRC)

 Type choices, press Enter.

 Configuration object . . . . . .   Line_name       Name
 Type . . . . . . . . . . . . . .   *LIN            *LIN, *NWI
 Output . . . . . . . . . . . . .   *PRINT          *PRINT, *OUTFILE
 Character code . . . . . . . . .   *EBCDIC         *EBCDIC, *ASCII, *CALC
 Controller description . . . . .   *ALL            Name, *ALL
 Format SNA data only . . . . . .   *NO             *NO, *YES
 Format RR, RNR commands  . . . .   *NO             *NO, *YES
 Format TCP/IP data only  . . . .   *YES            *NO, *YES
   IP address . . . . . . . . . .   AS/400_address
   IP address . . . . . . . . . .   PC_address

Appendix B, “Communications Trace Examples” on page 429 contains an example of an SNA communications trace and an example of a TCP/IP communications trace. Both show data collected while running the ODBC order entry application. The SNA communications trace corresponds to job QZDAINIT/QPGRM/152492 and Client Access/400 Windows 3.1 client support. This communications trace is coordinated with the example of the Client Access/400 Windows 3.1 client ODBC API trace presented in 8.3.2, “ODBC API Trace Example” on page 293. The TCP/IP communications trace corresponds to job QZDASOINIT/QUSER/018282 and Client Access/400 for Windows 95 client support.

Study the communications trace and identify the key frame transmissions corresponding to the key transactions identified in the ODBC trace. The communications trace can be used to correlate times with the ODBC trace report. Unfortunately, the communications trace does not use the AS/400 system time, but a relative timer with an arbitrary start value that wraps at 6553.5 seconds. The ODBC API trace does not provide client workstation time stamps, so matching the communications trace frames to the corresponding ODBC trace statements assists in associating a relative time stamp with each ODBC statement.

The time difference between a send and a receive indicates the client response time, while the time between a receive at the AS/400 system and the following send by the AS/400 system provides a measure of the AS/400 system response time. Note, however, that these ″response times″ are for each communications flow, and many flows are involved in a single client transaction.
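The correlation just described can be sketched in a few lines. The frame list below is hypothetical, invented purely for illustration; only the 6553.5-second wrap of the trace timer comes from the trace format itself.

```python
# Sketch of deriving per-flow AS/400 service times from a formatted
# communications trace. Frame times are relative (the trace timer wraps
# at 6553.5 seconds), so only differences are meaningful. The sample
# frames below are hypothetical, not taken from the traces in this book.

WRAP = 6553.5  # seconds; the trace timer wraps back to zero at this value

def delta(t_from, t_to):
    """Elapsed seconds between two trace timestamps, allowing one wrap."""
    d = t_to - t_from
    return d if d >= 0 else d + WRAP

# (time, direction) pairs: 'R' = received at the AS/400, 'S' = sent by it
frames = [(10.2, 'R'), (10.5, 'S'), (6553.4, 'R'), (0.3, 'S')]

# AS/400 service time per flow = time between a receive and the next send
server_times = [delta(frames[i][0], frames[i + 1][0])
                for i in range(len(frames) - 1)
                if frames[i][1] == 'R' and frames[i + 1][1] == 'S']
print([round(t, 1) for t in server_times])
```

The same subtraction between a client send and the matching receive gives the per-flow client response time; summing the flows that make up one ODBC statement approximates that statement's elapsed time.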

8.3.4.1 Job Trace Report
You can print the Job Trace Report using menus. Select option 5 from the PERFORM menu.


 PERFORM                    IBM Performance Tools/400
                                                      System:   SYSASM01
 Select one of the following:

      1. Select type of status
      2. Collect performance data
      3. Print performance report
      4. Capacity planning/modeling
      5. Programmer performance utilities
      6. Configure and manage tools
      7. Display performance data
      8. System activity
      9. Performance graphics
     10. Advisor

     70. Related commands

 Selection or command
 ===> 5

 F3=Exit   F4=Prompt   F9=Retrieve   F12=Cancel
 F13=Information Assistant   F16=System main menu

The Programmer Performance Utilities menu is displayed. Select option 1 on this menu.

                    Programmer Performance Utilities

 Select one of the following:

      1. Work with job traces
      2. Work with program run statistics
      3. Select file and access group utilities
      4. Analyze disk activity

 Selection or command
 ===> 1

 F3=Exit   F4=Prompt   F9=Retrieve   F12=Cancel

On the next display, select option 3 to print job trace reports.

                         Work with Job Traces

 Select one of the following:

      1. Start job trace
      2. Stop job trace
      3. Print job trace reports

 Selection or command
 ===> 3

 F3=Exit   F4=Prompt   F9=Retrieve   F12=Cancel

Respond to the prompts on the next display, and press Enter.


                        Print Job Trace (PRTJOBTRC)

 Type choices, press Enter.

 Data base file . . . . . . . . .   QAJOBTRC      Name
 Data base file member  . . . . .
 Data base file library . . . . .   QPFRDATA      Name
 Report type  . . . . . . . . . .   *BOTH         *BOTH, *DETAIL, *SUMMARY
 Report title . . . . . . . . . .   Customer_name
 Starting sequence number . . . .   *FIRST        1-999999, *FIRST
 Ending sequence number . . . . .   *LAST         Number, *LAST
 Transaction ending program . . .   QT3REQIO      Name, QT3REQIO, *BATCH
 Transaction starting program . .   QWSGET        Name
 System model code  . . . . . . .   *CUR          Character value, *CUR
 Job name . . . . . . . . . . . .   PRTJOBTRC     Name, PRTJOBTRC, *MBR
 Job description  . . . . . . . .   QPFRJOBD      Name, *NONE
   Library  . . . . . . . . . . .     *LIBL       Name, *LIBL, *CURLIB

 F3=Exit   F4=Prompt   F5=Refresh   F12=Cancel
 F13=How to use this display   F24=More keys

You can also use the Print Job Trace (PRTJOBTRC) command to print the job trace report. The Job Trace is an exceptionally long report, but it is possible to identify the QZDACMDP module to which the application frequently returns. The QZDACMDP module is the router for the server job and handles the transactions to and from the communications link. The time stamp on the RETURN to QZDACMDP module is just prior to the Active-to-Wait transition in the Transition Report. Thus, it is possible to approximate the transaction count in the Transaction and Transition Reports with the number of communication flows occurring during application execution.
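The approximation described above - counting returns to the QZDACMDP router as a proxy for communication flows - can be sketched as follows. The trace lines here are invented placeholders, not real Job Trace report output.

```python
# Sketch: approximating the number of communication flows in a job trace
# by counting RETURNs to the QZDACMDP router module, since each return
# to QZDACMDP occurs just before an Active-to-Wait transition.
# The trace lines below are hypothetical, for illustration only.

trace_lines = [
    "0001 CALL   QZDACMDP",
    "0002 CALL   QSQROUTE",
    "0003 RETURN QZDACMDP",
    "0004 CALL   QSQROUTE",
    "0005 RETURN QZDACMDP",
]

flows = sum(1 for line in trace_lines
            if "RETURN" in line and "QZDACMDP" in line)
print(flows)
```

The resulting count can then be compared with the transaction counts in the Transaction and Transition Reports, as suggested above.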

8.4 Data Collection Checklist
This checklist is provided as a worksheet for collecting performance information for analysis. The four columns allow information to be recorded for up to four client workstations.


Table 19. Data Collection Checklist
PC Name (Controller)
    - RTLN APPN in CONFIG.STS
    - LOCALLUNAME in NSD.INI
    - Status bar on Rumba
    - WRKOBJLCK for *USRPRF
 1. Synchronize time
    - AS/400 system
    - PC
 2. AS/400 environment
    - Pool size
    - Activity level
    - Expert cache
 3. PC environment
    - Processor
    - Disk
    - ODBC.INI
    - ODBC trace
 4. STRPFRMON TRACE(*ALL) MBR(name)
 5. DLTCMNTRC CFGTYPE(*LIN) CFGOBJ(name)
 6. STRCMNTRC CFGTYPE(*LIN) CFGOBJ(name)
 7. Client/Server connection
 8. WRKCFGSTS *CTL (appc-ctl) (QZDAINIT job#)
 9. (a) STRSRVJOB JOB(nnnnnn/QUSER/QZDAINIT)
    (b) STRDBG UPDPROD(*YES)
    (c) TRCJOB SET(*ON) MAXSTG(16000) TRCFULL(*STOPTRC)
    (d) CHGJOB LOG(4 00 *SECLVL)
10. Execute application
    - (Name)
    - # of transactions
Comments:


PC Name (Controller)
11. ENDCMNTRC CFGTYPE(*LIN) CFGOBJ(name)
12. (a) TRCJOB SET(*OFF)
    (b) ENDDBG
    (c) ENDSRVJOB
13. ENDPFRMON
14. Print reports
    - System
    - Component
    - Transaction summary
    - Transaction detail
    - Transition
15. PRTCMNTRC CFGTYPE(*LIN) CFGOBJ(linename) FMTSNA(*NO) SLTCTLD(PC-name)
16. Print JOBLOG - QZDAINIT job
17. PRTSQLINF OBJTYPE(*SQLPKG) OBJ(name)
18. Print PC response
19. Print ODBC trace
20. Report Checklist
    - System
    - Component
    - Transaction summary
    - Transaction detail
    - Transition
    - Communication trace
    - Job Log
    - Job Trace
    - SQL Package Info.
    - Package Object description
    - ODBC Trace
    - PC resp log (if any)
Comments:


Chapter 9. Client/Server Capacity Planning
This chapter covers the following capacity planning aspects:

1. Model creation using user-defined job classification
2. Growth analysis

This chapter uses performance data collected when running the client/server order entry benchmark described in this redbook. The data was collected on a D60 CISC system, which has an RPR rating of 8.1 (CPW rating of 23.9). There were four active client workstations running the application. This ensured that any resource contention, such as data area or record/row locking delays, was included in the performance data collected.

9.1 Client/Server Modeling
Achieving satisfactory performance results, which are usually thought of in terms of response time, often takes a thorough review of the performance techniques discussed elsewhere in this redbook, along with some trial and error and performance analysis. When that work is complete, the next task is to determine what AS/400 resources are required to handle the anticipated number of client/server workstations active during the same time period. Typically, this is not a simple task, and it requires the following to be successful:

•  OS/400 Performance Monitor data collected when two to four clients are running the tuned application.

•  Analysis of the collected performance data, and use of portions of this data as input to the Performance Tools/400 capacity planning support - BEST/1.

•  Time recordings of actual end user response time. This is necessary because the Performance Monitor data does not provide complete performance information for a ″client/server transaction″, and a transaction to the AS/400 system may actually be one of several ″sub-transactions″ of a transaction from the end user′s viewpoint. The situation of a sub-transaction sometimes exists with 5250-based transactions; for example, multiple Enter keys may be needed to complete an order, where the order is viewed as the business transaction and the sub-transactions are the individual Enter keys. For 5250-based applications, these transactions are easier to account for because the Performance Monitor records the transaction - receipt of data from a 5250 workstation and AS/400 response data sent to the workstation - and records a response time. The AS/400 Performance Monitor does not record non-5250 transactions, nor does it determine response times for requests and responses exchanged over a WAN (Wide Area Network) or a LAN (Local Area Network).

•  Familiarity with the BEST/1 capacity planning interfaces and options, and ″artistic adjustment″ by the BEST/1 user to define a non-interactive transaction within a BEST/1 workload.

This chapter helps readers familiarize themselves with the BEST/1 facilities that can be used to model client/server applications. The approach suggested assumes that performance data collected with the Performance Monitor is available to build the model.
© Copyright IBM Corp. 1996


If measured data is not available, the predefined BEST/1 workloads may be used to get reasonable results; however, the model is more accurate if it is built from real data collected from your system while the application is running.

Note: This chapter presumes the reader has at least a moderate understanding of, and level of usage experience with, BEST/1. Even so, access to the manual BEST/1 Capacity Planning Tool, SC41-3341, is recommended when reading this chapter.

We used V3R6 BEST/1, which included BEST/1 PTFs, to more accurately model RISC server models and to default to upgrading a CISC system to a RISC system when growth analysis is performed. These PTFs became available in October 1996. Corresponding PTFs for V3R0M5, V3R1, and V3R2 were also made available in October 1996. Any cumulative PTF package available after November 1996 should contain these PTFs. V3R7 BEST/1 already contains the support provided by these PTFs.

You may have experience with client/server modeling techniques different from those illustrated in this chapter. If you have been successful using those techniques for capacity planning, keep using them. However, the techniques used in this chapter have also proven successful, and they can be used if you have no prior experience with client/server capacity planning. The key to successful client/server capacity planning is a thorough understanding of the application implementation and the use of performance data collected when more than one client workstation is actively using the application.

9.1.1.1 Objectives of Client/Server Modeling
There are many ways to implement a client/server application. This chapter uses the frequently used ODBC API client-to-server interfaces. The objective of this chapter is to give you the capability to identify appropriate performance data for creating an accurate Client Access/400 ODBC application model, and to use that model for modeling workload growth on both a CISC and a RISC system. To simplify the exercises, we have already collected the performance data; it is located in library PFRRES95. The environment in which we collected the data is summarized in Table 20.
Table 20. ODBC Modeling Performance Data Environment

   Application  . . . . . . . . . . . . . . :  SPEED Order Entry (ODBC)
   Pool . . . . . . . . . . . . . . . . . . :  5 (6 MB)
   Number of clients  . . . . . . . . . . . :  4
   Number of transactions per client  . . . :  10
   Key+Think time . . . . . . . . . . . . . :  10 secs.
   Elapsed time . . . . . . . . . . . . . . :  3 minutes (180 secs.)
   Transactions per client per hour . . . . :  200

The only jobs active were the order entry applications, and we collected performance monitor data for only 3 minutes. This was sufficient to get performance data with enough work to use for capacity planning with BEST/1, because our three minutes included a total of 40 orders, or 10 orders per client. We multiplied 10 orders per 3 minutes by 20 to get 200 orders per client for 60 minutes (one hour), or a total of 800 orders per hour with 4 clients. If you were running a different application or had only a few orders completed within 3 minutes, you should collect performance monitor data for longer than three minutes. Twenty minutes is a reasonable time when the number of completed transactions is not known beforehand. You should ensure the performance monitor data does not include ″one of a kind″ abnormally high resource usage transactions, unless you want to include these transactions in your capacity planning efforts.

To collect the performance data, we issued the following command:

STRPFRMON

                      Start Performance Monitor (STRPFRMON)

 Type choices, press Enter.

 Member . . . . . . . . . . . . . > SPEED80102    Name, *GEN
   Library  . . . . . . . . . . . >   PFRRES95    Name
 Text ′ description′  . . . . . . > ′ Performance Data ODBC application′
 Time interval (in minutes) . . . > 5             5, 10, 15, 20, 25, 30, 35...
 Stops data collection  . . . . .   *ELAPSED      *ELAPSED, *TIME, *NOMAX
 Days from current day  . . . . .   0             0-9
 Hour . . . . . . . . . . . . . .   2             0-999
 Minutes  . . . . . . . . . . . .   0             0-99
 Data type  . . . . . . . . . . .   *ALL          *ALL, *SYS
 Trace type . . . . . . . . . . .   *NONE         *NONE, *ALL
 Dump the trace . . . . . . . . .   *YES          *YES, *NO
 Job trace interval . . . . . . .   .5            .5 - 9.9 seconds
 Job types  . . . . . . . . . . .   *DFT          *NONE, *DFT, *ASJ, *BCH...
                + for more values                                     More...
 F3=Exit   F4=Prompt   F5=Refresh   F10=Additional parameters   F12=Cancel
 F13=How to use this display   F24=More keys

We issued the ENDPFRMON command approximately 1 minute after the application stopped running, so that the measured elapsed time closely matched the application running time. Although we analyzed several of the Performance Tools/400 reports, we used the ″Job Workload Activity″ section of the Component Report to assign OS/400 ODBC jobs to a BEST/1 workload. Figure 71 on page 316 shows the pages of this section used to identify the ODBC server QZDAINIT jobs that actually performed work while the Performance Monitor was collecting data. Note that our Performance Monitor data was collected on a 9406 Model D60 running V3R1.

Chapter 9. Client/Server Capacity Planning


Figure 71. Component Report Example for BEST/1 (condensed). This is the ″Job Workload Activity″ section of the Component Report for member SPEED80102 in library PFRRES95, collected 08/01/95 08:13:30 to 08:16:49 on system SYSASM01 (9406 Model D60, 80.0 MB main storage, V3R1). The excerpt below shows the QPFRMON job and the six QZDAINIT database server jobs; 1, 2, 3, and 4 mark the four database server jobs that did real work. The many other system jobs in the report (QTCP, QSNADS, QSPL, and so on) each showed .0 CPU utilization and negligible disk I/O.

      Job        User    Job     T  Pl Pty  CPU   ---- Disk I/O ----  Logical
      Name       Name    Number  y          Util    Sync     Async    I/O
      QPFRMON    QPGMR   139621  B  02  00   1.3     492       46       22
   1  QZDAINIT   QUSER   139512  C  05  20   1.8      65      197      329
   2  QZDAINIT   QUSER   139588  C  05  20   1.9     307      251      329
   3  QZDAINIT   QUSER   139590  C  05  20   2.4     362      447      326
      QZDAINIT   QUSER   139607  C  05  20    .0       0        0        0
   4  QZDAINIT   QUSER   139608  C  05  20   1.9     285      260      329
      QZDAINIT   QUSER   139609  C  05  20    .0       1        0        0


9.2 Creating a Model Using User-defined Job Classification
We want to create a BEST/1 model to use as the base for modeling growth in workload on the system. We also want to model ODBC work separately from any other work on the system, so we assign the ODBC work to a specific workload and define our own BEST/1 job classification. This enables the ODBC workload to be isolated from other applications on the system if we want to specify different workload growth rates for the various workloads on the system. It also lets you move the workload that represents the application to another BEST/1 model and analyze the effect of adding that workload to a different system.

The remainder of this chapter shows the steps in building a BEST/1 model, validating (calibrating) the model, and doing different capacity planning efforts based on percentage of growth, either staying within the CISC Advanced System family or upgrading to the PowerPC RISC System family. These steps are grouped into related sets called ″exercises″. Not every BEST/1 display is shown in the steps within an exercise; only the most significant ones are shown.

9.2.1 Exercise 1: Creating the Model from Performance Data
1. Sign on to the system.

2. Start BEST/1 by typing the command STRBEST and pressing F4, which prompts for input.

                              Start BEST/1 (STRBEST)

    Type choices, press Enter.

    BEST/1 data library  . . . . . . > SPEED80102    Name, *CURLIB
    Performance data library . . . . > PFRRES95      Name
    Log member . . . . . . . . . . .   *NONE         Name, *NONE
    Log library  . . . . . . . . . .   *BESTDTAL     Name, *BESTDTAL

    F3=Exit   F4=Prompt   F5=Refresh   F12=Cancel
    F13=How to use this display   F24=More keys

3. On the BEST/1 for the AS/400 menu, select Option 1 (Work with BEST/1 models).

4. On the Work with BEST/1 Models menu, select Option 1 (Create):

    Opt   Model
    1     Model_name

   where Model_name is the name of the model to be created. Use CSPMDLBAS for these exercises.

5. On the Create BEST/1 Model menu, select Option 1 (Create from performance data).

6. On the Create Model from Performance Data menu, specify the performance member containing the data you want to model.
Chapter 9. Client/Server Capacity Planning

317

This soft copy for use by IBM employees only.

                      Create Model from Performance Data

 Model  . . . . . . . . . . . . :   CSPMDLBAS

 Type choices, press Enter.

 Text . . . . . . . . . . . . . .   ODBC BASE V31 on V36
 Performance member . . . . . . .   SPEED80102    Name, F4 for list
   Library  . . . . . . . . . . .     PFRRES95    Name
 Start time . . . . . . . . . . .   *FIRST        Time, *FIRST, *SELECT
 Start date . . . . . . . . . . .   *FIRST        Date, *FIRST
 Stop time  . . . . . . . . . . .   *LAST         Time, *LAST
 Stop date  . . . . . . . . . . .   *LAST         Date, *LAST

 F3=Exit   F4=Prompt   F12=Cancel
   Press Enter.

7. On the Classify Jobs menu, select Option 2 (Classify jobs into workloads). We want to define our own workload so that we can model ODBC work separately from other work on the measured system.

   Note: The BEST/1 Classify Jobs menu also has the option to ″Use default job classifications″. This has ease-of-use benefits, with default workloads such as INTERACTIV, CLIENTAC4, NONINTER, and QDEFAULT. However, ″CLIENTAC4″ groups 5250 emulation work under Client Access/400 RUMBA/400 jobs and all other Client Access/400 jobs, such as those for ODBC and file transfer, into this same workload. Because we are interested in modeling ODBC workloads separately from the 5250 work, we did not choose this default job classification option.

8. In the Specify Job Classification Category, select category 3 (Job name).

9. Press F9 (Display values from data). This runs a query against the collected data (SPEED80102) and produces a list of OS/400 jobs, application jobs, and Licensed Internal Code tasks.

10. You should be able to identify the ODBC server jobs by sorting the jobs either by name (F16) or by CPU seconds (F18). Use only the QZDAINIT jobs that did actual work, as shown in Figure 71 on page 316. (Use QZDASOINIT jobs if using TCP/IP.)

11. Assign the ODBC server jobs to workload ODBCWL. Eventually you may want to assign other jobs to different workloads; it is easier to remove later the workloads that are not part of the normal workload of your system. For example, you probably want to remove the QPFRMON job. The jobs that you do not assign to any workload are included in the QDEFAULT workload. Figure 72 on page 319 shows an example of assigning work to BEST/1 workloads ODBCWL, PFRMON, and QDEFAULT, based on job name. Be careful while assigning workloads: when you leave the field to the right of the ″Workload ...″ prompt and press the Enter key, all unassigned work is assigned to QDEFAULT.


AS/400 Client/Server Performance


                           Assign Jobs to Workloads

 Workload . . . . . . . . . . . . . . . . .   ______________

 Type options, press Enter. Unassigned jobs become part of workload QDEFAULT.
   1=Assign to above workload   2=Unassign

                                 Number of        CPU       I/O
 Opt  Workload   Job Name      Transactions    Seconds    Count
  _   ODBCWL     QZDAINIT                 0     15.351     2175
  _              P6034500C               40      3.983        4
  _   PFRMON     QPFRMON                  0      2.607      538
  _              R41290                   0      1.178        0
  _              R41292                   0      1.177        0
  _              R41291                   0      1.159        1
  _              QVATTMGR                 0       .640       28
  _              P23ADGH3                 0       .390       91
  _              P23ADFT8                 0       .351       76
  _              QCQSVSRV                 0       .292       45
  _              SFTR                     0       .281       34
                                                             More...
 F3=Exit   F12=Cancel   F15=Sort by workload   F16=Sort by job name
 F17=Sort by transactions   F18=Sort by CPU seconds   F19=Sort by I/O count
Figure 72. Assign Jobs to BEST/1 Workloads

12. Press Enter after assigning work to ODBCWL and PFRMON.

   Note: We do not recommend explicitly assigning *LIC tasks to a specific workload. BEST/1 has internal algorithms to allocate *LIC tasks to the appropriate workload.

13. On the Specify Paging Behaviors menu, accept the default *GENERIC for all workloads - ODBCWL, PFRMON, and QDEFAULT.

14. Define a non-interactive transaction. Figure 73 on page 320 shows the Define Non-Interactive Transactions menu. It is very important that you define the ODBC (client/server) transaction in terms acceptable to the customer. On this display, you define a unit of work that corresponds to a meaningful business unit of work. BEST/1 defaults to 100 *LGLIO (logical I/Os) for a non-interactive transaction. By typing *NONE in the Type column for workload ODBCWL, you can instead specify a number of business or user transactions in the Total Transactions column. This value is in transactions per hour. In our model, we know that each user did 10 transactions in an elapsed time of 3 minutes. This is 200 transactions per hour per user, so we define our workload with 800 transactions per hour for the four users (four active QZDAINIT jobs) combined.
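The transaction-rate arithmetic behind the Total Transactions value can be checked with a short script. This is a minimal sketch; the figures come from the measured environment in Table 20, and the variable names are illustrative only:

```python
# Deriving the BEST/1 "Total Transactions" (per hour) value for ODBCWL.
clients = 4                # active QZDAINIT jobs that did real work
orders_per_client = 10     # business transactions per client in the trace
elapsed_minutes = 3        # performance monitor collection period

per_client_per_hour = orders_per_client * 60 // elapsed_minutes
total_per_hour = per_client_per_hour * clients

print(per_client_per_hour)  # 200
print(total_per_hour)       # 800
```

The same scaling applies to any collection period: multiply the observed count by 60 divided by the elapsed minutes.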


                      Define Non-Interactive Transactions

 Job classification category . . . . . . :   Job Name

 Type choices, press Enter.

             ---Activity Counted as Transaction---    Total Transactions
 Workload    Type        Quantity                     when Type = *NONE
 QDEFAULT    *LGLIO       100.0                          0
 PFRMON      *LGLIO       100.0                          0
 ODBCWL      *NONE        100.0                        800

                                                                  Bottom
 Type: *LGLIO, *CMNIO, *CPUSEC, *PRINT, *NONE
 F3=Exit   F12=Cancel
Figure 73. Defining Client/Server Transactions

   Press Enter, and the Save Job Classification Member display is shown.

15. Save the job classification member.

                        Save Job Classification Member

    Change values if desired, press Enter.

    Member . . . . . .   CSPODBCJC             Name
      Library  . . . .   Your_library          Name
    Text . . . . . . .   Job class ODBC jobs
    Replace  . . . . .   N                     Y=Yes, N=No

    F12=Cancel

   Press Enter, and the Confirm Creation of BEST/1 Model display is shown.

16. Confirm the creation of the BEST/1 model. On this display, you are prompted with your previously specified model name (in this exercise, CSPMDLBAS) and descriptive text. You may change these values if you want to.

                      Confirm Creation of BEST/1 Model

    Type choices, press Enter.

    Model  . . . . . . . . . . . .   CSPMDLBAS      Name
      Library  . . . . . . . . . .   Your_library   Name
    Text . . . . . . . . . . . . .   ODBC BASE V31 on V36
    Replace  . . . . . . . . . . .   N              Y=Yes, N=No
    Job name . . . . . . . . . . .   CRTBESTMDL     Name, *JOBD
    Job description  . . . . . . .   QPFRJOBD       Name, *NONE, *USRPRF
      Library  . . . . . . . . . .   QPFR           Name, *LIBL, *CURLIB

    F12=Cancel

   BEST/1 submits the model creation to job queue QBATCH.

17. You are now back on the Work with BEST/1 Models menu, and the CRTBESTMDL job is running. You may issue the Work with Submitted Jobs (WRKSBMJOB) command to find out when your create-model job completes, issue DSPMSG and look for a model completion message, or repeatedly press F5 (Refresh) until your new model name appears on the Work with BEST/1 Models menu. After you see your new model on this display, proceed to the next step.


18. On the Work with BEST/1 Models menu, select Option 5 (Work with) for your model (CSPMDLBAS).

19. On the Work with BEST/1 Model menu, select Option 5 (Analyze current model). It typically takes a few minutes on CISC systems to produce the analysis results and present the Work with Results menu; on RISC systems, this is much faster. When the BEST/1 analysis is complete, the Work with Results display is shown.

                               Work with Results

 Printed report text . . . . . .   ODBC BASE V31 on V36

 Type options, press Enter.
   5=Display   6=Print

 Opt   Report Name
  __   Analysis Summary
  __   Recommendations
  __   Workload Report
  __   ASP and Disk Arm Report
  __   Disk IOP and Disk Arm Report
  __   Main Storage Pool Report
  __   Communications Resources Report
  __   All of the above

 F3=Exit   F12=Cancel   F14=Select saved results   F15=Save current result
 F18=Graph current results   F19=Append saved results   F24=More keys
 CPU cannot handle specified load

Figure 74. Create Model Work with Results Example

   On the Work with Results menu, you get a message indicating that the current system CPU cannot handle the measured workload. Because previous analysis of our performance data indicated the system was working well, we have to make an adjustment to the BEST/1 model later, during the Calibrating the Model exercise. For now, proceed to the next step.

20. On the Work with Results menu, select Option 5 (Display) for the Analysis Summary Report. This shows a column of model performance statistics that should be compared to the existing performance data.

21. On the Display Analysis Summary menu, the CPU utilization shows as a highlighted field indicating the CPU is approximately 99% utilized. This indication is associated with the message CPU cannot handle specified load and needs to be adjusted during model calibration.

22. On the Display Analysis Summary menu, press F11 (Compare against measured values).


                       Compare Against Measured Values

                                                  Measured    Predicted
 Total CPU util  . . . . . . . . . . . . . :        16.4         98.4
 Disk IOP util . . . . . . . . . . . . . . :         2.9         16.4
 Disk arm util . . . . . . . . . . . . . . :         2.2         13.2
 Disk IOs per second . . . . . . . . . . . :        26.0        148.6
 LAN IOP util  . . . . . . . . . . . . . . :         4.8          1.8
 LAN line util . . . . . . . . . . . . . . :          .8           .3
 WAN IOP util  . . . . . . . . . . . . . . :         1.2          5.3
 WAN line util . . . . . . . . . . . . . . :          .0           .0
 Interactive:
   CPU util  . . . . . . . . . . . . . . . :         2.1          2.1
   Int rsp time (seconds)  . . . . . . . . :          .1           .6
   Transactions per hour . . . . . . . . . :         763          763
 Non-interactive thruput . . . . . . . . . :         867         9221

 F3=Exit   F6=Print   F9=Work with spooled files   F12=Cancel

It is important to understand how BEST/1 arrived at the predicted CPU utilization of approximately 99%. This can affect not only modeling of client/server applications, but also of ″server jobs″ within the AS/400 system that do no communication with another ″system″ outside of the AS/400 system. This is why the following information is set off in its own labeled box.

Modeling ″Server Jobs″

BEST/1 assigns a workload type of *BATCHJOB when all jobs in the BEST/1 workload meet all of the following criteria:

•  All jobs are non-interactive jobs.
•  All jobs were active for at least 95% of the time the performance monitor collected data.
•  All storage pools used by the jobs service only non-interactive jobs.

The *BATCHJOB workload type was designed to assist in modeling traditional batch job run times, and it gives all additional CPU capacity to a *BATCHJOB workload. However, this algorithm has problems when the actual non-interactive environment contains ″server type″ jobs that periodically wait for work to do, similar to an interactive workstation job. Examples of server jobs include an invoice print server that waits for notification of an order completion, or our client/server order entry benchmark that waits for order entry information to be sent from the client workstation.

Because the measured environment had little or no interactive work running at priority 20 and the database serving jobs were running at priority 20, BEST/1 assigned all available CPU not used by workloads PFRMON and QDEFAULT to the ODBCWL workload. This caused the predicted CPU utilization to be 98.4%. As a result, the BEST/1 user must recognize that this ″server job″ implementation is included in the performance monitor data, and then manually change the workload attribute from *BATCHJOB to *NORMAL. Making this change and other changes to the model is called calibrating the model, and is discussed in the following exercise.

Note: The V3R7 version of the BEST/1 manual, BEST/1 Capacity Planning Tool, SC41-3341-02, discusses the adjustments required when modeling ″server jobs.″ These considerations apply to all releases if server jobs are run in environments that cause BEST/1 to default to a *BATCHJOB workload type.

When a BEST/1 model is properly calibrated, the Measured column response times are within .5 seconds of the corresponding Predicted values, and the Measured and Predicted resource utilizations are within 20% of each other. Now that we have noted the wide differences shown here, we have to rely on our experience with BEST/1 and our understanding of the application and the OS/400 operating environment to properly adjust, or calibrate, the model.
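The *BATCHJOB classification rule described above can be sketched as a small decision function. The job and pool attributes below are hypothetical names chosen for illustration; they are not BEST/1 internals:

```python
# Sketch of the workload-type rule: *BATCHJOB only when every job in the
# workload is non-interactive, was active for at least 95% of the
# collection period, and ran in pools serving only non-interactive work.
def workload_type(jobs, pools, collection_secs):
    batch_like = all(
        not job["interactive"]
        and job["active_secs"] >= 0.95 * collection_secs
        and pools[job["pool"]]["noninteractive_only"]
        for job in jobs
    )
    return "*BATCHJOB" if batch_like else "*NORMAL"

# Four QZDAINIT server jobs, active for the whole 180-second collection in
# a pool serving only non-interactive work: BEST/1 defaults to *BATCHJOB,
# even though these jobs behave like servers waiting for client requests.
pools = {5: {"noninteractive_only": True}}
jobs = [{"interactive": False, "active_secs": 180, "pool": 5}] * 4
print(workload_type(jobs, pools, 180))   # *BATCHJOB
```

This is exactly why the manual change to *NORMAL is needed: the rule cannot tell a CPU-bound batch job from a server job that is merely waiting.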

9.2.2 Exercise 2: Calibrating the Model
You now have to verify workloads and objectives. Analysis showed that BEST/1′s calculations for some important ″base model″ parameters did not coincide with our real-life observations:

•  Total CPU utilization: 16.4 (measured) versus 98.4 (predicted).
•  Non-interactive transactions per hour: 867 (measured) versus 9221 (predicted).

The discrepancies between the predicted and measured disk I/O values are corrected when we adjust total CPU utilization and non-interactive transactions per hour in the calibration exercise.
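The calibration criterion stated earlier (predicted response times within .5 seconds of measured values, utilizations within 20% of each other) can be expressed as a simple check. This is a hypothetical helper for illustration only; the dictionary keys and structure are assumptions, not BEST/1 output:

```python
# Returns True when every predicted value is within calibration tolerance
# of its measured counterpart: 0.5 seconds for response times, 20%
# relative difference for utilizations and rates.
def calibrated(measured, predicted, rsp_keys=("int_rsp_time",)):
    for key, m in measured.items():
        p = predicted[key]
        if key in rsp_keys:
            if abs(p - m) > 0.5:
                return False
        elif m != 0 and abs(p - m) / m > 0.20:
            return False
    return True

# The base model fails on CPU utilization alone (16.4 measured versus
# 98.4 predicted), which is why calibration is required.
print(calibrated({"cpu_util": 16.4}, {"cpu_util": 98.4}))   # False
```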

9.2.2.1 Verifying Workloads and Objectives
You may make the BEST/1 adjustments for non-interactive transactions per hour and total CPU utilization in either order. We chose to do non-interactive transactions first, and then total CPU utilization.

1. From the Work with BEST/1 Model display, select Option 1 (Work with workloads). This shows all three workloads we defined - ODBCWL, PFRMON, and QDEFAULT.

2. On the Work with Workloads menu, select Option 5 (Display) to display the ODBCWL workload.

3. On the Work with Functions menu, select Option 5 (Display) to display the ODBCWL function.

                               Display Function

    Workload . . . . :   ODBCWL      ODBC BASE CALIBRATED V31 on V36
    Function . . . . :   ODBCWL      Function of ODBCWL

    Type options, press Enter.
      5=Display

         Transaction   Pool             Transactions   CPU Time    Total
    Opt  Type          ID    Priority   per Function   (Secs)      I/Os
     _   2             5     20               133.33      2.891      53.3
     _   2             2     0                   .17    127.521        .0
     _   2             1     0                   .17     58.105    8351.9
                                                                   Bottom
    Transaction Type: 1=Interactive, 2=Non-interactive
    F3=Exit   F12=Cancel


Adjusting Transactions

Note that the number of transactions per function does not match what we said it should be: ″Transactions per client per hour = 200″, as shown in Table 20 on page 314. The Display Function display also shows that BEST/1 determined we ran most of the application functions and transactions in main storage pool 5. This correlates with our sample Component Report page, shown in Figure 71 on page 316. Additionally, BEST/1 always takes some of the unassigned OS/400 system work (pool 2) and Licensed Internal Code (LIC) work (pool 1) and ″adds″ a portion to every BEST/1 workload, because some system and LIC work is required to, for example, manage the job and perform disk and communications I/O. What we have to do is make sure the user application transaction values (in this case, pool 5) reflect what we have defined as a non-interactive transaction.

4. Go back to the Work with BEST/1 Model display and select Option 2 (Specify objectives and active jobs).

                      Specify Objectives and Active Jobs

 Model/Text:   ODBC BASE CALIBRATED                         V31 on V36

 Type changes, press Enter.

            Connect   Workload    Active   ----Interactive----   Non-inter
 Workload   Type      Type        Jobs     Rsp Time   Thruput    Thruput
 ODBCWL     *LOCAL    *BATCHJOB   ___6.0   ____.0     ____0      ______0
 PFRMON     *LOCAL    *NORMAL     ___1.0   ____.0     ____0      ______0
 QDEFAULT   *LOCAL    *NORMAL     ____.1   ____.0     ____0      ______0
 QDEFAULT   *LAN      *NORMAL     ___1.0   ____.0     ____0      ______0

                                                                   Bottom
 F3=Exit   F11=Show all quantities   F12=Cancel
 F15=Sort by connect type   F19=Work with workloads

The number of jobs assigned by BEST/1 to ODBCWL should match the number of jobs we know we ran. On the Specify Objectives and Active Jobs menu, you see 6 active jobs for ODBCWL, *LOCAL for Connect type, and *BATCHJOB for Workload type. All three of these values need to be evaluated.

Six active jobs: Figure 71 on page 316 showed six active QZDAINIT jobs, but only four of them did ″real work″ consuming CPU and disk resources. BEST/1 sees six active jobs, and we need to change the value to four so that BEST/1 can properly spread CPU and disk resources across four jobs instead of six. We make the appropriate change later in this exercise.

*LOCAL connect type:


The Performance Monitor data does not indicate these QZDAINIT jobs were active over a communications or LAN line. Therefore, BEST/1 has to assume *LOCAL. Since this ″definition″ is not critical to our CPU, disk, and main storage resource utilization capacity planning efforts, we leave Connect type as *LOCAL.

Workload type: BEST/1 has assigned our ODBCWL workload the *BATCHJOB attribute rather than the typical *NORMAL attribute. This is because when BEST/1 detects that all ″user work″ ran in its own storage pool (pool 5) and only non-interactive work ran in that pool, it automatically assigns the workload type *BATCHJOB. Because we have already discussed what the *BATCHJOB workload type does to BEST/1′s predicted CPU utilization for server jobs, and the ODBC database server jobs are exactly these kinds of jobs, we must change the workload type to *NORMAL. Remember that BEST/1 defaults to making its CPU utilization recommendations and conclusions based on the CPU utilization of jobs running at priority 20 or higher. The ODBC database server jobs run at priority 20 when the IBM-shipped defaults for the subsystem QSERVER routing entry for QZDAINIT and the class description QSYS/QPWFSERVER are used.

5. Go back to the Work with BEST/1 Model display and press F22 (Calibrate model) to start the manual calibration mode.

   Model Calibration Note: While you are in manual calibration mode, any changes to workload type, active jobs, or functions per user do not affect total workload transactions per hour. This is because the values for transactions per function are modified to offset changes to either active jobs or functions per user. In addition, all paging coefficients are recalculated during analysis.

6. Select Option 2 (Specify objectives and active jobs).

7. Change the number of Active Jobs for ODBCWL to 4.

8. From the Work with BEST/1 Model display, select Option 1 (Work with workloads).

9. Select Option 5 (Display) to display the ODBCWL workload.

10. Select Option 5 (Display) to display the ODBCWL function.

                               Display Function

    Workload . . . . :   ODBCWL      ODBC BASE CALIBRATED V31 on V36
    Function . . . . :   ODBCWL      Function of ODBCWL

    Type options, press Enter.
      5=Display

         Transaction   Pool             Transactions   CPU Time    Total
    Opt  Type          ID    Priority   per Function   (Secs)      I/Os
     _   2             5     20               200.00      2.891      53.3
     _   2             2     0                   .25    127.521        .0
     _   2             1     0                   .25     58.105    8351.9
                                                                   Bottom
    Transaction Type: 1=Interactive, 2=Non-interactive
    F3=Exit   F12=Cancel


As you can see, changing ODBCWL from six active jobs to four causes BEST/1 to properly identify the 200 transactions per function. Because Transactions per function for pool 5 went from 133.33 to 200.00, the system overhead for these transactions increased slightly, as you would expect.

11. Go back to the Work with BEST/1 Model menu and press F22 to return to ″What if″ mode.
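The rescaling BEST/1 performed here follows from the calibration invariant: total workload throughput is held constant, so correcting the active-job count rescales transactions per function. A minimal sketch (the one-function-per-job split is an assumption for illustration):

```python
# Total ODBCWL throughput is fixed by the measured data; transactions per
# function is simply that total divided across the active jobs.
total_tns_per_hour = 800.0

def tns_per_function(active_jobs):
    return total_tns_per_hour / active_jobs

print(round(tns_per_function(6), 2))  # 133.33 (BEST/1's initial split)
print(tns_per_function(4))            # 200.0  (after correcting active jobs)
```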

9.2.2.2 Calibrating the *BATCHJOB Workloads
As discussed previously, BEST/1 assigns the *BATCHJOB workload type to jobs that meet certain characteristics. We do not recommend using *BATCHJOB if the predicted throughput is not within 10% of the measured transaction throughput. In this case, the difference between predicted and measured throughput is enormous, so we must change the workload type to *NORMAL.

1. From the Work with BEST/1 Model display, select Option 1 (Work with workloads).

2. Select Option 2 (Change) for the ODBCWL workload.

3. Change the workload type to *NORMAL.

4. Press Enter twice to go back to the Work with BEST/1 Model menu.

5. Select Option 5 (Analyze current model). We want to find out if our calibration efforts for ODBCWL (Transactions per Function, and workload type *NORMAL) bring the Predicted values of the base model within an acceptable range of the Measured values.

6. On the Work with Results menu, select Option 5 (Display) to display the Analysis Summary Report.

7. Press F11 (Compare against measured values).
                       Compare Against Measured Values

                                                  Measured    Predicted
 Total CPU util  . . . . . . . . . . . . . :        16.4         16.1
 Disk IOP util . . . . . . . . . . . . . . :         2.9          2.4
 Disk arm util . . . . . . . . . . . . . . :         2.2          1.9
 Disk IOs per second . . . . . . . . . . . :        26.0         21.5
 LAN IOP util  . . . . . . . . . . . . . 1 :         4.8          1.8
 LAN line util . . . . . . . . . . . . . 1 :          .8           .3
 WAN IOP util  . . . . . . . . . . . . . 1 :         1.2           .8
 WAN line util . . . . . . . . . . . . . . :          .0           .0
 Interactive:
   CPU util  . . . . . . . . . . . . . . . :         2.1          2.1
   Int rsp time (seconds)  . . . . . . . . :          .1           .1
   Transactions per hour . . . . . . . . . :         763          763
 Non-interactive thruput . . . . . . . . . :         867          865

 F3=Exit   F6=Print   F9=Work with spooled files   F12=Cancel

The Measured and Predicted values for CPU, disk, and transactions per hour are now close enough that we can consider this model a good base for growth analysis. However, the communication values marked 1 (LAN IOP utilization, LAN line utilization, and WAN IOP utilization) are not very close to each other. We chose not to be concerned about LAN IOP utilization or LAN line utilization.

We do, however, have an objective to represent ODBC client/server capacity planning output not only in system resource utilization and transactions per hour, but also in terms of response time, because most customers view performance from this viewpoint. So we also adjust BEST/1′s representation of the response time perceived by the client workstation operator in the following steps. We do this ″manually,″ since the performance monitor data does not identify client/server (non-interactive) transactions and response times.

8. On the Work with Results menu, select Option 5 (Display) for the Workload Report. Then use F11 (Response time detail) to determine BEST/1′s view of the non-interactive response time components associated with the ODBCWL workload.
                           Display Workload Report

 Period:   Analysis

                                Total     ------Rsp Time Secs spent in------
 Workload   Type   Connect     Rsp Time    CPU      I/O    Pool   Comm  Other
 QDEFAULT   1      *LOCAL          .2       .1       .0     .0     .0     .0
 QDEFAULT   1      *LAN            .6       .1       .0     .0     .4     .0
 ODBCWL     2      *LOCAL          .9 2     .5       .4     .0     .0     .0
 PFRMON     2      *LOCAL       211.6     27.4    183.6     .0     .0     .0
 QDEFAULT   2      *LOCAL        14.0      2.9     11.1     .0     .0     .0
 QDEFAULT   2      *LAN          14.0      2.9     11.1     .0     .0     .0
                                                                      Bottom
 Type: 1=Interactive, 2=Non-interactive, 3=*BATCHJOB
 Performance estimates -- Press help to see disclaimer.
 F3=Exit   F11=Workload summary   F12=Cancel
This ODBCWL .9 second response time 2 should be considered as an internal response time because it does not take into account the communication line time nor the client time. Recall that the performance monitor is not able to assign WAN or LAN communications bytes sent and received to client/server applications, though it does do this for 5250/3270-type workloads and transactions. If you want to be more accurate, you should try to find out the “User Perceived Response Time.” The user response time is:
User Perceived Response Time = Server Internal Time + Communications Time + Client Time

In our case and based on our measurements, we know that the user response time is 3.5 to 4 seconds. So there is a difference of approximately 3 seconds. We manually add the 3 seconds to the model to get a reasonable response time.
9. Go back to the Work with BEST/1 Model menu and select Option 1 (Work with workloads).
10. Select Option 2 (Change) for the ODBCWL workload.
11. Press F6 (Work with functions).
12. Select Option 2 (Change) for the ODBCWL function.
13. Add an additional delay of 3 seconds.


                              Change Function
Workload . . . . :   ODBCWL          ODBC BASE V31 on V36
Function . . . . :   ODBCWL

Change fields, press Enter.
  Function text . . . . . . . . . .   Function of ODBCWL
  Key/Think time  . . . . . . . . .   N/A       Seconds
  Additional delays . . . . . . . .   3.0       Seconds

Transaction   Pool              Transactions   CPU Time    Total
   Type        ID    Priority   per Function    (Secs)      I/Os
     2          5       20          200.00       2.891       53.3
     2          2        0             .25     127.521         .0
     2          1        0             .25      58.105      8351.9
                                                                       Bottom
Transaction Type: 1=Interactive, 2=Non-interactive
F3=Exit   F6=Work with transactions   F12=Cancel

14. Go back to the Work with BEST/1 Model menu and select menu Option 5 to analyze the model again.
15. On the Work with Results menu, display the Workload Report.
16. Use F11 (Response time detail) to again show the response time components for workload ODBCWL.
                          Display Workload Report
Period:  Analysis
                              Total    --------Rsp Time Secs spent in--------
Workload    Type  Connect   Rsp Time     CPU     I/O    Pool    Comm   Other
QDEFAULT      1   *LOCAL        .2        .1      .0      .0      .0     .0
QDEFAULT      1   *LAN          .6        .1      .0      .0      .4     .0
ODBCWL        2   *LOCAL       3.9        .5      .4      .0      .0    3.0
PFRMON        2   *LOCAL     211.0      27.4   183.6      .0      .0     .0
QDEFAULT      2   *LOCAL      14.0       2.9    11.1      .0      .0     .0
QDEFAULT      2   *LAN        14.0       2.9    11.1      .0      .0     .0
                                                                       Bottom
Type: 1=Interactive, 2=Non-interactive, 3=*BATCHJOB
Performance estimates -- Press help to see disclaimer.
F3=Exit   F10=Re-analyze   F11=Workload summary   F12=Cancel
F15=Configuration menu    F17=Analyze multiple points   F24=More keys

You can see that BEST/1 has added the 3.0 seconds to the Other heading, based on our "Additional delays" change of 3.0 seconds to the ODBCWL workload function definition. We can now consider this model as a base to do growth analysis.
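The delay calibration just performed is simple arithmetic. The following is a minimal sketch, using only the values from the text (0.9 seconds internal response time, roughly 3.5 to 4 seconds measured at the client); the function name is illustrative, not part of BEST/1:

```python
# Sketch of the "additional delay" calibration arithmetic described above.
# The user-perceived response time observed at the client workstation
# exceeds the server-internal time BEST/1 reports, so the difference is
# modeled as a fixed delay on the workload function.

def additional_delay(user_perceived_secs, server_internal_secs):
    """Delay to add to the BEST/1 function so modeled response time
    approximates what the workstation operator actually sees."""
    return user_perceived_secs - server_internal_secs

measured_user_time = 3.9   # within the 3.5 to 4.0 second measurement range
best1_internal_time = 0.9  # ODBCWL internal response time from the report

delay = additional_delay(measured_user_time, best1_internal_time)
print(round(delay, 1))  # → 3.0, the value keyed into "Additional delays"
```

The same subtraction applies if you later re-measure at the client: only the fixed delay changes, not the server components BEST/1 derives from the performance monitor data.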

9.2.3 Exercise 3: Saving the ODBCWL User-Defined Workload
At this point, we can consider that the ODBCWL workload accurately reflects the ODBC client/server workload. We want to save this workload so that we can use it later for modeling growth of this workload and for modeling it with other measured workloads. This is especially important since we had to make some manual adjustments to the model and do not want to repeat them every time we model this workload, either separately or with other applications that have been measured.
1. Go back to the Work with BEST/1 Model menu and select Option 1 (Work with Workloads).


2. Select Option 8 (Save workload to workload member) for ODBCWL workload and save the workload.
                       Save Workload to Workload Member
Change values if desired, press Enter.
  Member . . . . . . . . . . . .   ODBCWL         Name
    Library  . . . . . . . . . .   Your_library   Name
  Text . . . . . . . . . . . . .   ODBCWL calibrated workload
  Replace  . . . . . . . . . . .   N              Y=Yes, N=No
  CPU architecture . . . . . . .   *CISC          *CISC, *RISC

F12=Cancel

Note that the RISC versions of BEST/1 have the additional parameter "CPU architecture ... *CISC or *RISC." This enables BEST/1 to perform growth analysis more accurately by knowing whether the workload was based on a CISC system or a RISC system. If you have a BEST/1 model based on CISC system performance monitor data that includes applications with excessive disk I/O operations compared to CPU utilization, or a CPU-intensive application (no or very few disk I/O operations), BEST/1 provides modeling options to more accurately portray the CISC work on a RISC system. For example, a CISC workload that is CPU intensive should run faster on a RISC system with similar RPR values. For more information on these "disk I/O intensive" and "CPU intensive" modeling considerations, refer to Appendix D, “BEST/1 CISC to RISC Conversion Example” on page 489. In most cases, taking the workload type default of *NORMAL results in satisfactory modeling results, but you should review the appendix to understand the disk I/O and CPU considerations.
The workload member is saved to a file called QACYWKLS in your library. You can use this workload on any other system, or you can add this workload to any other model from any other system. The following steps show how to add an existing workload when building a new model. Do not actually add the saved workload to the current exercise because it would automatically double the workload in the next capacity planning exercises.
a. To add a saved workload, select Option 1 (Work with workloads) from the Work with BEST/1 Model menu.
b. Press F6 (Add saved workload) and you are prompted with all the saved workloads you have available to add. Once you have added a workload, you can work with it in the same way as any other workload.
3. On the Work with BEST/1 Model menu, press F15 (Save the current model), changing the name (Member) of the model to CSPMDLCAL and changing the text field as shown to indicate the model has been calibrated.
We will use CSPMDLCAL in the following growth exercises.


                             Save Current Model
Change values if desired, press Enter.

Save to Model member:
  Member . . . . . . . . . . .   CSPMDLCAL      Name
    Library  . . . . . . . . .   your library   Name
  Text . . . . . . . . . . . .   ODBC BASE CALIBRATED V31 on V36
  Replace  . . . . . . . . . .   N              Y=Yes, N=No

Externally described member information:
  Save . . . . . . . . . . . .   N              Y=Yes, N=No
  Member . . . . . . . . . . .   *MEMBER        Name, *MEMBER
    Library  . . . . . . . . .   *LIB           Name, *LIB
  Text . . . . . . . . . . . .   ODBC BASE CALIBRATED V31 on V36
  Replace  . . . . . . . . . .   *REP           Y=Yes, N=No, *REP

Generating an "Externally described member" is for experienced BEST/1 users who want to save the model in a format that can be downloaded to a personal computer for later processing by user-written programs. Discussing these capabilities is beyond the scope of this redbook, so we select the Save No option. This ends the exercise on building a model that reflects your measured data. Next, we are going to do a "what if" growth analysis of this client/server application.

9.3 Growth Analysis
Now that we have a calibrated model, we can do various "what if" exercises, including workload growth analysis using different system types (CISC to CISC growth, CISC to RISC growth, and traditional model to server model).

9.3.1 Exercise 1: Increasing the ODBCWL Number of Users
We grow the ODBCWL workload by increasing the number of currently active users.
1. Get to the Work with BEST/1 Models display and select your calibrated model. Select Option 5 (Work with) for CSPMDLCAL.
                          Work with BEST/1 Models
Library . . . . .   your library   Name

Type options, press Enter.
  1=Create   3=Copy   4=Delete   5=Work with   6=Print   7=Rename

Opt   Model        Text                                Date       Time
 _    _____
 _    CSPMDLBAS    ODBC BASE (V31 on V36)              09/08/96   14:40:14
 5    CSPMDLCAL    ODBC BASE CALIBRATED (V31 on V36)   09/08/96   15:01:02

Command ===>
F3=Exit   F4=Prompt   F5=Refresh   F9=Retrieve   F12=Cancel
F15=Sort by model   F16=Sort by text   F19=Sort by date and time


2. From the Work with BEST/1 Model menu, select Option 7 (Specify workload growth and analyze model). You see the Specify Growth of Workload Activity display. Make the changes as specified in the following steps.
3. Change the Determine new configuration parameter to N (N=No). This tells BEST/1 to perform the growth analysis specified, but not to automatically add main storage or disks, or upgrade to a faster CPU. We first want to see what happens with our current CPU (a 9406 Model D60), on which the performance data was collected.
4. Select five periods to analyze. This is usually sufficient for growth analysis and reduces the CPU impact of a session running BEST/1. BEST/1 CPU impact is a concern on CISC systems, but less so on RISC systems, because BEST/1 was recoded in ILE C for RISC, which is much faster on those systems.
5. You can decide to grow all of the workloads, or only ODBCWL if you only plan to increase the number of users of that particular application. The latter is what we do in this exercise. Press F11 (Specify growth by workload).
6. Fill in the Percent Change in Workload Activity for every period to get 4, 8, 12, 16, and 20 users. The following display shows all the parameter changes specified in this exercise.
                     Specify Growth of Workload Activity
Type information, press Enter to analyze model.
  Determine new configuration . . . . . . . . . .   N    Y=Yes, N=No
  Periods to analyze  . . . . . . . . . . . . . .   5    1 - 10
  Period 1  . . . . . . . . . . . . .   4 users    Name
  Period 2  . . . . . . . . . . . . .   8 users    Name
  Period 3  . . . . . . . . . . . . .   12 users   Name
  Period 4  . . . . . . . . . . . . .   16 users   Name
  Period 5  . . . . . . . . . . . . .   20 users   Name

              ------Percent Change in Workload Activity------
Workload      Period 1   Period 2   Period 3   Period 4   Period 5
ODBCWL              .0      100.0       50.0       33.3       25.0
PFRMON              .0         .0         .0         .0         .0
QDEFAULT            .0         .0         .0         .0         .0
                                                              Bottom
F3=Exit   F11=Specify total growth   F12=Cancel
F13=Display periods 6 to 10   F17=Analyze using ANZBESTMDL

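The percent-change values on this display are period over period, not cumulative, which is why reaching 4, 8, 12, 16, and 20 users takes 0, 100, 50, 33.3, and 25 percent. A small sketch of that arithmetic (the function name is illustrative):

```python
# Sketch of how the per-period "Percent Change in Workload Activity"
# values were derived. BEST/1 applies growth period over period, so
# going from 4 to 8 users is +100%, from 8 to 12 is +50%, and so on.

def percent_changes(user_counts):
    """Period-over-period percent growth for a list of user counts."""
    changes = [0.0]  # period 1 is the measured base: no growth
    for prev, cur in zip(user_counts, user_counts[1:]):
        changes.append(round((cur - prev) / prev * 100, 1))
    return changes

print(percent_changes([4, 8, 12, 16, 20]))
# → [0.0, 100.0, 50.0, 33.3, 25.0]
```

The same shrinking percentages appear again in the RISC exercise later in this chapter, because the user counts grow by a constant 4 users each period.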
7. Press Enter. Status messages are issued as growth analysis is performed. When analysis is complete, the following display is shown.


                              Work with Results
Printed report text . . . . . .   ODBC BASE CALIBRATED (V31 on V36)

Type options, press Enter.
  5=Display   6=Print

Opt   Report Name
 5    Analysis Summary
 _    Recommendations
 _    Workload Report
 _    ASP and Disk Arm Report
 _    Disk IOP and Disk Arm Report
 _    Main Storage Pool Report
 _    Communications Resources Report
 _    All of the above
                                                                       Bottom
F3=Exit   F12=Cancel   F14=Select saved results   F15=Save current results
F18=Graph current results   F19=Append saved results   F24=More keys
Model has been analyzed

8. Review all of the reports to find the non-interactive response time for the ODBCWL workload and the total expected CPU utilization. Always review the Recommendations report when examining the results of growth analysis. We show only the "Display Analysis Summary" and the extended "Display Workload Report" for four users through 20 users.

                          Display Analysis Summary
           CPU     Stor    CPU    -Disk IOPs-   -Disk Ctls-   -Disk Arms-
Period     Model   (MB)    Util   Nbr    Util   Nbr    Util   Nbr    Util
4USERS     D60       80    16.1    3      2.4    13     .4     27     1.9
8USERS     D60       80    24.6    3      3.8    13     .6     27     3.1
12USERS    D60       80    33.5    3      5.9    13     .9     27     4.8
16USERS    D60       80    42.1    3      7.6    13    1.1     27     6.1
20USERS    D60       80    50.7    3      9.4    13    1.4     27     7.6
                                                                       Bottom
           ----Inter Rsp Time----   -------Inter--------   -----Non-Inter-----
Period     Local    LAN     WAN     CPU Util   Trans/Hr    CPU Util   Trans/Hr
4USERS       .2      .6      .0        2.1        763        14.0        865
8USERS       .2      .6      .0        2.1        763        22.5       1667
12USERS      .2      .6      .0        2.1        763        31.3       2469
16USERS      .2      .6      .0        2.1        763        39.9       3270
20USERS      .2      .7      .0        2.1        763        48.6       4072
                                                                       Bottom
F3=Exit   F10=Re-analyze   F11=Alternative view   F12=Cancel
F15=Configuration menu    F17=Analyze multiple points   F24=More keys

As you can see, the current Model D60 system (Relative internal Performance Rating, or RPR, of 8.1), disk configuration, and main storage configuration can handle up to 20 users of this ODBC order entry application.


                          Display Workload Report
Period: 4USERS
                      CPU    Thruput    -------Response Times (Secs)-------
Workload     Type     Util   per Hour   Internal   Local     LAN     WAN
QDEFAULT       1       2.1       763        .1       .2       .6      .0
ODBCWL         2       8.3       800       3.9      3.9       .0      .0
PFRMON         2       1.5         0     211.0    211.0       .0      .0
QDEFAULT       2       4.3        60      14.0     14.0     14.0      .0
-------------------------------------------------------------------------
Period: 8USERS
                      CPU    Thruput    -------Response Times (Secs)-------
Workload     Type     Util   per Hour   Internal   Local     LAN     WAN
QDEFAULT       1       2.1       763        .2       .2       .6      .0
ODBCWL         2      16.8      1600       4.2      4.2       .0      .0
PFRMON         2       1.4         0     202.8    202.8       .0      .0
QDEFAULT       2       4.3        60      13.9     13.9     13.9      .0
-------------------------------------------------------------------------
Period: 12USERS
                      CPU    Thruput    -------Response Times (Secs)-------
Workload     Type     Util   per Hour   Internal   Local     LAN     WAN
QDEFAULT       1       2.1       763        .2       .2       .6      .0
ODBCWL         2      25.6      2400       4.6      4.6       .0      .0
PFRMON         2       1.4         0     197.2    197.2       .0      .0
QDEFAULT       2       4.3        60      14.0     14.0     14.0      .0
-------------------------------------------------------------------------
Period: 16USERS
                      CPU    Thruput    -------Response Times (Secs)-------
Workload     Type     Util   per Hour   Internal   Local     LAN     WAN
QDEFAULT       1       2.1       763        .2       .2       .6      .0
ODBCWL         2      34.3      3199       4.7      4.7       .0      .0
PFRMON         2       1.4         0     192.1    192.1       .0      .0
QDEFAULT       2       4.2        60      14.2     14.2     14.2      .0
-------------------------------------------------------------------------
Period: 20USERS
                      CPU    Thruput    -------Response Times (Secs)-------
Workload     Type     Util   per Hour   Internal   Local     LAN     WAN
QDEFAULT       1       2.1       763        .2       .2       .7      .0
ODBCWL         2      43.0      3999       4.9      4.9       .0      .0
PFRMON         2       1.4         0     188.2    188.2       .0      .0
QDEFAULT       2       4.2        60      14.4     14.4     14.4      .0
-------------------------------------------------------------------------

While the system resources can sustain up to 20 users, you can see that the predicted ODBCWL workload response time increases from 3.9 seconds to 4.9 seconds at 20 users. Since 3.0 seconds of this response time was added to account for line speed and client processor speed, the 1.9 seconds of response time within the AS/400 appears acceptable from the AS/400 resource viewpoint. Remember, 3.9 seconds total response time was our starting point for the base performance data collection. At some point the customer may consider the 3.9 to 4.9 second response times modeled by BEST/1 to be too high. BEST/1 cannot model the client workstation, so before replacing all the client workstations, you should get a single faster client workstation and do some performance testing to verify that client workstation speed is the primary differentiator in response time. In this redbook we consider a response time of 3.9 to 4.9 seconds for a complete order acceptable. This assumption continues in the growth exercises.
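Since the 3.0-second calibration delay is a fixed component, subtracting it from each modeled response time isolates the portion that actually grows with load. A sketch using the figures above (variable names are illustrative):

```python
# Sketch of the response-time decomposition discussed above. Each modeled
# ODBCWL response time includes the fixed 3.0-second delay (line speed plus
# client processor time) added during calibration; subtracting it leaves
# the AS/400 server portion, which is what grows with the number of users.

FIXED_DELAY = 3.0  # seconds added for communications and client time

modeled_resp = {4: 3.9, 8: 4.2, 12: 4.6, 16: 4.7, 20: 4.9}  # from the report

server_portion = {users: round(rt - FIXED_DELAY, 1)
                  for users, rt in modeled_resp.items()}
print(server_portion)
# → {4: 0.9, 8: 1.2, 12: 1.6, 16: 1.7, 20: 1.9}
```

At 20 users, the server portion of 1.9 seconds matches the text's observation that the growth from 3.9 to 4.9 seconds happens entirely inside the AS/400.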


Changing the application to use blocked insert processing or stored procedures should also improve order processing performance, but these programming techniques cannot be modeled. The following graphs show the evolution of CPU utilization and non-interactive response time when growing the number of users within the ODBCWL workload. The graphs shown in this and other exercises within this chapter were not generated by BEST/1. BEST/1 does provide several graphic output options, but does not have a graph format defined for non-interactive response time. The data from the BEST/1 printed reports of the capacity planning analysis was keyed manually into Freelance Graphics, which was used to produce these figures. Figure 75 shows total CPU utilization for all workloads. Figure 76 shows response times for the ODBCWL workload.

Figure 75. BEST/1 Graph of CPU Utilization

Figure 76. BEST/1 Graph of Response Time


The following is a short description of how to use BEST/1 graphics support, in case you want to use it. To print the CPU utilization graphic per period, follow these steps:
•  On the Work with Results display, press F18 (Graph current results).
•  Specify your library and select Option 8 (Print graph) for the CPUBYWKL (CPU utilization by workload) member.
•  Select the printer device according to the printer you have.
•  The spooled file is now ready to be printed.

After generating any printed or graphic output, you may save the results for later display with other growth or hardware-change results.
9. Press F15 (Save current results) to save the current results of your growth analysis.
                            Save Current Results
Change values if desired, press Enter.

Save to Results member:
  Member . . . . . . . . . . .   ODBCMDLRE1     Name
    Library  . . . . . . . . .   Your_library   Name
  Text . . . . . . . . . . . .   Result growing from 4 to 20 users
  Replace  . . . . . . . . . .   N              Y=Yes, N=No
  Period name  . . . . . . . .   4 users        Name

Externally described member information:
  Save . . . . . . . . . . . .   N              Y=Yes, N=No
  Member . . . . . . . . . . .   *MEMBER        Name, *MEMBER
    Library  . . . . . . . . .   *LIB           Name, *LIB
  Text . . . . . . . . . . . .   ODBC MDL Growth Results 1
  Replace  . . . . . . . . . .   *REP           Y=Yes, N=No, *REP

F12=Cancel


10. Since we previously saved the calibrated model, there is no need to save the current model. However, in another scenario we might have made additional changes to the calibrated model, or not have saved it at all, so the Save Current Model display is shown again in case you want to save the model now. If you saved the model before, you must change the Replace parameter to Y=Yes on the following display. If you made no changes, saving with Replace=Y works fine as well.
                             Save Current Model
Change values if desired, press Enter.

Save to Model member:
  Member . . . . . . . . . . .   CSPMDLCAL      Name
    Library  . . . . . . . . .   Your_library   Name
  Text . . . . . . . . . . . .   ODBC BASE CALIBRATED V31 on V36
  Replace  . . . . . . . . . .   N              Y=Yes, N=No

Externally described member information:
  Save . . . . . . . . . . . .   N              Y=Yes, N=No
  Member . . . . . . . . . . .   *MEMBER        Name, *MEMBER
    Library  . . . . . . . . .   *LIB           Name, *LIB
  Text . . . . . . . . . . . .   ODBC MDL BASE - Calibrated
  Replace  . . . . . . . . . .   *REP           Y=Yes, N=No, *REP

F12=Cancel

11. Press Enter twice to go back to the main BEST/1 menu. If you are presented with the Exit BEST/1 Model menu, you can select Option 2 (Exit without
saving) provided you are sure you have saved the calibrated model at some time.

9.3.2 Exercise 2: Manually Modeling CISC Traditional to RISC Server
Now that you have seen the effect of adding users to your system, you probably want to know the effect of changing to another AS/400 system, or to a server model. BEST/1 growth analysis normally models a traditional system to a traditional system or a server system to a server system. It does not automatically upgrade between a traditional system and a server system. In this exercise, we manually tell BEST/1 to model our CISC data from a traditional system (D60) to a RISC server model. We do this because the assumption is that we are running almost exclusively the client/server order entry application using ODBC and very little interactive work - just what a server model is for!
1. From the BEST/1 for the AS/400 menu, select Option 1 (Work with BEST/1 models).
2. Select Option 5 (Work with) to work with the CSPMDLCAL model.
3. Select menu Option 10 (Configuration menu).
4. Select menu Option 1 (Change CPU and other resource values).
5. Prompt (F4) on the CPU Model parameter.
6. Select (1) the 2111 server model (40S). This is a 40S processor speed feature announced in September 1996. It has a BEST/1 RPR of 5.7 for interactive work and 17.10 for non-interactive work. Although 5.7 is less than the D60's 8.12 rating, 17.10 is slightly more than twice the D60 RPR for non-interactive work. BEST/1 automatically changes your 2111 main storage from the 80MB you had on the D60 to 64MB, the minimum on a 2111 system. You might expect BEST/1 to at least double your CISC main storage size on a RISC system. BEST/1 does not do this here because the performance data used to generate your calibrated ODBC model indicated the 80MB of main storage was hardly being used. Therefore, you must manually add storage to the model for your RISC system.
7. Place your cursor on the Main storage prompt and use F4 (List). Select 128MB. Because previous analysis of your D60 performance data indicated you were not using much main storage, we selected only 1.5 times your CISC main storage for this exercise.
8. Press Enter.
You get a message saying that the 7 local workstation controllers on your current D60 are too many for a 2111 system. The server model 40S (2111) supports up to 3 local workstation controllers. Because we have assumed there is little or no interactive work to be concerned about, we accept that three local workstation controllers are sufficient for twinax connections, since most of the client/server workload runs over a LAN.


9. Change the number of local workstation controllers to 3. If you know there is never a need for more than one workstation controller, you can set the number to 1.
10. Press Enter.
11. You now get the message Sum of main storage pool sizes (81920 KB) must equal system storage size because BEST/1 needs to distribute your additional storage (80MB to 128MB) to the existing storage pools defined in the measured data.
12. Press F17 (Re-scale pool sizes) to let BEST/1 adjust the pool sizes. In a real-life situation you may not want the additional main storage distributed the way BEST/1 does it; that is up to you. However, remember that when you actually install a new system or add more main storage, you must manually set the pool sizes. BEST/1 can only help you decide what pool sizes to use; it cannot change them on the actual system for you!
13. Press Enter. You get the message:

RISC CPU SELECTED. CHECK THAT EACH WORKLOAD HAS APPROPRIATE WORKLOAD TYPE
14. Read the help text for the message. This message calls your attention to the CISC to RISC considerations for migrating CPU-intensive or disk I/O-intensive applications to RISC. We include these considerations in Appendix D, “BEST/1 CISC to RISC Conversion Example” on page 489.
15. Press Enter. Now you get another set of messages, the first of which says:

Number of disk IOPs exceeds CPU limit of 1
If you roll through the remaining messages, you see a lot of disk and communication hardware that does not "migrate to" a RISC server model. You can complete a lot of manual delete and add hardware configuration steps with BEST/1, or...
16. On the Work with BEST/1 Model display, select 10 (Configuration menu). The following display is shown.


                               Configuration
CPU Model . . . . . . . . . :  2111       Main stor (MB)  . . . . . :  128
Disk ASPs . . . . . . . . . :     1       Main stor pools . . . . . :    5
Disk IOPs . . . . . . . . . :     3       Comm IOPs . . . . . . . . :    3
Disk ctls . . . . . . . . . :    13       Comm lines  . . . . . . . :    2
Disk arms . . . . . . . . . :    27       Local WS ctls . . . . . . :    3
                                          LAN ctls  . . . . . . . . :    2
                                          WAN WS ctls . . . . . . . :    0

Select one of the following:
  1. Change CPU and other resource values
  2. Work with disk resources
  3. Edit ASPs
  4. Edit main storage pools
  5. Work with communications resources

Selection or command
===>
F3=Exit   F4=Prompt   F9=Retrieve   F12=Cancel
F13=Check configuration   F17=Correct configuration   F24=More keys

17. On the Configuration menu, select F17 (Correct configuration). Let BEST/1 do your work. You get the following summary of changes!
                            Configuration Changes
The following changes have been made to your configuration:
   1  2615 IOP(s) deleted
   1  918A IOP(s) created
   1  6112 IOP(s) deleted
  23  arm(s) removed from ASP 1
   1  6500 IOP(s) deleted
   1  6502 IOP(s) created
   4  2800 disk ctl(s) deleted
   4  6602 disk ctl(s) created
   4  6602 arm(s) created
   1  9337-020 disk ctl(s) deleted
   1  2626 communications IOP(s) removed
   1  2619 communications IOP(s) created

It would take a long time to make all these changes manually. Any time BEST/1 makes a major set of changes like this, make sure you review the BEST/1 Recommendations report; you may find you want to make a few manual changes.
18. Return to the Work with BEST/1 Model display.
19. Select menu Option 5 (Analyze current model).
20. Select Option 5 (Display) for the Analysis Summary report and compare measured versus predicted results (F11).


                       Compare Against Measured Values
                                          Measured    Predicted
Total CPU util  . . . . . . . . . . . :      16.4         7.7
Disk IOP util . . . . . . . . . . . . :       2.9         2.6
Disk arm util . . . . . . . . . . . . :       2.2         7.1
Disk IOs per second . . . . . . . . . :      26.0        21.8
LAN IOP util  . . . . . . . . . . . . :       4.8         1.8
LAN line util . . . . . . . . . . . . :        .8          .3
WAN IOP util  . . . . . . . . . . . . :       1.2         5.3
WAN line util . . . . . . . . . . . . :        .0          .0
Interactive:
  CPU util  . . . . . . . . . . . . . :       2.1         1.0
  Int rsp time (seconds)  . . . . . . :        .1          .1
  Transactions per hour . . . . . . . :       763         763
Non-interactive thruput . . . . . . . :       867         865

Observe that the 40S-2111 total CPU utilization is less than half that of the D60 (7.7 versus 16.4), that disk arm utilization is up (though still very low), and that interactive and non-interactive throughput are roughly equivalent. Essentially, you are doing the same amount of work but consuming less CPU resource.
21. Press Enter three times to go back to the Work with BEST/1 Model menu and select menu Option 7 (Specify workload growth and analyze model).
22. Press F11 (Specify growth by workload).
23. Enter the parameters as shown in the following display:
                     Specify Growth of Workload Activity
Type information, press Enter to analyze model.
  Determine new configuration . . . . . . . . . .   N    Y=Yes, N=No
  Periods to analyze  . . . . . . . . . . . . . .   5    1 - 10
  Period 1  . . . . . . . . . . . . .   4 users    Name
  Period 2  . . . . . . . . . . . . .   8 users    Name
  Period 3  . . . . . . . . . . . . .   12 users   Name
  Period 4  . . . . . . . . . . . . .   16 users   Name
  Period 5  . . . . . . . . . . . . .   20 users   Name

              ------Percent Change in Workload Activity------
Workload      Period 1   Period 2   Period 3   Period 4   Period 5
ODBCWL              .0      100.0       50.0       33.3       25.0
PFRMON              .0         .0         .0         .0         .0
QDEFAULT            .0         .0         .0         .0         .0
                                                              Bottom
F3=Exit   F11=Specify total growth   F12=Cancel
F13=Display periods 6 to 10   F17=Analyze using ANZBESTMDL

These are the same "growth parameters" we used previously when modeling the CISC D60 system.
24. Press Enter to analyze.
25. Review every result by typing 5 (Display) on the All of the above report line. Look especially at the Analysis Summary and the Workload Report. The Analysis Summary report is shown below.


                          Display Analysis Summary
            CPU          Stor    CPU    -Disk IOPs-  -Disk Ctls-  -Disk Arms-
Period      Model        (MB)    Util   Nbr   Util   Nbr   Util   Nbr   Util
4 users     40S  2111     128     7.7    2     2.6    4     .1     4     7.1
8 users     40S  2111     128    11.8    2     4.6    4     .1     4    12.5
12 users    40S  2111     128    16.2    2     7.6    4     .2     4    20.6
16 users    40S  2111     128    20.3    2     9.7    4     .2     4    26.4
20 users    40S  2111     128    24.5    2    12.0    4     .2     4    32.6
                                                                       Bottom
            ----Inter Rsp Time----   -------Inter--------   -----Non-Inter-----
Period      Local    LAN     WAN     CPU Util   Trans/Hr    CPU Util   Trans/Hr
4 users       .1      .5      .0        1.0        763         6.7        865
8 users       .1      .5      .0        1.0        763        10.8       1667
12 users      .1      .5      .0        1.0        763        15.2       2469
16 users      .1      .5      .0        1.0        763        19.3       3270
20 users      .1      .5      .0        1.0        763        23.5       4072
                                                                       Bottom
F3=Exit   F10=Re-analyze   F11=Alternative view   F12=Cancel
F15=Configuration menu    F17=Analyze multiple points   F24=More keys

The Workload report for all five growth periods is shown in the following display:
                          Display Workload Report
Period: 4 users
                      CPU    Thruput    -------Response Times (Secs)-------
Workload     Type     Util   per Hour   Internal   Local     LAN     WAN
QDEFAULT       1       1.0       763        .1       .1       .5      .0
ODBCWL         2       3.9       800       3.5      3.5       .0      .0
PFRMON         2        .7         0     121.1    121.1       .0      .0
QDEFAULT       2       2.0        60       7.9      7.9      7.9      .0
Period: 8 users
                      CPU    Thruput    -------Response Times (Secs)-------
Workload     Type     Util   per Hour   Internal   Local     LAN     WAN
QDEFAULT       1       1.0       763        .1       .1       .5      .0
ODBCWL         2       8.1      1600       3.8      3.8       .0      .0
PFRMON         2        .7         0     120.8    120.8       .0      .0
QDEFAULT       2       2.0        60       8.0      8.0      8.0      .0
Period: 12 users
                      CPU    Thruput    -------Response Times (Secs)-------
Workload     Type     Util   per Hour   Internal   Local     LAN     WAN
QDEFAULT       1       1.0       763        .1       .1       .5      .0
ODBCWL         2      12.5      2400       4.2      4.2       .0      .0
PFRMON         2        .7         0     125.3    125.3       .0      .0
QDEFAULT       2       2.0        60       8.5      8.5      8.5      .0
Period: 16 users
                      CPU    Thruput    -------Response Times (Secs)-------
Workload     Type     Util   per Hour   Internal   Local     LAN     WAN
QDEFAULT       1       1.0       763        .1       .1       .5      .0
ODBCWL         2      16.6      3199       4.2      4.2       .0      .0
PFRMON         2        .7         0     128.1    128.1       .0      .0
QDEFAULT       2       2.0        60       8.8      8.8      8.8      .0
Period: 20 users
                      CPU    Thruput    -------Response Times (Secs)-------
Workload     Type     Util   per Hour   Internal   Local     LAN     WAN
QDEFAULT       1       1.0       763        .1       .1       .5      .0
ODBCWL         2      20.8      3999       4.4      4.4       .0      .0
PFRMON         2        .7         0     133.0    133.0       .0      .0
QDEFAULT       2       2.0        60       9.3      9.3      9.3      .0


Note: While the interactive response times may be equal to or better on the server model, the non-interactive response time is significantly better, and at much lower CPU utilization. At low interactive CPU utilization levels, the RISC server models deliver faster response time than the interactive performance rating (RPR) indicates. You must be careful not to assume this improved performance is maintained as interactive job CPU utilization increases. See 9.4.1, “Impact of Interactive Work on Server Model Performance” on page 349 for more information on the impact of interactive work on server models. The following table shows a comparison of the total CPU utilization and ODBCWL workload response time for the same growth rates on the CISC D60 and the RISC 40S server model we used.
Table 21. CISC, RISC Growth Results Worksheet: D60, 40S Comparison
Workload Growth Desc                        CPU Utilization All Workloads   ODBCWL Resp Time
                                            by Period                       by Period
CISC GROWTH, NO UPGRADE                     16.1, 24.6, 33.5, 42.1, 50.7    3.9, 4.2, 4.6, 4.7, 4.9
MANUAL CISC TO RISC, DET. NEW CONFIG=NO      7.7, 11.8, 16.2, 20.3, 24.5    3.5, 3.8, 4.2, 4.2, 4.4
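The two utilization columns in Table 21 admit a rough consistency check. Assuming, as a simplification not stated in the text, that CPU utilization for the same work scales inversely with the non-interactive relative performance rating (D60 = 8.12, 40S-2111 = 17.10, both given earlier in this exercise), a short sketch:

```python
# Rough consistency check on Table 21, under the simplifying assumption
# that CPU utilization scales inversely with the non-interactive RPR.
# BEST/1's own model is more detailed, so small deviations are expected.

D60_RPR, S40_RPR = 8.12, 17.10  # non-interactive RPR values from the text

d60_util = [16.1, 24.6, 33.5, 42.1, 50.7]  # CISC growth, no upgrade
s40_util = [7.7, 11.8, 16.2, 20.3, 24.5]   # manual CISC to RISC

predicted_s40 = [round(u * D60_RPR / S40_RPR, 1) for u in d60_util]
print(predicted_s40)  # → [7.6, 11.7, 15.9, 20.0, 24.1]
```

The inverse-RPR estimate lands within a few tenths of a percent of the modeled 40S utilizations in every period, which supports the text's observation that the server model does the same amount of work at roughly half the CPU utilization.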

The following two graphs represent the two columns of information contained in Table 21. Figure 77 represents CPU utilization for all workloads and Figure 78 represents ODBCWL workload response time for all five growth periods for both the CISC D60 and the RISC 40S.

Figure 77. BEST/1 Graph of CPU Utilization


Figure 78. BEST/1 Graph of Response Time

You can save the results of this traditional-system versus server-system modeling for later analysis; however, this exercise does not show those steps. Remember to use a meaningful name for any saved results. This is especially important if you save multiple sets of results and want to compare them later.
Note: You just completed an exercise where you manually told BEST/1 to do growth analysis for a 40S server model instead of your original performance data CISC D60. The next exercise shows how to get BEST/1 to do growth analysis and "automatically" (with a little help from you) upgrade from your CISC traditional model to a RISC server model. If you exit BEST/1 after doing the manual upgrade to the 40S RISC server and later want to do the next exercise, you must be careful how you reply to the following BEST/1 exit display.

                             Exit BEST/1 Model
Type choice, press Enter.
  Option . . . . . . . . . . .   _     1=Save and exit
                                       2=Exit without saving
                                       3=Resume

F12=Cancel

You must choose Option 2 (Exit without saving). This means your CSPMDLCAL model continues to be a CISC D60, which is what we want to start with in the next exercise. If you choose Option 1 (Save and exit) and replace CSPMDLCAL with your current "working copy" of the model, the saved model remains calibrated, but the base system becomes the 40S RISC server system.


You can, of course, do the save and give a new model name, such as CSPMDL40S. However, if you replace the existing CISC model, you can no longer see and use the original CISC D60 in your capacity planning exercises. We use the original CISC D60 model in the next exercise.

9.3.3 Exercise 3: Automatically Upgrading CISC to RISC
This section shows how to let BEST/1 automatically upgrade a CISC model built on a traditional system to a RISC server system. As discussed under 9.3.2, “Exercise 2: Manually Modeling CISC Traditional to RISC Server” on page 336, you need to understand whether your CISC workload type is *NORMAL or *BATCHJOB, and whether you have a ″disk I/O intensive″ or ″CPU intensive″ workload. These CPU intensive and disk I/O intensive considerations are discussed in Appendix D, “BEST/1 CISC to RISC Conversion Example” on page 489, where an example of our ODBCWL workload is used.

1. Do one of the following:

Continue from the previous exercise by returning to the following BEST/1 for the AS/400 display:
BEST/1 for the AS/400

Select one of the following:
     1. Work with BEST/1 models
    10. Work with results

    50. About BEST/1
    51. Moving from MDLSYS to BEST/1
    60. More BEST/1 options

Selection or command
===> _________________________________________________________________

F3=Exit   F4=Prompt   F9=Retrieve   F12=Cancel

Issue the STRBEST command using F4:
Start BEST/1 (STRBEST)

Type choices, press Enter.

BEST/1 data library  . . . . . .   > your library   Name, *CURLIB
Performance data library . . . .   > PFRRES95       Name
Log member . . . . . . . . . . .     *NONE          Name, *NONE
Log library  . . . . . . . . . .     *BESTDTAL      Name, *BESTDTAL

F3=Exit F4=Prompt F5=Refresh F12=Cancel F13=How to use this display F24=More keys

The BEST/1 for the AS/400 display is shown. We previously did growth analysis for only the ODBCWL workload on the original CISC system. With growth rates of 100%, 50%, 33.3%, and 25%, we found that the current D60 could handle the growth. We also manually converted the D60 to a RISC 40S server model with processor feature 2111.


So this last exercise starts with the original CISC D60 but uses growth rates that cause you to grow out of the D60 capacities and almost automatically (not manually) upgrade to a RISC server system.

If you have the September 1996 BEST/1 PTFs applied to your system, BEST/1 automatically upgrades a CISC model to a RISC system when growth requires it. Prior to these PTFs, the BEST/1 user had to manually change the D60 CPU in the current model ″Upgrade to family″ parameter to *POWERAS and manually set each RISC system to ″Currently available=Yes,″ which was very tedious to say the least. The new PTFs set all RISC models to ″Currently available=Yes.″

If you are modeling a traditional system, all automatic growth to a RISC system defaults to a RISC traditional model. If you are modeling a server model system, all automatic growth to a RISC system is to a RISC server model. If you are modeling a traditional system and you want BEST/1 to automatically upgrade to a RISC server system, you must manually change your CISC CPU model ″Upgrade to family″ parameter to a ″Power Server″ value. Once you make this change, BEST/1 can automatically upgrade from a traditional system to a server system when doing growth modeling.

2. On the BEST/1 for AS/400 display, take Option 60 (More BEST/1 options).
3. Select Option 10 (Hardware characteristics).
4. Select Option 1 (Work with CPU models).
5. Find the measured data CISC CPU - D60.
6. For this CPU and model, select 2 (Change).
7. Ensure that the D60 specifies *POWERAS for ″Upgrade to family.″ It should if you are on V3R7, or on V3R1, V3R2, or V3R6 BEST/1 with the September 1996 BEST/1 PTFs installed that enable automatic CISC to RISC upgrades. Presuming you see *POWERAS, change the ″Upgrade to family″ parameter to *POWERSRV as shown below:
Change CPU Model

 CPU model . . . . . . . . . . . . . . . . :   D60
 Min/Max storage size (MB) . . . . . . . . :   64 / 192

 Type information, press Enter.
 System unit  . . . . . . . . . .   9406         9402, 9404, 9406
 Architecture . . . . . . . . . .   *CISC        *CISC, *RISC
 Relative performance (B10 = 1.0):
   Normal . . . . . . . . . . . .   8.12
   Server . . . . . . . . . . . .                (Blank if not Server)
 Number of processors . . . . . .   1
 Currently available  . . . . . .   N            Y=Yes, N=No
 Family . . . . . . . . . . . . .   *SYS         Name
 Upgrade to family  . . . . . . .   *POWERSRV    *NONE, name

                                              Minimum    Maximum
 Disk IOPs  . . . . . . . . . . . . . . . .      0          16
 Multifunction IOPs . . . . . . . . . . . .      1           1
                                                          More...
 F3=Exit   F6=Specify storage sizes   F9=Specify connections to disk IOPs
 F11=Specify connections to disk drives   F12=Cancel   F24=More keys


8. Press Enter.
9. Return to the BEST/1 for the AS/400 display.
10. Select menu Option 1 (Work with BEST/1 models).
11. Select Option 5 (Work with):

Opt Model 5 CSPMDLCAL
This should be your base calibrated CISC model.
12. On the Work with BEST/1 Model display, select Option 5 (Analyze current model).
13. On the Work with Results display, select Option 5 (Analysis Summary) and then use F11 to compare measured to predicted values. Confirm that the model is still calibrated!
14. Return to the Work with BEST/1 Model display, select Option 7 (Specify workload growth and analyze model), and fill in the parameter values as shown in the following display, which combines the actual displays you see to specify the example growth values for up to eight periods.
Specify Growth of Workload Activity

Type information, press Enter to analyze model.

Determine new configuration . . . . .   Y          Y=Yes, N=No
Periods to analyze  . . . . . . . . .   8          1 - 10
Period 6  . . . . . . . . . . . . . .   40 users   Name
Period 7  . . . . . . . . . . . . . .   60 users   Name
Period 8  . . . . . . . . . . . . . .   72 users   Name
Period 9  . . . . . . . . . . . . . .   Period 9   Name
Period 10 . . . . . . . . . . . . . .   Period10   Name

             ------Percent Change in Workload Activity------
 Workload    Period 1   Period 2   Period 3   Period 4   Period 5
 ODBCWL           0.0      100.0       50.0       33.3       25.0
 PFRMON            .0         .0         .0         .0         .0
 QDEFAULT          .0         .0         .0         .0         .0

 Workload    Period 6   Period 7   Period 8   Period 9   Period10
 ODBCWL         100.0       50.0       20.0        0.0        0.0
 PFRMON            .0         .0         .0         .0         .0
 QDEFAULT          .0         .0         .0         .0         .0

F3=Exit   F11=Specify total growth   F12=Cancel
F13=Display periods 6 to 10   F17=Analyze using ANZBESTMDL

15. Look at several of the result reports. The Display Analysis Summary is shown with two panels rolled into one display (four through 40 users and 60 through 72 users).
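The percent changes entered for ODBCWL compound from the initial four active users. As a quick sanity check (our own sketch, not BEST/1 output), the per-period user counts implied by those growth percentages work out as follows:

```python
# Illustrative check of how BEST/1 percent-change growth compounds.
# Starting point: 4 active ODBCWL users; growth percentages per period
# as entered on the Specify Growth of Workload Activity display.
growth_pct = [0.0, 100.0, 50.0, 33.3, 25.0, 100.0, 50.0, 20.0]

users = 4.0
users_per_period = []
for pct in growth_pct:
    users *= 1.0 + pct / 100.0        # each period grows on the previous one
    users_per_period.append(round(users))

print(users_per_period)  # periods 1 through 8
```

This reproduces the 4, 8, 12, 16, 20, 40, 60, and 72 user periods shown in the analysis results.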


Display Analysis Summary

 Period     CPU    Model   Stor    CPU    -Disk IOPs-   -Disk Ctls-   -Disk Arms-
            Model          (MB)    Util   Nbr   Util    Nbr   Util    Nbr   Util
 4 users    D60             80     16.1    3     2.4     13    .4      27    1.9
 8 users    D60             80     24.6    3     3.8     13    .6      27    3.1
 12 users   D60             80     33.5    3     5.9     13    .9      27    4.8
 16 users   D60             80     42.1    3     7.6     13   1.1      27    6.1
 20 users   D60             96     49.1    3     6.5     13   1.0      27    5.2
 40 users   50S    2120    128     37.4    3    12.5      7   1.0      27    3.6
 60 users   50S    2120    128     54.7    3    18.9      7   1.5      27    5.4
 72 users   50S    2120    128     65.1    3    23.1      7   1.8      27    6.6

 Period     --Non-Inter Rsp Time--    -----Non-Inter-----
            Local    LAN     WAN      CPU Util   Trans/Hr
 4 users     4.5     14.0     .0        14.0         865
 8 users     4.5     13.9     .0        22.5        1667
 12 users    4.8     14.0     .0        31.3        2469
 16 users    4.9     14.2     .0        39.9        3270
 20 users    4.3     13.9     .0        47.0        4072
 40 users    3.5      5.1     .0        36.5        8081
 60 users    3.6      5.4     .0        53.8       12090
 72 users    3.7      5.7     .0        64.3       14495

 F3=Exit   F10=Re-analyze   F11=Alternative view   F12=Cancel
 F15=Configuration menu   F17=Analyze multiple points   F24=More keys

You can see that BEST/1 automatically increased main storage to 96MB in period 5 (20 users). When it did this, non-interactive response time improved in period 5 from period 4 (4.9 to 4.3 seconds). Based on our growth rates, BEST/1 upgraded our traditional CISC system to a RISC 50S-2120 server model in growth period 6 (40 users). The 50S-2120 continues to handle the 72 users in growth period 8, though the total CPU utilization of 64.3% for our non-interactive work (mostly the ODBCWL workload) is approaching the 70% CPU utilization threshold that BEST/1 uses for priority 20 or higher work to trigger a CPU upgrade. The Display Workload Report response time detail is shown below with all growth periods rolled into a single display.
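As a rough rule-of-thumb check (our own linear extrapolation, not the BEST/1 algorithm), you can estimate how much further the ODBCWL workload could grow before the 50S-2120 reaches that 70% threshold:

```python
# Rough linear extrapolation (an approximation for illustration, not the
# BEST/1 upgrade algorithm) of when ODBCWL growth pushes the 50S-2120 past
# the ~70% CPU utilization threshold BEST/1 uses for priority 20 or higher
# work. Utilizations are taken from the analysis summary (periods 6-8).
users = [40, 60, 72]          # active clients in growth periods 6, 7, 8
util = [36.5, 53.8, 64.3]     # non-interactive CPU utilization, percent

slope = (util[-1] - util[0]) / (users[-1] - users[0])   # % CPU per added user
threshold = 70.0
crossing = users[0] + (threshold - util[0]) / slope
print(f"~{crossing:.0f} users before the 70% threshold is reached")
```

On this simple straight-line view, a further growth period of roughly 80 users would trigger the next CPU upgrade recommendation.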


Display Workload Report

                     CPU    Thruput   ------Response Times (Secs)------
 Workload    Type    Util   per Hour  Internal   Local     LAN     WAN
Period: 4 users
 QDEFAULT     1       2.1      763        .1        .2       .6      .0
 ODBCWL       2       8.3      800       3.9       3.9       .0      .0
 PFRMON       2       1.5        0     211.0     211.0       .0      .0
 QDEFAULT     2       4.3       60      14.0      14.0     14.0      .0
Period: 8 users
 QDEFAULT     1       2.1      763        .2        .2       .6      .0
 ODBCWL       2      16.8     1600       4.2       4.2       .0      .0
 PFRMON       2       1.4        0     202.8     202.8       .0      .0
 QDEFAULT     2       4.3       60      13.9      13.9     13.9      .0
Period: 12 users
 QDEFAULT     1       2.1      763        .2        .2       .6      .0
 ODBCWL       2      25.6     2400       4.6       4.6       .0      .0
 PFRMON       2       1.4        0     197.2     197.2       .0      .0
 QDEFAULT     2       4.3       60      14.0      14.0     14.0      .0
Period: 16 users
 QDEFAULT     1       2.1      763        .2        .2       .6      .0
 ODBCWL       2      34.3     3199       4.7       4.7       .0      .0
 PFRMON       2       1.4        0     192.1     192.1       .0      .0
 QDEFAULT     2       4.2       60      14.2      14.2     14.2      .0
Period: 20 users
 QDEFAULT     1       2.1      763        .2        .2       .7      .0
 ODBCWL       2      41.3     3999       4.2       4.2       .0      .0
 PFRMON       2       1.4        0     183.4     183.4       .0      .0
 QDEFAULT     2       4.2       60      13.9      13.9     13.9      .0
Period: 40 users
 QDEFAULT     1        .9      763        .1        .1       .5      .0
 ODBCWL       2      34.2     7998       3.5       3.5       .0      .0
 PFRMON       2        .6        0      65.4      65.4       .0      .0
 QDEFAULT     2       1.7       60       5.1       5.1      5.1      .0
Period: 60 users
 QDEFAULT     1        .9      763        .1        .1       .5      .0
 ODBCWL       2      51.5    11997       3.6       3.6       .0      .0
 PFRMON       2        .6        0      62.0      62.0       .0      .0
 QDEFAULT     2       1.7       60       5.4       5.4      5.4      .0
Period: 72 users
 QDEFAULT     1        .9      763        .1        .1       .6      .0
 ODBCWL       2      62.0    14396       3.7       3.7       .0      .0
 PFRMON       2        .6        0      61.2      61.2       .0      .0
 QDEFAULT     2       1.7       60       5.7       5.7      5.7      .0

Type: 1=Interactive, 2=Non-interactive, 3=*BATCHJOB
Performance estimates -- Press help to see disclaimer.
F3=Exit   F10=Re-analyze   F11=Response time detail   F12=Cancel
F13=Previous period   F14=Next period   F24=More keys


16. From the Workload Report response time details, you can see the range of response times for the ODBCWL order entry application from four clients up to 72 clients.

Table 22 summarizes our previous D60 growth, 40S growth, and just-completed D60 to 50S growth results: total CPU utilization for all workloads and response time for the ODBCWL workload.
Table 22. CISC, RISC Growth Results Worksheet: D60, 40S, D60 Growth to 50S

 Workload Growth Desc                   CPU Utilization All          ODBCWL Resp Time
                                        Workloads by Period          by Period
 CISC GROWTH, NO UPGRADE (D60)          16.1, 24.6, 33.5, 42.1,      3.9, 4.2, 4.6, 4.7, 4.9
                                        50.7
 MANUAL CISC TO RISC, UPGRADE (40S)     07.7, 11.8, 16.2, 20.3,      3.5, 3.8, 4.2, 4.2, 4.4
                                        24.5
 AUTO CISC TO RISC, UPGRADE (50S)       16.1, 24.6, 33.5, 42.1,      3.9, 4.2, 4.6, 4.7, 4.2,
                                        49.9, 37.4*, 54.7*, 65.1*    3.5*, 3.6*, 3.7*

Note: * = RISC system CPU utilization and ODBCWL workload response times when doing automatic growth from CISC traditional to RISC server family.

You have now used BEST/1 to create a model from performance monitor data. Your performance data contained work that was primarily the client/server order entry application used throughout this redbook. You validated and calibrated the model for ″server jobs″ that ran in their own storage pool, and had to adjust for the fact that the Performance Monitor does not fully classify this work as ″non-interactive transactions,″ even though you want BEST/1 to treat it that way. You have also seen how to specify growth without upgrading the current hardware configuration, and have modeled upgrading this client/server workload from a traditional system to a RISC server model. You should now be able to model other client/server applications with BEST/1. It is very important to collect the right performance monitor data and carefully calibrate the model based on analysis of the original CRTBESTMDL results and real-life observations of the application.

9.4 AS/400 Performance in a Server Environment
The performance characteristics of AS/400 server models work in favor of client/server workloads (may also be referred to as ″non-interactive″ or ″batch″) at the expense of interactive workloads. Therefore, for the same investment, a customer expects to see better throughput for client/server and batch work, and less throughput for interactive, compared to a traditional AS/400 model. Server model performance characteristics are more suited to a number of customer environments:


Table 23. AS/400 Server Models - When to Choose

 Model                    Customer Environment
 AS/400 Advanced Server   Client/Server On-line Transaction Processing (OLTP)
                          applications.
                          Client/Server Executive Information System (EIS), using
                          client/server end-user query products.
                          AS/400 application development (especially for batch
                          work, such as compiles submitted to batch).
                          Long running commercial batch applications.
 AS/400 Advanced System   Workload is largely interactive, using nonprogrammable
                          workstations or PCs with 5250 emulation (such as RUMBA).
                          Have a large number of twinax-connected devices.
                          Interactive workload exceeds the maximum throughput of
                          a server model.

Note: This table considers database server applications, not file server applications. While a server model may run applications using Shared Folders or IFS faster than the equivalent AS/400 Advanced System model, the best choice may be to add an FSIOP to the existing AS/400. See 6.8, “FSIOP Performance Monitor Query - Cache” on page 215.

9.4.1 Impact of Interactive Work on Server Model Performance
As stated earlier, server model performance is optimized for client/server and batch environments at the expense of interactive environments. This means that as interactive work is added to the server models, the overall system performance decreases. In environments where only client/server or batch work is present, the effective performance of a server model is represented by the non-interactive RPR. However, for mixed environments, the effective performance of the server models is represented by the range between the non-interactive and interactive RPRs, depending on the amount of interactive work present and on the particular AS/400 server model. If there is only one interactive job running on the server model, its performance is closer to the server model′s higher non-interactive performance rating than to its interactive performance rating. Depending on the model, if the interactive CPU utilization is in the range of 2% to 10%, the overall performance is more closely represented by the non-interactive RPR. This means that the performance of both interactive and non-interactive work is represented by the non-interactive RPR. As the interactive CPU utilization moves above the 2% to 10% range, the overall system performance decreases for both interactive and non-interactive work until the performance of both is represented by the interactive RPR. Therefore, maximum price/performance of the server models is achieved when interactive work is kept to a minimum. (BEST/1 has tables to model this ″constraint on interactive CPU utilization″ on RISC server models.) The mechanism used to vary performance differs between the AS/400 CISC and RISC server models. On the RISC models, a SLIC interrupt handling task does additional processing for interactive work and therefore uses more CPU time as

the amount of interactive work increases. These tasks are named CFINTn, where n is the processor number (1, 2, 3, or 4). On the server models, there is more work for this task to perform; therefore, as the interactive workload increases, the CFINTn tasks use more of the CPU on the server models than on the non-server models. See AS/400 Performance Management V3R6, GG24-4735, for more information about the CFINTn tasks. For server models, a job is considered interactive if it has done at least one workstation I/O operation since it started. This includes not only typical workstation jobs started by a workstation user or a pass-through or TELNET session, but also a job submitted to batch that acquires a workstation and does a single I/O operation to that workstation.
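The following sketch illustrates the general shape of this behavior. The knee values (10% and 35% interactive CPU utilization) and the interactive RPR value used here are assumptions for illustration only; the real knee points vary by model and are held in the BEST/1 tables mentioned above:

```python
# Simplified illustration of how a server model's effective relative
# performance rating (RPR) degrades as interactive work grows.
# The knee points (10% and 35% interactive CPU utilization) are ASSUMED
# values for illustration; the real knees vary by AS/400 model.
def effective_rpr(inter_util_pct, rpr_noninter, rpr_inter,
                  knee_low=10.0, knee_high=35.0):
    if inter_util_pct <= knee_low:       # little interactive work:
        return rpr_noninter              # full non-interactive rating applies
    if inter_util_pct >= knee_high:      # heavily interactive:
        return rpr_inter                 # only the interactive rating applies
    frac = (inter_util_pct - knee_low) / (knee_high - knee_low)
    return rpr_noninter - frac * (rpr_noninter - rpr_inter)

# A 20S-style server: non-interactive RPR 5.9 (from 10.5 below);
# the interactive RPR of 1.0 is an assumed value.
print(effective_rpr(5.0, 5.9, 1.0))    # within the knee range
print(effective_rpr(35.0, 5.9, 1.0))   # fully constrained by interactive work
```

Between the two knees, every additional point of interactive CPU utilization lowers the effective rating for all work on the system, which is why the text recommends keeping interactive work on server models to a minimum.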

9.5 Client/Server Capacity Planning Summary
This chapter has calibrated the original BEST/1 model based on:
•  Measured CISC V3R1 performance monitor data based on an ODBC application.
•  An understanding of the application and the actual response times observed by up to four active clients running the application.
•  Understanding that the application uses Visual Basic interfaces to the Client Access/400 Windows 3.1 client ODBC support.
•  Understanding the OS/400 operating environment under which the QZDAINIT database server job performs its functions. If TCP/IP is used, replace the QZDAINIT jobs with QZDASOINIT jobs and do the same BEST/1 processing.

After calibrating the base model to correspond ″close enough″ to the actual customer environment, we performed various BEST/1 growth modeling exercises, keeping within the CISC family of systems and then exercises modeling growth to the RISC family of systems. We request feedback on your experiences so that we can update this redbook to make modeling client/server applications as simple as possible.


Chapter 10. Case Study
Understanding the performance issues in a client/server environment is a complex task. In this chapter, we present a case study of an AS/400 client/server application. In the following sections, the sequence of examining the performance reports is discussed. We start with an overview of the application, followed by an analysis of performance data collected from the application.

In this chapter, we use an AS/400 client/server order entry application that is based on the primary application of the CPW (Commercial Processing Workload) benchmark set of applications. The CPW benchmark includes interactive and batch work and is used by the Rochester Development Lab to determine performance ratings of the various AS/400 processor features. The CPW benchmark itself is a modified implementation of the TPC-C (Transaction Processing Performance Council benchmark C) workload. Since CPW is not the actual TPC-C benchmark, performance metrics based upon CPW applications are not representative of IBM′s or other vendors′ optimized TPC-C implementations. However, CPW is representative of sophisticated commercial applications and industry standard benchmarks, and is more complex than the original AS/400 performance rating benchmark, RAMP-C, which was used in AS/400 releases prior to November 1996.

The client/server order entry application used in this case study does only the order entry portion of the CPW application and uses the CPW database. The primary server functions done on the AS/400 are database accesses, and the primary client requests use SQL interfaces; however, other client interfaces, such as APPC program-to-program and Client Access data queue interfaces, are alternative choices. In the non-SQL cases, AS/400 ″native database operations″ (for example, READ record and WRITE record rather than SQL SELECT and SQL INSERT operations) are used. This section introduces the application, specifies the database layout, and shows an example of the order entry application.

10.1 Overview of the Application
This section provides an overview of the application and a description of how the application database is used.

10.1.1 The Company
The Company is a wholesale supplier with one warehouse and 10 sales districts. Each district serves 3000 customers (30 000 total customers for the Company). The warehouse maintains stock for the 100 000 items sold by the Company. The following diagram illustrates the company structure (warehouse, district, and customer).

© Copyright IBM Corp. 1996


┌───────────┐ │ Company │ └─────┬─────┘ │ │ ┌─────┴─────┐ │ Warehouse │ └─────┬─────┘ │ ┌────────────────────┬──┴────────────────┐ │ │ │ ┌──────┴────────┐ ┌──────┴────────┐ ┌──────┴────────┐ │ District-1 │ │ District-2 │ │ District-10 │ └──────┬────────┘ └──────┬────────┘ └──────┬────────┘ │ │ │ ┌──────┴────────┐ ┌──────┴────────┐ ┌──────┴────────┐ │ 3K Customers │ │ 3K Customers │ │ 3K Customers │ └───────────────┘ └───────────────┘ └───────────────┘

Figure 79. Company Structure

10.1.1.1 The Company Database
The Company runs its business with a database. This database is used in a mission critical, OLTP (on-line transaction processing) environment. The database includes tables with the following data:
•  District information (next available order number, tax rate, and so on)
•  Customer information (name, address, telephone number, and so on)
•  Order information (date, time, shipper, and so on)
•  Order line information (quantity, delivery date, and so on)
•  Item information (name, price, item ID, and so on)
•  Stock information (quantity in stock, warehouse ID, and so on)

10.1.1.2 A Customer Transaction
1. Customers telephone one of the 10 district centers to place an order.
2. The district customer service representative answers the telephone, gets the following information, and enters it into the application:
   a. Customer number
   b. Item numbers of the items the customer wants to order
   c. The quantity required for each item
3. The customer service representative enters the district number into the application.
4. The application then:
   a. From the Customer Table, reads the customer last name, customer discount rate, and customer credit status.
   b. From the Item Table, reads the item names, item prices, and item data for each item ordered by the customer.
   c. Reads the District Table for the district tax and the next available district order number. The next available district order number is incremented by one and updated.
   d. Inserts a new row into both the New Order Table and the Order Table to reflect the creation of the new order.
   e. Checks whether the ordered quantity of each item is in stock by reading the quantity in the Stock Table. The quantity is reduced by the quantity ordered and the new value is written back.
   f. Inserts a new row into the Order Line Table to reflect each item in the order.
   g. Writes a shipping record of the order (used to ship the order).
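The transaction steps above can be sketched in code. The following sketch uses the CPW column names from the table layouts in 10.2; sqlite3 merely stands in for the AS/400 database, and details such as discount and tax application, the New Order table, and the shipping record are omitted for brevity:

```python
# Sketch of the order entry transaction described above, using the CPW
# column names from the table layouts in 10.2. sqlite3 stands in for the
# AS/400 database; on the real system these would be SQL or native database
# operations against DSTRCT, CSTMR, ORDERS, ORDLIN, ITEM, and STOCK.
import sqlite3

def enter_order(con, wid, did, cid, lines):
    """lines: list of (item_id, quantity) pairs from the order window."""
    cur = con.cursor()
    # a. Customer last name, discount rate, and credit status (values would
    #    drive discount/credit handling, omitted in this sketch).
    clast, cdct, ccredt = cur.execute(
        "SELECT CLAST, CDCT, CCREDT FROM CSTMR "
        "WHERE CID=? AND CDID=? AND CWID=?", (cid, did, wid)).fetchone()
    # c. District tax and next available order number; increment DNXTOR.
    dtax, oid = cur.execute(
        "SELECT DTAX, DNXTOR FROM DSTRCT WHERE DID=? AND DWID=?",
        (did, wid)).fetchone()
    cur.execute("UPDATE DSTRCT SET DNXTOR=DNXTOR+1 WHERE DID=? AND DWID=?",
                (did, wid))
    # d. New row in the Order table.
    cur.execute("INSERT INTO ORDERS (OWID, ODID, OCID, OID, OLINES) "
                "VALUES (?,?,?,?,?)", (wid, did, cid, oid, len(lines)))
    for nbr, (iid, qty) in enumerate(lines, start=1):
        # b. Item name and price.
        iname, iprice = cur.execute(
            "SELECT INAME, IPRICE FROM ITEM WHERE IID=?", (iid,)).fetchone()
        # e. Reduce the stock quantity by the quantity ordered.
        cur.execute("UPDATE STOCK SET STQTY=STQTY-? "
                    "WHERE STWID=? AND STIID=?", (qty, wid, iid))
        # f. One Order Line row per ordered item (amount = qty * price here;
        #    discount and tax handling omitted).
        cur.execute("INSERT INTO ORDLIN (OLOID, OLDID, OLWID, OLNBR, OLIID, "
                    "OLQTY, OLAMNT) VALUES (?,?,?,?,?,?,?)",
                    (oid, did, wid, nbr, iid, qty, qty * iprice))
    con.commit()
    return oid
```

The per-line loop is where most of the database traffic occurs, which is why the number of items per order figures so heavily in the response time and disk I/O measurements later in this chapter.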


10.1.1.3 Database Table Structure
The CSDB database has nine tables:
•  Warehouse
•  District
•  Customer
•  New order
•  Order
•  Order line
•  Item
•  Stock
•  History (not used)

The relationships among these tables are shown in the following diagram:
┌──────────────┐ │ Warehouse ├────────────────────── │ 1 table │ └─────┬────────┘ │ ┌──────────────┐ │ │ History │ │ │ 30K+ records │ │ └──────────────┘ ┌──────────────┐ │ Districts │ │ 10 records │ └─────┬────────┘ │ │ ─┐ │ │ │ │ ┌──────────────┐ ┌──────────────┐ ┌┴─────────────┐ │ Stock │ │ New-Order │ │ Customer │ │ 100k records ├─┐ │ 9k+ records │ │ 30K records │ └──────────────┘ │ └──────────────┘ └─────┬────────┘ │ │ │ │ └───────┐ │ │ │ │ ┌─────┴────────┐ │ ┌──────────────┐ ┌┴─────────────┐ │ Item │ └── │ Order-line │ ─┤ Order │ │ 100k records │ │ 300K+records │ │ 30K+records │ └──────────────┘ └──────────────┘ └──────────────┘

Figure 80. CSDB Database Table Relationships

10.2 CPW Benchmark Database Layout
The sample application uses the following tables of the CPW benchmark database:
•  District
•  Customer
•  Order
•  Order line
•  Stock
•  Item (catalog)

The following sections describe in detail the layout of the database tables.

10.2.1 District
Table 24. District Table Layout (DSTRCT)

 Field Name   Real Name              Type        Length
 DID          District ID            Decimal        3
 DWID         Warehouse ID           Character      4
 DNAME        District Name          Character     10
 DADDR1       Address Line 1         Character     20
 DADDR2       Address Line 2         Character     20
 DCITY        City                   Character     20
 DSTATE       State                  Character      2
 DZIP         Zip Code               Character     10
 DTAX         Tax                    Decimal        5
 DYTD         Year to Date Balance   Decimal       13
 DNXTOR       Next Order Number      Decimal        9

Note: Primary Key: DID, DWID
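Expressed as SQL DDL, the District layout might look roughly as follows. The DECIMAL precision and scale values are assumptions (the table gives only overall field lengths), and sqlite3 is used here only to check that the statement parses:

```python
# Sketch of the DSTRCT layout as SQL DDL, derived from the District table
# layout above. DECIMAL precision/scale are ASSUMED values; the layout
# gives only overall lengths. sqlite3 is used only to verify the statement
# parses and creates the table - it does not enforce these types.
import sqlite3

DDL = """
CREATE TABLE DSTRCT (
    DID    DECIMAL(3)    NOT NULL,  -- District ID
    DWID   CHAR(4)       NOT NULL,  -- Warehouse ID
    DNAME  CHAR(10),                -- District Name
    DADDR1 CHAR(20),                -- Address Line 1
    DADDR2 CHAR(20),                -- Address Line 2
    DCITY  CHAR(20),                -- City
    DSTATE CHAR(2),                 -- State
    DZIP   CHAR(10),                -- Zip Code
    DTAX   DECIMAL(5,4),            -- Tax
    DYTD   DECIMAL(13,2),           -- Year to Date Balance
    DNXTOR DECIMAL(9),              -- Next Order Number
    PRIMARY KEY (DID, DWID)
)
"""
con = sqlite3.connect(":memory:")
con.execute(DDL)   # parses the DDL and creates the table
```

The remaining CPW tables follow the same pattern, with the composite primary keys noted under each layout.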

10.2.2 Customer
Table 25. Customer Table Layout (CSTMR)

 Field Name   Real Name              Type        Length
 CID          Customer ID            Character      4
 CDID         District ID            Decimal        3
 CWID         Warehouse ID           Character      4
 CFIRST       First Name             Character     16
 CINIT        Middle Initials        Character      2
 CLAST        Last Name              Character     16
 CLDATE       Date of Last Order     Numeric        8
 CADDR1       Address Line 1         Character     20
 CCREDT       Credit Status          Character      2
 CADDR2       Address Line 2         Character     20
 CDCT         Discount               Decimal        5
 CCITY        City                   Character     20
 CSTATE       State                  Character      2
 CZIP         Zip Code               Character     10
 CPHONE       Phone Number           Character     16
 CBAL         Balance                Decimal        7
 CCRDLM       Credit Limit           Decimal        7
 CYTD         Year To Date           Decimal       13
 CPAYCNT      Quantity               Decimal        5
 CDELCNT      Quantity               Decimal        5
 CLTIME       Time of Last Order     Numeric        6
 CDATA        Customer Information   Character    500

Note: Primary Key: CID, CDID, CWID


10.2.3 Order
Table 26. Orders Table Layout (ORDERS)

 Field Name   Real Name               Type        Length
 OWID         Warehouse ID            Character      4
 ODID         District ID             Decimal        3
 OCID         Customer ID             Character      4
 OID          Order ID                Decimal        9
 OENTDT       Order Date              Numeric        8
 OENTTM       Order Time              Numeric        6
 OCARID       Carrier Number          Character      2
 OLINES       Number of Order Lines   Decimal        3
 OLOCAL       Local                   Decimal        1

Note: Primary Key: OWID, ODID, OID

10.2.4 Order Line
Table 27. Order Line Table Layout (ORDLIN)

 Field Name   Real Name              Type        Length
 OLOID        Order ID               Decimal        9
 OLDID        District ID            Decimal        3
 OLWID        Warehouse ID           Character      4
 OLNBR        Order Line Number      Decimal        3
 OLSPWH       Supply Warehouse       Character      4
 OLIID        Item ID                Character      6
 OLQTY        Quantity Ordered       Numeric        3
 OLAMNT       Amount                 Numeric        7
 OLDLVD       Delivery Date          Numeric        8
 OLDLVT       Delivery Time          Numeric        6
 OLDSTI       District Information   Character     24

Note: Primary Key: OLWID, OLDID, OLOID, OLNBR

10.2.5 Item (Catalog)
Table 28. Item Table Layout (ITEM)

 Field Name   Real Name          Type        Length
 IID          Item ID            Character      6
 INAME        Item Name          Character     24
 IPRICE       Price              Decimal        5
 IDATA        Item Information   Character     50

Note: Primary Key: IID

10.2.6 Stock
Table 29. Stock Table Layout (STOCK)

 Field Name   Real Name              Type        Length
 STWID        Warehouse ID           Character      4
 STIID        Item ID                Character      6
 STQTY        Quantity in Stock      Decimal        5
 STDI01       District Information   Character     24
 STDI02       District Information   Character     24
 STDI03       District Information   Character     24
 STDI04       District Information   Character     24
 STDI05       District Information   Character     24
 STDI06       District Information   Character     24
 STDI07       District Information   Character     24
 STDI08       District Information   Character     24
 STDI09       District Information   Character     24
 STDI010      District Information   Character     24
 STYTD        Year To Date           Decimal        9
 STORDRS      Quantity               Decimal        5
 STREMORD     Quantity               Decimal        5
 STDATA       Item Information       Character     50

Note: Primary Key: STWID, STIID

10.3 Database Terminology
This redbook concentrates on the use of the AS/400 system as a database server in a client/server environment. In some cases, we use SQL to access the AS/400 databases; in other cases, we use native database access. The terminology used for the database access is different in both cases. In Table 30, you find the correspondence between the different terms.
Table 30. Database Terminology

 AS/400 Native    SQL
 Library          Collection
 Physical File    Table
 Field            Column
 Record           Row
 Logical File     View or Index

10.4 CPW (New Order) Application Example
This section shows windows captured from the CPW application; the application was developed using Visual Basic. The applications have been developed to run in an automatic mode. In automatic mode, the application uses random numbers to enter data for the order entry window. Automatic mode assisted us in collecting performance monitor data without requiring multiple human operators.

Figure 81. Setup

The preceding figure shows a blank order entry window. If the program is run automatically, the order entry information is generated by the program. The program generates random numbers for the customer number, item number, and order quantity. The customer number is a number between 1 and 30 000. The item number is a number between 1 and 100 000. This allows for easy repetitive testing of the performance scenarios.
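The random input generation described above can be sketched as follows; the 1 to 10 quantity range is an assumption, since the text does not state the range used for order quantities:

```python
# Sketch of the automatic-mode input generation described above: random
# customer, item, and quantity values for one order entry window.
import random

def random_order(num_lines=10, rng=random):
    customer = rng.randint(1, 30_000)    # customer number: 1 to 30 000
    lines = [(rng.randint(1, 100_000),   # item number: 1 to 100 000
              rng.randint(1, 10))        # quantity: an ASSUMED 1-10 range
             for _ in range(num_lines)]
    return customer, lines

customer, lines = random_order()
```

Because the values are drawn uniformly over the whole customer and item ranges, repeated runs exercise the database broadly rather than hitting a few hot rows, which is what makes the automatic mode useful for repeatable performance testing.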


Figure 82. Run-Time Data

The preceding figure shows the result of selecting Cyclic (automatic) mode. The user can enter the number of transactions to run, number of minutes to run the program, and operator think time. Automatic mode is included in the program to provide a convenient method to do repetitive performance testing. The example shows four transactions to be run. Each transaction is equivalent to the end user entering an order on the order entry window that consists of 10 items.
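A cyclic-mode harness of this kind can be sketched as follows. Here submit_order is a placeholder for the real client call (the Visual Basic application's ODBC transaction), and the timing approach - wall-clock time around each transaction - matches what the run-time log reports:

```python
# Sketch of a cyclic-mode test harness: run a fixed number of transactions
# with operator think time between them and log each response time, as the
# application's run-time log window does. submit_order is a placeholder
# for the real client transaction.
import time

def run_cyclic(submit_order, transactions=4, think_time=0.0):
    log = []
    for _ in range(transactions):
        start = time.perf_counter()
        submit_order()                               # one complete order
        log.append(time.perf_counter() - start)      # response time, seconds
        time.sleep(think_time)                       # simulated operator pause
    return log

log = run_cyclic(lambda: None, transactions=4)
```

Think time is excluded from the logged response times, so the log measures only how long the server and communications path take per order.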


Figure 83. Application Complete

The preceding figure shows a completed order entry window. It also shows a dialog box indicating that all the requested transactions have been completed. Clicking on the OK button displays a run-time log with the response times for each of the four orders as shown in the following figure.

Figure 84. Run-Time Log


10.5 Case Study Tests
In our tests we have used OS/400 V3R1 and the following hardware configuration:

•  AS/400 - 9402 20S - 96MB
   Note that a 20S server model has an AS/400 non-interactive performance rating of 5.9 - not a system that should be used by a large number of active clients.
•  IBM PC - 350 P133 - 32MB
•  Token-ring LAN - speed 4Mbps

This order entry application was developed in Visual Basic (client) and RPG (AS/400). There are three different implementations with the same logic:
•  ODBC APIs
•  ODBC APIs calling a stored procedure
•  Visual Basic Database Control

We have run each application three times with four orders for each application execution. We used the methodology outlined in Chapter 8, “Client/Server Performance Analysis” on page 255 to obtain the performance information. The communication and job traces shown in this chapter correspond to the first test and the Performance Tools reports include all three tests. This chapter shows only a few excerpts of the complete traces and Performance Tools reports. More complete communications traces are shown in Appendix B, “Communications Trace Examples” on page 429. A more complete ODBC trace is shown in Appendix C, “ODBC Trace Example” on page 467. The response time of each application is shown in the following table:

┌─────────────────┬────────────┬─────────┬─────────────────┐ │ │ Stored │ APIs │ Visual Basic │ │ │ Procedure │ │ DB Control │ ├─────────────────┼────────────┼─────────┼─────────────────┤ │Test 1 │ │ │ │ │ Order #1 │ 6.81 │ 9.67 │ 48.50 │ │ Order #2 │ 3.18 │ 5.88 │ 53.71 │ │ Order #3 │ 3.08 │ 6.21 │ 48.01 │ │ Order #4 │ 3.41 │ 6.10 │ 50.31 │ │Test 2 │ │ │ │ │ Order #1 │ 1.04 │ 2.37 │ 49.05 │ │ Order #2 │ 0.88 │ 2.80 │ 50.26 │ │ Order #3 │ 0.99 │ 2.30 │ 51.30 │ │ Order #4 │ 1.10 │ 2.19 │ 48.56 │ │Test 3 │ │ │ │ │ Order #1 │ 0.88 │ 2.25 │ 50.64 │ │ Order #2 │ 0.82 │ 2.31 │ 48.61 │ │ Order #3 │ 0.88 │ 2.37 │ 56.57 │ │ Order #4 │ 0.99 │ 2.86 │ 50.20 │ └─────────────────┴────────────┴─────────┴─────────────────┘
Figure 85. Response Time (Seconds)
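The relative cost of the three interfaces can be quantified directly from the table. The following sketch (values transcribed from Figure 85; Python is used here only as a convenient calculator) computes the average response time per implementation over all 12 orders:

```python
# Response times in seconds for the 12 orders (3 tests x 4 orders),
# transcribed from Figure 85.
stored_proc = [6.81, 3.18, 3.08, 3.41, 1.04, 0.88, 0.99, 1.10, 0.88, 0.82, 0.88, 0.99]
odbc_api    = [9.67, 5.88, 6.21, 6.10, 2.37, 2.80, 2.30, 2.19, 2.25, 2.31, 2.37, 2.86]
vb_db_ctl   = [48.50, 53.71, 48.01, 50.31, 49.05, 50.26, 51.30, 48.56,
               50.64, 48.61, 56.57, 50.20]

def average(times):
    """Mean response time over all measured orders."""
    return sum(times) / len(times)

for name, times in (("Stored Procedure", stored_proc),
                    ("ODBC APIs", odbc_api),
                    ("VB Database Control", vb_db_ctl)):
    print(f"{name:20s} {average(times):6.2f} seconds")
```

The averages (roughly 2.0, 3.9, and 50.5 seconds) make the ordering discussed in the text obvious: over the full measurement, the stored procedure implementation is about twice as fast as the ODBC API implementation and roughly 25 times faster than the Database Control implementation.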

360

AS/400 Client/Server Performance

This soft copy for use by IBM employees only.

Note that the Stored Procedure implementation was the fastest and the Visual Basic Database Control implementation the slowest. Because minimizing synchronous disk I/Os (the application must wait for the disk I/O to complete) and communication I/Os are critical performance factors, we counted these through the Component Reports for the different client interfaces used. The stored procedure (properly implemented) did fewer synchronous disk I/Os and fewer communication I/Os for the connection and attribute negotiation, for preparing the Call statement, and for executing it. With the Visual Basic Database Control implementation, many more disk I/Os and communication I/Os are done. The table in Figure 86 shows the synchronous disk I/Os per QZDAINIT job, extracted from the Component Report. (If TCP/IP had been used, the job name would have started with QZDASOINIT.)

┌──────────────┬────────────┬─────────┬─────────────────┐
│              │ Stored     │ APIs    │ Visual Basic    │
│              │ Procedure  │         │ DB Control      │
├──────────────┼────────────┼─────────┼─────────────────┤
│ Sync I/Os    │    685     │   869   │      4083       │
└──────────────┴────────────┴─────────┴─────────────────┘
Figure 86. Disk I/O count
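Dividing the Figure 86 totals by the 12 orders processed during the measurement gives the synchronous disk I/O cost per order; a quick back-of-envelope check:

```python
# Synchronous disk I/Os per QZDAINIT job from Figure 86, divided by the
# 12 orders (3 runs x 4 orders) processed during the measurement.
ORDERS = 12
sync_ios = {"Stored Procedure": 685, "ODBC APIs": 869, "VB Database Control": 4083}
for name, total in sync_ios.items():
    print(f"{name:20s} {total / ORDERS:6.1f} sync disk I/Os per order")
```

This works out to roughly 57, 72, and 340 synchronous disk I/Os per order, which is the basis for the per-transaction comparison against interactive guidelines made later in this chapter.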

10.6 Case Study Analysis
We now analyze the ODBC APIs implementation. Parts of some reports are reproduced here to explain the analysis; the complete reports appear in the appendixes.

10.6.1 Response Time Log
The application was executed three times during the investigation, with four orders in each run. The response time for the first transaction of the first test was particularly high, and the response times for the subsequent transactions of that test were approximately 6 seconds; however, this test included the impact of running logging and job/communication traces. The response times for the second and third runs were consistently under 3 seconds, averaging 2.43 seconds. When reviewing the performance of an application, make allowance for the effect of the job trace on performance.
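The 2.43-second average can be reproduced from the Figure 85 response times for the second and third runs of the ODBC APIs implementation (the runs not distorted by first-run startup costs and tracing):

```python
# ODBC APIs response times (seconds) for runs 2 and 3, from Figure 85.
run2 = [2.37, 2.80, 2.30, 2.19]
run3 = [2.25, 2.31, 2.37, 2.86]
avg = sum(run2 + run3) / len(run2 + run3)
print(f"{avg:.2f}")  # prints 2.43
```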

10.6.2 Performance Reports
The Performance Reports are produced from the data collected with the STRPFRMON command. They provide a good indication of the efficiency of the client/server application and the general levels of AS/400 resources used for each transaction.


10.6.2.1 System Report
The System Report uses Performance Monitor ″summary level data″ (STRPFRMON ... TRACE(*NONE)). Figure 87 shows a sample System Report. The System Report should be reviewed for overall CPU and disk resource utilization and main storage page faulting rates.
System Report - Workload (Page 0001)                                 6/04/96 16:27:09
ODBC API - 3 TIMES - 4 TRANSACTIONS
Member . . : ODBCAPI    Model/Serial . : 20S-2010/10-1053A   Main storage . . : 96.0 M
Library  . : USERID39   System name  . : SYSASM07            Version/Release  : 3/ 1.0
Started  . : 05/22/96 17:32:00          Stopped . . : 05/22/96 17:34:28

  Interactive Workload
    Job               Number        Average   Logical DB   Communications
    Type              Transactions  Response  I/O Count    I/O Count
  1 Client Access          26         .65         47             0
    Total/Average          26         .65         47             0

  Non-Interactive Workload
    Job            Number    Logical DB   CPU Per       Logical
    Type           Of Jobs   I/O Count    Logical I/O   I/O /Second
    Batch             49         7          .4898           .0
    Spool              3         0          .0000           .0
    AutoStart          2         0          .0000           .0
    Total/Average     54         7          .4975           .0

  Total CPU Utilization . . . . . . :  20.6  3

System Report - Resource Utilization (Page 0002)
                      ------------- Average Per Transaction -------------
    Job               Response   CPU       Sync       Async      DB
    Type              Seconds    Seconds   Disk I/O   Disk I/O   I/O
  1 Client Access        .6        .87       22.1        3.5      1.8

    Job               CPU    Tns     Active Jobs    Disk I/O Per
    Type              Util   /Hour   Per Interval   Second (Total)
  1 Client Access     15.6    645         27             4.6

Figure 87 (Part 1 of 5). System Report


System Report - Resource Utilization Expansion (Page 0003)
                      ------------------ Average Per Transaction ------------------
                      -------- Physical Disk I/O --------    - Logical Data Base I/O -
    Job               --- Synchronous ---  -- Asynchronous --
    Type              DBR  DBW  NDBR NDBW  DBR  DBW  NDBR NDBW   Read   Write   Other
  1 Client Access     1.8   .1  19.0  1.0   .0  1.2  1.8   .4     .9     .4      .4
    Total/Average     1.8   .1  19.0  1.0   .0  1.2  1.8   .4     .9     .4      .4

               Job             CPU    Cum    --- Disk I/O ---
    Priority   Type            Util   Util    Sync    Async
    000        Batch            1.2    1.2     386       5
               System           3.2    4.4     694      13
  2 020        Client Access   15.6   20.0     575      93
               Batch            .1    20.2      26       6
    025        Batch            .7    21.0      81       0
    040        Batch            .0    21.0      32       0
    050        Batch            .1    21.2      36       0
  4 060        System           .0    21.2       0       0
    Total/Average                            1,848     117

  (Priority levels with no significant activity are omitted from this excerpt.)

Figure 87 (Part 2 of 5). System Report
System Report - Storage Pool Utilization (Page 0004)
                                               ------ Avg Per Second ------    - Avg Per Minute -
    Pool   Size     Act   CPU    Number  Avg     DB     DB    Non-DB  Non-DB   Act-   Wait-  Act-
    ID     (K)      Lvl   Util   Tns     Rsp    Fault  Pages  Fault   Pages    Wait   Inel   Inel
    01     22,000   000    1.5     0      .0      .0     .0     .1      1.0      3      0      0
    02     20,204   040    3.5     0      .0      .0     .0    3.6     56.3     36      0      0
    03        100   002     .0     0      .0      .0     .0     .0       .0      0      0      0
    04     12,000   015    9.4    26      .6      .0     .0    1.3     18.7     11      0      0
    05     44,000   015    6.6     0      .0      .1    1.5    1.8      2.8     24      0      0
    Total/ 98,304         21.2    26      .6      .2    1.6    6.8     79.0     76      0      0
    Average

  (The 6 callouts in the report mark the data base and non-data base faulting columns.
  Fault and page rates are per second; Act-Wait, Wait-Inel, and Act-Inel are job state
  transitions per minute.)

Figure 87 (Part 3 of 5). System Report


System Report - Disk Utilization (Page 0005)
                                                                      - Average Time Per I/O -
    Unit   Type   Size    IOP    IOP   ASP   -- Percent --   Op Per  K Per  Service  Wait  Response
                  (M)     Util   ID    ID    Full   Util 5   Second  I/O
    0001   6602   1,031   3.0    0-01  01    87.3    5.1      2.99    4.1    .017    .015   .032
    0002   6602   1,031   3.0    0-01  01    87.4    4.4      4.19    3.9    .010    .008   .018
    0003   6602   1,031   3.0    0-01  01    87.4    3.7      2.78    4.2    .013    .019   .032
    0004   6602   1,031   3.0    0-01  01    87.4    4.4      3.74    3.9    .011    .016   .027
    Average                                  87.4    4.4      3.42    4.0    .012    .014   .026

  (The 5 callout marks the Percent Util column - the average disk arm utilization (busy).
  Service, wait, and response times are in seconds per I/O operation.)

Figure 87 (Part 4 of 5). System Report
System Report - Communications Summary (Page 0006)
    Bus/IOP/Line           Protocol  Line    Avg   Max   Active   Number  Average  - Bytes Per Second -
                                     Speed   Util  Util  Devices  Tns     Response Received  Transmitted
    BUS 0 IOP 03 (2619)
      TRNLINE              TRLAN     4000.0    0     0      3       26       .6      89.6      370.5

  (Line speed is in units of 1000 bits per second; Average Response is the average system
  response (service) time in seconds.)

System Report - Report Selection Criteria (Page 0007)
    Select Parameters: - No Select parameters were chosen.
    Omit Parameters:   - No Omit parameters were chosen.

Figure 87 (Part 5 of 5). System Report
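The zero average line utilization reported in the Communications Summary is consistent with the byte rates shown there. A rough check against the 4Mbps token ring (ignoring LAN framing overhead):

```python
# Byte rates from the Communications Summary (Figure 87, Part 5).
received = 89.6        # bytes per second
transmitted = 370.5    # bytes per second
line_bps = 4_000_000   # 4Mbps token-ring line
utilization = (received + transmitted) * 8 / line_bps * 100
print(f"{utilization:.2f}% line utilization")  # well under 1 percent
```

At under a tenth of one percent, the LAN is clearly not a constraint in this case study.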

The Client Access information at 1 is for interactive 5250 emulation sessions and has no relevance to the order entry client/server application being run. The database server jobs on the AS/400 system are non-interactive jobs named QZDAINIT, running under user QUSER as prestarted jobs in the QSERVER subsystem.


The Client Access information at 2 includes both 5250 emulation interactive work and our non-interactive client/server order entry application work. As shipped from IBM, both the 5250 emulation and database server jobs run at priority 20. The total CPU utilization shown at 3 (20.6%) is well under the 70% utilization guideline for upgrading a CPU. The 4 value shown as 21.2% utilization includes some general system overhead (.6%) that is not included in the value shown at 3 .

The disk arm utilizations shown at 5 show that all disk arms are well below the guideline value of 40%.

The memory faulting rates are also generally acceptable, as shown under the columns indicated by a 6 .

10.6.2.2 Component Report
The Component Report uses Performance Monitor ″summary level data″ (STRPFRMON ... TRACE(*NONE)). This report expands on the detail for each component of system performance shown on the System Report. It confirms the System Report information but provides more resource utilization details. Note! Since the server jobs (QZDAINIT) are considered ″batch job″ types by the performance monitor, you cannot use the Component Report output to substantiate ″client/server response time″ observations. ″Batch jobs″ do not have transactions or transaction response times that can be correlated with end user (or our client workstation response time log) response times.

The section on ″Job Workload Activity″ as shown in Figure 88 on page 366 shows the resource usage by each job. The CPU utilization represents the percentage of CPU used during the time the job was active. If the AS/400 server job was actively serving the client/server application during the measured period, the CPU utilization value allows for comparison with ″normal″ interactive transactions. However, if the average value represented in the Component Report includes long periods of relative inactivity of the QZDAINIT job, the value can be misleading. From observation, each interactive job uses 1% to 5% of CPU during operation.

Within the Component Report, both 5250 emulation jobs and the database server jobs are listed as type (Typ) C. You should understand that the 5250 emulation jobs of type C are the interactive jobs that are included in the System Report ″Client Access″ workload statistics under the sub-heading ″Interactive Workload″ in the ″Workload″ section. (See Figure 87 on page 362.) In our Component Report example, refer to jobs P23ARVYBS1.USERID35.019377 and P23LBDZWS1.USERID39.019531. These are 5250 emulation interactive jobs. You must use the job name and the C type to distinguish between a 5250 interactive job and a database server job.

Because this client/server order entry application is based on a 5250 order entry application, synchronous disk I/O values can be compared to normal interactive


guidelines per transaction. Generally, interactive work incurs 20 to 30 synchronous disk I/Os per transaction. In this case study, the database server job we are examining is job QZDAINIT.QUSER.019535. The application was run three times, each with four orders, totalling 12 transactions. The number of disk I/Os of 869 shown in the Component Report appears to be excessive at over 70 synchronous disk I/Os per transaction on average. See Figure 88.

Component Report - Job Workload Activity (Page 6)                    5/28/96 15:26:20
ODBC API - 3 Times - 4 Orders
Member . . : ODBCAPI31   Model/Serial . : 20S-2010/10-1053A   Main storage . . : 96.0 MB
Library  . : USERIDXX    System name  . : SYSNM000            Version/Release  : 3/ 1.0
Started  . : 05/22/96 17:36:16          Stopped . . : 05/22/96 17:46:15

    Job          User      Job     T  P  P   CPU                     ------- Disk I/O -------
    Name         Name      Number  yp Pl ty  Util  Tns  Tns/Hr  Rsp   Sync   Async   Logical
    P23ARVYBS1   USERID35  019377  C  04 20    .1    4     24   .25     21      2        0
    P23LBDZWS1   USERID39  019531  C  04 20  23.5   39    234  3.92    535    271        0
    QZDAINIT     QUSER     019535  C  05 20   6.0    0      0   .00    869    479      564
    QZDAINIT     QUSER     019536  C  05 20    .0    0      0   .00      0      0        0
    QZDAINIT     QUSER     019537  C  05 20    .0    0      0   .00      0      0        0
    QZDASOINIT   QUSER     019322  C  05 20    .0    0      0   .00      0      0        0
    QZDASRVSD    QUSER     018679  B  05 20    .0    0      0   .00      0      0        0

  (Idle TCP/IP server jobs - QTMSNMP*, QTSMTP*, QVARRCV - are omitted from this excerpt.)

Figure 88. Component Report - Job Workload Activity

10.6.2.3 Job Summary
The Job Summary Report uses Performance Monitor ″trace level data″ (STRPFRMON ... TRACE(*ALL)). Trace data identifies job types differently than sample data (STRPFRMON ... TRACE(*NONE)) does. Jobs that were grouped into Client Access/400 jobs of type C in the System and Component Reports appear in the Job Summary Report as:

•   BJ (Batch - Prestarted Job) - the QZDAINIT server jobs

Because of the non-interactive (batch) nature of the server jobs, transaction counts and response times are not shown. However, because they service ″interactive″ requests from the client, the jobs run at priority 20 when the IBM-shipped definitions (the defaults) for the following are used:
−   Subsystem QSERVER routing entry (number 400) for a ′QZDAINIT′ compare value. This routing entry uses OS/400 class description QSYS/QPWFSERVER. Class description QPWFSERVER specifies run priority 20 and a time slice of 3000 milliseconds (3 seconds).

Although transaction counts are not shown on the Job Summary Report for BJ jobs, CPU seconds and disk I/O counts are shown on the report, which may give an indication of the resource overhead of the server jobs on the AS/400 system. This re-emphasizes specific jobs worth looking into based on the previous Component Report review.


•   BE (Batch - Evoked Job) - the PC to AS/400 APPC sessions

The evoked communication jobs establishing the APPC conversation use a negligible amount of system resources. These jobs include the ″router function″ of Client Access/400 5250 emulation jobs and the ″client connection″ primary routing program for the original clients. In addition to 5250 emulation ″router function″ jobs and ″client connection″ jobs, other jobs are recorded as BE jobs. These include any 5250 display station pass-through job started on this (target) system, as well as any user-written program job that was started on this target AS/400 system by a received program start request (″evoke″) sent from an authorized remote system.

•   I (Interactive Jobs) - the interactive 5250 jobs

These can be any jobs started through an AS/400 sign-on workstation display device or emulator. This includes 5250 twinaxial devices, 3270 remote attached devices, 5250 emulation from a client PC (a ″C″ type job on the Component Report), display station pass-through, TCP/IP TELNET, and ASCII display devices attached to the ASCII workstation controller. These are typical interactive jobs, and they show transaction counts and response times because, at the OS/400 level, they do 5250 display device input and output operations.

In the System Summary Data section, measures of the average CPU seconds and disk I/Os per interactive transaction are shown. In most production systems with a significant interactive workload, these values provide a good reference for ″normal″ interactive resource overhead. An analysis of the interactive response times is also available.

The Job Summary Report contains many ″top 10″ sections, showing interactive jobs and programs with high CPU time, high response time, high ″wait time,″ and so on. Typically, the database server jobs and program names never appear in these top ten lists. (Remember, even on a well-running system, there are jobs and programs in the top ten list. You should look at the time values for each job/program to determine whether the values indicate normal operation or a potential problem.)

The ″top ten″ categories where any job type (including QZDAINIT or QZDASOINIT jobs) may appear are ″Longest Holders of Seize/Lock Conflicts″ and ″Longest Seize/Lock Conflicts.″ Check for long seizes and locks where server jobs are involved in the Seize/Lock Conflicts section of the Job Summary Report. In our case study, the QZDAINIT job is ranked number 1 as the longest holder of a seize/lock conflict. See Figure 89 on page 368.


Job Summary Report - Longest Holders of Seize/Lock Conflicts (Page 0030)   5/28/96 16:43:54
ODBC API - 3 times - 4 transactions V3R1
Member . . : ODBCAPI31   Model/Serial . : 20S-2010/10-1053A   Main storage . . : 96.0 M
Library  . : USERIDXX    System name  . : SYSNM000            Version/Release  : 3/ 1.0
Started  . : 05/22/96 17:36:21          Stopped . . : 05/22/96 17:46:21

                               Job       User   Job             S/            ---- Object ----
    Rank  Value  Time          Name      Name   Number  Pl Typ  L   Type      Library   File
     1    .101   17.43.27.052  QZDAINIT  QUSER  019535  05 BJ   S   DS        CSDB      STOCK
     2    .020   17.41.32.803  SRMRTR                   05 L    S   ADDR      00000BCD  D000PTSK
     3    .014   17.41.30.486  QINTER    QSYS   018402  05 M    S   DEVD      P78CLFM1S1
     4    .013   17.41.30.504  QINTER    QSYS   018402  05 M    S   DEVD      P78CLFM1S1
     5    .002   17.41.32.897  QSYSARB   QSYS   018352  05 S    S   DEVD      P78CLFM1
     6    .002   17.41.32.888  QSYSARB   QSYS   018352  05 S    S   LIND      TRNLINE
     7    .001   17.41.32.858  SRMRTR                   05 L    S   ADDR      00000BCD  D000PTSK

Figure 89. Job Summary Report - Longest Holders of Seize/Lock Conflicts

In this example, you can see that job 019535 held a lock on the STOCK file for .101 second, or 101 milliseconds. If you saw multiple occurrences of a seize/lock of 2 seconds or more in this section or under the ″Longest Seize/Lock Conflicts″ section, you should consider a possible performance impact. In this example, a single value of 101 milliseconds is obviously ″normal.″ A series of long seize/lock conflicts indicates an application design that locks a record/row for update and then takes a long time to either release the lock or do the actual update when the application is in actual production mode with a large number of active users.

Whereas the Component Report indicated a percentage of CPU utilization, the Batch Job Analysis section of the Job Summary Report gives the actual number of CPU seconds in addition to synchronous and asynchronous disk I/Os. These values can be used as if they referred to an interactive job. For example, in Figure 90, QZDAINIT job #019535 used 26.08 seconds of CPU and 872 synchronous disk I/Os in processing 12 customer orders of 10 items each. The resource utilization is 2.17 seconds of CPU (26.08/12) and approximately 72 (872/12) synchronous disk I/Os per customer order.
Job Summary Report - Batch Job Analysis (Page 0033)                  5/28/96 16:43:54
ODBC API - 3 times - 4 transactions V3R1
Member . . : ODBCAPI31   Model/Serial . : 20S-2010/10-1053A   Main storage . . : 96.0 M
Library  . : USERIDXX    System name  . : SYSNM000            Version/Release  : 3/ 1.0
Started  . : 05/22/96 17:36:21          Stopped . . : 05/22/96 17:46:21

    Job          User      Job     T        Elapsed    CPU      CPU    Sync    Async
    Name         Name      Number  yp  Pty  Seconds    Seconds  Util   Disk    Disk
                                                                       I/O     I/O
    QZDAINIT     QUSER     019535  BJ  20   427.610    26.076   6.1     872     479
    QZDAINIT     QUSER     019536  BJ  20   600.895
    QZDAINIT     QUSER     019537  BJ  20   600.897
    QZDASOINIT   QUSER     019322  BJ  20   600.885
    QPFRMON      QPGMR     019538  B   00   600.898     5.423    .9     484     150

  (Other prestarted server jobs - QZRCSRVS, QNPSERVR, QZSOSIGN, QZSCSRV*, QPWFSERVSO -
  were idle, showed elapsed time only, and are omitted from this excerpt.)

Figure 90. Job Summary Report - Batch Job Analysis
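The per-order figures quoted in the text come directly from the Figure 90 totals for job 019535; the arithmetic:

```python
# Totals for QZDAINIT job 019535 from the Batch Job Analysis (Figure 90).
cpu_seconds = 26.076
sync_disk_io = 872
orders = 12            # 3 runs x 4 customer orders
print(f"CPU per order:       {cpu_seconds / orders:.2f} seconds")   # about 2.17
print(f"Sync I/O per order:  {sync_disk_io / orders:.1f}")          # about 72.7
```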


10.6.2.4 Job Transaction Detail Information
The Transaction and the Transition Reports provide detailed information about:
•   Transactions within a job (Transaction Report)
•   Job state transitions within a transaction (Transition Detail Report)

Wait (W) to active (A) means work has arrived for the job and the job is now using the CPU. Active to wait can mean several things, including: the transaction has been completed (for example, a response has been sent to the client); or the active job has been suspended while it waits for a seize/lock to be released, for an entry to appear on a data queue, or for a higher priority job/LIC task to complete.

Print these reports for specific jobs only, to minimize large print volumes.
The Transaction Report provides detailed information about each transaction in the job:
•   Transaction response time
•   Name of the program that is active at the time the transaction starts
•   Processing unit time used
•   Number of I/O requests

The Transaction Report output has two parts:
•   The detail section, which shows data about each transaction in the job
•   The summary section, which shows data about overall job operation

The ″Totals″ section at the end of the Transaction Report for the QZDAINIT job indicates total resource utilization for the job (total CPU seconds, synchronous disk I/O, and so on) and a Transaction Count. The ″transaction boundary″ coincides with the server application returning to the QZDACMDP program. However, in the transaction reports the transaction is actually charged to the mainline database server program QZDAINIT in our APPC example. (For TCP/IP, the program charged would be QZDASOINIT.) We confirmed this by reviewing a QZDAINIT Job Trace report. QZDACMDP is the CA/400 internal router program (command processor) that processes almost every incoming request from the client and almost every outgoing response to the client. Program QZDACMDP appears in the Transition Report under the heading ″Last″ (the program at the bottom of the invocation (call) stack) as shown in Figure 92 on page 371, but QZDAINIT is charged with the transaction.


Batch Transactions! The Transaction Report recognizes some ″transaction boundaries″ at which the job transitions from an Active-to-Wait state (as shown in the Transition Report). However, these boundaries do not coincide with an identifiable event in the client application. A single client ″transaction″ results in many server transactions, in much the same way as a business transaction is made up of many AS/400 interactive transactions. Even with straightforward 5250 applications, where the Performance Monitor accurately records workstation I/O as a transaction, there is no automated correlation of the number of 5250 transactions (Enter key responses) with a single order completion. A single order may be considered by the customer as a ″business transaction.″ In our case study reports, we have not been able to reconcile the ″business transaction″ at the client with the ″transaction boundary″ in the Transaction Report.

Based on our analysis of the communications line trace showing ODBC requests received and AS/400 responses sent, the number of transactions shown in the Transaction Report summary section represents the number of communication flows generated by the application. Figure 91 shows the end of the Transaction Report for our case study job QZDAINIT/QUSER/019535.
Transaction Report (Page 0011)                                       5/29/96 9:20:20
ODBC API - 3 times - 4 transactions V3R1
Member . . : ODBCAPI31   Model/Serial . : 20S-2010/10-1053A   Main storage . . : 96.0 M
Library  . : USERIDXX    System name  . : SYSNM000            Version/Release  : 3/ 1.0
Started  . : 05/22/96 17:36:21          Stopped . . : 05/22/96 17:46:21
Job name . : QZDAINIT    User name . : QUSER    Job number . : 019535

    Time       Program     CPU Sec   Sync   Async   Active   Tns Rsp   Key/
                           Per Tns   Disk   Disk    Time     Time      Think
    17.42.28   QZDAINIT     .015       0      1      .027     .026       .0
    17.42.28   QZDAINIT     .015       0      1      .026     .025       .0
    17.42.29   QZDAINIT     .016       0      2      .023     .023       .0
    :          :             :         :      :       :        :         :
    17.43.14   QZDAINIT     .016       0      0      .019     .019     45.0
    17.43.14   QZDAINIT     .006       0      0      .007     .007       .0
    17.43.14   *JOBOFF*    1.048      57     24    13.362    2.514       .0

    J O B   S U M M A R Y   D A T A   (T O T A L S)
    Average       .058 CPU seconds, 2 sync and 1 async disk I/Os per transaction
    Count         453 transactions
    Maximum      1.048 CPU seconds, 57 sync disk I/Os, 24 async disk I/Os
    Total/Job   26.076 CPU seconds, 872 sync disk I/Os, 479 async disk I/Os
    Elapsed    427.610 seconds; 6.1 percent CPU utilization

Figure 91. Transaction Report
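The job summary totals in Figure 91 also let you cross-check the per-flow averages: dividing the total CPU seconds by the 453 transaction boundaries reproduces the .058 on the report's Average line.

```python
# Totals from the Transaction Report job summary (Figure 91).
total_cpu_seconds = 26.076
transaction_count = 453     # server "transactions" = communication flows
avg_cpu = total_cpu_seconds / transaction_count
print(f"{avg_cpu:.3f}")  # prints 0.058, matching the report's Average line
```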


10.6.2.5 Transition Detail Report
The Transition Detail Report provides information similar to that of the Transaction Report, but the data (for example, processing unit time and I/O requests) is shown for each job state transition (wait to active, active to wait, object locking time-outs, and so on). The Transition Report is composed of two sections:
•   Transition detail, which shows each state transition made by the job (going from one state to another, such as active-to-ineligible)
•   Summary, which shows the same data as the summary output from the Transaction Report

Figure 92 shows part of the Transition Report of our case study job.
Transition Report (Page 0001)                                        5/29/96 9:19:00
ODBC API - 3 times - 4 transactions V3R1
Member . . : ODBCAPI31   Model/Serial . : 20S-2010/10-1053A   Main storage . . : 96.0 M
Library  . : USERIDXX    System name  . : SYSNM000            Version/Release  : 3/ 1.0
Started  . : 05/22/96 17:36:21          Stopped . . : 05/22/96 17:46:21
Job name . : QZDAINIT    User name . : QUSER    Job number . : 019535    Job type . : BJ

    17.36.20.324   *TRACE ON

  (Excerpt. The report then lists each job state transition - wait to active ″->A″ and
  active to wait ″W<″ - with the elapsed wait time, active/response time, CPU seconds,
  synchronous/asynchronous physical I/O counts, and the last four programs in the
  invocation stack. At the transaction boundaries of this job, the program under the
  ″Last″ heading is QZDACMDP, with QZDAINIT as the second and third programs in the
  stack.)

Figure 92. Transition Report

10.6.3 ODBC Trace
The ODBC Trace is generated on the client by selecting an option in the ODBC Administrator display. This trace includes the ODBC commands issued by the client application during execution of a client/server application. Reviewing the ODBC Trace can be quite tedious, but it is helpful in understanding whether the client application used efficient code and API functions. Note that we recommend not running the ODBC trace on the client while collecting any time-dependent performance information on the AS/400 server side. The ODBC trace causes considerable performance degradation on the client side, which can invalidate time-dependent information on the AS/400 server.

Chapter 10. Case Study

371


Figure 93 on page 373 shows parts of our case study ODBC Trace. For more complete examples of an ODBC trace, see Appendix C, “ODBC Trace Example” on page 467. Some of the key ODBC statements have been highlighted in the case study report to assist in relating these statements to the Communication Trace report and the Job Log report examples shown later in this redbook; the highlights are not included in the actual ODBC trace recorded on the client workstation. The SQL cursor names (for example, CRSR0002) sent by the client and seen in the Communications Trace have also been added to the ODBC Trace to assist in reconciling the information in the two reports.


SQLAllocEnv(phenv340F0000);  1
SQLAllocConnect(henv340F0000, phdbc604F0000);  1
SQLConnect(hdbc604F0000, ″dbfil″, -3, ″″, -3, ″″, -3);  1
SQLAllocStmt(hdbc604F0000, phstmt344F0000);  2
---
SQLPrepare(hstmt344F0000, ″Select STDI01, STDI02, STDI03, STDI04, STDI05, STDI06,
  STDI07, STDI08, STDI09, STDI10, STQTY, STYTD, STORDRS, STREMORD, STDATA
  from CSDB.STOCK where (STWID=? and STIID=?)″, -3);  CRSR0002
SQLPrepare(hstmt346F0000, ″Select IID, INAME, IPRICE, IDATA from CSDB.ITEM
  where IID in (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)″, -3);  CRSR0004
SQLPrepare(hstmt348F0000, ″Select CLAST, CDCT, CCREDT, WTAX from CSDB.CSTMR,
  CSDB.WRHS where CWID=? and CDID=? and CID=? and WID=?″, -3);  CRSR0005
SQLPrepare(hstmt34970000, ″Select DTAX, DNXTOR from CSDB.DSTRCT where (DWID=? and
  DID=?)″, -3);  CRSR0006
SQLPrepare(hstmt42B70000, ″Insert into CSDB.ORDERS (OWID, ODID, OCID, OID, OENTDT,
  OENTTM, OCARID, OLINES, OLOCAL) values (?,?,?,?,?,?,?,?,?)″, -3);  CRSR0009
SQLPrepare(hstmt5E8F0000, ″Insert into CSDB.NEWORD (NOOID, NODID, NOWID)
  values (?,?,?)″, -3);  CRSR0010
SQLPrepare(hstmt33C70000, ″Insert into CSDB.ORDLIN (OLOID, OLDID, OLWID, OLNBR,
  OLSPWH, OLIID, OLQTY, OLAMNT, OLDLVD, OLDLVT, OLDSTI)
  values (?,?,?,?,?,?,?,?,?,?,?)″, -3);  CRSR0008
---
SQLBindParam(hstmt348F0000, 4, 1, 1, 1, 4, 0, rgbValue, 4, pcbValue);
SQLExecute(hstmt348F0000);  #0005
SQLParamData(hstmt348F0000, prgbValue);  VisualBasic
SQLPutData(hstmt348F0000, rgbValue, 4);  VisualBasic
---
SQLBindParam(hstmt346F0000, 15, 1, 1, 1, 6, 0, rgbValue, 6, pcbValue);
SQLExecute(hstmt346F0000);  #0004
---
SQLBindParam(hstmt344F0000, 2, 1, 1, 1, 6, 0, rgbValue, 6, pcbValue);
SQLExecute(hstmt344F0000);  #0002
---
SQLPrepare(hstmt342F0000, ″Update CSDB.STOCK set STQTY=?, STYTD=?, STORDRS=?,
  STREMORD=? where (STWID=? and STIID=?)″, -3);  CRSR0003
---
SQLBindParam(hstmt342F0000, 6, 1, 1, 1, 6, 0, rgbValue, 6, pcbValue);
SQLExecute(hstmt342F0000);  #0003
---
SQLBindParam(hstmt34970000, 2, 1, 5, 3, 3, 0, rgbValue, 0, pcbValue);
SQLExecute(hstmt34970000);  #0006
---
SQLPrepare(hstmt09AF0000, ″Update CSDB.DSTRCT set DNXTOR=? where (DWID=?,
  DID=?)″, -3);  CRSR0007
---
SQLBindParam(hstmt09AF0000, 3, 1, 1, 4, 0, 0, rgbValue, 0, pcbValue);
SQLExecute(hstmt09AF0000);  #0007
---
SQLBindParam(hstmt33C70000, 11, 1, 1, 1, 24, 0, rgbValue, 24, pcbValue);
SQLExecute(hstmt33C70000);  #0008
---
SQLBindParam(hstmt33C70000, 11, 1, 1, 1, 24, 0, rgbValue, 24, pcbValue);
SQLExecute(hstmt33C70000);  #0008
---  (#0008 repeated ten consecutive times)
SQLBindParam(hstmt42B70000, 9, 1, 5, 3, 1, 0, rgbValue, 0, pcbValue);
SQLExecute(hstmt42B70000);  #0009
---
SQLBindParam(hstmt5E8F0000, 3, 1, 1, 1, 4, 0, rgbValue, 4, pcbValue);
SQLExecute(hstmt5E8F0000);  #0010
---
SQLDisconnect(hdbc604F0000);
Figure 93. ODBC Trace


The statements marked with 1 show the allocation of an ODBC environment (phenv340F0000), the allocation of an ODBC connection (phdbc604F0000) for that environment, and the SQL connect to a data source (dbfil) using the connection hdbc604F0000. This is done at the beginning of an application and at any other time a new connection needs to be established. Normally, it is done once at the beginning of an application, and the connection is maintained until the application needs to be shut down. For example, you do not normally start and end a connection for each order in our benchmark application. If you start and stop a connection for each order, fewer orders are completed per unit of time, and it causes unnecessary server overhead if done by many clients within a short period of time. For each SQL statement, the program must also allocate a statement handle, as shown in 2: SQLAllocStmt returns phstmt344F0000 for the SQLPrepare of Select STDI01, STDI02, ..., STDATA from CSDB.STOCK. The client/server application performs the following functions for each customer order after the initial connection is established between the client program and the QZDAINIT server job on the AS/400 system:

Prepares statements for:
−  CRSR0002   Select from CSDB/STOCK table.
−  CRSR0004   Select from CSDB/ITEM table.
−  CRSR0005   Select from CSDB/CSTMR and CSDB/WRHS tables.
−  CRSR0006   Select from CSDB/DSTRCT table.
−  CRSR0009   Insert into CSDB/ORDERS table.
−  CRSR0010   Insert into CSDB/NEWORD table.
−  CRSR0008   Insert into CSDB/ORDLIN table.

The ″?″ (question mark), for example STWID=? in the SQLPrepare for CRSR0002, indicates a parameter marker. Parameter markers allow variables to be passed with the SQL statement at execution time, which improves performance. For performance, it is important to issue only one SQLPrepare for each SQL statement that is run more than once.

Sets up internal mapping of SQL columns to application variables (through SQLBindParam, SQLParamData, and SQLPutData statements). Some of these statements are language-API dependent. Reviewing them may help you recognize the interface being used, but they have little performance impact and do not flow across to the server (AS/400) system.

Processes a customer order as follows:
1. Executes ODBC statement #0005.
2. Executes ODBC statement #0004.
3. Repetitively executes statements #0002/#0003 ten times, corresponding to the ten line items.
4. Note that a prepare is issued for CRSR0003 (Update CSDB/STOCK table) just before it is executed for the first time.
5. Executes ODBC statement #0006.
6. Prepares CRSR0007 (Update CSDB/DSTRCT table).
7. Executes ODBC statement #0007.
8. Repetitively executes ODBC statement #0008 ten times, inserting a record in the ORDLIN table each time.


9. Executes ODBC statement #0009 to insert a record into the ORDERS table.
10. Executes ODBC statement #0010 to insert a record into the NEWORD table.

The insert into ORDLIN (statement #0008) is repeated 10 consecutive times, once for each line item ordered. This is a good candidate for use of blocked insert.
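How much a blocked insert could save here can be put in rough numbers. The following is a minimal sketch of the request-count arithmetic; the model is our illustration, not a measurement of the actual driver:

```python
def insert_requests(rows, rows_per_request):
    """Execute requests needed to insert `rows` rows when each request
    carries `rows_per_request` rows (1 models single-row inserts)."""
    return -(-rows // rows_per_request)  # ceiling division

# Ten ORDLIN line items per order:
single_row = insert_requests(10, 1)   # one execute per line item
blocked = insert_requests(10, 10)     # all ten rows in one blocked insert
print(single_row, blocked)
```

With ten line items per order, a blocked insert collapses ten client/server round trips into one.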

Our client application is a Visual Basic application. You can tell this by the presence of the SQLParamData and SQLPutData statements, which are used because of the way Visual Basic manages memory on the client system. See Chapter 5, ″Client/Server Database Serving″, for more information about statement performance.
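The value of the prepare-once rule described earlier can be shown with simple request counting for the STOCK processing in one order (ten line items, each running SELECT #0002 and UPDATE #0003). This is a rough model of request counts, not a measurement:

```python
LINE_ITEMS = 10   # line items per order
STATEMENTS = 2    # distinct STOCK statements: SELECT #0002 and UPDATE #0003

def requests_prepare_once(items, stmts):
    # One SQLPrepare per distinct statement, then one SQLExecute per use.
    return stmts + items * stmts

def requests_prepare_every_time(items, stmts):
    # An SQLPrepare accompanies every single SQLExecute.
    return 2 * items * stmts

print(requests_prepare_once(LINE_ITEMS, STATEMENTS),
      requests_prepare_every_time(LINE_ITEMS, STATEMENTS))
```

Preparing once cuts the STOCK-related requests for a single order from 40 to 22; across hundreds of orders, the saving compounds.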

10.6.4 SQL Package
The initial run of a client/server application using the SQL package facility results in the package being created and any statement preparations being performed. Subsequent runs of the application have significantly faster response times because the SQL package already exists and the SQL statements are already prepared. The overhead of SQL package creation and statement preparation is normally not part of the ″live″ operating environment. Ideally, the package used during application development is deleted and then created anew for the production environment. This ensures that SQL statements that are no longer used are not stored within the package. Any SQLPrepare statements are done either before the first order or only on the first order; thereafter, only the SQLExecute statements are used. In our case, the SQL package description shows that the package was created before our first test, and the information on the SQL package indicates that no access plan updates have been done since then. This means that our application is not updating the SQL package, which is good for performance and indicates a stable database environment. Figure 94 shows the SQL package description and Figure 95 on page 376 shows the SQL information about statement CRSR0002:

Display Object Description - Full

Object . . . . . . . :   VBFBA            Attribute  . . . . . :   PACKAGE
  Library  . . . . . :     QGPL           Owner  . . . . . . . :   USERIDxx
Type . . . . . . . . :   *SQLPKG          Primary group  . . . :   *NONE

Creation information:
  Creation date/time . . . . . . . . :   05/16/96 15:07:41
  Created by user  . . . . . . . . . :   USERIDxx
  System created on  . . . . . . . . :   SYSNM000
  Object domain  . . . . . . . . . . :   *USER

Figure 94. SQL Package Description


5763SS1 V3R1M0 940909          Print SQL information          05/22/96 11:49:45
SQL package QGPL/VBFBA
  Object name...............QGPL/VBFBA
  Object type...............*SQLPKG
  CRTSQL***  PGM(QGPL/VBFBA)  SRCFILE( / )  SRCMBR( )  COMMIT(*NONE)
    OPTION(*SQL *PERIOD)  TGTRLS(*PRV)  ALWCPYDTA(*OPTIMIZE)  CLOSQLCSR(*ENDPGM)
  Select STDI01, STDI02, STDI03, STDI04, STDI05, STDI06, STDI07, STDI08, STDI09,
    STDI10, STQTY, STYTD, STORDRS, STREMORD, STDATA from CSDB.STOCK
    where (STWID=? and STIID=?)   CRSR0002
  SQL4021  Access plan last saved on 05/16/96 at 15:18:53.
  SQL4020  Estimated query run time is 1 seconds.
  SQL4017  Host variables implemented as reusable ODP.
  SQL4006  All access paths considered for file 1.
  SQL4008  Access path STOCKLF used for file 1.
  SQL4011  Key row positioning used on file 1.

Figure 95. SQL Package Information - CRSR0002 Statement

10.6.5 Query Optimizer Decisions
As previously stated, the initial run of a client/server application using SQL package support results in the package being created and any statement preparations being performed. At that time, the Query Optimizer makes decisions on how to process each SQL statement. The Query Optimizer's decision making includes:

•  Access Path Considerations - can it use an existing access path or create a new one
•  Join Key Field Choice
•  Open Data Path Processing
•  Blocking Choice
•  Data Translation Choice

Key Optimizer evaluations and decisions made at run time are reported in the job log when the job is in DEBUG mode. The V3R6 Optimizer provides more detailed job log messages than V3R1. The following display was observed after the case study order entry application completed a sequence of 10 customer orders. It shows the number of file I/Os issued to the various application files/tables. This provides useful information about the frequency of references made to the various AS/400 database tables, which helps in determining which files give the greatest benefit if SETOBJACC is used to set up a read cache on the AS/400 system.


5763SS1 V3R1M0 940909             Display Open Files       SYSNM000   05/22/96  17:38:01
Device . . . . . :  P23LBDZW                User . . . . . . . . . . :  USERIDXX
Job . . :  QZDAINIT      User . . :  QUSER       Number . . . :  019535
Number of open data paths  . . . . . . . . . . :  12

File        Library    Member/Device   Record Format   Type   I/O Count
QAZDTBL1    QIWS       QAZDTBL1                        LGL           0
QAZDTBL4    QIWS       QAZDTBL4                        LGL           0
QAZDTBL7    QIWS       QAZDTBL7                        LGL           0
QAZDCOLM    QIWS       QAZDCOLM                        LGL           0
WRHS        CSDB       WRHS                            PHY           0
CSTMR       CSDB       CSTMR           FORMAT0001      PHY          10
ITEM        CSDB       ITEM            FORMAT0001      PHY          10
STOCK       CSDB       STOCK           FORMAT0001      PHY         200
DSTRCT      CSDB       DSTRCT          FORMAT0001      PHY          20
ORDLIN      CSDB       ORDLIN          FORMAT0001      PHY         100
ORDERS      CSDB       ORDERS          FORMAT0001      PHY          10
NEWORD      CSDB       NEWORD          FORMAT0001      PHY          10

Press Enter to continue.
F3=Exit   F5=Refresh   F11=Display scoping data   F12=Cancel   F16=Job menu
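The I/O counts in the display above can be used directly to pick SETOBJACC candidates. A minimal sketch (the counts are transcribed from the display; the ranking logic is ours):

```python
# I/O counts per database table after ten customer orders, transcribed
# from the Display Open Files output above (files with zero I/O omitted).
io_counts = {
    "STOCK": 200, "ORDLIN": 100, "DSTRCT": 20,
    "CSTMR": 10, "ITEM": 10, "ORDERS": 10, "NEWORD": 10,
}

# Rank tables by reference frequency: the busiest tables benefit most
# from a SETOBJACC read cache.
ranked = sorted(io_counts.items(), key=lambda kv: kv[1], reverse=True)
print(ranked[0])
```

STOCK dominates the I/O counts, so it is the first candidate for a read cache.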

10.6.6 Job Log
The job log for an SQL job provides some information on database table usage and Query Optimizer decisions. Additionally, more detailed job messages are recorded when the job is in DEBUG mode. We strongly recommend reviewing the DEBUG job log messages for the database server job as a first step in understanding performance results, whether the queries come from local system jobs or from a client. There are some specific messages we want to highlight when doing application performance analysis:

•  When the QZDAINIT server job is associated with a user of a client/server application, message CPIAD02 is issued, indicating the name of the interactive user.
•  The completion of each Prepare statement is identified by an SQL7968 message.
•  When it is necessary to rebuild the access plan, message CPI4323 explains the reason. Rebuilding the access plan is not good for performance and in most cases requires additional overhead.
•  Before the first ″Open cursor″ for a specific file/table, the Query Optimizer makes some decisions and records them in the following messages:
   −  Message CPI432C indicates that all access paths have been considered and which one was chosen.
   −  Message CPI4326 shows the join key choice.
   −  Message CPI4328 indicates that an access path was used by the query.
   −  Message SQL7912 indicates that the ODP has been created. Remember that the Open Data Path (ODP) is the control block that links the running program to the OS/400 file-dependent (for example, database file/table) data management that accesses the actual data. Creating the ODP is a relatively long-running function on OS/400, so you want to create it once per file for as long as the job is active. See the following messages to ensure the ODP is reused and not re-created during the job.
   −  When blocking has been used for the query, message SQL7916 is issued. SQL retrieves a block of records on the first fetch, and subsequent fetch statements do not require SQL to request more records. This improves performance, and SQL uses blocking whenever possible.

•  Message SQL7962 indicates that the cursor has been opened.
•  Look for message SQL7914. This message indicates that the ODP has not been deleted and can be reused. This improves performance because the ODP is not re-created on the next run of the statement. However, if message SQL7913 is issued, the ODP has been deleted, cannot be reused, and a new ODP has to be created the next time the SQL statement is run. If you do not see the ″ODP reused″ message (SQL7911), review the ODBC SQL requests issued by the client workstation and change them, if possible.

In debug mode, the job log records the time and the tasks performed for each ODBC SQL statement issued to the AS/400 server job. In many cases, reviewing the messages can determine which functions took more time than others; those longer-running functions should be reviewed for performance improvement. For example, reviewing a job log (not shown here) shows that the start time of the first client/server transaction is approximately the time when CRSR0005 was opened, at 17:37:48. The end time corresponds to the addition of a record to the NEWORD table at 17:37:57, a duration of approximately 9 seconds. The second order started at 17:38:00 and finished at 17:38:05, a duration of approximately 5 seconds. Why was the first order longer than the second order? Analyzing the job log, we found the following reasons:

•  For the first run of each statement, many tasks are done by the Query Optimizer.
•  ODPs are created during this first run, and all statements reuse them afterward.
•  All Fetch statements use blocking, so in many cases the next Fetch statement does not require SQL to request more records.
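These reasons can be sketched with a toy cost model. All per-operation times below are assumptions chosen purely for illustration, not measured values; the point is the shape of the difference, not the numbers:

```python
FULL_OPEN = 0.40    # assumed seconds: optimizer pass plus ODP creation
REUSED_OPEN = 0.04  # assumed seconds: subsequent run with the ODP reused

def order_time(statements, runs_per_statement, first_order):
    # The first order pays one full open per statement; later orders
    # (and later runs within an order) reuse the ODPs.
    full_opens = statements if first_order else 0
    reuses = statements * runs_per_statement - full_opens
    return full_opens * FULL_OPEN + reuses * REUSED_OPEN

first = order_time(8, 2, True)
second = order_time(8, 2, False)
print(first, second)  # the first order carries the one-time open cost
```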

Figure 96 on page 380 shows part of our case study job log. We have added ODBC statement references (for example #0002 and CRSR0003 ) to indicate the actual SQL function associated with the message.


Important Job Log Review Consideration
In many cases, the reviewer of the job log must understand the sequence of SQL operations requested by the client and manually associate each SQL operation with the corresponding job log message; the job log message itself does not show the SQL statement used. The ODBC trace is the easiest source for seeing the full SQL statement and the sequence of these statements (operations). To minimize the amount of job log and ODBC trace information to be reviewed, run the application for only a short period of time. In our example, two order ″cycles″ should be sufficient.


5763SS1 V3R1M0 940909            Display Job Log         SYSASM07  05/22/96 17:41:05
Job name . . . . . . :  QZDAINIT    User . . . . . . :  QUSER    Number . . . :  019535
Job description  . . :  QDFTJOBD    Library  . . . . :  QGPL

CPF1124  Information  05/22/96 17:32:43  QWTPIIPP  QSYS
  Message . . . . :  Job 019535/QUSER/QZDAINIT started on 05/22/96 at 17:32:43 in
  subsystem QSERVER in QSYS. Job entered system on 05/22/96 at 17:32:43.
CPC2196  Completion   05/22/96 17:32:43  QLICUSRL  QSYS
  Message . . . . :  Library QIWS added to library list.
CPIAD02  Information  05/22/96 17:36:45  QZDACMDP  QIWS
  Message . . . . :  Servicing user profile ITSCID35.

(Job log messages for the SQL Prepare and SQL Execute operations for files CSTMR,
WRHS, and ITEM are not shown.)

Open/Fetch (SELECT) first row of STOCK (#0002) - see the corresponding ODBC trace
entries in Figure 93 on page 373:
CPI432C  Information  17:37:50  Message . . . . :  All access paths were considered for file STOCK.
CPI4328  Information  17:37:50  Message . . . . :  Access path of file STOCKLF was used by query.
SQL7912  Information  17:37:50  Message . . . . :  ODP created.
SQL7916  Information  17:37:50  Message . . . . :  Blocking used for query.
SQL7962  Completion   17:37:50  Message . . . . :  Cursor CRSR0002 opened.

Prepare statement for first UPDATE of STOCK (CRSR0003):
SQL7968  Completion   17:37:50  Message . . . . :  DESCRIBE of prepared statement
  QZ797F04E6A4000055 completed.

Execute #0003 (first item):
CPI432C  Information  17:37:50  Message . . . . :  All access paths were considered for file STOCK.
CPI4328  Information  17:37:50  Message . . . . :  Access path of file STOCKLF was used by query.
SQL7912  Information  17:37:50  Message . . . . :  ODP created.
SQL7914  Information  17:37:51  Message . . . . :  ODP not deleted.
SQL7957  Completion   17:37:51  Message . . . . :  1 rows updated in STOCK in CSDB.
SQL7914  Information  17:37:51  Message . . . . :  ODP not deleted.

Open/Fetch #0002 (second item):
SQL7959  Completion   17:37:51  Message . . . . :  Cursor CRSR0002 closed.
SQL7911  Information  17:37:51  Message . . . . :  ODP reused.
SQL7962  Completion   17:37:51  Message . . . . :  Cursor CRSR0002 opened.

Execute #0003 (second item):
SQL7911  Information  17:37:51  Message . . . . :  ODP reused.
SQL7914  Information  17:37:51  Message . . . . :  ODP not deleted.
SQL7957  Completion   17:37:51  Message . . . . :  1 rows updated in STOCK in CSDB.
Figure 96. Job Log

Note that the SQLPrepare for the first update (CRSR0003) is run during the first order rather than with the other SQLPrepare statements run before the first order. This is an application design choice, based on not doing an update operation until the order has been completed by the client operator, in case the order is cancelled. We recommend doing this update Prepare before the first order along with the other SQLPrepare statements (not shown in the figure). Regardless, if the recommended application design tips are followed, the SQLUpdate for the STOCK file/table is


already in the SQL package shipped as part of the application install process. Therefore, the SQLPrepare of the update should run fast on the AS/400 server anyway.

10.6.7 Job Trace
Standard OS/400 job trace reports can be used to show the flow of IBM modules/programs involved in the database server job's ODBC operations. However, to be meaningful, some module/program names need to be explained. Figure 97 on page 383 shows a part of the Job Trace.

QTQGETCC (time stamp 17:37:55.263) QTQGETCC determines the CCSIDs to be used to process data between the AS/400 server and the client workstation. The QTQ prefix indicates Coded Character Set Identifier (CCSID) processing, which ensures that data characters on the AS/400 server and the client are accurately represented. Determining the CCSIDs to be used is mandatory AS/400 overhead that enables international language support. For example, data in US English may reside on the AS/400, but a French client may be sending or receiving the data.

QZDACMDP (time stamp 17:37:55.672) QZDACMDP is the CA/400 internal router program (Command Processor) that processes almost every incoming request from the client and almost every outgoing response to the client. On incoming client ODBC requests QZDACMDP initiates the call sequence to process the SQL function. QZD prefix indicates database server modules/programs.

QQQOPTIM (timestamp 17:37:55.341) QQQOPTIM is the Query Optimizer program that determines the access path to be used and other query processing algorithms to use. QQQ prefix indicates query programs.

QQQQUERY (timestamp 17:37:55.287) QQQQUERY is the main line query program in a sequence of calls to other QQQ programs.

QQQIMPLE (timestamp 17:37:55.392) QQQIMPLE is another main line query program that calls other system functions such as creating ODPs (time stamp 17:37:55.434) and sending messages to the job log (time stamp 17:37:55.397).

QSQUPDAT (time stamp 17:37:55.521) QSQUPDAT calls the sequence of modules/programs that do the actual database operation. See QDBGETSQ (database get sequential) at time stamp 17:37:55.526 and QDBUDR (database update record/row) at time stamp 17:37:55.536.

Note that the time stamps in the job trace are slightly larger than the real time of a call/return sequence would be if job trace were not running. Job trace introduces system overhead, but the time stamps can still be used relatively. That is, when the
difference between one call/return sequence's set of time stamps is larger than another's, the larger value indicates that the processing took longer than the call/return sequence with the smaller value. The Job Trace report can be used to verify that the ″transaction boundaries″ in the Transition Detail report correspond approximately to the occurrences of the QZDACMDP program. Using the time stamps of the returns to the QZDACMDP program together with the Transition Detail Report time stamps, it is possible to conclude that the QZDACMDP program gains control shortly before the server job transitions from an Active-to-Wait state. Figure 97 on page 383 shows a part of the Job Trace. See the Transition Detail Report example for job 019535 shown in Figure 92 on page 371.


5763SS1 V3R1M0 940909          AS/400 TRACE JOB INFORMATION       05/22/96 17:38:47
JOB- 019535/QUSER/QZDAINIT     TRACE TYPE - *ALL     RECORD COUNT - 003433
START TIME - 17:37:36          START DATE - 05/22/96   EXIT PROGRAM - *NONE

(Condensed excerpt; intermediate call/return entries and message-send detail lines
are omitted. The programs and time stamps referenced in the text are shown.)

TIME          PROGRAM    LIBRARY
17:37:55.263  QTQGETCC   QSYS      CCSID determination
17:37:55.287  QQQQUERY   QSYS      main line query program
17:37:55.341  QQQOPTIM   QSYS      Query Optimizer
17:37:55.392  QQQIMPLE   QSYS      query implementation
17:37:55.397  QMHSNDPM   QSYS      message sent to the job log
17:37:55.434  QDMCRODP   QSYS      ODP creation
17:37:55.521  QSQUPDAT   QSYS      SQL update processing
17:37:55.526  QDBGETSQ   QSYS      database get sequential
17:37:55.536  QDBUDR     QSYS      database update record/row
17:37:55.672  QZDACMDP   QIWS      command processor regains control

Figure 97. Job Trace


10.6.8 Communication Trace
A communications trace, either SNA or TCP/IP, can show the exact time stamps, in 0.1-second increments, for an ODBC request received by the AS/400 and the ODBC response sent by the AS/400 server. (After collecting APPC trace data, you must specify ″Format SNA data only = No″ to get the IOP time stamp included in a printed report.) Normally you do not need to run this trace, but when a performance problem occurs it can be useful to clearly indicate the portion of the total response time actually taken by the AS/400. For example, a network can be composed of communication routers and bridges that, under heavy loads, are actually part of the performance problem. You can compute the AS/400 response time and subtract it from the client workstation's actual response time.

Detailed examination of the communication trace confirms that a single business transaction (a customer order of 10 line items) involves many communication flows between the client and server processors. Note that this is not really any different from running a 5250 workstation order entry application over a communications line; a single order involves several ″enter keys.″ For example, consider the first customer order represented by the communication trace:

•  The AS/400 receives the first data frame for the order in frame #2315 at time period 1091.9. The last data frame for this order is frame #2603 at 1101.5. Between these two frames there are approximately 43 communications flows, including:
   −  Ten repetitions of a SELECT statement #0002
   −  Ten repetitions of an UPDATE statement #0003
•  The 10 update transactions take approximately 4.6 seconds based on the Communications Trace time stamps (starting at frame #2343 at 1093.5 seconds and ending at frame #2499 at 1098.1).

The communications line trace can be used to construct an outline of the communications flows occurring during the execution of the client/server application, including:
1. The exchange of XIDs to negotiate frame size
2. Binding the SNA RU and Pacing values
3. Receiving of the APPC evoke for program QIWS/QZDAINIT
4. Exchange of language ID and translation table information
5. Exchange of the SQL package name
6. The SQL prepare statements and the AS/400 responses
7. The actual execution of the SQL statements for selection, insert, update, and so on

In most cases, you do not include steps 1 through 5 as part of a performance problem because they should occur only once per database source connection. If you see these communications flows repeated for the same client workstation, the client application should be changed.
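The subtraction described above can be sketched in a few lines. This is an illustrative calculation only: the two frame timestamps are taken from the trace of the first order discussed above, and the client response time of 9.67 seconds is the figure recorded in the PC log for that same order.

```python
# Bound the AS/400's share of one order from the Communications Trace, then
# subtract it from the client's measured response time; what remains is the
# network (routers, bridges) plus client processing share.

first_frame = 1091.9   # frame #2315, first data frame of the order (seconds)
last_frame  = 1101.5   # frame #2603, last data frame of the order
client_rt   = 9.67     # response time recorded in the PC log

server_span = round(last_frame - first_frame, 2)  # time span at the AS/400
overhead    = round(client_rt - server_span, 2)   # network + client share
print(server_span, overhead)
```

Note that the span measured at the server still includes the line turnaround time between flows, so it is an upper bound on the AS/400's own processing time.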


The following figures show examples of some of these steps. We have highlighted ( xxxxxx ) important values shown in the traces. We want to describe three ODBC function types shown in our communication trace:

SQL - X'E004' - Provides OS/400 SQL run-time and OS/400 query support, including: Prepare, Describe, Open/Fetch, etc.
NRB - X'E005' - Provides OS/400 native database operations, including: Create source physical file, Add library list, Add physical file member, etc.
ROI - X'E006' - Retrieves object information about AS/400 objects, including: Library, Relational database, SQL package, etc.

These function types are "headers" for ODBC Request Parameter Blocks (RPBs) that specify the complete function being requested. To assist you in reviewing the communications trace examples that follow, we have provided SQL/NRB/ROI Functions and Parameter IDs tables in Appendix E, "Database Server Function Code Summaries" on page 493. We show the following functions in communication trace examples:
1. SQL Attribute Negotiation
Information from the Windows 3.1 ODBC.INI file or Windows 95 repository is used to establish the SQL attributes that are used during the conversation, including the server job (QIWS/QZDAINIT). The client program also identifies the database library/collection and the SQL package name (CSDB). See Figure 98 on page 386.


[Communications trace listing, ODBC API - V3R1, 05/22/96 17:18:56: frame #715 (received, timer 1028.7) carries the client's SQL Attribute request - function type X'E004', SQL Attribute function X'1F80', server job QIWS/QZDAINIT, the user ID, CCSID 2924, several X'38xx' attribute parameters (for example, X'3805' Translate Indicator), and library/package name CSDB. Frame #722 (sent, timer 1029.0) is the AS/400 response, including X'2800', language ID ENU, version V3R1M0000, relational database DB2400, package CSDB, and the EBCDIC/ASCII translate table.]

Figure 98. Communication Trace - SQL Negotiation Attributes

X'1F80' in frame #715 means "SQL Attribute Function" and, for example, X'3805' means "Translate Indicator." You can see more X'38xx' functions in this frame. For more information on these functions, see Appendix E, "Database Server Function Code Summaries" on page 493. 2924 means US upper/lower case is being used; CSDB means library CSDB is used.
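The library name is carried in EBCDIC in the frame: the bytes X'C3E2C4C2' in the trace data are CSDB. This mapping can be checked with any US EBCDIC code page table; the sketch below uses Python's built-in cp037 codec purely as an illustration (it approximates the code page involved, and is not the translate table negotiated by Client Access itself).

```python
# The AS/400 stores character data in EBCDIC; the client uses ASCII.
# cp037 is a built-in US EBCDIC code page, used here only for illustration.

library = "CSDB"
ebcdic = library.encode("cp037")   # bytes as they appear on the AS/400 side
print(ebcdic.hex().upper())        # C3E2C4C2, matching the trace data
print(ebcdic.decode("cp037"))      # translated back for the client: CSDB
```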
Frame #722 shows the AS/400 response to the SQL Attribute Negotiation request. Subtracting time stamp 1028.7 from time stamp 1029.0 shows the response took 0.3 seconds.

2800 indicates that a response to an SQL data request is being returned.
You can see the character set to be used between the AS/400 server and the client in the trace. Remember, the AS/400 stores data in EBCDIC format and personal computers use ASCII format, so an EBCDIC-to-ASCII translation must always be performed.
2. SQL Statement Preparation
The prepare statements for the various SELECT, UPDATE, and INSERT statements are received and responded to by the AS/400. Figure 99 on page 387 shows the SQL Prepare statement for CRSR0002 and the AS/400 response.


[Communications trace listing: frame #734 (received, timer 1029.1) carries the client's Prepare/Describe request - function type X'E004' with X'1D00', statement name STMT0002, cursor name CRSR0002, the statement text "SELECT STDI01, STDI02, ... STDATA FROM CSDB.STOCK WHERE (STWID=? AND STIID=?)", and package name VBFBA. Frame #736 (sent, timer 1029.2) is the AS/400 response for statement X'0002' with X'1803', containing the SQLCA and the definitions of the columns/fields (STDI01, STDI02, ...) in the SELECT statement.]

Figure 99. Communication Trace - SQL Prepare Statement

Note STMT0002 and CRSR0002 in the data received by the AS/400 in frame #734. These values are assigned by the client workstation ODBC support and are used later to identify the SQL statement/function being performed.

X'1D00' indicates the "Create RPB (based-on RPB initialized with default values)" request. VBFBA at the end of frame #734 is the SQL package name being used.
Frame #736 has the SQL statement identifier X'0002' and X'1803'. The X'0002' identifies the SQL statement being responded to - the "SQL Prepare/Describe" request for CRSR0002. The frame also includes the SQLCA (communication area - feedback information) and the definition of each column/field in the SELECT statement. The column/field definitions are used to map columns/fields between communication buffers and application buffers. Later, when the SQLExecute is sent by the client application, the client uses X'0002' to identify the SQL statement (within an SQL package) to be executed by the server.
3. SQL Statement Execution
The SQL statements to be executed are referred to by the hexadecimal value returned by the AS/400 during statement preparation. Figure 100 on page 388 shows CRSR0005 Open/Fetch Execution for SELECT CLAST, ... from CSDB.CSTMR, CSDB.WRHS where ....


[Communications trace listing: frame #2315 (received, timer 1091.9) carries the client's Open/Fetch request for statement X'0005' - function type X'E004' with X'1E00', package name VBFBA, and the parameter marker values (0001..07490001) for CWID, CDID, CID, and WID. Frame #2329 (sent, timer 1092.5) is the AS/400 response for X'0005' with X'180E', containing the SQLCA for CRSR0005 and the first block of result data.]

Figure 100. Communication Trace - SQL Open/Fetch Execution

Note X'0005' in frame #2315. It indicates a function request for CRSR0005, which was "agreed to" when the SQL Prepare for CRSR0005 was received by the AS/400. See Figure 93 on page 373 for the full SQLPrepare statement for CRSR0005. X'1E00' means the "Change descriptor, create if it does not exist" request. This ID occurs only in the first run of the statement; for subsequent transactions it is replaced by the ID shown in the send frame (in this case, X'180E'). The fields at the end of the frame (0001..07490001) represent the current client values for columns/fields CWID, CDID, CID, and WID - the "parameter marker" values being used for CRSR0005. CWID, CDID, and CID are primary keys of file/table CSTMR. WID is a primary key of the WRHS file/table.

Frame #2329 is the AS/400 response to this join SELECT, represented by X'0005'. The response to this SELECT took 0.6 seconds. The X'0005' and X'180E' mean the response is for the "SQL Open/Describe/Fetch Statement" for CRSR0005. By understanding how to read the communication trace, we can compute the AS/400 response time of each function. See Figure 101 on page 389:


┌───────────────────────┬─────────────┬──────────────┬───────────────┐
│                       │             │              │ Elapsed Time  │
│                       │  Commence   │   Complete   │    (secs)     │
│ Function              ├─────┬───────┼─────┬────────┼───────┬───────┤
│                       │Frame│ Time  │Frame│  Time  │ AS/400│  PC   │
│                       │ Nbr │       │ Nbr │        │       │       │
├───────────────────────┼─────┼───────┼─────┼────────┼───────┼───────┤
│ Connect/SQL Attribute │ 715 │ 1028.7│  722│  1029.1│  0.3  │       │
│ Add Library List      │ 724 │ 1029.0│  728│  1029.0│ <0.1  │       │
│ Clear Package         │ 730 │ 1029.1│  732│  1029.1│ <0.1  │       │
│ Prepare CRSR0002      │ 734 │ 1029.1│  736│  1029.2│  0.1  │       │
│ Prepare CRSR0004      │ 738 │ 1029.2│  740│  1029.3│  0.1  │       │
│ Prepare CRSR0005      │ 748 │ 1029.3│  750│  1029.3│ <0.1  │       │
│ Prepare CRSR0006      │ 752 │ 1029.4│  754│  1029.4│ <0.1  │       │
│ Prepare CRSR0009      │ 758 │ 1029.4│  760│  1029.4│ <0.1  │       │
│ Prepare CRSR0010      │ 762 │ 1029.5│  764│  1029.5│ <0.1  │       │
│ Prepare CRSR0008      │ 769 │ 1029.6│  772│  1029.6│ <0.1  │       │
│ Think Time            │     │ 1029.6│     │  1091.9│       │  62.3 │
│ Open/Fetch #0005      │2315 │ 1091.9│ 2329│  1092.5│  0.6  │       │
│ Open/Fetch #0004      │2331 │ 1092.6│ 2339│  1093.4│  0.8  │       │
│ Open/Fetch #0002      │2343 │ 1093.5│ 2354│  1094.0│  0.5  │       │
│ Prepare CRSR0003      │2359 │ 1094.1│ 2361│  1094.1│ <0.1  │       │
│ Execute #0003         │2363 │ 1094.2│ 2377│  1094.7│  0.5  │       │
│ Open/Fetch #0002      │2379 │ 1094.8│ 2385│  1094.9│  0.1  │       │
│ Execute #0003         │2387 │ 1095.0│ 2390│  1095.1│  0.1  │       │
│ (#0002/#0003 7 times) │2392 │ 1095.1│ 2491│  1097.7│  2.6  │       │
│ Open/Fetch #0002      │2493 │ 1097.7│ 2495│  1097.9│  0.2  │       │
│ Execute #0003         │2497 │ 1098.0│ 2499│  1098.1│  0.1  │       │
│ Open/Fetch #0006      │2503 │ 1098.1│ 2513│  1098.7│  0.6  │       │
│ Prepare CRSR0007      │2515 │ 1098.7│ 2517│  1098.8│  0.1  │       │
│ Execute #0007         │2519 │ 1098.8│ 2526│  1099.3│  0.1  │       │
│ Execute #0008         │2528 │ 1099.3│ 2533│  1099.7│  0.4  │       │
│ Execute #0008         │2535 │ 1099.7│ 2537│  1099.8│  0.1  │       │
│ (#0008 8 times)       │2539 │ 1099.8│ 2583│  1100.6│  0.8  │       │
│ Execute #0009         │2586 │ 1100.6│ 2593│  1101.0│  0.5  │       │
│ Execute #0010         │2597 │ 1101.0│ 2603│  1101.5│  0.5  │       │
└───────────────────────┴─────┴───────┴─────┴────────┴───────┴───────┘
Figure 101. Communication Trace - Time per Function

Think Time is the time waiting for client workstation operator input.
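The per-function elapsed times in Figure 101 are simple timestamp subtractions, rounded to the 0.1-second resolution of the trace. A short sketch of that arithmetic, using a few of the Open/Fetch and Execute rows from the figure:

```python
# Per-function AS/400 elapsed time from Communications Trace rows:
# (function, commence timestamp, complete timestamp), in seconds.

rows = [
    ("Open/Fetch #0005", 1091.9, 1092.5),
    ("Open/Fetch #0004", 1092.6, 1093.4),
    ("Open/Fetch #0002", 1093.5, 1094.0),
    ("Execute #0003",    1094.2, 1094.7),
]

elapsed = {name: round(done - start, 1) for name, start, done in rows}
for name, secs in elapsed.items():
    print(f"{name}: {secs}")
```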
Note that the first executions of SQL statements #0002, #0003, and #0008 take longer than subsequent ones. Figure 102 on page 390 summarizes the first test - 4 orders:


┌───────────────────────┬─────────────┬──────────────┬───────────────┐
│                       │             │              │ Elapsed Time  │
│                       │  Commence   │   Complete   │               │
│ Function              ├─────┬───────┼─────┬────────┼───────┬───────┤
│                       │Frame│ Time  │Frame│  Time  │ Trace │ PC Log│
│                       │ Nbr │       │ Nbr │        │(secs) │ (secs)│
├───────────────────────┼─────┼───────┼─────┼────────┼───────┼───────┤
│ Session Negotiation   │ 715 │ 1028.7│  732│  1029.1│  0.4  │       │
│ Statement Preparation │     │       │     │        │       │       │
│   #0002               │ 734 │ 1029.1│  736│  1029.2│  0.1  │       │
│   #0004               │ 738 │ 1029.2│  740│  1029.3│  0.1  │       │
│   #0005               │ 748 │ 1029.3│  750│  1029.3│ <0.1  │       │
│   #0006               │ 752 │ 1029.4│  754│  1029.4│ <0.1  │       │
│   #0009               │ 758 │ 1029.4│  760│  1029.4│ <0.1  │       │
│   #0010               │ 762 │ 1029.5│  764│  1029.5│ <0.1  │       │
│   #0008               │ 769 │ 1029.6│  772│  1029.6│ <0.1  │       │
│ First Test            │     │       │     │        │       │       │
│   Order #1            │2315 │ 1091.9│ 2603│  1101.5│  9.6  │  9.67 │
│   Order #2            │2616 │ 1103.6│ 3152│  1109.5│  5.9  │  5.88 │
│   Order #3            │3210 │ 1111.6│ 3463│  1117.8│  6.2  │  6.21 │
│   Order #4            │3480 │ 1119.8│ 3726│  1125.9│  6.1  │  6.10 │
└───────────────────────┴─────┴───────┴─────┴────────┴───────┴───────┘
Figure 102. Communication Trace - Summary

As expected, the response time for the first order is higher than for the subsequent orders.

10.6.9 Client/Server Order Entry Benchmark Test Results
We ran the order entry benchmark using several different interfaces from the client workstation, with various combinations of SNA and TCP/IP communications protocols and Windows 3.1 and Windows 95, as shown below:

• ODBC - Visual Basic DB objects ("jet engine" interface): ODBC-DB Obj
• ODBC - APIs, no blocked insert: ODBC-API
• ODBC - APIs with blocked insert: ODBC-BI
• ODBC - Stored Procedure: ODBC-SP
• Distributed Procedure Call: Dst Proc Call
• AS/400 Data Queue: Data Queue

The following tables show the response time and the communication I/O, logical database I/O, and physical disk I/O counts for a single order entry for the various interfaces and client "operating systems." The system used for the following performance figures was not the CISC 20S server model used earlier in this chapter. The AS/400 was a RISC V3R6 system with


processor feature 2144, which has a Relative Performance Rating (RPR) of 26.6 and CPW of 104.2. This is a medium speed RISC processor.
Table 31. WINDOWS 3.1 Order Entry Test Results
C/S Interface    Syn Disk I/O   Asyn Disk I/O   Logical I/O   Comm I/O   Ave RT (sec)
APPC                  93              52              27           02          0.61
Dst Proc Call         35              33              27           11          0.77
Data Queue            18              08              02           06          0.71
ODBC-DB Obj          154              71              80          511          8.46
ODBC-API              36              43              47           85          1.82
ODBC-BI               27              33              41           73          1.62
ODBC-SP               45              33              46           05          0.77

Table 32. WINDOWS 95 SNA Order Entry Test Results
C/S Interface    Syn Disk I/O   Asyn Disk I/O   Logical I/O   Comm I/O   Ave RT
ODBC-DB Obj          199              93              80          512        51.4
ODBC-API              47              43              46           87         2.8
ODBC-SP               53              33              46           05         1.2

Table 33. WINDOWS 95 ODBC TCP/IP Order Entry Test Results
C/S Interface    Syn Disk I/O   Asyn Disk I/O   Logical I/O   Comm I/O   Ave RT
ODBC-DB Obj        Note 1          Note 1          Note 1       Note 1       46.8
ODBC-API           Note 1          Note 1          Note 1       Note 1        2.2
ODBC-SP            Note 1          Note 1          Note 1       Note 1        1.0

Note:
1. Disk, database, and communication I/O counts not available.

10.6.10 Conclusions

Visual Basic Data Control Objects ("jet engine") interfaces enable the quickest application development but deliver the worst performance of all the interfaces. Key things to minimize are:
− Communications I/Os
− Synchronous disk I/Os

APPC program-to-program provides the fastest performance for a complete function. APPC requires programming and communications protocol expertise on both the host server and the client workstation. (We did not have time to complete a TCP/IP sockets program-to-program interface, but it should deliver response time very close to APPC at slightly higher CPU seconds per client request processed.)

The data queue implementation provides the second fastest response time, but does not provide the complete function.


The data queue implementation uses a combination of ODBC and data queue support. This implementation does not do the new order processing online, but simply writes the new order information to a data queue for later processing. This is done to demonstrate the idea of time-independent processing and to give the end user fast response time. The response time that the end user sees is very fast, but not all the processing is completed. Data queue interfaces require AS/400-unique coding skills on both the AS/400 server and the client. These programming skills are not as complex as those required for either APPC or TCP/IP sockets programming.
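The time-independent pattern described above can be sketched generically. The code below uses Python's standard queue and threading modules purely as an illustration of the idea (it is not the AS/400 data queue API): the fast path only enqueues the new order, and a separate worker completes the processing later.

```python
import queue
import threading

orders = queue.Queue()
processed = []

def worker():
    # Server-side worker: drains the queue and does the deferred
    # new-order processing at its own pace.
    while True:
        order = orders.get()
        if order is None:      # sentinel: no more work
            break
        processed.append(order)

t = threading.Thread(target=worker)
t.start()

# Fast path seen by the end user: just enqueue and return immediately.
for order_id in (1, 2, 3):
    orders.put(order_id)
orders.put(None)

t.join()
print(processed)
```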

Stored Procedures perform best using ODBC standardized interfaces and require no APPC or TCP/IP programming skills. They can very effectively reduce communications I/Os and associated line turnarounds. However, stored procedures do require programming skills on both the AS/400 and the client workstation.

Distributed procedure calls perform approximately the same as ODBC stored procedures. They require programming skills unique to the available AS/400 programming interfaces, which impacts application portability.

ODBC APIs provide very good performance and offer a wide range of portability from the client workstation programmer's viewpoint. ODBC is a de facto standard from the client programmer's view to many different servers and server database support. To ensure optimum performance, the client workstation programmer must:
− Use parameter markers (these appear as "column-name = ?" in the ODBC trace).
− Do SQLPrepare statements only once.
− Ensure that SQL packages and access plans are being used.
− If doing consecutive inserts to the same file/table, consider using blocked inserts. As the number of inserts per file/table increases per unit of measurement (such as orders per day), blocked inserts can be very beneficial in improving response time and throughput.
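The first and last of these guidelines can be illustrated outside of ODBC itself. The sketch below uses Python's built-in sqlite3 module as a stand-in for the ODBC driver: the ? placeholders play the role of parameter markers, the statement text is fixed so it need only be prepared once, and executemany plays the role of a blocked insert. The table and column names are invented for the example.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ordlin (olid INTEGER, oliid INTEGER, olqty INTEGER)")

# Parameter markers: the statement text is constant, so the driver can
# prepare it once and reuse the access plan for every execution.
insert = "INSERT INTO ordlin VALUES (?, ?, ?)"

# Blocked insert: send all line items of the order in one call instead of
# paying one round trip (and line turnaround) per row.
lines = [(1, item, 10) for item in range(1, 11)]   # a 10-line-item order
con.executemany(insert, lines)
con.commit()

count = con.execute("SELECT COUNT(*) FROM ordlin").fetchone()[0]
print(count)
```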


Appendix A. Example Programs
The accompanying PC media contains example programs. These examples are intended to show coding techniques for the application programming interfaces discussed in this redbook. The examples use Microsoft Visual Basic version 4.0 and Microsoft Visual C++ version 1.5. We provide a guided tour of the code used for each application and step you through the code to show you how the applications access the AS/400 system. For a description of the application that these programs are part of, refer to Chapter 10, "Case Study" on page 351. For information about the contents of the PC media and instructions for restoring the PC files and AS/400 library, please refer to the included README file.

Important Information
These example programs have not been subjected to any formal testing. They are provided "AS-IS" and should be used for reference only. Please refer to the Special Notices section at the back of this document for more information.

The following examples are provided:
• A.1, "Database Serving Using Visual Basic and Windows 3.1" on page 394
• A.2, "Database Serving Using Visual Basic and Windows 95" on page 402
• A.3, "Application Serving Using Visual Basic with Windows 3.1" on page 410
• A.4, "Database Serving Using Visual C++ and Windows 3.1" on page 414
• A.5, "Application Serving Visual C++ With Windows 3.1" on page 420

The directory structure is:

CPP                Windows 3.1 C++ examples
  APPC             application serving using APPC
  DPC              appl serving using dist pgm call
  DATAQ            appl serving using data queues
  SPEED            ODBC using APIs
  SPEEDEX          ODBC using block insert and extended fetch
  STORPROC         ODBC using stored procedures
16BITS             Windows 3.1 Visual Basic examples
  APPC             application serving using APPC
  DPC              appl serving using dist pgm call
  DQ               appl serving using data queues
  SPEEDAPI         ODBC using APIs
  SPEEDJET         ODBC database objects
  SPSETS           ODBC using stored procedures
32BITS             Windows 95 Visual Basic examples
  SPEEDAPI         ODBC using APIs
  SPEEDJET         ODBC database objects
  SPSETS           ODBC using stored procedures

© Copyright IBM Corp. 1996


A.1 Database Serving Using Visual Basic and Windows 3.1
Three examples using Visual Basic are included. The first example, A.1.1, "Client/Access ODBC Using Visual Basic Database Objects," shows the use of database objects; the second example, A.1.2, "Client/Access ODBC Using ODBC APIs" on page 397, shows the use of the ODBC APIs; and the third example, A.1.3, "Client/Access ODBC Using Stored Procedures" on page 399, shows the use of stored procedures. The source code for each Visual Basic exercise resides in its own directory as follows:

Directory Name       Description
16BITS\SPEEDJET      Exercise 1. ODBC using the Visual Basic database objects (Jet Engine interface).
16BITS\SPEEDAPI      Exercise 2. ODBC using the ODBC API calls.
16BITS\SPSETS        Exercise 3. Using a stored procedure.

A.1.1 Client/Access ODBC Using Visual Basic Database Objects
In this program, we use Visual Basic database objects to access the AS/400 database. If you have not done so, open the Speed.Vbp project in the 16BITS\SPEEDJET directory.
1. Click on SPEED00.FRM in the Project Window to highlight it. If you cannot find the Project Window, click on the Window menu and choose Project.
2. Click on the View Code pushbutton to open up the Form Window.
3. View the declarations for the Visual Basic database objects. These objects are the way you communicate with the ODBC driver. They have methods and properties specifically designed to allow you to easily access data in the server database. To find the module where you want to view the code, use the Object pulldown to select the object and the Proc pulldown to select the function or subroutine.

In the Object (general), function (declarations), find the line that starts with:

Dim aDB As Database

This object is used by Visual Basic as the connection object to the AS/400 database.

Find the seven lines that define the recordset objects we will use in the exercise. The first line begins with:

Dim t_Stock As Recordset    ' Stock Select.

We create two different sets of database objects. For input-only queries we will use snapshots, whereas for tables that may be updated we will use dynasets. Because snapshots are not


updateable, the ODBC driver can make some assumptions that may improve performance. Dynasets must be used if you wish to update or insert records into a table.
4. View the code to connect to the AS/400 system.

In the Object Form, Subroutine Load, find the line that starts with:

Set aDB = OpenDatabase("", False, False, Connect)
This statement creates a connection to the AS/400 using the information in the Connect parameter. You must have a valid connection to the AS/400 before this statement can be run. Notice that the connection string contains "ODBC"; that means it will connect via the ODBC interface.
5. View the code to create the dynasets used for inserting records into the ORDERS, ORDLIN, and NEWORD files.

In the Object (general), Subroutine SQLInit find the three lines that start with:

Set t_Orders = aDB.OpenRecordset(TableName,dbOpenDynaset,dbAppendOnly)
These statements open the appropriate AS/400 database tables. We use the dbAppendOnly option to open the AS/400 tables for append only. Because we will only append records to these tables, we can leave them open for the duration of the program.

In the Object (general), Subroutine Proc_NO find the line that starts with:

Set t_Customer = aDB.OpenRecordset(Query,dbOpenSnapshot,dbSQLPassThrough)
This statement runs the query over the CSTMR file. The query is already set up in the preceding line. We use the dbSQLPassThrough option to bypass the Jet Engine interface; this option is only available for snapshots. We use Open as Snapshot to run the query and return the first block of data. The first record becomes the current record, and the data from that record is made available. The assignment statements move the data field by field from the current record to variables in the program. Notice that the fields are referenced by field name; this means that Visual Basic must retrieve the field descriptions from the database via the ODBC driver, at some cost to performance.
6. View the code to retrieve the ITEM information.

In the same section find the line that starts with:

Set t_Item = aDB.OpenRecordset(Query,dbOpenSnapshot,dbSQLPassThrough)
This statement runs the query over the ITEM file.
7. View the code to retrieve the Stock information.

In the same section find the line that starts with:

Set t_Stock = aDB.OpenRecordset(Query,dbOpenDynaset)
This statement runs the query over the STOCK file. Here we need to use a Dynaset rather than a Snapshot. Dynasets and Snapshots behave very similarly, but there are some differences.


− Dynasets always contain up-to-date data. Snapshots are not guaranteed to be up-to-date; they may be a copy of the data.
− Dynasets can be updated. Snapshots are read-only.

8. View the code to update the Stock information.

In the same section find the five lines that start with:

t_Stock.Edit
The Edit and Update methods work as a pair to update a record in a Dynaset. You must use the Edit method to allow the current record to be updated and to lock the record. The record is not updated until you execute the Update method.
9. View the code to retrieve and update the District information.

In the same section find the line that starts with:

Set t_District = aDB.OpenRecordset(Query,dbOpenDynaset)
This statement runs the query over the DISTRICT file.

In the same section, find the four lines that start with:

t_District.Edit
These four lines update the DISTRICT information.
10. View the code to add the Order Line information.

In the same section, find the line that starts with:

t_Ordlin.AddNew

In the same section, find the line that starts with:

t_Ordlin.Update
To add a record you need to use the AddNew method to create a new record buffer. You can then assign the required data values to that buffer. The Update method is used to insert the record into the dynaset. 11. View the code to add new and update the Orders information.
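The AddNew/Update pair has the same shape for inserts: AddNew opens an empty record buffer, assignments fill it, and Update performs the actual INSERT. A stdlib sqlite3 analogy with invented names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ORDLIN (OID INTEGER, LINE INTEGER, IID TEXT)")

# "AddNew": create a new, empty record buffer
buffer = {"OID": None, "LINE": None, "IID": None}

# assign the required data values to that buffer
buffer.update(OID=1, LINE=1, IID="I1")

# "Update": insert the buffered record into the table
conn.execute("INSERT INTO ORDLIN (OID, LINE, IID) VALUES (:OID, :LINE, :IID)",
             buffer)
conn.commit()
```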

In the same section, find the line that starts with:

t_Orders.AddNew
This statement adds a new record to the ORDERS information.

In the same section, find the line that starts with:

t_Orders.Update
This statement updates the ORDERS information. 12. View the code to add the New Order information.

In the same section, find the two lines that start with:

t_Neword.Addnew
These statements are used to add a new record to the NEWORD information. 13. View the code to close the snapshots, tables, and connection.

In the object Form, subroutine Unload find the three lines that start with:

t_Ordlin.Close

These statements close the open dynasets. This closes the SQL cursor. The files, or more correctly the open data path, on the AS/400 may remain open to increase the performance when opening and closing SQL cursors.

In the Form_Unload subroutine find the line that starts with:

aDB.Close
This statement removes the connection to the AS/400 system.

A.1.1.1 Running the Application
To run the application press F5 and refer to A.7, “Running the Application” on page 425.

A.1.2 Client/Access ODBC Using ODBC APIs
In this program, we use the ODBC APIs to access the AS/400 system. 1. Click on the File pull-down from the Visual Basic menu bar. 2. Select Open Project. 3. Go to the 16BITS\SPEEDAPI subdirectory and select the SPEEDBLK.VBP project. 4. Click on SPEED00.BAS in the Project Window to highlight it. If you cannot find the Project Window, click on the Window menu and choose Project. 5. Click on the View Code pushbutton to open up the Form Window. 6. View the declarations for the ODBC environment and connection handles. To find the module where you want to view the code, use the Object pulldown to select the object and the Proc pulldown to select the function or subroutine.

In the Object (general), function (declarations) find the line that starts with:

Dim a_henv As Long
Dim a_hdbc As Long

These handles are used to communicate with the ODBC driver and are used on all ODBC function calls. 7. Click on SPEED00.FRM in the Project Window to highlight it. 8. Find the lines that define the statement handles we will use in this exercise. These begin with:

Dim s_Stock1 As Long

There are ten of them. These statement handles are used to define ODBC statements we prepare and then execute. 9. View the code to connect to the AS/400 database.

In the Object Form, subroutine Load find the line that starts with:

ret = SQLAllocEnv(a_henv) ′ Allocates the SQL environment.
This statement allocates environment space for ODBC. This must be done before any other ODBC calls.


In the same section, Object Form, subroutine Load find the line that starts with:

ret = SQLAllocConnect(a_henv,a_hdbc) ′ Allocates the connection.
This statement allocates space for an ODBC connection. This must be done before you try to connect to the AS/400. ODBC allows more than one connection at a time and each connection can be to a different or the same Data Source.
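The point about multiple simultaneous connections can be mimicked with stdlib sqlite3, where each connect() call plays the role of one allocated ODBC connection handle; the two in-memory databases below stand in for two separate data sources (everything here is an invented analogy):

```python
import sqlite3

# two independent connections, analogous to two ODBC connection handles
# allocated from the same environment
conn_a = sqlite3.connect(":memory:")   # stands in for one data source
conn_b = sqlite3.connect(":memory:")   # a second, separate data source

conn_a.execute("CREATE TABLE T (N INTEGER)")
conn_a.execute("INSERT INTO T VALUES (1)")

# the second connection is a different database: table T does not exist there
try:
    conn_b.execute("SELECT N FROM T")
    independent = False
except sqlite3.OperationalError:
    independent = True
```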

In the same section, find the line that starts:

ret = SQLConnect(a_hdbc, DataSource ... ′ Connect

This statement connects you to the AS/400 system. The user name and password can be blank at run time; if so, the router′s common user ID and password are used. 10. View the code to prepare the SQL statements we use in the application.

In the Object (general), subroutine SQLInit find the nine lines that start with:

ret = SQLAllocStmt(HDBC, s_Stock1) ′
These lines allocate storage in the ODBC environment for the statements used.

In the same section find the line that reads:

ret = SQLPrepare(s_Item1, Query, SQL_NTS)
This statement prepares the SQL statement for execution. The parameter markers, denoted by ″?″ in the previous statements, will be replaced with the required data when the statement is executed. Each statement used needs to be prepared in this way. 11. View the code to retrieve the customer information.

In the Object (general), subroutine Proc_NO find the four lines that start with:

ret = SQLBindParameter(s_Customer1, 1, SQL_PARAM_INPUT, SQL_C_CHAR, ..
SQLBindParameter defines the characteristics (and optionally the storage location) of the parameters used in the SQL statement. In Visual Basic it is unsafe to bind storage locations to a statement because Visual Basic may move storage around outside of the programmer′s control, so here we use the cbValue parameter, which is set to the value SQL_DATA_AT_EXEC.

In the same section, find the line that starts:

ret = SQLExecute(s_Customer1) ′ Execute the Customer select query.
This function begins the execution process, but because you have bound parameters with the cbValue parameter set to SQL_DATA_AT_EXEC you must supply the data before the statement is actually executed.

In the same section, find the nine lines that start:

ret = SQLParamData(s_Customer1, aToken)


These function calls supply data to each of the four parameters in turn, with the last call activating the actual execution of the statement.
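The data-at-execution handshake — SQLExecute reports that data is needed, then alternating SQLParamData/SQLPutData calls feed each parameter, and the final SQLParamData triggers the real execution — can be modeled as a toy state machine. All names below are invented; this is a sketch of the calling convention, not the Client Access driver:

```python
SQL_NEED_DATA, SQL_SUCCESS = 99, 0

class MockStatement:
    """Toy model of an ODBC statement with data-at-execution parameters."""
    def __init__(self, n_params):
        self.pending = list(range(1, n_params + 1))  # parameters not yet supplied
        self.values = {}
        self.executed = False

    def execute(self):
        # with SQL_DATA_AT_EXEC bindings, execution cannot proceed yet
        return SQL_NEED_DATA if self.pending else SQL_SUCCESS

    def param_data(self):
        # asks "which parameter next?"; runs the statement when none remain
        if not self.pending:
            self.executed = True
            return SQL_SUCCESS, None
        return SQL_NEED_DATA, self.pending[0]

    def put_data(self, value):
        self.values[self.pending.pop(0)] = value

stmt = MockStatement(4)
ret = stmt.execute()                  # reports "need data": nothing runs yet
for value in ("W1", "D1", "C1", "I1"):
    ret, token = stmt.param_data()    # which parameter does the driver want?
    stmt.put_data(value)              # supply it
ret, token = stmt.param_data()        # the final call activates the execution
```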

In the same section, find the line that starts:

ret = SQLFetch(s_Customer1) ′ Fetch the result.
This statement fetches the first (and only) row of data.

In the same section, find the four lines that start:

ret = SQLGetdata(s_Customer1, 1, SQL_C_CHAR, ByVal s_rslt(1) ....
These four lines retrieve the data from each column in the current row. 12. Carefully read the ODBC function calls in the rest of this subroutine. Notice that all the SQL statements follow the same pattern, except that the INSERT and UPDATE statements are not followed by calls to SQLFetch or SQLGetData. 13. View the code to close the connection and free storage.

In the Object Form, subroutine Unload find the nine lines that start with:

ret = SQLFreeStmt(s_Stock1, SQL_DROP)
These statements close the SQL cursor on the AS/400 and release any storage associated with the statement.

In the same section, find the line that starts with:

ret = SQLDisconnect(a_hdbc) ′ Disconnecting and deallocating.

This statement closes the connection to the AS/400.

The following two lines release the storage used by the ODBC connection and the ODBC environment.

A.1.2.1 Running the Application
To run the application press F5 and refer to A.7, “Running the Application” on page 425.

A.1.3 Client/Access ODBC Using Stored Procedures
In this program, we use the ODBC APIs to call a stored procedure on the AS/400 system. The stored procedure in this example is an AS/400 RPG program. The AS/400 program is available on the included PC media; please refer to A.6, “AS/400 Programs” on page 424 for details. 1. Open the SPEEDBLK.VBP project in the 16BITS\SPSETS directory. 2. Click on SPEED00.FRM in the Project Window to highlight it. 3. Click on the View Code pushbutton to open up the Form Window. 4. In the General_Declaration subroutine, view the declarations for the ODBC environment and connection handles. These handles are used to communicate with the ODBC driver and are used on all future ODBC function calls.

In the module declarations find the lines that read:

Dim a_henv As Long
Dim a_hdbc As Long

These lines define storage for the ODBC environment and connection.

Find the line that defines the statement handle for the stored procedure call.

Dim s_StoredProc As Long
This statement handle is used to refer to the statement we prepare using parameter markers and then execute at a later time. 5. View the code to connect to the AS/400 database.

In the Form_Load subroutine find the line:

ret = SQLAllocEnv(a_henv) ′ Allocates the SQL environment.
This statement allocates environment space for ODBC. This must be done before any other ODBC calls.

In the Form_Load subroutine find the line:

ret = SQLAllocConnect(a_henv,a_hdbc) ′ Allocates the connection.
This statement allocates space for an ODBC connection. This must be done before you try to connect to the AS/400 system. ODBC allows more than one connection at a time and each connection can be to a different or the same Data Source.

In the Form_Load subroutine find the line:

ret = SQLConnect(a_hdbc ...
This statement connects you to the AS/400. The user name and password can be blank at run time; if so, the router′s common user ID and password are used. 6. View the code to prepare the SQL statement used in the application to call the stored procedure.

In the function SQLInit find the line that begins with:

ret = SQLAllocStmt(hdbc, s_StoredProc)
This statement allocates storage in the ODBC environment for the statements used.

In the function SQLInit find the first line that begins with
szDropProc = ″drop procedure csdbsm.nordset″
szCreatProc = ″Create procedure csdbsm.nordset(in p1 char(10), in p2 dec(3,0), in p3 char(195))″
szCreatProc = szCreatProc & ″ result sets 1 external name csdbsm.nordset language RPGLE general″
ret = SQLExecDirect(s_StoredProc, szDropProc, SQL_NTS)
ret = SQLExecDirect(s_StoredProc, szCreatProc, SQL_NTS)

The declarations of szDropProc and szCreatProc contain the SQL statements to drop and create a stored procedure. This Visual Basic program can create the stored procedure. This needs to be done only once for the life of the stored procedure. It could also be done on the AS/400 system using interactive SQL or by a program. Here we are demonstrating how to drop and create a stored procedure from the client. In an actual production environment, it would be better to do this once on the AS/400 system.
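SQLite has no stored procedures, but the drop-then-recreate idiom shown here — issue the DROP unconditionally, tolerate its failure the very first time, then CREATE — can be demonstrated with a view, using invented names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ORDERS (OID INTEGER, AMT REAL)")

def recreate_view(conn):
    # the DROP may fail on the first run, when the object does not exist yet;
    # the sample VB code likewise issues its DROP PROCEDURE unconditionally
    try:
        conn.execute("DROP VIEW ORDER_TOTALS")
    except sqlite3.OperationalError:
        pass
    conn.execute(
        "CREATE VIEW ORDER_TOTALS AS SELECT SUM(AMT) AS TOTAL FROM ORDERS")

recreate_view(conn)   # first run: DROP fails harmlessly
recreate_view(conn)   # later runs: DROP succeeds, object is rebuilt

conn.execute("INSERT INTO ORDERS VALUES (1, 2.5)")
total = conn.execute("SELECT TOTAL FROM ORDER_TOTALS").fetchone()[0]
```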

In the function SQLInit find the line that begins with:


ret = SQLAllocStmt(a_hdbc, s_StoredProc) ′ Prepare the call.
Query = ″CALL NORDSET(?, ?, ?)″ ′ Prepare the stored procedure stmt.
ret = SQLPrepare(s_StoredProc, Query, SQL_NTS)
These statements prepare the call to the stored procedure. The stored procedure is called NORDSET and expects three parameters. It will return the results in a result set. The call needs to be prepared only once for the program. 7. View the code to call the stored procedure.

In the function Proc_NO find the lines that start:

IcbValue = SQL_DATA_AT_EXEC ret = SQLBindParameter(s_StoredProc, 1, SQL_PARAM_INPUT, SQL_C_CHAR, ret = SQLBindParameter(s_StoredProc, 2, SQL_PARAM_INPUT, SQL_C_SHORT, ret = SQLBindParameter(s_StoredProc, 3, SQL_PARAM_INPUT, SQL_C_CHAR,
These statements set up the parameters used for the call to the stored procedure. Notice that we set IcbValue to SQL_DATA_AT_EXEC. This is to avoid problems with the way Visual Basic manages memory.

Find the lines that start:

ret = SQLExecute(s_StoredProc)
If ret = SQL_ERROR Then
    Call GiveErrMsg(s_StoredProc, ″Error on SQLExec of Stored Proc.″)
ElseIf ret = SQL_SUCCESS Or ret = SQL_NEED_DATA Then
    ret = SQLParamData(s_StoredProc, aToken) ′ Parameter 1
    ret = SQLPutData(s_StoredProc, ByVal INWDC, 10)
    ret = SQLParamData(s_StoredProc, aToken) ′ Parameter 2
    ret = SQLPutData(s_StoredProc, OLINES, Len(OLINES))
    ret = SQLParamData(s_StoredProc, aToken) ′ Parameter 3
    ret = SQLPutData(s_StoredProc, ByVal INORDINF, 195)
    ret = SQLParamData(s_StoredProc, aToken)
These statements cause the execution of the stored procedure. Notice that we use the combination of SQLParamData and SQLPutData to pass in the parameters. The final SQLParamData causes the execution.

Find the lines that start:

ret = SQLFetch(s_StoredProc)
If ret = SQL_NO_DATA_FOUND Then
    Call GiveErrMsg(s_StoredProc, ″Error on SQLExec of Stored Proc.″)
End If
ret = SQLGetdata(s_StoredProc, 1, SQL_CHAR, ByVal outname, 360, IColLen)
ret = SQLGetdata(s_StoredProc, 2, SQL_CHAR, ByVal outqty, 45, IColLen)
ret = SQLGetdata(s_StoredProc, 3, SQL_CHAR, ByVal outorg, 15, IColLen)
ret = SQLGetdata(s_StoredProc, 4, SQL_CHAR, ByVal outprice, 75, IColLen)
ret = SQLGetdata(s_StoredProc, 5, SQL_CHAR, ByVal outamt, 105, IColLen)
ret = SQLGetdata(s_StoredProc, 6, SQL_CHAR, ByVal outrepeat, 61, IColLen)
These statements receive the information returned by the stored procedure. We use SQLFetch because the data is returned in a result set. We use SQLGetData to move the column data to internal storage locations. Again we do not want to bind the returned values to storage locations because of Visual Basic memory management considerations.

A.1.3.1 Running the Application
To run the application press F5 and refer to A.7, “Running the Application” on page 425.

A.2 Database Serving Using Visual Basic and Windows 95
Three examples using Visual Basic are included. The first example, A.2.1, “Client/Access ODBC Using Visual Basic Database Objects,” shows using database objects; the second example, A.2.2, “Client/Access ODBC Using ODBC APIs” on page 405, shows using the ODBC APIs; and the third example, A.2.3, “Client/Access ODBC Using Stored Procedures” on page 408, shows using stored procedures. The source code for each Visual Basic exercise resides in its own directory as follows:

Directory Name    Description
32BITS\SPEEDJET   Exercise 1. ODBC using the Visual Basic database objects. (Jet Engine interface)
32BITS\SPEEDAPI   Exercise 2. ODBC using the ODBC API calls.
32BITS\SPSETS     Exercise 3. Using a stored procedure.

A.2.1 Client/Access ODBC Using Visual Basic Database Objects
In this program, we use Visual Basic database objects to access the AS/400 database. If you have not done so, open the Speed.Vbp project in the 32BITS\SPEEDJET directory. 1. Click on SPEED00.FRM in the Project Window to highlight it.

If you cannot find the Project Window, click on the Window menu and choose Project.

2. Click on the View Code pushbutton to open the Form Window. 3. View the declarations for the Visual Basic database objects. These objects are the way you communicate with the ODBC driver. They have methods and properties specifically designed to allow you to easily access data in the server database. To find the module where you want to view the code, use the Object pulldown to select the object and the Proc pulldown to select the function or subroutine.


In the Object (general), function (declarations) find the line that starts:

Dim aDB As Database

This object is used by Visual Basic as the connection object to the AS/400 database.

Find the seven lines that define the recordset objects we will use in the exercise. The first line begins with:

Dim t_Stock As Recordset ′ Stock Select.

We create two different sets of database objects. For input only queries we will use snapshots, whereas for tables that may be updated we will use dynasets. Because snapshots are not updateable, the ODBC driver can make some assumptions that may improve performance. Dynasets must be used if you wish to update or insert records into a table. 4. View the code to connect to the AS/400 system.

In the Object Form, Subroutine Load find the line that starts with:

Set aDB = OpenDatabase(″ ″ , False,False,Connect)
This statement creates a connection to the AS/400 using the information in the Connect parameter. You must have a valid connection to the AS/400 before this statement can be run. Notice that the connection string contains ″ODBC″; that means it will connect via the ODBC interface. 5. View the code to create the dynasets used for inserting records into the ORDERS, ORDLIN, and NEWORD files.

In the Object (general), Subroutine SQLInit find the three lines that start with:

Set t_Orders = aDB.OpenRecordset(TableName,dbOpenDynaset,dbAppendOnly)
These statements open the appropriate AS/400 database tables. We use the dbAppendOnly option to open the AS/400 tables for append only. Since we will only append records to these tables, we can leave them open for the duration of the program.

In the Object (general), Subroutine Proc_NO find the line that starts with:

Set t_Customer = aDB.OpenRecordset(Query,dbOpenSnapshot,dbSQLPassThrough)
This statement runs the query over the CSTMR file. The query is already set up in the preceding line. We use the dbSQLPassThrough option to bypass the Jet Engine interface. This option is only available for Snapshots. We use open as Snapshot to run the query and return the first block of data. The first record becomes the current record and the data from that record is made available. The assignment statements move the data, field by field, from the current record to variables in the program. Notice that the fields are referenced by field name; this means that Visual Basic must retrieve the field descriptions from the database via the ODBC driver at some cost to performance. 6. View the code to retrieve the ITEM information.


In the same section find the line that starts with:

Set t_Item = aDB.OpenRecordset(Query,dbOpenSnapshot,dbSQLPassThrough)
This statement runs the query over the ITEM file. 7. View the code to retrieve the Stock information.

In the same section find the line that starts with:

Set t_Stock = aDB.OpenRecordset(Query,dbOpenDynaset)
This statement runs the query over the STOCK file. Here we need to use a Dynaset rather than a Snapshot. Dynasets and Snapshots behave very similarly, but there are some differences:

− Dynasets always contain the up-to-date data. Snapshots are not guaranteed to be up-to-date; they may be a copy of the data.
− Dynasets can be updated. Snapshots are read-only.

8. View the code to update the Stock information.

In the same section find the five lines that start with:

t_Stock.Edit
The Edit and Update methods work as a pair to update a record in a Dynaset. You must use the Edit method to allow the current record to be updated and to lock the record. The record is not updated until you execute the Update method. 9. View the code to retrieve and update the District information.

In the same section find the line that starts with:

Set t_District = aDB.OpenRecordset(Query,dbOpenDynaset)
This statement runs the query over the DISTRICT file.

In the same section find the four lines that start with

t_District.Edit
These four lines update the DISTRICT information. 10. View the code to add the Order Line information.

In the same section, find the line that starts with:

t_Ordlin.AddNew

In the same section, find the line that starts with:

t_Ordlin.Update
To add a record you need to use the AddNew method to create a new record buffer. You can then assign the required data values to that buffer. The Update method is used to insert the record into the dynaset. 11. View the code to add new and update the Orders information.

In the same section, find the line that starts with:

t_Orders.AddNew
This statement adds a new record to the ORDERS information.

In the same section, find the line that starts with:

t_Orders.Update


This statement updates the ORDERS information. 12. View the code to add the New Order information.

In the same section, find the two lines that start with:

t_Neword.Addnew
These statements are used to add a new record to the NEWORD information. 13. View the code to close the snapshots, tables, and connection.

In the object Form, subroutine Unload find the three lines that start with:

t_Ordlin.Close
These statements close the open dynasets. This closes the SQL cursor. The files, or more correctly the open data path, on the AS/400 may remain open to increase the performance when opening and closing SQL cursors.

In the Form_Unload subroutine find the line that starts with:

aDB.Close
This statement removes the connection to the AS/400 system.

A.2.1.1 Running the Application
To run the application press F5 and refer to A.7, “Running the Application” on page 425.

A.2.2 Client/Access ODBC Using ODBC APIs
In this program, we use the ODBC APIs to access the AS/400 system. 1. Click on the File pull-down from the Visual Basic menu bar. 2. Select Open Project. 3. Go to the 32BITS\SPEEDAPI subdirectory and select the SPEEDBLK.VBP project. 4. Click on SPEED00.BAS in the Project Window to highlight it. If you cannot find the Project Window, click on the Window menu and choose Project. 5. Click on the View Code pushbutton to open up the Form Window. 6. View the declarations for the ODBC environment and connection handles. To find the module where you want to view the code, use the Object pulldown to select the object and the Proc pulldown to select the function or subroutine.

In the Object (general), function (declarations) find the line that starts with:

Dim a_henv As Long
Dim a_hdbc As Long

These handles are used to communicate with the ODBC driver and are used on all ODBC function calls. 7. Click on SPEED00.FRM in the Project Window to highlight it.


8. Find the lines that define the statement handles we will use in this exercise. These begin with:

Dim s_Stock1 As Long

There are ten of them. These statement handles are used to define ODBC statements we prepare and then execute. 9. View the code to connect to the AS/400 database.

In the Object Form, subroutine Load find the line that starts with:

ret = SQLAllocEnv(a_henv) ′ Allocates the SQL environment.
This statement allocates environment space for ODBC. This must be done before any other ODBC calls.

In the same section, Object Form, subroutine Load find the line that starts with:

ret = SQLAllocConnect(a_henv,a_hdbc) ′ Allocates the connection.
This statement allocates space for an ODBC connection. This must be done before you try to connect to the AS/400. ODBC allows more than one connection at a time and each connection can be to a different or the same Data Source.

In the same section, find the line that starts:

ret = SQLConnect(a_hdbc, DataSource ... ′ Connect

This statement connects you to the AS/400 system. The user name and password can be blank at run time; if so, the router′s common user ID and password are used. 10. View the code to prepare the SQL statements we use in the application.

In the Object (general), subroutine SQLInit find the nine lines that start with:

ret = SQLAllocStmt(HDBC, s_Stock1) ′
These lines allocate storage in the ODBC environment for the statements used.

In the same section find the line that reads:

ret = SQLPrepare(s_Item1, Query, SQL_NTS)
This statement prepares the SQL statement for execution. The parameter markers, denoted by ″?″ in the previous statements, will be replaced with the required data when the statement is executed. Each statement used must be prepared in this way. 11. View the code to retrieve the customer information.

In the Object (general), subroutine Proc_NO find the four lines that start with:

ret = SQLBindParameter(s_Customer1, 1, SQL_PARAM_INPUT, SQL_C_CHAR, ..
SQLBindParameter defines the characteristics (and optionally the storage location) of the parameters used in the SQL statement. In Visual Basic it is unsafe to bind storage locations to a statement because Visual Basic may move storage around outside of the programmer′s control, so here we use the cbValue parameter, which is set to the value SQL_DATA_AT_EXEC.

In the same section, find the line that starts:

ret = SQLExecute(s_Customer1) ′ Execute the Customer select query.
This function begins the execution process, but because you have bound parameters with the cbValue parameter set to SQL_DATA_AT_EXEC you must supply the data before the statement is actually executed.

In the same section, find the nine lines that start:

ret = SQLParamData(s_Customer1, aToken)
These function calls supply data to each of the four parameters in turn, with the last call activating the actual execution of the statement.

In the same section, find the line that starts:

ret = SQLFetch(s_Customer1) ′ Fetch the result.
This statement fetches the first (and only) row of data.

In the same section, find the four lines that start:

ret = SQLGetdata(s_Customer1, 1, SQL_C_CHAR, ByVal s_rslt(1) ....
These four lines retrieve the data from each column in the current row. 12. Carefully read the ODBC function calls in the rest of this subroutine. Notice that all the SQL statements follow the same pattern, except that the INSERT and UPDATE statements are not followed by calls to SQLFetch or SQLGetData. 13. View the code to close the connection and free storage.

In the Object Form, subroutine Unload find the nine lines that start with:

ret = SQLFreeStmt(s_Stock1, SQL_DROP)
These statements close the SQL cursor on the AS/400 and release any storage associated with the statement.

In the same section, find the line that starts with:

ret = SQLDisconnect(a_hdbc) ′ Disconnecting and deallocating.

This statement closes the connection to the AS/400.

The following two lines release the storage used by the ODBC connection and the ODBC environment.

A.2.2.1 Running the Application
To run the application press F5 and refer to A.7, “Running the Application” on page 425.


A.2.3 Client/Access ODBC Using Stored Procedures
In this program, we use the ODBC APIs to call a stored procedure on the AS/400 system. The stored procedure in this example is an AS/400 RPG program. The AS/400 program is available on the included PC media; please refer to A.6, “AS/400 Programs” on page 424 for details. 1. Open the SPEEDBLK.VBP project in the 32BITS\SPSETS directory. 2. Click on SPEED00.FRM in the Project Window to highlight it. 3. Click on the View Code pushbutton to open up the Form Window. 4. In the General_Declaration subroutine, view the declarations for the ODBC environment and connection handles. These handles are used to communicate with the ODBC driver and are used on all future ODBC function calls.

In the module declarations find the lines that read:

Dim a_henv As Long
Dim a_hdbc As Long

These lines define storage for the ODBC environment and connection.

Find the line that defines the statement handle for the stored procedure call.

Dim s_StoredProc As Long
This statement handle is used to refer to the statement we prepare using parameter markers and then execute at a later time. 5. View the code to connect to the AS/400 database.

In the Form_Load subroutine find the line:

ret = SQLAllocEnv(a_henv) ′ Allocates the SQL environment.
This statement allocates environment space for ODBC. This must be done before any other ODBC calls.

In the Form_Load subroutine find the line:

ret = SQLAllocConnect(a_henv,a_hdbc) ′ Allocates the connection.
This statement allocates space for an ODBC connection. This must be done before you try to connect to the AS/400 system. ODBC allows more than one connection at a time and each connection can be to a different or the same Data Source.

In the Form_Load subroutine find the line:

ret = SQLConnect(a_hdbc ...
This statement connects you to the AS/400. The user name and password can be blank at run time; if so, the router′s common user ID and password are used. 6. View the code to prepare the SQL statement used in the application to call the stored procedure.

In the function SQLInit find the line that begins with:

ret = SQLAllocStmt(hdbc, s_StoredProc)
This statement allocates storage in the ODBC environment for the statements used.

In the function SQLInit find the first line that begins with:


szDropProc = ″drop procedure csdbsm.nordset″
szCreatProc = ″Create procedure csdbsm.nordset(in p1 char(10), in p2 dec(3,0), in p3 char(195))″
szCreatProc = szCreatProc & ″ result sets 1 external name csdbsm.nordset language RPGLE general″
ret = SQLExecDirect(s_StoredProc, szDropProc, SQL_NTS)
ret = SQLExecDirect(s_StoredProc, szCreatProc, SQL_NTS)

The declarations of szDropProc and szCreatProc contain the SQL statements to drop and create a stored procedure. This Visual Basic program can create the stored procedure. This needs to be done only once for the life of the stored procedure. It could also be done on the AS/400 system using interactive SQL or by a program. Here we are demonstrating how to drop and create a stored procedure from the client. In an actual production environment, it would be better to do this once on the AS/400 system.

In the function SQLInit find the line that begins with:

ret = SQLAllocStmt(a_hdbc, s_StoredProc) ′ Prepare the call.
Query = ″CALL NORDSET(?, ?, ?)″ ′ Prepare the stored procedure stmt.
ret = SQLPrepare(s_StoredProc, Query, SQL_NTS)
These statements prepare the call to the stored procedure. The stored procedure is called NORDSET and expects three parameters. It will return the results in a result set. The call only needs to be prepared once for the program. 7. View the code to call the stored procedure.

In the function Proc_NO find the lines that start:

IcbValue = SQL_DATA_AT_EXEC ret = SQLBindParameter(s_StoredProc, 1, SQL_PARAM_INPUT, SQL_C_CHAR, ret = SQLBindParameter(s_StoredProc, 2, SQL_PARAM_INPUT, SQL_C_SHORT, ret = SQLBindParameter(s_StoredProc, 3, SQL_PARAM_INPUT, SQL_C_CHAR,
These statements set up the parameters used for the call to the stored procedure. Notice that we set IcbValue to SQL_DATA_AT_EXEC. This is to avoid problems with the way Visual Basic manages memory.

Find the lines that start:

ret = SQLExecute(s_StoredProc)
If ret = SQL_ERROR Then
    Call GiveErrMsg(s_StoredProc, ″Error on SQLExec of Stored Proc.″)
ElseIf ret = SQL_SUCCESS Or ret = SQL_NEED_DATA Then
    ret = SQLParamData(s_StoredProc, aToken) ′ Parameter 1
    ret = SQLPutData(s_StoredProc, ByVal INWDC, 10)
    ret = SQLParamData(s_StoredProc, aToken) ′ Parameter 2
    ret = SQLPutData(s_StoredProc, OLINES, Len(OLINES))
    ret = SQLParamData(s_StoredProc, aToken) ′ Parameter 3
    ret = SQLPutData(s_StoredProc, ByVal INORDINF, 195)
    ret = SQLParamData(s_StoredProc, aToken)
These statements cause the execution of the stored procedure. Notice that we use the combination of SQLParamData and SQLPutData to pass in the parameters. The final SQLParamData causes the execution.

Find the lines that start:

ret = SQLFetch(s_StoredProc)
If ret = SQL_NO_DATA_FOUND Then
    Call GiveErrMsg(s_StoredProc, ″Error on SQLExec of Stored Proc.″)
End If
ret = SQLGetdata(s_StoredProc, 1, SQL_CHAR, ByVal outname, 360, IColLen)
ret = SQLGetdata(s_StoredProc, 2, SQL_CHAR, ByVal outqty, 45, IColLen)
ret = SQLGetdata(s_StoredProc, 3, SQL_CHAR, ByVal outorg, 15, IColLen)
ret = SQLGetdata(s_StoredProc, 4, SQL_CHAR, ByVal outprice, 75, IColLen)
ret = SQLGetdata(s_StoredProc, 5, SQL_CHAR, ByVal outamt, 105, IColLen)
ret = SQLGetdata(s_StoredProc, 6, SQL_CHAR, ByVal outrepeat, 61, IColLen)

These statements receive the information returned by the stored procedure. We use SQLFetch because the data is returned in a result set. We use SQLGetData to move the column data to internal storage locations. Again we do not want to bind the returned values to storage locations because of Visual Basic memory management considerations.

A.2.3.1 Running the Application
To run the application press F5 and refer to A.7, “Running the Application” on page 425.

A.3 Application Serving Using Visual Basic with Windows 3.1
Three examples using Visual Basic are included. The first example, A.3.1, “Using APPC and Visual Basic to Access the AS/400 System,” shows using APPC to access the AS/400 system. The second example, A.3.3, “Application Serving Using Visual Basic and Data Queues” on page 414, shows using data queues, and the third example, A.3.2, “Application Serving Using DPC” on page 412, shows using distributed program calls.

Directory Name   Description
16BITS\APPC      Application Serving exercise with Visual Basic code using APPC.
16BITS\DQ        Application Serving exercise with Visual Basic code using Data Queues.
16BITS\DPC       Application Serving exercise with Visual Basic code using Distributed Program Call.

A.3.1 Using APPC and Visual Basic to Access the AS/400 System
In this program, we use Visual Basic APPC code to access the AS/400 system.
1. Open the SPEED.VBP project in the 16BITS\APPC directory. If the Project Window is not visible, select Project from the Window pulldown.


AS/400 Client/Server Performance


To access the code, click on APPC.BAS in the Project Window, and then click the View Code pushbutton.

A.3.1.1 View Function appc.Allocate()

Click on the Proc combo box in the APPC.BAS Window and then select Allocate.
− Find the line:
  rc% = EHNAPPC_Allocate(hWnd, 1929, .... "CSDBSM/APPCIXRPG", 0, PipDataS, ConvId)
− This statement allocates a mapped conversation with the RPG program, APPCIXRPG, in the CSDBSM library.

A.3.1.2 View Subroutine AsciiToEbcdic()

Click on the Proc combo box in the APPC.BAS Window and then select AsciiToEbcdic.
− Find the line:
  rc% = EHNDT_ASCIIToEBCDIC(hWnd, StrName, Target$, Len(StrName), tlen)
− When we use the APPC interface, we have to provide the conversion from PC format to AS/400 format. Here we show how the character data is converted from ASCII to EBCDIC.
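The EHNDT call above does the full code-page conversion for you. As a rough illustration of what such a conversion does, here is a minimal self-contained sketch in C++ (our own names, EBCDIC code page 37, covering only digits, uppercase letters, and space — not the real EHNDT implementation):

```cpp
#include <cstddef>

// Sketch only: map digits, uppercase letters, and space to their EBCDIC
// (code page 37) code points; everything else becomes the EBCDIC
// substitute character 0x3F. The real EHNDT routines cover the full code page.
inline unsigned char asciiToEbcdic(unsigned char c)
{
    if (c >= '0' && c <= '9') return static_cast<unsigned char>(0xF0 + (c - '0'));
    if (c >= 'A' && c <= 'I') return static_cast<unsigned char>(0xC1 + (c - 'A'));
    if (c >= 'J' && c <= 'R') return static_cast<unsigned char>(0xD1 + (c - 'J'));
    if (c >= 'S' && c <= 'Z') return static_cast<unsigned char>(0xE2 + (c - 'S'));
    if (c == ' ')             return 0x40;
    return 0x3F;
}

// Convert a whole buffer, analogous in shape to the EHNDT-style call:
// out[i] receives the EBCDIC byte for in[i].
inline void convertBuffer(const char *in, unsigned char *out, std::size_t len)
{
    for (std::size_t i = 0; i < len; ++i)
        out[i] = asciiToEbcdic(static_cast<unsigned char>(in[i]));
}
```

Note that EBCDIC letters are not one contiguous run (there are three separate ranges), which is why production conversions are table driven rather than a single offset.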

A.3.1.3 View Function Send()

Click on the Proc combo box in the APPC.BAS Window and then select Send.
− Find the line:
  rc% = EHNAPPC_SendData(hWnd, ConvId, Len(FData), FData, rqs%)
− We use the APPC call SendData to send the information to the AS/400 program. The APPC calls used here are provided by Client Access/400.

A.3.1.4 View Function RecvW()

Click on the Proc combo box in the APPC.BAS Window and then select RecvW.
− Find the line:
  rc% = EHNAPPC_ReceiveAndWait(hWnd, ConvId, EHNAPPC_BUFFER, ... WhatRec%, Rts%, ActLen%)
− We use the APPC call ReceiveAndWait to receive the data back from the AS/400 program.

A.3.1.5 View Subroutine EbcdicToAscii()

Click on the Proc combo box in the APPC.BAS Window and then select EbcdicToAscii.
− Find the line:
  rc% = EHNDT_EBCDICToASCII(hWnd, StrName, Target$, Len(StrName), tlen)
− When we use the APPC interface, we have to provide the conversion from AS/400 format to PC format. Here we show how the character data is converted from EBCDIC to ASCII.

A.3.1.6 View Subroutine Deallocate()

Click on the Proc combo box in the APPC.BAS Window and then select Deallocate.
− Find the line:
  rc% = EHNAPPC_Deallocate(hWnd, ConvId, ...
− In order to end the conversation with the AS/400 system, we use the APPC Deallocate verb.

A.3.1.7 Run the Application
1. Press F5 to run the example. See A.7, “Running the Application” on page 425 for details.

Important Information: You must enter the AS/400 System Name rather than the ODBC Data Source Name in the Data Source entry of the Speed Connection Options dialog box for this exercise to work.

The log from this run of Speed will be shown. This log shows the order numbers processed and how long each took in seconds.

A.3.2 Application Serving Using DPC
DPC (Distributed Program Call) APIs are used in this program to communicate between the PC and the AS/400 system. The DPC APIs call a program on the AS/400 system that reads, updates, and inserts records in the AS/400 database; they send the necessary information from the PC to the AS/400 program and retrieve the results from the AS/400 program back to the PC. In this section of the lab, you will investigate a client order entry program that uses DPC APIs to communicate with the server. The AS/400 server code for DPC is always running; it is not necessary to start it or create buffers.
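Conceptually, the DPC client builds a typed parameter list (each parameter has a direction), and the input values are flattened into one request that travels with the call. The sketch below illustrates just that idea; the class and method names are ours for illustration, not the EHNDP_* API:

```cpp
#include <string>
#include <vector>

// Illustrative sketch of the idea behind DPC parameter registration: each
// parameter is added with a direction, and the input parameters are packed
// into one flat buffer that travels to the server with the call request.
enum class Direction { Input, Output };

class ProgramCall {
public:
    void addParm(Direction dir, const std::string &value) {
        parms_.push_back({dir, value});
    }
    // Pack all Input parameters, in the order they were added, into the
    // single buffer sent with the remote call.
    std::string packInputs() const {
        std::string buf;
        for (const auto &p : parms_)
            if (p.dir == Direction::Input) buf += p.value;
        return buf;
    }
private:
    struct Parm { Direction dir; std::string value; };
    std::vector<Parm> parms_;
};
```

A single flattened request is the key to DPC's efficiency: one flow to the server per program call, regardless of how many parameters the program takes.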

A.3.2.1 Viewing the Visual Basic Code

1. Open the SPEED.MAK project in the 16BITS\DPC directory. To access the code, click on SPEED00.FRM in the Project Window, and then click the View Code pushbutton.

A.3.2.2 View Subroutine Form_Load()

Click on the Object combo box in the SPEED00.FRM Window and then select Form. Then select Load from the Proc combo box.
− Find the line:
  ret = EHNDP_StartSys(Me.hWnd, DataSource, "NewOrder", a_hSystem)
− This line starts the connection to the AS/400 system and creates an AS/400 system object, which will be used by the other DPC calls.


A.3.2.3 View General Procedure SQLInit

Click on the Object combo box in the SPEED00.FRM Window and then select general. Then select SQLInit from the Proc combo box.
− Find the line:
  ret = EHNDP_CreatePgm(Me.hWnd, "NEWORD", Trim$(Lib_Name)...
  This statement creates a program object called NEWORD. We will use this program object to call the AS/400 program.
− Find the line:
  ret = EHNDP_AddParm(Me.hWnd, a_hProgram, EHNDP_INPUT, 10,...
  This line specifies an input parameter for WID_DID_CID. We have to set up all the input parameters for the AS/400 program.
− Find the line:
  ret = EHNDP_AddParm(Me.hWnd, a_hProgram, EHNDP_OUTPUT, 61,...
  This line specifies an output parameter for INFOBACK. We have to set up all the output parameters that will be returned from the AS/400 program.

A.3.2.4 View General Procedure Proc_NO()

Click on the Object combo box in the SPEED00.FRM Window and then select general. Then select Proc_NO () from the Proc combo box.
− Find the line:
  ret = EHNDP_CallPgm(Me.hWnd, a_hSystem, a_hProgram)
− This statement calls the AS/400 program. Notice that we also handle the conversions from PC format to AS/400 format.

A.3.2.5 View Subroutine Form_UnLoad()

Click on the Object combo box in the SPEED00.FRM Window and then select Form. Then select Unload from the Proc combo box.
− Find the lines:
  ret = EHNDP_DeletePgm(Me.hWnd, a_hProgram)
  ret = EHNDP_StopSys(Me.hWnd, a_hSystem)
− These statements stop the connection to the AS/400 system.

A.3.2.6 Run the Application
1. Run the example as described in A.7, “Running the Application” on page 425.

Important Information: You must enter the AS/400 System Name rather than the ODBC Data Source Name in the Data Source entry of the Speed Connection Options dialog box for this exercise to work.

The log from this run of Speed will be shown. This log shows the order numbers processed and how long each took in seconds.


A.3.3 Application Serving Using Visual Basic and Data Queues
In this program we have removed all the AS/400 database update processing previously done via ODBC. Instead, we are sending the update data to an AS/400 data queue to be done at a later time.
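The gain comes from taking the database updates out of the user's response time: the client only enqueues the update data, and a server job applies it later. A minimal local stand-in for this pattern (our own sketch, not the EHNDQ_* API or an actual AS/400 data queue) is:

```cpp
#include <queue>
#include <string>
#include <vector>

// Conceptual sketch only: a local FIFO stands in for the AS/400 data queue
// (CSDQ in library CSDBSM) that EHNDQ_Send would write to. The client
// enqueues the update data and returns to the user immediately; a server
// job drains the queue and applies the updates later.
class DeferredOrderQueue {
public:
    void send(const std::string &orderBuffer) {   // stands in for EHNDQ_Send
        pending_.push(orderBuffer);
    }
    // Stands in for the AS/400 batch job that reads the queue and applies
    // the database updates, in arrival order.
    std::vector<std::string> drain() {
        std::vector<std::string> processed;
        while (!pending_.empty()) {
            processed.push_back(pending_.front());
            pending_.pop();
        }
        return processed;
    }
private:
    std::queue<std::string> pending_;
};
```

The trade-off is that the database is not updated in real time, so this design suits work that can tolerate deferred processing.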

A.3.3.1 Viewing the Visual Basic Code

1. Open the SPEED.MAK project in the 16BITS\DQ directory. To access the code, click on SPEED00.FRM in the Project Window, and then click the View Code pushbutton.

A.3.3.2 View Subroutine Proc_NO

Click on the Object combo box in the SPEED00.FRM Window and then select general. Select Proc_NO from the Proc combo box.
− Notice that a large portion of the New Order processing has been commented out and eliminated. Instead of completing the new order processing in real time, we are delaying it until later.
− Find the line:
  ret = EHNDQ_Send(Me.hWnd, "CSDBSM/CSDQ", "????????", EBCDICBuffer, Len(EBCDICBuffer))
− Change the line to send the buffer to your AS/400 system, to a data queue called CSDQ in library CSDBSM. Replace the question marks with the name of your AS/400 system.

Note: Automatic translation can be done using the EHNDQ_SetMode API.

A.3.3.3 Run the Application
1. Run the example, entering the ODBC Data Source Name rather than the system name in the Data Source entry. Refer to A.7, “Running the Application” on page 425 for details about running the application.

Important Information: This time, you must enter the ODBC Data Source Name in the Data Source entry of the Speed Connection Options dialog box for this exercise to work.

A.4 Database Serving Using Visual C++ and Windows 3.1
The PC media contains three examples using Visual C++ for database serving. The first example, A.4.1, “Client/Access ODBC Using ODBC APIs” on page 415, uses ODBC APIs to access the AS/400 system; the second example, A.4.2, “Client/Access ODBC With Block Inserts and Extended Fetch” on page 417, uses extended fetch and block inserts; and the third example, A.4.3, “Client/Access ODBC Using Stored Procedures” on page 418, uses stored procedures. The source code for each exercise resides in its own directory as follows.

Directory Name    Description
C:\CPP\SPEED      Exercise 1. ODBC.


C:\CPP\SPEEDEX    Exercise 2. ODBC with blocked insert and extended fetch.
C:\CPP\STORPROC   Exercise 3. Using a stored procedure.

A.4.1 Client/Access ODBC Using ODBC APIs
In this program, we use the ODBC APIs to access the AS/400 system.
1. Open the file SPEED.MAK in directory C:\CPP\SPEED.
2. Check the declarations for the ODBC environment and connection handles. These handles are used to communicate with the ODBC driver and are used on all future ODBC function calls.

Open the SPEEDVW.H module and find the lines that read:

Note: To find the statements in the source code, you can use the search function. Enter the search text in the search list box and then click on the Search icon (it looks like a set of binoculars).

HENV a_henv;   // Environment.
HDBC a_hdbc;   // Connection.

These statements are used to define the ODBC environment and the ODBC connection handle. HENV and HDBC are defined in the SQL.H file, which is included with the Visual C++ compiler.

Find the lines that define the statement handles we use in the program. These begin with:

HSTMT s_Stock1;   // Stock Select.

There are ten of them.

A statement handle is required for each ODBC statement that we want to execute. Statement handles are used to refer to statements we prepare using parameter markers and then execute at a later time.

Close the SPEEDVW.H module window.

3. View the code to connect to the AS/400 database.

Open the SPEEDVW.CPP module and find the lines that read:

ret = SQLAllocEnv(&a_henv);   // Allocates the SQL environment.

This statement allocates environment space for ODBC. This must be done before any other ODBC calls.

Find the line:

VERIFY(SQLAllocConnect(a_henv,&a_hdbc) == SQL_SUCCESS);
This statement allocates space for an ODBC connection. This must be done before you try to connect to the AS/400 system. ODBC allows more than one connection at a time, and each connection can be to a different or the same Data Source as required.

Find the line:

ret = SQLConnect(a_hdbc, (unsigned char __far *)DataSource, SQL_NTS,


This statement connects you to the AS/400 system. The user name and password can be blank at run time; if so, the router's common user ID and password will be used.

Find the lines that read:

ret = SQLFreeStmt(s_Stock1, SQL_DROP);
These lines close the SQL cursors and release any storage associated with the statement.

Find the line that starts with:

ret = SQLDisconnect(a_hdbc); // Disconnecting and deallocating.
This line closes the connection to the AS/400.

The following two lines release the storage used by the ODBC connection and the ODBC environment.

4. View the code to prepare the SQL statements we use in the application.

Open the NO_SQL.CPP module and find the lines that read:

ret=SQLAllocStmt(hdbc,&s_Stock1);
These lines allocate storage in the ODBC environment for the statements used.

Find the line that begins with

ret=SQLPrepare(s_Stock1,(unsigned char *)Query,SQL_NTS);
This line prepares the SQL statement for execution. Move back in the code and notice how the SQL statement is built in Query and Query is used to prepare s_Stock1.

View the lines that bind the parameters to the statement. SQLSetParam defines the characteristics and the storage location of the parameters used in the SQL statement. SQLBindCol defines the characteristics and the storage location used to receive the column values when an SQLFetch is performed for this statement. Each statement used must be prepared in this way. The parameter markers, denoted by ″?″ in the statements, will be replaced with the required data when the statement is executed.
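To make the parameter-marker idea concrete, here is a toy sketch (our own, not part of ODBC) of how the "?" markers in a prepared statement are logically replaced by values at execute time. In real ODBC the driver and server handle this, and the expensive parse and access-plan work is done only once at prepare time, which is why preparing once and executing many times is cheaper than re-preparing the SQL text for every order:

```cpp
#include <string>
#include <vector>

// Toy substitution of "?" parameter markers: the statement text is fixed at
// prepare time; only the parameter values change for each execution.
std::string substituteMarkers(const std::string &stmt,
                              const std::vector<std::string> &params)
{
    std::string result;
    std::size_t next = 0;
    for (char c : stmt) {
        if (c == '?' && next < params.size())
            result += params[next++];   // consume the next bound value
        else
            result += c;
    }
    return result;
}
```

For example, the prepared text `UPDATE STOCK SET QTY = ? WHERE ID = ?` with values `5` and `42` resolves to `UPDATE STOCK SET QTY = 5 WHERE ID = 42` at execute time.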

5. View the code to retrieve the customer information.

Open the PROC_NO.CPP module and find the line that reads:

ret = SQLExecute(s_Customer1); // Execute the Customer select query.
This line executes the query. It uses the statement handle named s_Customer1.

Find the line that starts:

ret = SQLFetch(s_Customer1);   // Fetch the result.

This line fetches the first (and only) row of data.

View the code to retrieve the stock information.
− Find the line that starts:

ret = SQLExecute(s_Stock1);   // Execute the query.


This line executes the query over the stock file. It uses the statement handle s_Stock1.
− Find the line that starts:

ret = SQLFetch(s_Stock1);   // Fetch the result.

This line fetches the first (and only) row of data. Carefully read the ODBC function calls in the rest of this subroutine. Notice that all the SQL statements follow the same pattern, except that the INSERT and UPDATE statements are not followed by calls to SQLFetch.

A.4.1.1 Run the Application
1. Compile the application.
2. Run the program by selecting Project/Execute SPEED.EXE from the C++ menu. Refer to A.7, “Running the Application” on page 425 for details.

A.4.2 Client/Access ODBC With Block Inserts and Extended Fetch
In this program, we use the ODBC APIs to access the AS/400 database using some performance enhancing techniques. 1. View the code to prepare the SQL statements used in the application.
• Open the project in the \CPP\SPEEDEX directory.
• Open the NO_SQL.CPP module and find the lines that read:

ret=SQLAllocStmt(hdbc,&s_Stock1);
These lines allocate storage for the statements which will be prepared and executed.

Find the line that begins with:

ret=SQLPrepare(s_Item1,(unsigned char *)Query,SQL_NTS);
This line prepares the SQL statement for execution. The parameter markers, denoted by ″?″ in the statements, will be replaced with the required data when the statement is executed.

View the lines that bind the parameters to the statement. SQLSetParam defines the characteristics and the storage location of the parameters used in the SQL statement. SQLBindCol defines the characteristics and the storage location used to receive the column values when an SQLFetch is performed for this statement.

Open the module SPEEDVW.H. Find the declarations for the variables (s_parm) bound to the columns for statement s_Item1. Notice that they are in fact arrays of variables which will be filled by the extended fetch.

Find the line (in NO_SQL.CPP) that begins with:

ret= SQLPrepare(s_Ordlin1,(unsigned char far *)tmpbfr,SQL_NTS);
This line prepares the SQL insert for execution. Notice the SQL statement reads:

Insert into ORDLIN ? ROWS ..................

Scroll down to view the binding of the parameters. Notice that lOrderNum is used as the input variable for the first parameter and uiDistrict for the second parameter. Open the module SPEEDVW.H and find the declarations for the variables (lOrderNum, uiDistrict, etc.) bound to the parameters for statement s_Ordlin1. Notice once again that they are in fact arrays of variables that this time will be used by the blocked insert operation.

2. View the code to retrieve the item information.

Open the PROC_NO.CPP module and find the lines that read:

ret = SQLExecute(s_Item1);   // Execute Item query.

This line executes the query of the item file.

Find the line that reads:

ret=SQLExtendedFetch(s_Item1, SQL_FETCH_NEXT, NewOrd_IO->m_OrderCount...
This line will fetch NewOrd_IO->m_OrderCount rows of data at once.
3. View the code to insert the order lines.

Find the lines that read:

ret=SQLParamOptions(s_Ordlin1, NewOrd_IO->m_OrderCount, &rowcnt);
This function tells the ODBC driver you will be inserting NewOrd_IO->m_OrderCount rows at a time.

Find the line that starts:

ret = SQLExecute(s_Ordlin1);   // Execute Order Line insert.

This line will insert NewOrd_IO->m_OrderCount rows of data into the ORDLIN table at once.
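The payoff of the blocked insert is fewer client/server flows. A back-of-the-envelope sketch (illustrative arithmetic only, not measured data):

```cpp
// Row-at-a-time insert costs roughly one client/server flow per row;
// a blocked insert sends up to blockSize rows per flow. Illustrative only --
// real flow counts also depend on the driver and communications layer.
long flowsRowAtATime(long rows) { return rows; }

long flowsBlocked(long rows, long blockSize) {
    return (rows + blockSize - 1) / blockSize;   // ceiling division
}
```

So an order with 100 lines costs on the order of 100 flows inserted one row at a time, but a single flow when all 100 rows travel in one blocked insert.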

A.4.2.1 Run the Application

1. Compile the application.
2. Run the example as described in A.7, “Running the Application” on page 425. For this exercise, you must select Use Extended Fetch/Block Insert in the Speed Options Box.

The log from this run of Speed will be shown. This log shows the order numbers processed and how long each took in seconds.

A.4.3 Client/Access ODBC Using Stored Procedures
In this program, we use the ODBC APIs to call a stored procedure on the AS/400 system. The stored procedure in this example is an RPG program. For information about the RPG program, go to A.6, “AS/400 Programs” on page 424.

A.4.3.1 Viewing the C++ Code
1. Open the STORPROC.MAK file in the C:\CPP\STORPROC directory.
2. View the declarations for the ODBC environment and connection handles.

Open the SPEEDVW.H module and find the lines that read:

HENV a_henv;   // Environment.
HDBC a_hdbc;   // Connection.

These statements define the environment and connection handles.


Find the line that defines the statement handle we will use for the stored procedure:

HSTMT s_StoredProc;   // Handle for stored procedure.

This statement defines a statement handle named s_StoredProc. It will be used to call the AS/400 stored procedure.
3. View the code to connect to the AS/400 system.

Open the SPEEDVW.CPP module and find the lines that read:

ret = SQLAllocEnv(&a_henv); //Allocates the SQL environment.
This statement allocates environment space for ODBC. This must be done before any other ODBC calls.

Find the line:

VERIFY(SQLAllocConnect(a_henv,&a_hdbc) == SQL_SUCCESS);
This statement allocates space for an ODBC connection. This must be done before you try to connect to the AS/400 system. ODBC allows more than one connection at a time, and each connection can be to a different or the same Data Source as required.

Find the line:

ret = SQLConnect(a_hdbc, (unsigned char __far *)DataSource, SQL_NTS,
This statement connects you to the AS/400 system. The user name and password can be blank at run time; if so, the router's common user ID and password will be used.

Find the line that reads:
  // Free and drop all statements.
These lines free all the resources used by ODBC for the statements, connections, and ODBC environment.

4. View the code to prepare the stored procedure used in the application.

In the module STORPROC.CPP find the lines that begin with:

ret = SQLAllocStmt(hdbc, &s_StoredProc);
This line allocates storage in the ODBC environment for the stored procedure call.

Find the line that begins with:

ret = SQLPrepare(s_StoredProc
This line prepares the statement that calls the stored procedure. Notice that we call a stored procedure named NEWORDRPG and it requires 9 parameters.

Find the lines that start:

ret = SQLBindParameter(s_StoredProc, 1, SQL_PARAM_INPUT, ......
This function binds an actual storage location to each parameter in the SQL statement.

View the code to call the stored procedure.
− Find the line that starts:

ret = SQLExecute(s_StoredProc);
This line calls the stored procedure. The data for the input parameters is taken from the storage locations set up in the calls


to SQLBindParameter and the output parameter values are placed in the storage locations set up in the calls to SQLBindCol.

A.4.3.2 Run the Application
• Compile the application.
• Run the example as described in A.7, “Running the Application” on page 425. The log from this run of Speed will be shown. This log shows the order numbers processed and how long each took in seconds.

A.5 Application Serving Visual C++ With Windows 3.1
Three examples using Visual C++ are included. The first example, A.5.1, “Visual C++ Example Using APPC With Windows 3.1,” shows using APPC to access the AS/400 system. The second example, A.5.3, “Application Serving Using Data Queues” on page 423, shows using data queues, and the third example, A.5.2, “Application Serving Using DPC” on page 422, shows using distributed program calls.

Directory Name   Description
C:\CPP\APPC      Application Serving exercise with C++ code using APPC.
C:\CPP\DATAQ     Application Serving exercise with C++ code using Data Queues.
C:\CPP\DPC       Application Serving exercise with C++ code using Distributed Program Call.

A.5.1 Visual C++ Example Using APPC With Windows 3.1
In this program, we use APPC to interface with a host program written in RPG. Please refer to A.6, “AS/400 Programs” on page 424 for details about the AS/400 program.
• Open the make file in the \CPP\APPC directory.
• Open the SPEEDVW.CPP module.

A.5.1.1 View the Function to allocate the conversation.

Scroll down to the class constructor CSpeedView::CSpeedView() and then find the following line:

ret = EHNAPPC_Allocate(m_hWnd,
        commBufferSize,          // Communication buffer size
        EHNAPPC_MAPPED,          // Mapped conversation
        EHNAPPC_SYNCLEVELNONE,   // SYNC level - NONE
        locationName,            // PC name
        "CSDBSM/APPCIXRPG",      // TP name
        pipLen,                  // PIP data length
        pipData,                 // PIP data
        &conv_id);               // Conversation ID
This call allocates a mapped conversation between the PC program and the AS/400 RPG program, APPCIXRPG, in library CSDBSM.


A.5.1.2 View the Send Function

In the SPEEDVW.CPP module, find the CSpeedView::Send function.
− Find the statement:
  ret = EHNAPPC_SendData(m_hWnd, conv_id, iLength, FData, &rqs);
− This line will send the data to the AS/400 program using the conversation ID stored in conv_id.

A.5.1.3 View the Receive Function()

In the SPEEDVW.CPP module, find the CSpeedView::RecvW function.
− Find the statement:
  ret = EHNAPPC_ReceiveAndWait(m_hWnd, conv_id, EHNAPPC_BUFFER, iLen, IntBuff, &WhatRec, &rts, &ActLen);
− This line will receive the answer back from the AS/400 RPG program using the conversation ID stored in conv_id.

A.5.1.4 View the deallocate function()

In the SPEEDVW.CPP module find the destructor function CSpeedView::~CSpeedView().
− Find the statement:
  ret = EHNAPPC_Deallocate(m_hWnd, conv_id, ...
− This line deallocates the APPC conversation for the conversation ID stored in conv_id.

A.5.1.5 View the Translation from ASCII to EBCDIC Routine
• Open the SPEEDAPP.CPP module.
• In the SPEEDAPP.CPP module find the CSpeedView::Proc_NO function.
− Find the statement:
  ret = EHNDT_ASCIIToEBCDIC(m_hWnd, sString, achBuffer, strlen(aString), &iLen);
− Find the statement:
  ret = EHNDT_EBCDICToASCII(m_hWnd, INFOBACK, aString, iLen, &iLen);
We have to provide the data conversion between the PC and AS/400 system. We use the EHNDT routines provided by Client Access/400.

A.5.1.6 Compile and Run the Application

1. From the C++ menu select Project/Build SPEEDAPP.EXE. If you get an error message indicating missing files, do a Rebuild All to regenerate the missing files.
2. If you have compile errors, double click on the error message and the C++ environment will position you to the line in error in the code.
3. When the program has compiled correctly, run the program by selecting Project/Execute SPEEDAPP.EXE from the C++ menu. Refer to A.7, “Running the Application” on page 425 for details.


Important Information: You must enter the AS/400 System Name rather than the ODBC Data Source Name in the Data Source entry of the Speed Connection Options dialog box for this exercise to work.
4. When the number of cycles you requested has been run, a message box reading All Cycles are complete will be displayed.
5. Press OK. The log from this run of Speed will be shown. This log shows the order numbers processed and how long each took in seconds.

A.5.2 Application Serving Using DPC
DPC (Distributed Program Call) APIs are used in this program to communicate between the PC and the AS/400 system. The DPC APIs are used to call a program on the AS/400 system. The program reads, updates, and inserts records in the AS/400 database. The AS/400 server code for DPC is always running. It is not necessary to start it or create buffers.

A.5.2.1 Client Side Application Serving

In this section, we use DPC API calls written in C++ on the PC to communicate with the DPC support on the AS/400 system.
• Open the make file in the \CPP\DPC directory.
• Open the SPEEDVW.CPP module.

A.5.2.2 View the initialization code.

Scroll down till you find the class constructor CSpeedView::CSpeedView() and then find the following line:
  ret = EHNDP_StartSys(m_hWnd, DataSource, "NewOrder", &a_hSystem);
This line starts the connection to the AS/400 system and creates a system object.

A.5.2.3 View the Code to Set Up the Program and Its Parameters
• Open the SPEEDDPC.CPP module.
• Scroll down till you find the function CSpeedView::SQLInit and then find the following line:
  ret = EHNDP_CreatePgm(m_hWnd, "NEWORD", Lib_Name, &a_hProgram);
  This line creates a program object named NEWORD. We will use this program object to call the AS/400 program named NEWORD.
− Find the following lines:
  // Add the input parameters
  ret = EHNDP_AddParm(m_hWnd, a_hProgram, EHNDP_INPUT, 10, (unsigned char __far *)&WID_DID_CID);
  This line specifies an input parameter for WID_DID_CID. We have to add the parameters for all input and output parameters.
− Find the following lines:


// Add the output parameters
ret = EHNDP_AddParm(m_hWnd, a_hProgram, EHNDP_OUTPUT, 61, (unsigned char __far *)&INFOBACK);
This line specifies an output parameter for INFOBACK.

A.5.2.4 Find the Code To Call the Remote Program

Find the function CSpeedView::Proc_NO.
− Find the following lines:
  ret = EHNDT_ASCIIToPacked(m_hWnd, OLINES, aString, 2, 0);
  ret = EHNDP_CallPgm(m_hWnd, a_hSystem, a_hProgram);
− This line calls the program object that we previously created. The parameters that we added will be used for input and output. Notice that we must handle the data conversions in the program.

A.5.2.5 View the Disconnect Code

In the SPEEDVW.CPP module find the destructor function CSpeedView::~CSpeedView().
− Find the following lines:
  // Free and drop all statements
  ret = EHNDP_DeletePgm(m_hWnd, a_hProgram);   // Delete program
  ret = EHNDP_StopSys(m_hWnd, a_hSystem);      // Stop a DPC connection
− These lines will delete the program object and end the connection to the AS/400 system.

A.5.2.6 Run the Application

1. Run the program as described in A.7, “Running the Application” on page 425. The log from this run of Speed will be shown. This log shows the order numbers processed and how long each took in seconds. The system name is received from the Data Source entry on the SPEED ODBC User Info box.

Important Information: You must enter the AS/400 System Name rather than the ODBC Data Source Name in the Data Source entry of the Speed Connection Options dialog box for this exercise to work.

A.5.3 Application Serving Using Data Queues
In this program we have removed all the AS/400 database update processing previously done via ODBC. Instead, we are sending the update data to an AS/400 data queue to be processed at a later time. We use Data Queue API calls written in C++ on the PC to communicate with a data queue on the AS/400 system.


A.5.3.1 Viewing the C++ Code

• Open the make file in the \CPP\DQ directory.
• Open the PROC_NO.CPP module.

A.5.3.2 Modify CSpeedView::Proc_NO Function
Notice that most of the new order processing is removed or disabled. In this program, we will put the order information on a data queue for later processing. This should help improve end user response time.
− Find the line:
  ret = EHNDQ_Send(m_hWnd, "CSDBSM/CSDQ", "??????", achBuffer, iLen);
− This statement sends the buffer to the AS/400 system to be placed in a data queue called CSDQ in library CSDBSM. The question marks represent the name of the AS/400 system. In order to run this program, you must change this field to match the name of your AS/400 system.

A.5.3.3 Run the Application
Important Information: This time you must enter the ODBC Data Source Name in the Data Source entry of the Speed Connection Options dialog box for this exercise to work.

Compile and run the example. This time, enter the ODBC Data Source Name rather than the AS/400 system name in the Data Source field entry on the Speed ODBC User Info box. Refer to A.7, “Running the Application” on page 425 for details. The log from this run of Speed will be shown. This log shows the order numbers processed and how long each took in seconds.

A.6 AS/400 Programs
The following AS/400 programs are used with these example programs. They are found in the CSDBSM library in file QRPGLESRC.

APPCIXRPG   RPGLE      Target APPC program
CUSTSRCH    SQLRPGLE   ODBC stored procedure using SQL result sets
NEWORDRPG   SQLRPGLE   ODBC stored procedure using parameter passing
NORDSET     SQLRPGLE   ODBC stored procedure using array result sets
NEWORD      SQLCBLLE   New Order stored procedure


A.7 Running the Application
This section shows windows captured from the application. For more information about the application, please refer to Chapter 10, “Case Study” on page 351.

The applications have been developed to run in an automatic mode. In automatic mode, the application uses random numbers to enter data for the order entry window: the program generates random numbers for the customer number, item number, and order quantity. This allows for easy, repetitive performance testing.

Test Database: Only a small version of the test database is included. The database supports only 10 customers and 100 items, so the customer number will be a number between 1 and 10 and the item number will be a number between 1 and 100. Because the test database is very small, after a few executions of the application the database information will already be in AS/400 main memory. This will result in response times faster than normal.
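A sketch of the kind of random order generation automatic mode performs, matching the test database limits described above (customer 1 to 10, item 1 to 100; the quantity range is our assumption, and this is not the Speed program's actual generator):

```cpp
#include <random>

// Illustrative random order generator for repeatable performance testing.
struct RandomOrder {
    int customer;   // 1..10  (test database has 10 customers)
    int item;       // 1..100 (test database has 100 items)
    int quantity;   // 1..10  (assumed range, for illustration only)
};

RandomOrder makeRandomOrder(std::mt19937 &rng)
{
    std::uniform_int_distribution<int> cust(1, 10);
    std::uniform_int_distribution<int> item(1, 100);
    std::uniform_int_distribution<int> qty(1, 10);
    return RandomOrder{cust(rng), item(rng), qty(rng)};
}
```

Seeding the generator with a fixed value reproduces the same order stream on every run, which is what makes repetitive performance comparisons meaningful.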

When starting the application, the following window should appear:

Figure 103. Speed Connection Options

Make sure the following fields are set correctly:

Warehouse     1
District      A value between 1 and 10
Data Source   The name of the ODBC data source you configured using the ODBC administrator


Database library   CSDBSM
User name          Leave blank
Password           Leave blank

Click the Options button to bring up the run options dialog. The following window should appear:
Note: Some of the options shown in this window may not appear for some of the exercises.

Figure 104. Speed Run Options

Make sure the following fields are set correctly:

Use Numeric data type   Select Use Numeric data type for the C++ ODBC examples.
Database delimiter
Path to