
SAP DBA Cockpit

Flight Plans for DB2 LUW Database Administrators


Eduardo Akisue Jeremy Broughton Liwen Yeow Patrick Zeng

Foreword by Torsten Ziegler

SAP DBA Cockpit: Flight Plans for DB2 LUW Database Administrators
Eduardo Akisue, Jeremy Broughton, Liwen Yeow, Patrick Zeng
Foreword by Torsten Ziegler

October 2009

© 2009 IBM Corporation. All rights reserved. Portions © MC Press Online, LP.

Every attempt has been made to provide correct information. However, the publisher and the authors do not guarantee the accuracy of the book and do not assume responsibility for information included in or omitted from it.

IBM is a registered trademark of International Business Machines Corporation in the United States, other countries, or both. DB2 is a registered trademark of International Business Machines Corporation in the United States, other countries, or both. All other product names are trademarked or copyrighted by their respective manufacturers.

This publication is protected by copyright, and permission must be obtained from the publisher prior to any prohibited reproduction, storage in a retrieval system, or transmission in any form or by any means, electronic, mechanical, photocopying, recording, or likewise.

For information regarding permissions or special orders, please contact:
MC Press Corporate Offices
125 N. Woodland Trail
Lewisville, TX 75077 USA

For information regarding sales and/or customer service, please contact:
MC Press
P.O. Box 4300
Big Sandy, TX 75755-4300 USA

ISBN: 978-1-58347-089-3

About the Authors


Eduardo Akisue is a member of the WW DB2 SAP Technical Sales Enablement and Support team. Prior to his current role, he worked for many years supporting DB2 and Informix customers in the Latin America region. He is a Certified DB2 9 Administrator, a Certified Informix Administrator, and an Informix dial-up Engineer. He is also a Certified SAP Technology Consultant and an SAP Certified OS/DB Migration Consultant. Eduardo can be reached at akisue@us.ibm.com.

Jeremy Broughton is a Technical Enablement Specialist for IBM DB2 and SAP. He has worked within the IBM DB2 Development Lab for 10 years, first developing infrastructure and tooling for DB2 development, and then rewriting internal DB2 code to optimize compilation performance and development agility. For the past three years, Jeremy has been dedicated to helping SAP professionals leverage the strengths of DB2 within SAP implementations. He has assisted with proofs of concept, provided consulting to customers implementing SAP on DB2, and presented numerous workshops around the world teaching DB2 administration and migration methodology for SAP systems. He is an SAP Certified Basis Consultant for DB2 on NetWeaver 2004, and an SAP Certified OS/DB Migration Consultant. Jeremy can be reached at jeremyb@ca.ibm.com.

Liwen Yeow is the WW SAP Technical Sales Manager for DB2 Distributed Platforms. He has been with IBM since 1988 and has worked in the SAP field since 1995 in multiple capacities: as part of DB2 Service, as an SAP Consultant for DB2, as a Customer Advocate for many of the large SAP-DB2 customers, and as Manager of the IBM-SAP Integration and Support Center. In his current role, he is responsible for the enablement of the Technical Pre-Sales teams and provides guidance to the Sales teams in SAP sales opportunities. He is a Certified Technology Associate - System Administration (DB2) for SAP NetWeaver 7.0, and an SAP Certified Technology Consultant for DB/OS Migration. Liwen can be reached at yeow@ca.ibm.com.

Patrick Zeng was a member of the WW DB2 SAP Technical Sales Enablement and Support team and currently works as a DBA at Bank of America. He has many years worth of experience supporting SAP and DB2 customers. He is a Certified DB2 Solutions Expert and a Certified SAP Technology Consultant. Patrick can be reached at patrick.pucheng.zeng@gmail.com.

Torsten Ziegler has been the Development Manager for SAP NetWeaver on IBM DB2 for Linux, UNIX, and Windows since 2001. After having worked in other industries, he joined SAP as a developer in 1997. In his current role, he is responsible for development, maintenance, and development support for all DB2-specific components of SAP NetWeaver and applications based on NetWeaver. He can be reached at torsten.ziegler@sap.com.

Acknowledgments

The authors would like to express their gratitude for the technical contributions received from the following colleagues:

At IBM: Guiyun Cao, Martin Mezger, and Karl Fleckenstein.

At SAP AG: Torsten Ziegler, Ralf Stauffer, Andreas Zimmermann, Steffen Siegmund, and Britta Bachert.

Contents
Foreword by Torsten Ziegler . . . vi

Chapter 1: The SAP DBA Cockpit . . . 1
    Central Monitoring of Remote Systems . . . 5
    Summary . . . 5

Chapter 2: Performance Monitoring . . . 6
    Performance: Partition Overview . . . 7
    Performance: Database Snapshot . . . 9
        The Buffer Pool . . . 10
        The Catalog Cache and Package Cache . . . 15
        Asynchronous I/O . . . 18
        Direct I/O . . . 19
        Real-Time Statistics (RTS) . . . 20
        Locks and Deadlocks . . . 21
        Logging . . . 25
        Calls . . . 28
        Sorts . . . 30
        XML Storage . . . 31
    Performance: Schemas . . . 31
    Performance: Buffer Pool Snapshot . . . 31
    Performance: Tablespace Snapshot . . . 33
    Performance: Table Snapshot . . . 34
    Performance: Application Snapshot . . . 36
    Performance: SQL Cache Snapshot . . . 37
    Performance: Lock Waits and Deadlocks . . . 39
    Performance: Active Inplace Table Reorganizations . . . 41
    Performance: History - Database . . . 41
    Performance: History - Tables . . . 42
    Performance Warehouse . . . 43
    Summary . . . 44

Chapter 3: Storage Management . . . 45
    Automatic Storage . . . 47
    Table Spaces . . . 49
        The Technical Settings Tab . . . 51
        The Storage Parameters Tab . . . 53
        The Containers Tab . . . 54
        DMS/SMS Table Spaces . . . 54
    Containers . . . 56
    Tables and Indexes . . . 57
    Single Table Analysis . . . 60
        The Table Tab . . . 61
        The Indexes Tab . . . 63
        The Table Structures Tab . . . 64
        The RUNSTATS Control Tab . . . 65
        The Index Structures Tab . . . 67
        The RUNSTATS Profile Tab . . . 67
        The Table Status Tab . . . 67
        The Compression Status Tab . . . 69
    Virtual Tables . . . 72
    Historical Analysis . . . 75
        The Database and Table Spaces . . . 77
        Tables and Indexes . . . 78
    Summary . . . 80

Chapter 4: Job Scheduling . . . 81
    The Central Calendar . . . 81
    The DBA Planning Calendar . . . 83
        REORGCHK for All Tables . . . 84
        Scheduling Backups . . . 85
        Archiving Log Files to a Tape Device . . . 86
        Updating Statistics . . . 86
        Table Reorganization . . . 87
        Custom Job Scripts . . . 87
    The DBA Log . . . 88
    Back-end Configuration . . . 89
    SQL Script Maintenance . . . 90
    Summary . . . 91

Chapter 5: Backup and Recovery . . . 92
    The Backup Strategy . . . 93
    Utility Throttling . . . 94
    Scheduling Backups in the DBA Cockpit . . . 95
        Multi-partition Databases . . . 99
        Advanced Backup Technology . . . 100
        The DB2 Recovery History File . . . 100
    The Backup and Recovery Overview Screen . . . 102
        The Database Backup Tab . . . 102
        The Archived Log Files Tab . . . 102
    Logging Parameters . . . 103
        The Log Directory . . . 104
        The ARCHMETH1 Tab . . . 105
    Summary . . . 105

Chapter 6: Configuration . . . 106
    The Overview Screen . . . 108
    The Database Manager . . . 109
    The Database . . . 112
    Registry Variables . . . 117
        Environment Variables . . . 118
        Registry Variables . . . 118
    Parameter Changes . . . 121
    Database Partition Groups . . . 122
    Buffer Pools . . . 123
    Special Tables Regarding RUNSTATS . . . 125
    File Systems . . . 127
    Data Classes . . . 127
    Monitoring Settings . . . 128
    Automatic Maintenance Settings . . . 130
        Automatic Backups . . . 130
        Automatic RUNSTATS . . . 131
        Automatic REORG . . . 132
    Summary . . . 133

Chapter 7: The Alert Monitor . . . 135
    The Alert Monitor . . . 136
    The Alert Message Log . . . 137
    Alert Configuration . . . 138
    Summary . . . 139

Chapter 8: Database Diagnostics . . . 140
    The Audit Log . . . 141
    The EXPLAIN Option . . . 142
    The New Version of EXPLAIN . . . 146
    Missing Tables and Indexes . . . 147
    The Deadlock Monitor . . . 149
        Creating the Deadlock Monitor . . . 151
        Enabling the Deadlock Monitor . . . 151
        Analyzing the Information Collected . . . 151
        Stopping the Deadlock Monitor . . . 154
        Resetting or Dropping the Deadlock Monitor . . . 154
    The SQL Command Line . . . 154
    The Index Advisor . . . 156
        Indexes Recommended by DB2 . . . 157
        Creating Virtual Indexes . . . 157
    The Cumulative SQL Trace . . . 159
    The DBSL Trace Directory . . . 161
        The Sequential DBSL Trace . . . 161
        The Deadlock Trace . . . 162
    Trace Status . . . 164
    The Database Notification Log . . . 165
    The Database Diagnostic Log . . . 166
    DB2 Logs . . . 167
    The Dump Directory . . . 168
    The DB2 Help Center . . . 169
    Summary . . . 169

Chapter 9: New Features . . . 171
    Workload Management (WLM) . . . 171
        Workloads and Service Classes . . . 172
        Critical Activities . . . 173
    BI Administration . . . 174
        BI Data Distribution . . . 174
        The MDC Advisor . . . 176
    Summary . . . 179

Foreword
This is a remarkable book, written by IBM experts who have in-depth knowledge about SAP on DB2. The authors draw their profound experience not only from their work with many customers who adopted DB2 for their SAP applications, but also from their very close cooperation with SAP development.

Based on the analogy of a pilot's need to know about the controls of his aircraft, this book takes you through the entire world of DB2 monitoring and administration. You will find it a useful introduction if you are new to SAP on DB2, and you will also be able to use it as a reference if you are an experienced DBA.

The SAP DBA Cockpit is one of many visible proof points of the excellent integration of SAP solutions with IBM DB2. This book will familiarize you with everything you need to know to operate IBM DB2 optimally with your SAP solution. In a tutorial-like, easy-to-read style, it takes you from the basic controls to advanced monitoring and tuning, and at the same time provides you with useful background information about DB2. And even more, it is just fun to read.

I hope you will find it as useful and enjoyable as I did.

Torsten Ziegler
Manager, DB2 LUW Platform Development
SAP


Chapter 1
The SAP DBA Cockpit
A Pilot Must Know the Controls
Just as a pilot must know the aircraft cockpit, a database administrator must know the SAP database administration tools. The SAP DBA Cockpit is the central database administration interface for SAP systems on all databases. The DBA Cockpit for DB2 provides administrators with a particularly comprehensive administration and monitoring tool for SAP databases.

Piloting a large commercial aircraft requires a great deal of skill. Pilots must understand how the adjustments they make to the aircraft components affect the flight of the airplane. Balancing lift and drag, speed and altitude, yaw and wind are all important parts of a safe, comfortable flight. However, a huge amount of technology also operates and manages the individual aircraft components. A pilot who flew the aircraft without knowing what the technology does could disrupt automated flight operations. Similarly, if the technology were not leveraged specifically for the aircraft flight requirements, flight operations could become more difficult. To ensure an efficient and comfortable flight, an adept pilot must understand both the high-level operation of the aircraft and the underlying technology that operates the components.

Considering the operation of the database technology within an SAP application, administrators and pilots have similar skill requirements. Operating SAP applications without considering the optimizations within the database technology can cause inefficiencies, and configuring the database without considering the unique SAP application workload characteristics can produce unstable, sub-optimal performance results. Adept SAP administrators must understand how to best leverage the database technology specifically for the workloads of their SAP systems.

Traditionally, this is where administrative consoles have come up short. Database administration consoles were too generic to focus on application-specific requirements, and application administration consoles were not specific enough to fully leverage the database. SAP and IBM took huge steps to bridge this gap, though, with the development of the SAP DBA Cockpit for DB2. The result is a complete graphical interface for monitoring and administering the database, all within a single transaction in the SAP application.

Administrators can now easily access all of the database key performance indicators (KPIs) and make changes to improve system performance from within the same dialog screens. The most important information for SAP administrators is now at their fingertips, and the database administrative tasks can often be executed with a few simple mouse clicks. This single DBA Cockpit interface simplifies monitoring and maintenance tasks, and can reduce the overall time spent on database administration.

The DBA Cockpit contains two main sections: a large detailed display on the right, and a small navigation menu on the left. Figure 1.1 shows the System Configuration screen, which is the initial dialog screen displayed by running the DBACOCKPIT transaction. This can also be displayed at any time by clicking the System Configuration button, just above the left navigation menu.


Figure 1.1: The SAP DBA Cockpit for DB2 has a large display area on the right and a small navigation menu on the left.

The right display window contains a list of all the database systems that are configured for monitoring from the DBA Cockpit. The left navigation menu contains the following folders for navigating into database function groups:

- Performance - Display performance statistics for monitoring database memory, disk I/O, application resource usage, query execution, and more.
- Space - Review historical and real-time storage usage for table spaces, containers, and individual tables, and perform administrative functions to alter the logical and physical storage layout of the SAP database.
- Backup and Recovery Operations - Review historical backup and log archival information, and real-time log file system statistics.
- Database Configuration - Display and update database configuration parameters, configure partition groups and buffer pools, and adjust monitoring and automatic maintenance settings.
- Job Scheduling - Create, schedule, and monitor periodic jobs from a planning calendar.
- Alert Monitoring - View key database health alert statuses and messages, and enable notification for database alert threshold violations.
- Diagnostic Functions - View and filter messages from the database diagnostic logs, view optimizer access plans and recommended indexes for SQL statements, run SQL commands, view DB2 online help, and more.
- Workload Management - Set up, maintain, and monitor the different workloads and service classes configured for the SAP system in DB2's Workload Management.
- BW Administration - Change data distribution and analyze Multi-Dimensional Clustering in partitioned SAP NetWeaver BW databases.

The left navigation frame of SAP Enhancement Package 1 for SAP NetWeaver 7.0 contains two additional screens. The first entry links the user directly into the DB2 LUW main web page in the SAP Developer Network (SDN), allowing the user to browse the SDN from directly within the SAP GUI. The other screen launches the new web browser-based DBA Cockpit.

Several of the new features of the DBA Cockpit are now launched as WebDynpro browser applications. When one of these is clicked in the SAP GUI-based DBA Cockpit, the corresponding WebDynpro screen will automatically launch in the browser. The Start WebDynpro GUI menu entry launches the main page of the web browser-based DBA Cockpit, similar to the DBACOCKPIT transaction in the SAP GUI.

The contents of the left navigation menu may differ slightly among different versions of SAP BASIS, in order to leverage new functionality available in the latest releases of SAP and DB2. This book illustrates the latest features available in the DBA Cockpit in SAP Enhancement Package 1 for SAP NetWeaver 7.0.

Central Monitoring of Remote Systems


The DBA Cockpit allows administrators to configure connections to every SAP system from a single DBA Cockpit session. A Solution Manager instance or a standalone SAP NetWeaver instance can be installed for administrators to use for central monitoring and administration. You should keep this SAP system at the most current SAP release level, to maximize backward compatibility and make the most advanced DBA Cockpit features available for all systems.

Remote connections can be established using the database information from the System Landscape Directory (SLD). Alternatively, they can be configured manually from within the DBA Cockpit, using the DB Connections button at the top of the left navigation menu.

From the System Configuration screen, simply click the SLD System Import button. This provides a graphical interface to select and register the unregistered SAP systems into the cockpit. This allows the entire SAP system landscape to be centrally managed in the SLD, and provides a simple way to register any new or changed systems in your central DBA Cockpit.

Alternatively, click the Add button to manually register new databases into the cockpit. This allows administrators to register even non-SAP systems. Therefore, the DBA Cockpit can provide a single administrative GUI for every SAP and non-SAP database in your IT landscape.

Summary
The SAP DBA Cockpit for DB2 is a powerful interface for SAP pilots to centrally manage the DB2 database operations of their SAP systems. It provides a single point of administration for every DB2 database in your organization. The SAP DBA Cockpit for DB2 gives administrators fast and easy access to all of the most important DB2 database information, all from within the familiar look and feel of SAP GUI.

Chapter 2
Performance Monitoring
Are You Flying a Glider or a Jet?


The DBA Cockpit performance monitors provide a simple interface to easily access all of the key performance data for the DB2 database. By understanding the DBA Cockpit information and integrating it with the other performance data available within SAP, administrators can more effectively optimize the performance of their SAP systems.

Performance tuning can be a very complicated task, involving many different areas of the SAP application. The database is one of the key areas, and the SAP DBA Cockpit for DB2 can greatly reduce the effort of monitoring and tuning it. The DBACOCKPIT transaction efficiently organizes the database performance statistics into the following sections, containing easily accessible screens and tabs for important, related information:

- Performance: Partition Overview
- Performance: Database Snapshot
- Performance: Schemas
- Performance: Buffer Pool Snapshot
- Performance: Tablespace Snapshot
- Performance: Table Snapshot
- Performance: Application Snapshot
- Performance: SQL Cache Snapshot
- Performance: Lock Waits and Deadlocks
- Performance: Active Inplace Table Reorganizations
- Performance: History - Database
- Performance: History - Tables

Everything needed by a database administrator is only a click or two away.

Performance: Partition Overview


Database Partitioning Feature (DPF) is one of the key DB2 features for improving the performance of SAP NetWeaver BW systems. DPF allows an SAP NetWeaver BW database to scale out incrementally on lower-cost hardware, or grow massive data warehouses across multiple, large servers. The goal of database partitioning is to divide the database workload evenly across multiple partitions, perhaps on different physical machines, so that long-running SQL statements can be divided and conquered. If the workload is balanced evenly across all partitions, all then operate on an equal share of the data and process their intermediate result sets in about the same amount of time. This equal division of processing minimizes the overall response time and maximizes performance.

To access the partition overview, shown in Figure 2.1, click Performance → Partitions in the navigation frame of the DBA Cockpit. This displays the most important performance statistics for each active partition in the current SAP NetWeaver BW system. For each partition, this overview shows the total number and size of the buffer pools, key I/O read and write characteristics, SQL statement executions, and package cache statistics. Ideally, a well-balanced system will have similar values on each partition for all of these characteristics.

Probably the most important performance indicator is the buffer pool hit ratio. This can be calculated by comparing the number of logical and physical reads. Alternatively, it can be displayed by double-clicking one of the partitions to view
the database snapshot data from that partition. On each partition, the index hit ratio should be about 98 percent, and the data hit ratio should be 95 to 98 percent.
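
For a quick check outside the cockpit, these hit ratios can also be computed directly from the DB2 snapshot administrative views. The following is a minimal sketch, assuming the SYSIBMADM.SNAPBP view available in DB2 9.5, run from the DB2 command line:

db2 "SELECT dbpartitionnum, bp_name,
  DEC(100 * (1 - DOUBLE(pool_data_p_reads) / NULLIF(pool_data_l_reads, 0)), 5, 2) AS data_hit_pct,
  DEC(100 * (1 - DOUBLE(pool_index_p_reads) / NULLIF(pool_index_l_reads, 0)), 5, 2) AS index_hit_pct
  FROM sysibmadm.snapbp ORDER BY dbpartitionnum"

A per-partition row well below the 95 to 98 percent targets quickly identifies which partition deserves a closer look in the cockpit.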

Figure 2.1: The performance characteristics of the DB2 database partitions are shown in the Performance: Partition Overview screen.

Administrators should try to balance I/O as evenly as possible across all partitions in the system. The easiest way to achieve this is to distribute all large or heavily accessed tables across all partitions. However, for very large systems with a very high number of partitions, it might be impractical to distribute tables thinly across all partitions. In this case, heavily accessed tables can be balanced equally across subsets of partitions. For example, one heavily accessed InfoCube can reside on partitions 1 through 9, and another heavily accessed InfoCube can reside on partitions 10 through 19. The most important point is to try to keep database size and I/O activity as balanced as possible across all partitions, so that the database leverages the full processing capacity of all partitions equally.

Partitioned SAP NetWeaver BW databases have unique package cache requirements. Since all application servers connect to the Administration Partition (partition 0), all SAP basis function-related SQL statements will only be compiled and performed on partition 0. Therefore, the Administration Partition requires a bigger package cache than other data partitions. Package cache quality should be 95 to 98 percent on each partition.

Performance: Database Snapshot


The database performance dialog of the DBA Cockpit is the equivalent of running the ST04 transaction code. This screen, shown in Figure 2.2, contains tabs for each of the following key database performance indicators (KPIs):

- Buffer pool
- Cache
- Asynchronous I/O
- Direct I/O
- Real-time statistics
- Locks and deadlocks
- Logging
- Calls
- Sorts
- XML storage

By default, the database performance monitor displays database statistics since the last system reset. The system can be manually reset at any time by clicking the Reset button at the top of the screen. To the right of the Reset button, you will find a Since Reset button and a Since DBM Start button. These toggle the statistics between the values since the last reset, and the values since the start of the database manager (the DB2 instance).

Figure 2.2: This tab of the database snapshot dialog displays statistics about the buffer pool.

The Buffer Pool


Disk I/O is relatively slow compared to other types of database operations. Therefore, if a database reduces disk I/O and performs most disk I/O operations in the background (asynchronous), performance generally improves. On the other hand, if an SQL statement is forced to wait for disk I/O (synchronous), performance generally declines. Administrators should strive for high buffer quality, fast physical I/O, and few synchronous reads. All of this information is available in the DBA Cockpit buffer pool statistics, shown in Figure 2.2.

High buffer quality is probably one of the most important criteria for performance. If an agent can find the pages it needs already in memory, I/O wait is reduced and response time improves. For peak performance, overall buffer quality for the entire database should be above 95 percent, with data hit ratios above 95 percent and index hit ratios above 98 percent. Hit ratios can be improved by increasing buffer pool size, compressing the database, improving cluster ratios for SAP NetWeaver BW, or by optimizing buffer pool allocation, which can be done automatically by the DB2 Self Tuning Memory Manager (STMM).

Buffer pool hit ratios depend on the ratio of logical and physical reads. Each request for a page of table or index data is referred to as a logical read. In a well-tuned system, the majority of logical read requests will be satisfied from the buffer pool, resulting in buffer pool hits. If a page is not in the buffer pool, a buffer pool miss occurs, and the page must be read from disk, which is called a physical read. The buffer pool quality is the ratio of the number of page requests found in the buffer pool to the total number of logical read requests.

Physical reads and writes are unavoidable, because new transactions are always reading and writing new data to the database. However, a properly configured database will perform most disk I/O asynchronously and in parallel, thereby minimizing the I/O wait experienced by the client and maintaining high buffer quality.

Physical reads and writes can either be synchronous or asynchronous, depending on which DB2 agent (process or thread) performs the I/O operation. Synchronous I/O is performed directly by the database agent working on behalf of the client connection, and asynchronous I/O is performed by the DB2 prefetchers and page cleaners. The statistic labeled Average Time for the Physical Reads and Physical Writes in the DBA Cockpit indicates I/O subsystem performance. An average physical read time above 10 ms and/or an average physical write time above 5 ms indicates an I/O subsystem that is not performing optimally.

Asynchronous reads are performed in the background by the DB2 prefetchers, which anticipate the needs of the applications, and load, from disk into buffer pools, the pages that are likely to be required. In most cases, the prefetchers read these pages just before they are needed. For example, during a full table select, the prefetchers will populate the buffer pool with all of the pages containing data for that table, so that when the agent tries to access that data, it is already available in memory.

Synchronous reads occur when an agent reads a page of data from disk itself, rather than signaling the prefetchers to read the page asynchronously. This occurs most frequently during random requests for single pages, which are common in OLTP applications operating on single rows of data via an index. However, this may also occur if the prefetchers are all busy with other prefetch requests.

Each synchronous read request results in I/O wait at the client, because the agent processing the SQL statement must directly perform a read from disk before it can continue query processing. For single-row access, it is just as efficient for the agent to read the single page itself. However, for prefetch requests involving multiple pages, it is far more efficient to have the prefetchers read these pages in the background. A properly configured system performs most read operations asynchronously and minimizes overall system I/O wait.

If a large percentage of read operations are synchronous, it might indicate that the prefetchers are not doing their job effectively. This might be due to slow disks or an inefficient database layout, or the system might just require more prefetchers to satisfy the database workload.

The physical writes statistic specifies the number of pages written from buffer pool to disk. Similar to a read, a write can be either synchronous or asynchronous, depending on the agent that performs it. Asynchronous writes are performed in the background by the DB2 page cleaners at specific checkpoints. These are far more efficient than synchronous writes, which are performed directly by the DB2 agents to make room in the buffer pool for new data pages being accessed by that agent.

DB2 can perform page cleaning in two different ways: Standard Page Cleaning or Proactive Page Cleaning. By default, all new SAP installations use Standard Page Cleaning.

Standard Page Cleaning


Using Standard Page Cleaning, page cleaners will asynchronously write data to disk whenever one of the following occurs:

- CHNGPGS_THRESH is exceeded. The database configuration parameter CHNGPGS_THRESH specifies the maximum percentage of changed pages allowed within a DB2 buffer pool. Once a buffer pool reaches this percentage of changed pages, the DB2 page cleaners are signaled to write those changed pages to disk in the table space containers. This parameter is set to 40 percent by SAPinst. To find it in the cockpit, click Configuration → Database → Optimization.
- SOFTMAX is exceeded. The database configuration parameter SOFTMAX specifies the maximum total size of changed pages in the buffer pool that have not yet been written to disk. You can find this parameter in the cockpit by clicking Configuration → Database → Logging. It is specified as a percentage of one log file in size, and is set to 300 by SAPinst. This means that the buffer pool can contain a maximum of three log files' worth of changes (300 percent of one log file). Once this parameter is exceeded, the database enters a log sequence number (LSN) gap situation, and the page cleaners are signaled to begin writing those changed pages from the buffer pool to disk in the table space containers.

Whenever either of these two thresholds is exceeded, the DB2 page cleaners begin writing changed pages from the buffer pool(s) to disk. This avoids LSN gap situations, and ensures that there is room in the buffer pool for future prefetch requests.
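
Both thresholds can be checked quickly from the DB2 command line. A small sketch on Linux or UNIX, assuming a hypothetical database name PRD (substitute your own SID):

db2 get db cfg for PRD | grep -iE "CHNGPGS_THRESH|SOFTMAX"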

Proactive Page Cleaning


DB2 also has another method of page cleaning, Proactive Page Cleaning, which is not currently used by default by SAP. Performance testing has indicated that Standard Page Cleaning currently performs marginally better for most SAP workloads. However, for OLTP systems with very update-intensive workloads, performance might improve slightly by enabling Proactive Page Cleaning in the DB2 profile registry:
db2set DB2_USE_ALTERNATE_PAGE_CLEANING=ON
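
A registry change like this is only read at instance startup, so a sketch of verifying and activating it might look like the following:

db2set -all
db2stop
db2start

The first command lists all registry variables currently set, so you can confirm DB2_USE_ALTERNATE_PAGE_CLEANING=ON appears; the restart makes the new page cleaning method take effect.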

Using Proactive Page Cleaning, the page cleaners no longer respond to the CHNGPGS_THRESH parameter. Rather than keeping a percentage of the buffer pool clean, this alternate method only uses SOFTMAX, and DB2 keeps track of good victim pages and their locations in the buffer pool. Good victim pages include those that have been recently written to disk and are unlikely to be read again soon.

If either an LSN gap occurs, or the number of good victim pages drops below an acceptable threshold, the page cleaners are triggered. They proceed to search the buffer pool, write out pages, and keep track of these new good victim pages. The page cleaners will not only write out pages in an LSN gap situation,
but will also write pages that are likely to enter an LSN gap situation soon, based on the current level of activity. When the database agents need to read new data into the buffer pool, the prefetchers read the list of good victim pages, rather than searching through the buffer pool for victims. This tends to spread writes more evenly, by writing smaller amounts more frequently. By spreading the page cleaner write operations over a greater period of time, and avoiding buffer pool searches for victim pages, high-update workloads might see performance improvements.

Since most SAP workloads on DB2 9.5 have been found to perform marginally better using Standard Page Cleaning, we recommend using it for all SAP applications. Future changes to Proactive Page Cleaning might increase its usage within SAP. For now, though, if you have a uniquely heavy-update workload that you think might benefit from Proactive Page Cleaning, test the change thoroughly to determine the effect on performance before enabling it in the production system.

The No Victim Buffers element in the DBA Cockpit can help evaluate whether you have enough page cleaners when using Proactive Page Cleaning. This element displays the number of times a database agent was unable to find pre-selected victim pages in the buffer pool during a prefetch request, and instead needed to search through the buffer pool for suitable victim pages. If this element is high relative to the number of logical reads, the database page cleaners are not keeping up with the changes occurring in the database, and more page cleaners are likely required.

If Proactive Page Cleaning is off, and you are using Standard Page Cleaning, the No Victim Buffers monitor element can be safely ignored. In the default configuration, Standard Page Cleaning is triggered by CHNGPGS_THRESH and SOFTMAX, and the prefetchers will usually search the buffer pool to find suitable victim pages. Therefore, you can expect this monitor element to be large.

Synchronous Writes
If the database must read data from disk into a buffer pool, and there are no free pages remaining in the buffer pool, DB2 must make room by replacing existing
data pages (victims) with the data pages being read. If these victim buffer pool pages contain changed data, those pages must be written to disk before they are swapped out of memory. In this case, the pages are written to disk synchronously by the DB2 agent processing the SQL statement.

Synchronous writes always result in I/O wait at the client, because the write operation must occur synchronously, before the buffer pool page can be victimized (replaced with a new page from disk). A large percentage of synchronous write operations indicates that the DB2 page cleaners are not operating effectively. This might be due to slow disks or unbalanced I/O in the storage system, or the system might require more page cleaners to handle the system workload.

Temporary Table Space I/O


The DBA Cockpit also contains I/O characteristics for the temporary table spaces, displaying the temporary logical and physical reads for both data and indexes. The logical reads display the total number of read requests for temporary table space data. The physical reads display the number of read requests that were not satisfied from the buffer pool, and therefore, had to be read physically from disk. For most transactional systems, temporary table space I/O should be fairly low, since most calculations should be performed in memory. SAP NetWeaver BW systems might show larger temporary table space I/O, but large values here might still indicate inefficient queries or a need to create higher-level aggregates to improve query performance.

The Catalog Cache and Package Cache


The second tab in the DBA Cockpit database performance monitor is the Cache tab, shown in Figure 2.3. This tab displays the details for the database catalog cache and the package cache.

Figure 2.3: The Cache tab displays the Catalog Cache and Package Cache statistics.

The Catalog Cache


The catalog cache is a portion of database memory that is dedicated to caching table descriptor and authorization information from the database system catalog tables. These table descriptors include the table information used by DB2 during query optimization. When this data is accessed, it is first read from disk into the catalog cache, and then the database agents requesting this data read it from memory. Therefore, high hit ratios on this buffer are important for performance. If the most frequently accessed system catalog details can be cached in memory, unnecessary disk reads can be avoided.

A high catalog cache hit ratio is even more important in multi-partition SAP NetWeaver BW systems. In a partitioned SAP NetWeaver BW system, the system catalog tables all reside on the Administration Partition (partition 0). Therefore, if other partitions need to read system catalog information from disk, they must request this information from partition 0, which inserts it into the catalog cache on partition 0, and then sends the information to the catalog cache on the other partition. Caching most of the system catalog information at each partition avoids both disk I/O and network I/O, and reduces the workload on the Administration Partition. All of these contribute to better performance.

The default catalog cache size in new SAP installations is 2,560 4 KB pages. Well-configured systems should have a hit ratio of 98 percent and experience no overflows. If overflows occur, DB2 must allocate more memory from database shared memory into the catalog cache. Then, when some table descriptor and authorization information is no longer needed for active transactions, it is removed
from memory, and the cache is reduced to its configured size. This involves extra overhead in the system, and should be avoided by increasing the catalog cache size. The total number of overflows and the high-water mark can be used together with the cache quality to determine whether or not the default size is adequate for your workload. The catalog cache size is set by the CATALOGCACHE_SZ database configuration parameter. To view or change this parameter in the DBA Cockpit, click Configuration → Database → Database Memory.
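
The same indicators shown on the Cache tab can also be pulled from the snapshot administrative views. A minimal sketch, assuming the SYSIBMADM.SNAPDB view:

db2 "SELECT DEC(100 * (1 - DOUBLE(cat_cache_inserts) / NULLIF(cat_cache_lookups, 0)), 5, 2) AS cat_hit_pct,
  cat_cache_overflows FROM sysibmadm.snapdb"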

The Package Cache


The package cache is another important area of database memory. It is dedicated to caching compiled static and dynamic SQL statements and optimizer access plans. When a new dynamic SQL statement is executed, the DB2 optimizer compiles it, computes an access plan for reading the data pages required to satisfy the query, and then caches this information in the package cache. The database agents executing SQL statements then read this access plan from memory. If the same query is executed multiple times, the access plan can be read from memory, which avoids repeating the compilation and optimization phase of query processing.

Static SQL statements are embedded in application programs. These statements must be precompiled and bound into a package, which gets stored in the DB2 system catalog tables. SAP does not use static SQL, so this will not be discussed further.

By default, the package cache size in new SAP installations is dynamically configured and adjusted by DB2, as part of its Self Tuning Memory Manager (STMM) feature. This allows DB2 to adjust the size of this cache to optimize overall performance, based on your changing workload. The package cache hit ratio should remain above 98 percent, and overflows should not occur. The package cache size is set by the PCKCACHESZ database configuration parameter. To view or change the package cache size in the DBA Cockpit, click Configuration → Database → Self-Tuning Memory Manager.

Larger catalog and package cache sizes might be required if the workload involves a large number of SQL statements accessing many different database objects. However, in most cases, it is recommended that you keep the package
cache size set to AUTOMATIC, and let DB2 STMM configure the size based on your current available memory and optimal overall system performance.
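
If the package cache was previously set to a fixed size, handing it back to STMM is a single configuration change. A sketch, again assuming a hypothetical database named PRD:

db2 "UPDATE DB CFG FOR PRD USING PCKCACHESZ AUTOMATIC"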

Asynchronous I/O
The third tab in the Database Performance Monitor is Asynchronous I/O, shown in Figure 2.4. This displays information on the I/O reads and writes that use background read and write operations to perform disk I/O to and from the DB2 buffer pools, using the DB2 prefetchers and page cleaners. Asynchronous I/O operations anticipate application I/O requirements, and operate in the background to minimize I/O wait. Therefore, well-performing systems should perform the majority of disk I/O asynchronously.

Asynchronous I/O is performed by the DB2 prefetchers and page cleaners. The number of prefetchers and page cleaners should be configured to drive the physical disks in the underlying storage system to full capacity. This is set by two database configuration parameters: NUM_IOSERVERS for prefetchers and NUM_IOCLEANERS for page cleaners. Both are found in the cockpit under Configuration → Database → I/O. New SAP installations default both of these parameters to AUTOMATIC. This allows DB2 to calculate the optimal number of prefetchers and page cleaners, when the database is activated, based on the following formulae:
NUM_IOSERVERS  = MAX( MAX over all table spaces ( parallelism setting * MAX # of containers in a stripe set ), 3 )
NUM_IOCLEANERS = MAX( CEIL( # CPUs / # local logical partitions ) - 1, 1 )

The parallelism setting for prefetchers refers to the DB2_PARALLEL_IO registry variable, which tells DB2 the number of physical disks assembled into the containers in each table space. This ensures that the number of prefetchers is always greater than or equal to the number of disks available to any one table space, which enables asynchronous prefetch requests to drive every available disk in parallel.

The formula for page cleaners ensures that they are evenly distributed across all partitions in a partitioned SAP NetWeaver BW system, and that there are never more page cleaners than CPUs. This prevents asynchronous page cleaning from affecting normal transaction processing performance. Ideally, both asynchronous read and write times should be less than 5 ms.
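
To make the formulae concrete, consider a hypothetical layout: if DB2_PARALLEL_IO declares 8 physical disks per container and the widest stripe set in any table space has 2 containers, then NUM_IOSERVERS = MAX(8 * 2, 3) = 16. On a 16-CPU host running 4 local logical partitions, NUM_IOCLEANERS = MAX(CEIL(16 / 4) - 1, 1) = 3 on each partition.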

Figure 2.4: The Asynchronous I/O tab shows statistics for background disk I/O performed by the DB2 prefetchers and page cleaners.

Direct I/O
Direct I/O occurs whenever a DB2 agent reads from disk or writes to disk without using the DB2 buffer pools. Direct I/O is performed in units, the smallest being a 512-byte disk sector. Direct reads always occur when the database reads LONG or LOB data, and when a database backup is performed. Direct writes always occur when LONG or LOB data is written to disk, and when database restore and load operations are performed. The Direct I/O tab of the DBA Cockpit screen is shown in Figure 2.5.

Direct I/O should be extremely fast, because it operates on entire disk sectors. Therefore, read and write times should generally be under 2 ms. The average I/O per request should be proportional to the average size of the LOB columns in the database.

Figure 2.5: The Direct I/O tab displays statistics for database disk I/O that is not buffered in memory by the DB2 buffer pools.

Real-Time Statistics (RTS)


The concept of Real-Time Statistics (RTS) was first introduced in DB2 9.5, and SAP Enhancement Package 1 for SAP NetWeaver 7.0 now contains a performance monitoring screen for this new DB2 feature. RTS allows DB2 to trigger either statistics collection or estimation during query compilation, if table statistics are either absent or stale. If statistics collection would exceed 5 seconds, it is done in the background. Otherwise, it may even be done synchronously during query compilation, depending on the cost of the query relative to the cost of the statistics collection. This feature ensures that recent statistics are always available for queries, and that performance never degrades excessively due to stale statistics.

The information available in the DBA Cockpit, shown in Figure 2.6, is valuable for determining the performance impact of RTS. It might suggest the need for more structured statistics collection for some tables in the system.

Figure 2.6: The Real-Time Statistics tab shows details related to RTS statistics collection.


The statistics cache is a portion of the catalog cache used to store real-time statistics information. If RTS is being frequently triggered, a larger catalog cache might be required.

Asynchronously collected statistics occur when synchronous statistics collection during query compilation would take longer than 5 seconds. Rather than consuming this time synchronously during query compilation, statistics collection is instead started as a background job, so that subsequent queries will benefit from newer statistics.

Synchronous statistics collection occurs when a RUNSTATS is triggered to collect statistics during query compilation. This RUNSTATS may or may not be sampled, depending on the RUNSTATS profile for the table and the time estimate for statistics collection. The end user might experience a maximum of 5 seconds of extra time running this query, due to the synchronous RUNSTATS. The number of synchronous RUNSTATS occurrences and the total time consumed by those occurrences are displayed in the cockpit.

The final piece of data for RTS is based on statistics fabrication (or statistics estimation). If a sampled RUNSTATS table or index scan would consume too much time, metadata stored in the data and index manager system catalog tables is used to estimate the current table statistics. Those statistics are immediately made available in memory for all other queries to use, until a RUNSTATS is performed on the table. In the cockpit, statistics estimation is displayed by the number of statistics collections during query compilation, and the time spent during query compilation.
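
If the monitor shows RTS repeatedly collecting statistics for the same volatile tables, scheduling an explicit RUNSTATS for those tables avoids the compile-time cost. A sketch for a single table (the table name is hypothetical; SAP tables normally reside in a schema such as SAPSR3):

db2 "RUNSTATS ON TABLE sapsr3.zsalesdoc WITH DISTRIBUTION AND SAMPLED DETAILED INDEXES ALL"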

Locks and Deadlocks


Whenever table records are accessed, DB2 places locks on those records to maintain transaction integrity and ensure that two transactions cannot update the same data at the same time. The type of lock used by DB2 depends on the isolation level defined for the application accessing those records. Traditional DB2 locking involves the following isolation levels, ordered by increasingly restrictive locking:
- UR (Uncommitted Read) - Read operations do not acquire any locks. Uncommitted updates of other transactions can be read immediately.
- CS (Cursor Stability) - Read-only locks are placed on the current record being accessed by a cursor. If that record contains an uncommitted update, the read of that row must wait until that update is committed. This ensures that the application cannot read uncommitted data, and that the current position of the cursor cannot be changed while the application is accessing it.
- RS (Read Stability) - Read-only locks are placed on the entire result set retrieved within a unit of work, and those locks are held until the unit of work is committed or rolled back. This ensures that any row read during a unit of work (UOW) remains unchanged until the UOW commits, and that the application cannot read uncommitted changes from other transactions.
- RR (Repeatable Read) - Read-only locks are placed on all records referenced during the processing of the current UOW. This includes all rows in the result set, plus any rows evaluated and excluded due to WHERE clause restrictions in the query. This ensures that new rows do not appear in the result set, existing rows remain unchanged, and uncommitted updates from other transactions cannot be read.

The default isolation level for most SAP applications is Uncommitted Read, which allows the highest level of concurrency within the database. SAP transaction integrity is managed within the SAP application. One SAP transaction may involve multiple database transactions, each of which is committed into the SAP update tables. While one SAP transaction updates data in the update tables, other SAP transactions are reading committed data from the tables containing the permanent, committed data. Therefore, concurrent SAP transactions always read committed data. When an SAP transaction is finally committed, those update table records are applied to the target database tables by the SAP update work processes, and other transactions then see the committed changes from the entire SAP transaction.
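
The isolation level can also be requested for an individual statement. As a sketch, the WITH UR clause below asks for Uncommitted Read on a single query against the standard SAP company code table, assuming the default SAPSR3 schema:

db2 "SELECT mandt, bukrs, butxt FROM sapsr3.t001 WITH UR"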

One potential exception to the UR default isolation level occurs when accessing cluster or pool tables. Since reading a single logical row may involve reading multiple physical rows, more restrictive locking might be required. SAP first tries to read the logical row with UR. If this does not produce a consistent read of all physical rows, SAP will read again, first trying CS, and if necessary, finally reading with RS, which guarantees read consistency for all physical rows in the logical record. However, inconsistent reads on logical rows using UR rarely occur, and most cluster/pool table reads succeed the first time with UR.

Database locks are stored in a portion of database memory called the lock list. When row locks are acquired, they are added to this lock list. If the size of the row locks exceeds the size of the lock list, DB2 will convert multiple row locks on a single table into a single table lock. This lock escalation frees up space in the lock list for other row locks. However, it can also reduce concurrent access to the table involved in the escalation. At best, this might reduce performance for applications accessing that table; at worst, it might result in increased lock waits or deadlock scenarios in other concurrently running applications.

Normal lock escalations allow read access to the locked tables, but force writes to wait for the application holding the lock to commit. Exclusive lock escalations also disallow reads, thereby reducing concurrency even further. Therefore, administrators should try to completely avoid lock escalations, by ensuring that the lock list is large enough to contain the locks for the concurrent activity in the SAP system.

The size of the lock list is set by the LOCKLIST database configuration parameter, which can be found in the cockpit under Configuration → Database → Self-Tuning Memory Manager. Lock list utilization can be calculated using the lock_list_in_use monitor element and the lock list size. If utilization is high, consider increasing the lock list size. These details can be easily found within the Locks and Deadlocks section of the SAP DBA Cockpit for DB2, which is shown in Figure 2.7.

By default, SAPinst enables DB2 STMM. Therefore, LOCKLIST is set to AUTOMATIC, allowing DB2 to dynamically adjust the size of the lock list to avoid
lock escalations and optimize overall system performance. Normally, lock escalation is extremely rare for databases with a properly configured lock list or for databases using STMM.
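
That utilization calculation can be sketched directly against the administrative views, joining the database snapshot with the configuration view (LOCKLIST is configured in 4 KB pages; the casts below are one plausible way to combine the two):

db2 "SELECT d.lock_list_in_use AS bytes_in_use,
  BIGINT(c.value) * 4096 AS lock_list_bytes,
  DEC(100.0 * d.lock_list_in_use / (BIGINT(c.value) * 4096), 5, 2) AS pct_used
  FROM sysibmadm.snapdb d, sysibmadm.dbcfg c WHERE c.name = 'locklist'"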

Figure 2.7: The Locks and Deadlocks tab displays information on lock management and deadlock occurrences.

Another parameter automatically tuned by STMM is MAXLOCKS. This specifies the maximum percentage of the lock list that can be consumed by a single application before lock escalation will occur for locks held by that application. Using STMM, DB2 can automatically adjust this percentage, depending on the number of concurrent transactions and the number of locks held by each concurrent unit of work.

If there is only one active transaction, DB2 will adjust this to a large percentage. However, if many applications are holding locks, this percentage might need to be lower to avoid a scenario where one application consumes most of the lock list, while the others quickly run out of space in the lock list and are forced to escalate. Properly configuring the LOCKLIST and MAXLOCKS parameters or using STMM will prevent lock escalations.

If lock escalations are occurring, abnormally large values can be expected for the lock wait monitor elements, too. When a lock escalation occurs, other applications accessing that same table must wait for the escalating application to commit. In addition, if more applications are waiting for table locks to be released, there is a greater possibility that one of these waiting applications already holds a lock that will be requested by the escalating application. This would result in a deadlock, with each application waiting for locks already held by the other.

Large lock wait values without lock escalations or deadlocks might indicate that custom applications are not committing their units of work efficiently. Custom applications should try to hold locks for as little time as possible, by performing efficient SQL statements and accessing only required records, and by performing related updates together, followed immediately by committing the unit of work. Infrequent commits can hold locks excessively long, and increase lock wait scenarios.

A lock timeout occurs when an application waits to acquire a lock for longer than the LOCKTIMEOUT database configuration parameter, which is set to 3,600 seconds (1 hour). This default value is much larger than any application should be required to wait for locks. If a lock timeout occurs, an application has probably hung in the middle of a unit of work, and is holding locks abnormally. In this scenario, an administrator will likely need to identify the hung database agent, and manually terminate that application using a command:
db2 force application (appl_handle)

This will cause a rollback and release the locks currently held by that application.
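The application handle can be found with the LIST APPLICATIONS command before forcing; the handle 4321 below is purely illustrative. Quoting the command protects the parentheses from the Unix shell:

# identify the hung agent, then terminate it
db2 list applications for database <SID> show detail
db2 "force application (4321)"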

Logging
The transactional log files of the database maintain database integrity by containing a physical copy, on disk, of all committed database transactions. When data is updated, the changes are made directly in the DB2 buffer pool, and logged in the DB2 log buffer. When a transaction commits, each entry in the log buffer must be successfully written from the log buffer to the log files before the commit
returns successfully to the client. Since writes to the log files occur synchronously with each commit, fast SAP dialog response times depend on fast writes to the DB2 transactional log files.

DB2 contains two kinds of log files: primary and secondary. The number and size of these log files are set with the LOGPRIMARY, LOGSECOND, and LOGFILSIZ database configuration parameters. Primary log files are pre-allocated when the database is created. Secondary log files are allocated on demand, whenever active transactions exceed the total size of the primary log files. Therefore, the total size allocated to primary log files should be large enough to hold all the log records expected from concurrent transactions during normal database activity. Secondary log files should only be required for infrequent spikes in activity, which may require additional log space.

Logging can be configured as either circular or archive logging. Circular logging reuses primary log files once they no longer contain log records required for crash recovery, which means that point-in-time recovery is not possible with circular logging. Therefore, circular logging is not suitable for production systems. Production systems require archive logging, which ensures that all log files produced during the entire lifetime of the database are saved, and that point-in-time recovery is always possible. When a primary log file becomes full, it is archived (copied) by DB2 to the locations set in the LOGARCHMETH1 and LOGARCHMETH2 database configuration parameters. Once the log file is no longer needed for crash recovery, it is renamed to the next log file sequence number, and its header is re-initialized for re-use. During normal workloads in properly configured systems, the next empty primary log file usually already exists when the current log file becomes full, and a transaction spanning multiple log files rarely incurs the overhead of allocating the next log file.

The Logging tab, shown in Figure 2.8, displays the number and size of log files available and allocated in the system. If the database is using secondary logs, you can see the number currently allocated, and the maximum secondary log file space used by the database.
Figure 2.8: The Logging tab displays information on log file consumption and logging I/O.

This information can help determine if the primary log space is adequate for your current workload. In general, we recommend that the log file system be 1.5 times the size of all primary and secondary log files configured for your system. This ensures enough space for all configured log files, plus extra space for inactive (online archive) logs waiting to be archived, or new logs being formatted for future use. If secondary log space is being used consistently, logging overhead may be reduced by allocating more primary log space, either by increasing the number of primary log files or by increasing the log file size. First, always ensure that the log file system is large enough to contain all of the configured logs.

The cockpit also displays the database application ID with the oldest uncommitted transaction. This can help identify long-running transactions that might need attention.
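These log parameters can also be reviewed and adjusted from the command line, and the 1.5-times rule can be checked with simple arithmetic. The following sketch uses purely illustrative values; LOGFILSIZ is specified in 4KB pages, so 16380 pages is roughly 64MB per log file:

# review the current logging configuration
db2 get db cfg for <SID> | grep -i -E "logprimary|logsecond|logfilsiz|logarchmeth"
# example: 60 primary and 40 secondary log files of 16380 pages each
db2 update db cfg for <SID> using LOGPRIMARY 60 LOGSECOND 40 LOGFILSIZ 16380
db2 update db cfg for <SID> using LOGARCHMETH1 DISK:/db2/<SID>/log_archive

With these illustrative values, the configured log space is (60 + 40) * 16380 * 4KB, or roughly 6.25GB, so the log file system should be sized at about 1.5 * 6.25GB, or approximately 9.4GB.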
The Log Buffer Consumption section is valuable for determining the effectiveness of page cleaning. The LSN Gap value specifies the percentage of the SOFTMAX checkpoint that is currently consumed in log files by dirty pages. This includes pages that have been changed in a buffer pool by both committed and uncommitted transactions, but which have not yet been written to disk in the table spaces. If this is above 100 percent, the page cleaners are unable to keep up with the transaction workload on the system, and more page cleaners might be required. The Restart Range value is similar, but corresponds to the percentage of SOFTMAX occupied in the log files by committed transactions. Statements in this Restart Range will need to be rolled forward during crash recovery. Again, if this is greater than 100 percent, more page cleaners might be required.

The I/O characteristics of the log file system are also provided. Log Pages Read displays the physical log file page reads required during rollback operations in the database, and Log Pages Written displays the pages of transactional data written into the log files. The transaction commit time depends on the log file system's write performance. Therefore, having the fastest log file system possible minimizes dialog response time. A well-performing system should have log file system write times below 2 ms.

Ideally, very few log buffer overflows should occur. This counter indicates the number of times any database agent has waited for log buffer flushes in order to write into the log buffer. Overflows can occur when large transactions produce a series of log records larger than the buffer, or when high transaction volumes consume the entire buffer with many smaller log records simultaneously. When this occurs, all in-flight transactions must wait for the log buffer to be written to disk before they can continue writing log records into the buffer. This introduces I/O wait into all in-flight transactions and hurts performance. For optimal performance, the log buffer should be large enough to avoid overflows during normal workloads.

Calls
The Calls tab, shown in Figure 2.9, contains a summary of the different types of SQL statements issued, and their performance impact on the SAP system. This displays the number of rows read, deleted, inserted, selected, and updated. These can be compared to the number of DML and DDL statements executed and their execution time, to understand the average number of rows read per SQL statement, and the time spent processing those statements within the database.
Figure 2.9: The Calls tab displays how different types of SQL statements contribute to the load on the database.

The Hash Joins section shows some interesting statistics on the hash join operations performed by the database. DB2 performs hash joins when large amounts of data are joined by equality predicates on columns of the same data type (for example, tab1.colA = tab2.colB). First, the inner table is scanned, and the relevant rows are copied into memory and partitioned by a hash function. The hash function is then applied to the rows from the outer table, and the join predicates are then only compared for inner and outer table rows hashing to the same partition.

If the hash join data exceeds sort heap memory, DB2 will consume temporary table space on disk to compute the join. Obviously, performance will be better if this can be avoided, and instead, the join can be done entirely within a sort heap. If the total hash join data exceeds the sort heap by less than 10 percent, this counts as a small overflow. If the number of small overflows is greater than 10 percent of the total overflows, avoiding these small overflows with a larger sort heap may improve performance. If a single partition of data from the hashing function (the set of rows hashing to the same value) is larger than the sort heap, a hash loop results. When this occurs, the intermediate join of that one section of data overflows to temporary table space, causing extra disk I/O for the join of individual hash partitions.

For performance reasons, always try to minimize the number of hash loops and hash join overflows. With DB2 9.5, the sort heap memory parameters default to automatic settings using the DB2 Self-Tuning Memory Manager. This allows DB2 to automatically adjust the available sort heap memory to avoid unnecessary hash join overflows or hash loops.

Sorts
The Sorts tab, shown in Figure 2.10, displays memory usage and overflows from database sorts. The Sort Overflows value is probably the most important one on this tab. Transactional systems should have less than one percent of total sorts overflowing from sort memory to temporary table space. BW systems may have more, but overall, sort memory should be configured to avoid most sort overflows.

Figure 2.10: The Sorts tab shows the memory consumed by database sort operations.

The private and shared sort heap parameters can be compared with the current allocated memory and high-water mark, to determine whether the sort memory heaps are properly configured. DB2 9.5 defaults to automatic shared sort memory and the Self-Tuning Memory Manager. This allows DB2 to manage sort memory allocation based on overall system requirements, which avoids unnecessary sort memory allocation and prevents most sort overflows.
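The overflow percentage can also be computed outside the cockpit from the database snapshot, for example through the SYSIBMADM.SNAPDB administrative view available in DB2 9.5. A sketch; NULLIF guards against division by zero on an idle system:

-- percentage of sorts that overflowed to temporary table space
SELECT TOTAL_SORTS,
       SORT_OVERFLOWS,
       DEC(100.0 * SORT_OVERFLOWS / NULLIF(TOTAL_SORTS, 0), 5, 2)
         AS PCT_OVERFLOWED
FROM SYSIBMADM.SNAPDB;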

XML Storage
The XML Storage tab provides I/O characteristics for XML Storage Objects (XDA). This is only valid for database tables using the XML data type to leverage the DB2 pureXML features for storing and accessing XML documents natively in XML format. As of the writing of this book, SAP does not use the DB2 pureXML features. Therefore, this tab is really only relevant for non-SAP databases cataloged into the cockpit, or for user tables created manually by SAP customers.

Performance: Schemas
Very few schemas should exist within an SAP database. The vast majority of database access is done through the SAP connection users, which default to SAP<SAPSID> for ABAP systems, and SAP<SAPSID>DB for Java systems. The only other users who generally connect are the SAP admin user, <SAPSID>ADM, and the DB2 instance owner, DB2<DBSID>. The Schemas dialog screen can be used to identify the activity of users connecting to any database partition from outside the SAP application. I/O performance characteristics of reads and writes can be monitored for each schema.

Performance: Buffer Pool Snapshot


The default installation of SAP on DB2 creates all table spaces using 16KB pages. By default, the only visible buffer pool is IBMDEFAULTBP, which is also created with 16KB pages. If table spaces with other page sizes are created, then buffer pools corresponding to each unique page size must be created, too. However, SAP recommends keeping everything in your system at a uniform page size of 16KB. This simplifies configuration and avoids the additional complexity involved when joining tables with different page sizes.

The buffer pool snapshot provides the logical and physical read statistics for the data, index, and temporary table spaces on all database partitions. If different
buffer pools have been created for different database objects, this provides an easy interface to compare the individual statistics for each buffer pool on each database partition. The initial screen contains a list of all visible buffer pools created in the system, along with an overview of their hit ratios and read characteristics. Double-clicking on any buffer pool partition returns a more detailed buffer pool snapshot for that particular buffer pool on that particular partition, as shown in Figure 2.11. This displays the data and index read statistics, buffer quality, and utilization state of the buffer pool. It also includes tabs showing the detailed asynchronous and direct I/O operations, and performance characteristics for this buffer pool. All of these details are important for proper performance tuning of each individual buffer pool.

Figure 2.11: The Buffer Pool Snapshot displays detailed I/O information for an individual buffer pool.
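For scripted monitoring, DB2 ships an administrative view that computes the same hit ratios per buffer pool and database partition. A minimal sketch:

-- overall, data, and index hit ratios for each buffer pool and partition
SELECT SUBSTR(BP_NAME, 1, 20) AS BP_NAME,
       DBPARTITIONNUM,
       TOTAL_HIT_RATIO_PERCENT,
       DATA_HIT_RATIO_PERCENT,
       INDEX_HIT_RATIO_PERCENT
FROM SYSIBMADM.BP_HITRATIO;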

As a safety net, DB2 is also pre-configured with hidden buffer pools for each possible page size (4K, 8K, 16K, and 32K). These hidden buffer pools ensure that an appropriate buffer pool is always available. They are used if the system does not contain enough memory to allocate the defined buffer pools, if errors occur while allocating the buffer pools during database activation, or if anything in the database performs I/O using a page size without a corresponding user-defined buffer pool. Since these hidden buffer pools are only 16 pages in size, performance will likely suffer if they are used. An entry is logged in the notification log whenever a hidden buffer pool is used.

Performance: Tablespace Snapshot


Evenly distributed I/O and fast access to the most frequently accessed data are critical for performance. The performance statistics for each individual table space can help administrators identify data hot spots or inadequate buffer pool configurations. The Performance: Tablespace Snapshot screen, shown in Figure 2.12, displays the I/O statistics of each table space on each partition.

First, the most frequently accessed table spaces should have the highest buffer pool hit ratios. Table spaces with a high number of logical reads should have a buffer pool quality of at least 95 to 98 percent. The frequently accessed index table spaces (with names ending in I) are especially critical for high hit ratios.

Next, the physical read and write times for all table spaces should be fairly fast. Ideally, both read and write times should be under 5 ms. If all table spaces have slower I/O, you might simply have slow disks. However, this might also be a sign of disk contention, especially if more frequently accessed table spaces are slower than others. To improve performance, spread the data across a greater number of physical disks, or move one or more frequently accessed tables to a new table space on a new series of disks. The Tablespace Snapshot can be used, together with the Operating System Monitor → Detailed analysis menu → Disk Snapshot (transaction ST06), to lay out table spaces and balance database I/O evenly across all SAPDATA file systems.
Figure 2.12: The Tablespace Snapshot displays the I/O characteristics of all table spaces.

Similar to the previous buffer pool snapshot, double-clicking any row displays a more detailed table space snapshot for the chosen table space and partition. This snapshot shows the detailed buffer pool statistics, and the asynchronous and direct I/O operations and performance characteristics.
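Average physical read and write times per table space can likewise be derived from the table space snapshot view. In the sketch below, the pool times are reported by DB2 in milliseconds, so dividing by the number of physical operations yields milliseconds per read or write:

-- average physical read/write times (ms) per table space
SELECT SUBSTR(TBSP_NAME, 1, 20) AS TBSP_NAME,
       DEC(POOL_READ_TIME * 1.0 /
           NULLIF(POOL_DATA_P_READS + POOL_INDEX_P_READS, 0), 9, 2)
         AS AVG_READ_MS,
       DEC(POOL_WRITE_TIME * 1.0 /
           NULLIF(POOL_DATA_WRITES + POOL_INDEX_WRITES, 0), 9, 2)
         AS AVG_WRITE_MS
FROM SYSIBMADM.SNAPTBSP
ORDER BY AVG_READ_MS DESC;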

Performance: Table Snapshot


Page reorganizations and overflow record accesses are two key performance indicators for individual tables. Page reorganizations occur when an insert or update is done to a data page that contains enough free space for the new data, but the free space is fragmented within that page. Before the insert or update is performed, the single page of data is reorganized to consolidate the free space at the end of the page, and then the insert or update proceeds. This extra overhead can hurt insert and update performance. However, if an update is being done, and a page reorganization cannot reclaim enough contiguous space for the updated row, the row must be moved to a new page. An overflow record (or pointer) is then created to point from the original location to the new location on the other
page. When this row is accessed, DB2 must perform two I/O reads instead of one: the first to read the pointer from the original location, and the second to read the data at the location the pointer references. If a table contains a large number of page reorganizations or overflow accesses, both of these problems can be fixed by reorganizing the table. Double-clicking any table from the screen in Figure 2.13 loads the Single Table Analysis screen (explained fully in Chapter 3). The table reorganization can be executed from Single Table Analysis, or scheduled through the DBA Planning Calendar (discussed in Chapter 4).

Figure 2.13: The Table Snapshot dialog displays data access characteristics of individual tables.

Also, if table space analysis has indicated unbalanced I/O, the table snapshot can be used to identify the most frequently accessed tables. If several heavily accessed tables reside in the same table space, I/O can be balanced by separating these tables into different table spaces on different sets of physical disks.
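The same two indicators can be pulled from the table snapshot view, and a flagged table can then be reorganized online from the command line. A sketch, runnable from the CLP; SAPSR3.VBAK is a purely illustrative table name:

-- tables with the most overflow accesses and page reorganizations
SELECT SUBSTR(TABSCHEMA, 1, 10) AS TABSCHEMA,
       SUBSTR(TABNAME, 1, 18) AS TABNAME,
       PAGE_REORGS,
       OVERFLOW_ACCESSES
FROM SYSIBMADM.SNAPTAB
ORDER BY OVERFLOW_ACCESSES DESC
FETCH FIRST 20 ROWS ONLY;

-- online, in-place reorganization of one table
REORG TABLE SAPSR3.VBAK INPLACE ALLOW WRITE ACCESS;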

Performance: Application Snapshot


The main dialog of the Performance: Application Snapshot screen displays a summary list of all the database applications with active connections to the database. This overview gives descriptions of the applications, their status, their buffer quality, and the number of reads performed. Almost all of these will correspond to SAP work processes. Double-clicking any application in the initial list displays a detailed snapshot for that single application. Shown in Figure 2.14, this snapshot displays all of the key application statistics, organized conveniently into unique screen tabs.

Figure 2.14: The Application Snapshot contains many tabs for accessing detailed information on the resource consumption of the database applications.

The first Application tab describes the application on the host, and displays the client user and SAP application server executing this application. The Agents tab
describes the number of agents, processing time, and memory usage for this application. Note that with DB2 9.5, the parameters for the number of agents in the database default to automatic, and are dynamically maintained by DB2 to optimize memory utilization and performance. The Buffer Pool tab displays the application's detailed data, index, and temporary table space read statistics, and buffer pool quality. The read statistics can indicate the I/O efficiency of the queries in this application. The performance details of the non-buffered I/O (e.g., LOB access, backup and restore) are shown in the Direct I/O tab.

The Locks and Deadlocks, Calls, Sorts, and Cache tabs contain the same information as the database performance tabs, except that the details are specific to the currently selected application. If an application is holding too many locks, causing lock escalations, or involved in deadlocks, consider looking more closely at the application coding and SQL. A properly coded application holds as few locks as possible and commits as frequently as possible, so that locks are released quickly; it also avoids performing unnecessary calculations inside SQL units of work. The SQL statements should also try to reduce the amount of data accessed during a query, and only return the rows of relevance for the application.

The Unit of Work tab displays the length of time and log space consumption of the current transaction. The Statement tab shows the statistics of the current statement within the current unit of work. The Statement Text tab displays the current SQL statement being executed. This screen also contains buttons to load the optimizer execution plan for the statement, or to view the ABAP source code for the program executing this SQL statement. These tools can be used to analyze the program logic and SQL execution plans, to ensure efficient SQL and indexed access to the data pages being fetched.

Performance: SQL Cache Snapshot


If administrators are going to spend their valuable time tuning the performance of individual queries, then it makes sense to focus on the queries that most affect the
system. The Performance: SQL Cache Snapshot screen, shown in Figure 2.15, allows administrators to easily identify the queries that are consuming the largest amount of resources.

Figure 2.15: The SQL Cache Snapshot shows the execution time and resource consumption of queries that have run in the system.

In the screen, the columns listing numbers of executions, total execution time, and average execution time allow the DBA to identify the queries that take the most execution time. The buffer pool hit ratio is given for each query, to identify how much disk I/O the query is causing. The next few columns provide valuable information about SQL query quality and I/O quality. The Rows Read/Rows Processed column gives a ratio of how many rows must be read to identify the rows required for the final result set. The BP Gets/Rows Processed column indicates the number of pages that must be accessed from the buffer pool to read the final result set. The BP Gets/Execution column provides the number of pages read from buffer pool per query execution. If the number of rows read or the ratio of rows read to rows processed is high, the index advisor might help to identify a better index, to reduce the number of rows
evaluated by the query. If the BP gets are high, clustering the table differently might improve performance, or a table reorganization might help to reduce the number of pages read from disk.

The last few columns of the Performance: SQL Cache Snapshot screen provide information on sorting. A query that displays a large number of rows written indicates sort overflows to disk in the temporary table space. The cockpit also displays the total number of sorts, the number of sort overflows, and the total time spent sorting during the query. If sort overflows are occurring, and the total sort time is a significant portion of the average execution time, further analysis of the query, indexes, and potentially the sort parameters might be required to try to reduce sort overflows.

Click the Explain button, and the optimizer execution plan is displayed, showing the query cost and join methods. From there, click the Details button to open a new window with all of the detailed optimizer data, including all indexes and database objects accessed, join methods, and cardinality estimates for each join. Click the Index Advisor button, and the DB2 advisor is run to suggest new indexes that optimize data access for this query. (Both the Optimizer Explain and the Index Advisor interfaces are explained in detail in Chapter 8.)
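Administrators who want to script this analysis can rank statements directly from the dynamic SQL snapshot view. A sketch of a top-consumers query:

-- top five statements by total execution time
SELECT NUM_EXECUTIONS,
       TOTAL_EXEC_TIME,
       ROWS_READ,
       SUBSTR(STMT_TEXT, 1, 60) AS STATEMENT
FROM SYSIBMADM.SNAPDYN_SQL
ORDER BY TOTAL_EXEC_TIME DESC
FETCH FIRST 5 ROWS ONLY;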

Performance: Lock Waits and Deadlocks


A lock wait occurs when one application acquires a lock on a database object, and then another application requests an incompatible lock on that same database object. When this occurs, the second application must wait for the first to release its lock, through either a commit or rollback. The amount of time applications wait to acquire locks is the lock wait. If the first application were to then request a lock already held by the second, the two applications would enter a deadlock scenario. In this state, both applications are waiting for locks held by the other, and neither can proceed. Deadlocks can affect any relational database. They are usually caused by infrequent or missing commit statements within custom applications. DB2 resolves deadlocks automatically, by periodically checking for their existence, and when found, randomly selecting one of the deadlocked applications to
roll back. The frequency of deadlock checks is set by the database configuration parameter DLCHKTIME, which defaults to 300,000 ms (5 minutes) in SAP NetWeaver 7.0 systems. The rolled-back application fails with an SQL0911N error, and all of its locks are released. This allows the other application to acquire its locks and proceed.

The active lock waits and deadlock scenarios can be seen through the Performance: Lock Waits and Deadlocks screen shown in Figure 2.16. The screen lists the database agents and lock types involved in all active lock waits and deadlocks, and includes buttons to view the last SQL statement from each unit of work involved in these scenarios. This provides real-time analysis of the applications causing locking issues.

Figure 2.16: The Lock Waits and Deadlocks dialog displays the current lock wait and deadlock scenarios that are actively occurring in the system.
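The same scenarios can be examined from the command line through the SYSIBMADM.LOCKWAITS administrative view in DB2 9.5, which lists each waiting agent together with the agent holding the contested lock. A minimal sketch:

-- applications currently waiting for locks, and who is blocking them
SELECT AGENT_ID,
       SUBSTR(TABSCHEMA, 1, 10) AS TABSCHEMA,
       SUBSTR(TABNAME, 1, 18) AS TABNAME,
       LOCK_MODE,
       LOCK_MODE_REQUESTED,
       AGENT_ID_HOLDING_LK
FROM SYSIBMADM.LOCKWAITS;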

Performance: Active Inplace Table Reorganizations


Reorganizations of large tables can take a long time to run. DB2 provides the ability to perform online table reorganizations in-place in the table space, so that it is not necessary to consume the table's size in temporary table space to perform the reorganization. The Performance: Active Inplace Table Reorganizations screen allows the DBA to monitor and administer any active in-place reorganization jobs. There are even buttons to allow the DBA to pause, resume, or stop active table reorganizations, if necessary.

Performance: History → Database
Catching performance problems in action is a reactive process. All administrators should try to be proactive about monitoring performance trends and taking action to prevent potential problems before they occur. Having easy access to these historical trends makes proactive analysis much easier, and SAP can be configured to collect this historical information when the system is registered into the DBA Cockpit.

The Performance: History → Database screen shown in Figure 2.17 displays many key historical performance indicators. The main screen displays the average read and write times, the number of reads and writes, and the number of commits and rollbacks, as well as information on lock waits, lock escalations, and deadlocks. This information can be displayed in two ways. Select Total Day to display the averages and totals for each day over the configured monitoring period. Select Peak to display the maximum value for the day for all measured monitor elements. This gives a simple way to identify average and peak performance trends over time.

Figure 2.17: Daily historical performance data can be analyzed in this dialog.

Clicking any single day in the list displays the details gathered for each monitor element periodically throughout the day. This can be viewed in two different tabs. The Snapshot tab provides the details of each individual sample throughout the day. The Interval tab displays only deltas. Therefore, it will contain entries only for times when one or more monitor element values changed from their previous values.

Performance: History → Tables
The performance history of individual tables is also available for proactive planning. For each table on each database partition, the Performance: History → Tables screen displays the rows read and written, overflow records accessed, and page reorganizations. This information can be displayed for each day, week, or month. Both short- and long-term trends for table access can be easily analyzed, providing the DBA with the information needed to proactively plan for system changes to accommodate changing workloads.
Performance Warehouse
The new SAP Database Performance Warehouse provides an integrated historical performance analysis model for both the database and the SAP applications. Database performance data is extracted and loaded from all SAP systems into a central SAP NetWeaver BW warehouse. Historical performance data can then be mined, trended, and analyzed, using powerful SAP NetWeaver BW interfaces with charts, dashboards, and drill-down capabilities. The ABAP cockpit contains a Reporting link for analyzing performance data and a Configuration link for setting up the Performance Warehouse reporting parameters. The Reporting screen links directly into the Performance Reporting WebDynpro. An example of the data is given in Figure 2.18. This illustrates historic buffer pool quality over a two-week period. This data clearly displays recurring trends that can identify areas that might benefit from tuning. Only this brief introduction to the Performance Warehouse will be given in this book. More documentation on the Performance Warehouse can be found on the SAP Service Marketplace or SDN.

Figure 2.18: The SAP Performance Warehouse displays detailed reports on historical performance and resource consumption trends.

Summary
The performance section of the DBA Cockpit provides a comprehensive, single interface for all DB2/SAP database performance monitoring and tuning. All of the most important information is easily accessible, and displayed in an intuitive, meaningful way. Since all the information is in one location, it is easy to drill down from database monitors, to table space and table monitors, to application monitors, and even right down to SQL statement monitors. Administrators can start with a wide focus and methodically narrow that focus to the exact source of the problem under investigation. The best part is that the tool is part of SAP, so both SAP basis and database administrators can leverage this powerful tool in a familiar interface, to get the best performance from their SAP systems.

Chapter 3: Storage Management
Flying Efficiently with Heavy Cargo


Properly managing the volume of data within your SAP system is key for efficient performance. The DBA Cockpit gives you all of the storage statistics and growth characteristics you need to optimize database storage, now and for the future.

Storage is the most frequently overlooked aspect of database performance configuration, yet it can significantly contribute to how well a database performs, because disk I/O is the slowest part of any computer interface. A poorly configured data layout will ultimately be the constricting bottleneck, regardless of how well the SQL statement is formed or what access plan is chosen.

DBAs can spend a lot of time designing and planning the layout of table spaces for a storage subsystem. Today's advanced storage subsystems offer many choices on how physical disk volumes can be grouped into RAID arrays, and within these arrays, how logical volumes (LUNs) can be defined and made available as usable storage to the database. Designing the placement of table spaces can be more like an art than a science. The problem in spending so much time on an elaborate design is that it is only appropriate for the quantity of data and
workload at a given point in time. As the system matures and evolves, so must the storage layout. As companies adapt their SAP systems for future business needs, such as adding additional modules, the amount of data inevitably grows. Therefore, the data access pattern will evolve, rendering the initial data layout design obsolete. To keep the system running optimally, time-consuming and intrusive administrative tasks might be required regularly, to re-evaluate and re-optimize the data layout. Often, a simpler, more generic storage layout, like that provided by DB2's automatic storage feature, provides a better solution for high performance and low maintenance throughout the entire lifetime of the SAP application.

DB2 table spaces store their data in physical storage objects known as containers. A table space can span one or more containers, and data within a table space is striped evenly across all of its containers. DB2 uses two types of table space concepts: System Managed Space (SMS) and Database Managed Space (DMS).

With SMS, the storage allocation within the table space containers is managed by the operating system (OS). Containers are OS directories, and a unique file exists in each container for each database object residing in that table space. By default, I/O to these table spaces will be buffered by the file system, and the sizes of the files in the containers will be extended or reduced, depending on the quantity of data stored in the database objects. Addition and deletion of containers in SMS is only possible during a redirected restore.

With DMS, the storage allocation within the table space containers is managed by DB2. The containers are either pre-allocated files or raw devices. I/O to these pre-allocated containers is handled mainly by the database, with little or no OS overhead. The OS is only involved when pre-allocated file containers are extended or reduced. Also, addition and deletion of containers is possible online via DDL statements.

To simplify the administration of the table spaces, all table space containers should be spread as widely as possible on all disk spindles. Although an
elaborately designed layout might briefly provide a slight performance benefit (of perhaps five percent), this simpler approach will provide a more consistent I/O pattern over time. It will also be less vulnerable to additions of new SAP modules, or changes in function and workload.

DB2 has also introduced a feature called Automatic Storage, in which the database is given a pool of storage (generally two or more file systems), from which table space containers will be allocated. Automatic Storage is fundamentally a combination of DMS table spaces (used for the System Catalog and User table spaces) and SMS table spaces (used for Temporary table spaces).

In the DBA Cockpit, SAP has not only made the monitoring of database performance metrics available, as described in Chapter 2, but it has also made the maintenance of table spaces, tables, and indexes available in the SPACES section.

Automatic Storage
Automatic Storage is the default storage layout when installing DB2 with SAP NetWeaver 7.0 and higher. During installation, SAPinst will use sapdata1 through sapdata4 as storage paths. Depending on the storage subsystem and the number of LUNs/file systems available, additional storage paths can be added at that time. Administrators can view the DB2 storage paths from the DBA Cockpit, as shown in Figure 3.1.

Once the database has been created, additional storage paths can be added if the original file systems containing the table spaces are getting full. Adding new storage paths at this time will create a new stripe set of storage for all table spaces. A new stripe set will not cause a rebalancing of data from the older set of containers into the new storage. The containers in the previous stripe set will be filled before the new stripe set begins to be used. Therefore, to provide equivalent performance, ensure that the I/O capacity of each stripe set is the same. This requires a similar number of disk spindles in each stripe set. The simplest way to achieve this is to always keep everything the same: each time storage is extended by adding new automatic storage paths, add the same number of sapdata file systems, always using identical LUNs from the storage system.
Figure 3.1: Automatic Storage storage paths can be managed from within SAP.

To add new storage paths, just click the Add button and enter the new file systems in the dialog shown in Figure 3.2. The new storage locations must exist and be accessible by the database. The bottom of the panel will display the DDL for the ALTER DATABASE statement, as confirmation of the changes made.

Figure 3.2: Click the Add button to add a new storage path.
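The generated statement takes the form of a simple ALTER DATABASE, which can also be issued directly from the command line. A sketch with illustrative path names; as discussed above, the new paths form a new stripe set, so they should be added in matched sets of identical LUNs:

db2 "ALTER DATABASE ADD STORAGE ON '/db2/<SID>/sapdata5', '/db2/<SID>/sapdata6'"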

Table Spaces
Table spaces in SAP can either be of Automatic Storage or DMS/SMS type. By default, SAP NetWeaver 7.0 or higher will create the system catalog table space and all data table spaces using Automatic Storage, and the temporary table spaces using SMS. DMS/SMS table spaces can still be created for user data, even if the database uses Automatic Storage. As shown in Figure 3.3, the Tablespace screen displays the table spaces according to their type, in either the Automatic Storage tab or DMS/SMS tab.

Figure 3.3: Table spaces are arranged according to type.

Detailed data about both the logical and physical storage consumption for each table space is displayed in the following columns:

• Contents: Shows whether the table space was created as a Regular, Large, System Temporary, or User Temporary table space:
  - Regular table spaces are the default for SMS, but they can also be used for DMS. They have smaller limits for maximum size and slots (rows per page) than Large table spaces, and cannot contain LONG/LOB data.
  - Large table spaces are the default for DMS table spaces. They are only allowed for DMS. They can contain both user data and LONG/LOB data.
  - System Temporary table spaces store the derived temporary tables used by DB2 for sorts or joins that are too large to perform in main memory.
  - User Temporary table spaces store declared global temporary tables, which are used by SAP NetWeaver BW to improve reporting performance.

• TS State: The state of a table space is usually Normal, but it could be in some other state, such as Quiesce, Backup, or Rollforward.
• KB Total: The allocated size of the table space, in kilobytes.
• Page Size: The page size for table spaces can be allocated in 4KB, 8KB, 16KB, and 32KB sizes.
• No. Containers: The number of containers that have been allocated for the table space.
• KB Free: The amount of space in the table space that was allocated, but does not contain any data pages.
• High-Water Mark: For DMS, this represents the current size as represented by the page number of the first free extent following the last allocated extent of a table space.
• Percent Used: The total amount of space consumed up to the high-water mark, in relation to the total allocated table space size.
• Pending Free Pages: The number of pages in a table space that would become free if all pending transactions were committed or rolled back, and new space were requested for an object.

Maintenance of table spaces can be performed by selecting one of the three choices shown in Figure 3.4: Change, Add, or Delete. Changing an existing table space should be done if any permitted technical setting needs to be modified (such as autoresize or prefetch size). Adding a new table space to DB2 can be done here; it should be followed by the addition of this new table space to an SAP data class in Configuration → Data Class. This will ensure consistency between the DB2 table spaces and the SAP DDIC. Deletion of a table space will be reflected both at the database level and in the SAP data dictionary.

Figure 3.4: Table space maintenance can be performed directly from within SAP.

Adding a new table space in Automatic Storage requires the DBA to navigate and modify three tabs. First, however, the DBA must provide a new table space name, beginning with Z or Y, for user-customized objects.

The Technical Settings Tab


Notice in Figure 3.5 that AutoStorage is automatically selected if you create table spaces from within the Automatic Storage tab. Next, you select the table space content type (Regular, Large, System Temporary, or User Temporary), as described in the previous section of this chapter.

Figure 3.5: Specify the technical settings when creating new table spaces.

The settings in the Size of I/O Units area of Figure 3.5 influence how DB2 will store the data on disk and access it. The SAP default is 16KB pages, two pages per extent, and a prefetch size of automatic. The automatic value in the Prefetch Size field is computed from the number of containers, the number of disk spindles, and the extent size. The formula used for this calculation is as follows:
Prefetch size = (number of containers) * (number of physical disks per container) * (extent size)
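As a worked example, with the SAP defaults of 16KB pages and an extent size of two pages, a table space striped across four containers, each backed by a single physical disk, computes to:

Prefetch size = 4 containers * 1 disk per container * 2 pages
              = 8 pages = 8 * 16KB = 128KB per prefetch request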

The Disk Performance values are predefined, using default values. A different buffer pool could also be assigned to this new table space, although you should maintain only one buffer pool if all table spaces are of the same page size. DB2 requires at least one buffer pool of the corresponding page size for each page size used by table spaces.

The Storage Parameters Tab


The Storage Parameters tab, shown in Figure 3.6, is used to determine the Initial Allocated Size of the table space, the incremental growth when the table space encounters a table-space-full condition (in either kilobytes or percentage), and whether there is a maximum size to which the table space can extend. If no maximum size is specified, the table space will grow until it either reaches the maximum table space size or consumes all the storage available in the file system. With the default 16KB page size, the maximum table space size is 8TB in DB2 9 and 9.5, and 32TB in DB2 9.7.

Figure 3.6: Define the storage parameters for new table spaces.
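Taken together, the settings from these tabs map onto a single CREATE TABLESPACE statement. The following sketch shows what an equivalent statement might look like for a custom Automatic Storage table space; the name and all sizes are illustrative only:

db2 "CREATE LARGE TABLESPACE ZCUSTD
       PAGESIZE 16 K
       MANAGED BY AUTOMATIC STORAGE
       INITIALSIZE 1 G
       INCREASESIZE 20 PERCENT
       MAXSIZE 100 G
       EXTENTSIZE 2
       PREFETCHSIZE AUTOMATIC
       BUFFERPOOL IBMDEFAULTBP"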

The Containers Tab


In Automatic Storage, the database determines the number of containers based on the number of storage paths assigned. Therefore, adding containers is not permitted in Automatic Storage table spaces, as you can see in Figure 3.7.

Figure 3.7: DB2 defines the containers itself for Automatic Storage table spaces.

DMS/SMS Table Spaces


The DMS/SMS tab of the Tablespaces screen is shown in Figure 3.8. As you can see, the information displayed is very similar to that of the Automatic Storage tab.

Figure 3.8: The table spaces for DMS/SMS are listed here.

When adding a new table space under DMS/SMS, the only difference in the Technical Settings is that AutoStorage is not the default, as you can see in Figure 3.9.

Figure 3.9: Add a table space in the Technical Settings tab.

In the Containers tab for DMS/SMS, shown in Figure 3.10, the container information is now required. For DMS containers, you must specify a full path and file name for each container. For SMS, specify a directory for each container.

Figure 3.10: Container definitions are required for DMS and SMS table spaces.
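For reference, a DMS table space with an explicit file container corresponds to DDL like the following sketch; the name, path, and 640,000-page size (roughly 10GB at 16KB pages) are illustrative:

db2 "CREATE LARGE TABLESPACE ZCUSTD2
       PAGESIZE 16 K
       MANAGED BY DATABASE
       USING (FILE '/db2/<SID>/sapdata1/ZCUSTD2.container000' 640000)"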

Containers
To view all the related containers for the table spaces, select the Containers option. The screen shown in Figure 3.11 will be displayed.

Figure 3.11: The Containers screen displays the containers for all table spaces.

The Containers screen displays storage parameters and statistics in the following columns:

• Stripe Set: The stripe set to which containers belong determines the set of containers across which DB2 will evenly distribute the data. In Automatic Storage, when additional storage paths are added through ALTER DATABASE ... ADD STORAGE ON, or via the DBA Cockpit, a new stripe set is automatically created. In DMS table spaces, ALTER TABLESPACE ... BEGIN NEW STRIPE SET will create a new stripe set (see the example after this list). When adding storage to the database, it is always recommended to either extend all containers in the current stripe set by the same amount, or create a new stripe set. This keeps the containers balanced and avoids the data movement and I/O caused by rebalancing.
• Container Name: The full path and file name of the container for DMS, or a full directory path name for SMS.
• KB Total: The total allocated size, in kilobytes.
• Pages Total: The number of allocated pages. The size reported in the previous column depends on the table space page size, which can be found in the table space technical settings.
• Accessible: If the table space is in a Normal state, all containers should be accessible.
• FS ID/FS Free Size: These two columns relate to the file system on which the container resides. If Auto Resize is enabled for the table spaces occupying the file system, ensure that there is enough space in the file system for the table spaces to grow.
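For reference, the DMS statement mentioned in the Stripe Set description takes this general form; the name, path, and size are illustrative:

db2 "ALTER TABLESPACE ZCUSTD2 BEGIN NEW STRIPE SET
       (FILE '/db2/<SID>/sapdata5/ZCUSTD2.container001' 640000)"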

Tables and Indexes


An SAP ERP 6.0 system contains over 50,000 tables. You can display all these tables in the Tables and Indexes section, or you can limit what appears. When this section is first selected, the pop-up filter shown in Figure 3.12 is displayed. You can use this filter to narrow the choices of which tables you would like to see, choosing from the following criteria:

• A certain table space
• A specific table, or table names that match a pattern
• Tables greater than or equal to a given size
• Flagged tables, which are tables or indexes that have exceeded the threshold for reorganization
• Large RIDs, which refers to displaying tables in large table spaces that have not yet been enabled for large RIDs
• Tables that have a status of Not Available
• Tables that have a status of Reorg Pending
• Tables that have Type-1 indexes, which were used in DB2 V7 and older, before Type-2 indexes were introduced to improve concurrency

Figure 3.12: Selection criteria can filter the tables displayed.

The Tables and Indexes screen displays the tables that meet the criteria in the filter. The tables displayed here also depend on a set of DB6 tables that are populated when the REORGCHK FOR ALL TABLES job is run in the DBA Planning Calendar (select Jobs → DBA Planning Calendar). If this job has never been run, no tables might be displayed. The following columns are represented in the screen, which is shown in Figure 3.13:

• Schema: The schema of the table.
• Table Name: The name of the tables that qualified.
• F1 REORGCHK formula: The overflow rows, as a percentage of total rows in the table.
• F2 REORGCHK formula: The table size divided by allocated space, as a percentage.
• F3 REORGCHK formula: Full pages divided by allocated pages, as a percentage.
• Table Flagged: If this is flagged, the table needs to be reorganized.
• Index Flagged: If this is flagged, the indexes on this table need to be reorganized.
• Size: The table size, in kilobytes.
• REORG Check Date: The date when REORGCHK was last run against the table, or when RUNSTATS was executed from dmdb6srp.
• REORG Check Time: The time when REORGCHK was last run against the table, or when RUNSTATS was executed from dmdb6srp.

Figure 3.13: The Tables and Indexes dialog displays the storage characteristics of the individual database tables.

In older versions of SAP NetWeaver, you can run REORGCHK from this screen by clicking the REORGCHK button, located on the Application menu bar, near the top of the screen. This opens a window, shown in Figure 3.14, that allows administrators to run a REORGCHK on stale tables. Newer versions of SAP NetWeaver 7.0 no longer have a REORGCHK button. Instead, a REORGCHK is executed every time a table is loaded in Single Table Analysis.

Figure 3.14: You can run REORGCHK from the Application menu bar.

Single Table Analysis


Have you ever wanted to know all the details related to a table? In the Single Table Analysis screen, you can see all the technical information and statistical data of both the table and its related indexes, and you have the ability to run maintenance against it (e.g., RUNSTATS, REORG, and compression). You can also get to this screen from Space → Tables and Indexes, by selecting a table.

The Table Tab


The table's information is categorized into tabs for easy viewing of related data. The Table tab is shown in Figure 3.15. The information in the REORG Check Statistics area of the Table tab is as follows:

• Last REORG Check: The date and time REORGCHK was last run.
• Total Table Size: The size of regular and long data in the table, in kilobytes.
• Total Index Size: The size of all indexes for the table, in kilobytes.
• Free Space Reserved: The percentage of free space in the table's allocated pages.
• F1 Overflow Rows: The percentage of overflow rows.
• F2 Table Size/Allocated Space: The percentage of general fragmentation in the table.
• F3 Full Pages/Allocated Pages: The percentage of full-pages fragmentation.
• REORG Pending: Indicates whether a REORG is pending.
• Last REORG of Table: When REORG was last run.
• Runtime of Last REORG: The elapsed time of the last REORG.

The System Catalog area of the Table tab contains these values:

• Last Runstats: When RUNSTATS was last run against the table.
• Tablespace: The table space to which the table belongs.
• Cardinality: The number of rows in the table.
• Overflow Records: The number of rows that span two or more pages.
• No. of Pages with Data: The number of pages that contain table data.
• Total Number of Pages: The total number of pages consumed by the table.
• Value Compression: If this is flagged, column-value compression is used.
• Row Compression: If this is flagged, row compression is used.
• VOLATILE: If a table is marked as volatile, RUNSTATS is never run against it, as the cardinality of the table changes constantly. VBDATA is an example of a volatile table.
• Pooled, Cluster or Import/Export Table: This flag indicates whether the table is defined as a pooled table, a cluster, or an import/export table in the ABAP dictionary (e.g., CDCLS).

Figure 3.15: Storage details for individual tables are available in Single Table Analysis.

The Indexes Tab


The Indexes tab, shown in Figure 3.16, displays the statistical information for indexes. This information is similar to the table content, except that the REORGCHK formulas are used to determine the cluster ratio and the index space fragmentation, as follows:

• F7 Ratio of Deleted Index Entries: With Type-2 indexes, it is possible that RID entries in the index are pseudo-deleted (keys have been marked as deleted when the row is deleted or updated). This value is the number of pseudo-deleted RIDs on non-pseudo-empty pages.
• F8 Ratio of Deleted Index Leafs: The number of pseudo-empty leaf pages, over the number of leaf pages.

The System Catalog area of the Indexes tab contains the following statistics:

• Number of Leaves: The number of leaf pages in the index B*Tree.
• Number of Levels: The maximum number of pages traversed from the index root page to a leaf page.
• Sequential Pages: The number of index leaves physically located on the hard disk, sorted by index, without large intervals between them.
• Density: The relative density of the sequential pages, as a proportion of the total number of index pages.
• First Key Cardinality: The number of unique values in the first column of the index.
• First 2 Key Cardinality: The number of unique values in the first two columns of the index.
• First 3 Key Cardinality: The number of unique values in the first three columns of the index.
• First 4 Key Cardinality: The number of unique values in the first four columns of the index.
• Full Key Cardinality: The number of unique values in all columns of the index.
Figure 3.16: Index storage statistics are also available in Single Table Analysis.

The Table Structures Tab


The Table Structures tab, shown in Figure 3.17, holds the table's column definitions. Note that the column data type is based on DB2's definition, not the ABAP column data type.

Figure 3.17: The Table Structure tab displays the columns and their data types.

The RUNSTATS Control Tab


The RUNSTATS Control tab is divided into two sections. The left side of the tab controls the scheduling and execution of statistics collection. The right side indicates the statistics profile (the type of statistics collected) for the table.

The data in the left side depends on the configuration of AutoRunstats. When AutoRunstats is enabled (which is the default for SAP NetWeaver 7.0 installations), the screen appears as shown in Figure 3.18. The Statistics Attributes area of the screen contains the following:

• Not VOLATILE (AutoRUNSTATS Included): The table is eligible for AutoRunstats.
• VOLATILE (AutoRUNSTATS Excluded): The table is volatile, so it is not eligible for AutoRunstats.

If AutoRunstats is disabled, a Scheduling section appears in the tab along with a Statistics Attributes section. The Scheduling section provides the following information:

• Automatically: Statistics are collected by CCMS jobs scheduled from the DBA Planning Calendar.
• On User Request: CCMS does not process this table. Statistics must be collected manually by the user.
• Statistics Is Out-of-Date: Statistics are stale, and RUNSTATS is recommended.
• Deviation: The difference between the cardinality statistics and the estimated cardinality based on insert and delete activities.
• Collect Data for Application Monitor: The table is monitored by the Application Monitor (ST07).
With AutoRunstats disabled, the Statistics Attributes are as follows:

• Statistics: Statistics are collected for this table. If the table is currently marked as volatile, it will be changed to not volatile as soon as statistics are collected for the first time.
• No Statistics and Volatile: The table is marked as volatile in the system catalog, and no statistics will be collected.

The RUNSTATS profile for the table is displayed in the right side of the screen. This displays the type and detail of statistics collected on the table. The Table Analysis Method shows the options that RUNSTATS will use to collect statistics for the table. The Index Analysis Method shows the options that RUNSTATS will use to collect statistics for the indexes.

Figure 3.18: The RUNSTATS Control tab shows how DB2 collects statistics for this table.

The Index Structures Tab


The Index Structures tab, shown in Figure 3.19, lists the index columns. If the table contains more than one index, use the navigation buttons to proceed to the next index.

Figure 3.19: The Index Structures tab displays the database data type and size of the columns in each index on this table.

The RUNSTATS Profile Tab


RUNSTATS can be executed with a previously stored statistics profile to gather statistics for a table or a statistics view. This profile must have previously been created with the SET PROFILE option of the RUNSTATS command. If such a profile exists, it will be displayed in this tab. Click the RUNSTATS with Profile button at the bottom of this screen to execute RUNSTATS with this profile.
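From the command line, the profile is created and reused with the RUNSTATS command itself. A sketch, again with an illustrative table name; the first call collects statistics and stores the options as the profile, and the second call reruns RUNSTATS with exactly those options:

db2 "RUNSTATS ON TABLE SAPSR3.VBAK
       WITH DISTRIBUTION AND DETAILED INDEXES ALL
       SET PROFILE"
db2 "RUNSTATS ON TABLE SAPSR3.VBAK USE PROFILE"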

The Table Status Tab


The Table Status tab, shown in Figure 3.20, holds all the size and technical information of the table. The Physical Size and Logical Size sections break down the space consumed physically and logically by the different object types of the table. The following key pieces of information are shown within the Availability and Other Technical Information section:
• Inplace REORG Status: If an online, in-place REORG is running, the status could be one of the following: ABORTED, EXECUTING, NULL (no REORG is currently running), or PAUSED.
• Large RIDs: Is the table using large RIDs? If the value is PENDING, the table supports large RIDs (that is, the table is in a large table space), but at least one of the indexes for the table has not yet been reorganized or rebuilt. Therefore, that index is still using the smaller, 4-byte RIDs. It must be reorganized to convert it to the larger, 6-byte RIDs.
• Large Slots: Does the table support more than 255 rows per page? If the value is PENDING, the table supports large slots (that is, the table is in a large table space), but there has not yet been an offline table reorganization or a table truncation operation. Therefore, the table is still using a maximum of 255 rows per page.

Figure 3.20: The table size and status are available in the Table Status tab.

The Compression Status Tab


DB2 introduced table data compression in DB2 9.1, and further enhanced it with Automatic Data Compression (ADC) in DB2 9.5. Index and temporary table compression are not supported as of DB2 9.5. ADC enables DB2 to automatically compress data that is loaded into a table created with the COMPRESS YES option, without running a REORG to build the compression dictionary. The Compression Status tab is shown in Figure 3.21. If compression has been enabled on the table, the Compression Details area of the tab displays the compression statistics. Otherwise, if a compression check has been executed on the table, the Compression Check Results can be used to evaluate the potential benefits of compressing that table.

The following information is displayed in this section when the table is enabled for compression, and compression has already been applied to the data rows of the table:

- Current Dictionary Size: The compression dictionary size, in bytes. The compression dictionary has a maximum size of 150 KB, and can store up to 4,096 compression symbols per table object.
- Approximate Percentage of Pages Saved: The percentage of data pages saved by compression.

The Compression Check Results section shows an estimate of what can be expected if compression were enabled on the data rows of the table. It contains the following values:

- Estimated Saved Pages: An estimate of the number of pages that would be saved by compression, as a percentage.
- Estimated Saved Bytes: Commonly referred to as the compression ratio, this is an estimate of the bytes that would be saved by compression, as a percentage.
- Rows too Small: The number of rows that were too small to be used for compression calculations.
Figure 3.21: DB2 Deep Compression statistics are shown in the Compression Status tab.

The Application menu bar, shown in Figure 3.22, provides options to run RUNSTATS, REORG, and Compression on the table. Select one of the buttons to start the action.

Figure 3.22: Some database utilities can be run from the Single Table Analysis Application menu bar.

When RUNSTATS is selected to run in the background, the dialog shown in Figure 3.23 is displayed, with choices for how statistics will be collected for both the table and its indexes. Once the statistics collection method has been chosen, the job can be run once, or repeated on a schedule from the Recurrence tab.

Figure 3.23: Set the RUNSTATS parameters when scheduling background statistics collection.

To check whether a table is a good candidate for compression, use the compression check button in the Application menu bar to schedule a background job. The resulting estimate will be displayed in the Compression Check Results section of the Compression Status tab. This action will not create the compression dictionary, turn on the COMPRESS flag, or compress any rows. To enable compression, select the compression button. The dialog box in Figure 3.24 will prompt you for the method of compression to use. The Status section indicates whether the COMPRESS YES flag was set on the table, either through CREATE TABLE or ALTER TABLE, and whether a compression dictionary already exists. There are then two options for enabling compression:

- Enable Compression: This will build the compression dictionary, but leave all existing data in the table uncompressed. New or changed data will be eligible for compression.

- Enable Compression and Run REORG: This will build the compression dictionary, run an offline REORG against the table, and compress all existing rows in the table. All new rows added to the table are eligible for data compression.

Figure 3.24: Select whether to compress just new rows, or both new and existing rows.
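Outside the cockpit, the same workflow can be approximated with plain DB2 commands. This is only a sketch, on DB2 9.5, for a hypothetical table SAPSR3.MYTAB:

select * from table(sysproc.admin_get_tab_compress_info('SAPSR3','MYTAB','ESTIMATE')) as t
alter table sapsr3.mytab compress yes
reorg table sapsr3.mytab resetdictionary

The SELECT estimates the savings without changing the table; the ALTER and the offline REORG with RESETDICTIONARY correspond to the Enable Compression and Run REORG option.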

Virtual Tables
Virtual tables were introduced by SAP to save disk storage and help improve the performance of many utilities, such as automatic RUNSTATS and automatic REORG. The concept of virtual tables is simple: do not materialize (create) an empty table in the database. Instead, just logically define it in the SAP Data Dictionary (DDIC). When the first row is inserted into a virtual table, the SAP Database Support Layer (DBSL) determines that the table does not yet exist in the database, and issues the CREATE TABLE statement to materialize the table before inserting that first row. SAP systems contain thousands of empty tables. Each empty table may consume as many as 11 extents (22 pages or 352 KB, with the default 16 KB page size and extent size of 2). These extents are consumed by the following:
- One Extent Map Page (EMP) extent
- One data extent
- Two extents for the index object
- One page for each index
- Two extents for a LONG field object
- Four extents for a LOB object

The first tab in the Virtual Tables screen, shown in Figure 3.25, lists all of the virtual tables in the system. To materialize a virtual table manually, select it and click the Materialize button.

Figure 3.25: The Virtual Tables tab contains the list of virtual tables.

The second tab in the Virtual Tables screen, shown in Figure 3.26, lists all the empty tables that are eligible to be converted to virtual tables. If the Convert Empty Tables button is selected, all eligible tables will be dropped from the database and re-created as virtual tables in a background job. Eligible tables match the following criteria:

- Empty
- Non-volatile
- No partitioning key defined
- Non-MDC

Figure 3.26: Empty tables that can be virtualized are listed in the Candidates for Virtualization tab.

Historical Analysis
The History Overview screen provides a general overview of the size and quantity of table spaces, tables, and indexes in the database. The Database and Tablespaces tab of the screen is shown in Figure 3.27.

Figure 3.27: A database size overview is provided in the Database and Tablespaces tab of the History Overview.

This tab provides the following statistics:

- Last Analysis: The date and time when the last analysis was run to collect the history information for the database objects.
- Total Number: The number of table spaces in the database.
- Total Size: The total size of all table spaces.
- Free Space: The amount of free space (in kilobytes) in all the table spaces.
- Used Space: The amount of space used, as a percentage of total space.
- Minimum Free Space in a Tablespace: The free space of the table space with the lowest amount of free space (in kilobytes). This information is of little value if the table spaces are automatic storage table spaces or have autoresize enabled, as they will be resized automatically when they fill.
- Maximum Used Space in a Tablespace: The percentage of used space in the table space with the most data.
- Database Partitions: The number of partitions in a multi-partitioned SAP NetWeaver BW database.

The Tables and Indexes tab of the History screen, shown in Figure 3.28, provides an overview of the quantity and space consumed by the database's tables and indexes.

Figure 3.28: The Tables and Indexes tab displays the size of the tables and indexes.

The Database and Table Spaces


The History: Database and Tablespaces screen, shown in Figure 3.29, displays the change history for the database, tables, and indexes. You can use the information here to help plan capacity.

Figure 3.29: The Space tab displays the change history of database and table space storage consumption.

Double-clicking a row in the list of tables and indexes shown in Figure 3.30 displays a detailed history that documents the item's size changes over time.

Figure 3.30: The Tables and Indexes tab displays the historical storage consumption for tables and indexes.

In the example in Figure 3.31, the Delta Tables value for 07/03/2008 was negative, indicating that some tables were deleted from the database. In this case, they were converted to virtual tables.

Figure 3.31: Historic details of a database's size changes are available here.

Tables and Indexes


The History: Tables and Indexes screen, shown in Figure 3.32, provides access to the historical data of tables and indexes. Initially, this screen displays all the tables and indexes found in the database with their respective sizes and changes, together with REORGCHK information.

Figure 3.32: Historical size changes for the tables and indexes are displayed here.

Selecting any object and double-clicking its row provides more historical data for that object, as shown in Figure 3.33.

Figure 3.33: Double-click a table to see its historical size changes.

Summary
Managing database storage, monitoring database object size, and planning storage capacity are all key operations for ensuring the stable, efficient, and cost-effective operation of any SAP system. The SAP DBA Cockpit provides easy access to many of the most important DB2 features for storage management. Regular maintenance tasks, such as reorganization and statistics collection, are all easily executed on-demand, scheduled as repeating jobs, or enabled for automatic DB2 maintenance with a couple of mouse clicks. Powerful performance and space-optimization features, such as compression and virtual tables, are also fully integrated by SAP into the DB2 cockpit, making the unique benefits of DB2 easy to implement in an otherwise complex environment.

Chapter 4: Job Scheduling - Flying on Auto-Pilot


The DBA Planning Calendar saves time and reduces administrative effort by giving DBAs a simple interface to schedule the most common repeating maintenance jobs, and an efficient way to create, save, and schedule complex database administration tasks.
Automating tasks is one of the easiest ways to reduce workload. SAP on DB2 provides an integrated interface for both job scheduling and monitoring. Administrators can monitor all SAP systems on the central planning calendar, or modify recurring database jobs through the DBA Planning Calendar. There is even an interface to create and store custom scripts, and then schedule those scripts in the calendar.

The Central Calendar


The SAP Central Calendar, shown in Figure 4.1, displays the complete list of user-defined database background jobs scheduled on all SAP systems configured for central monitoring. It is a read-only, holistic view that provides administrators a single point from which they can monitor database jobs for remote systems running different versions of SAP, different relational databases, and databases for Java stack and non-SAP systems. When a remote system is registered in the DBA Cockpit, you must select the Collect Central Planning Calendar Data checkbox to allow SAP to update the central calendar with the job status for this system. Then, schedule the Central Calendar Log Collector job to run every morning on the system normally used for monitoring. This will collect and consolidate all the remote systems' calendar data on that one SAP system.

Figure 4.1: The Central Calendar shows all database jobs for all registered SAP systems.

The central planning calendar shows a single entry for each system with a job scheduled for that day, in the format 001 <SID> 001, where <SID> is the SAP system ID. The first number indicates the number of jobs scheduled for that day for the given SID. The second number indicates the number of those jobs that have finished with the same, highest status severity. The severity will be indicated with a color code for that cell in the calendar.
You can easily see which systems and jobs have had warnings or errors. Double-click any date on the calendar to view the details of all the jobs scheduled on all systems for that date. Double-clicking any specific entry takes you to the DBA Planning Calendar for that date and SAP system. This allows administrators to view the detailed job logs, and then modify or re-execute those jobs.

The DBA Planning Calendar


The DBA Planning Calendar, shown in Figure 4.2, is used to automate regularly recurring database jobs. Any recurring database process can be automated through the calendar. However, many of the most common jobs are predefined in an Action Pad within the calendar. Administrators need only drag and drop the jobs from the Action Pad to the calendar to schedule their execution on the SAP system. When a new SAP system is installed, no recurring jobs are initially scheduled. The administrator must determine the pattern of jobs required for that system. These jobs may, or may not, run in parallel. Therefore, the schedule must take into account dependencies between the jobs and their impact on the system. There are also a few database-related jobs that run regularly in every SAP system:

- Collection of database performance history: This job runs every two hours, starting at 00:00.
- Monitoring of database and database manager configuration parameter changes: This job runs daily at 08:00, 13:00, and 19:00.
- Collection of table and index space history: This job runs weekly on Sunday, at 12:00.

Keep these jobs in mind when planning the DBA background jobs in the calendar.
Figure 4.2: The DBA Planning Calendar provides scheduling and monitoring of background database jobs.

The DBA Planning Calendar provides a wizard to help with the initial setup of the recurring administration tasks on each SAP system. To run the wizard, click the Pattern Setup button. This wizard steps through the setup of a backup schedule, automatic table REORG, and the scheduling of the REORGCHK for all Tables job. For each job, reasonable default times are provided, but these can be changed as desired. The remaining jobs can either be scheduled from the list of common jobs in the Action Pad next to the calendar, or created and scheduled in the calendar as a command line processor (CLP) script.

REORGCHK for All Tables


The REORGCHK for all Tables job must be scheduled in all DB2 SAP systems. This job performs much more than just the standard DB2 REORGCHK. It is very important for the proper display of Space data in the cockpit (formerly transaction DB02). This job performs the following tasks:
- Executes the DB2 REORGCHK tool to obtain REORG recommendations for tables and indexes
- Calculates the size of tables and indexes, which is used for creating incremental space consumption history
- Determines special situations for tables (e.g., REORG_PENDING)
- Can perform compression estimates, starting with SAP BASIS 7.0 SP12

All calculated data is stored in SAP database tables and displayed in the DBA Cockpit under Space → Tables and Indexes. Therefore, the REORGCHK for all Tables job should be scheduled to run weekly, to ensure that accurate data is displayed in the cockpit. SAP recommends excluding the compression check from the recurring job, because a compression check of all (potentially over 50,000) tables can take a long time. If you need a full compression check, schedule it once during low-workload hours, or use the /ISIS/ZCOMP ABAP report, attached to SAP Note 980067. (See SAP Note 1268290 for the most recent recommendations about the REORGCHK for all Tables job.)

Scheduling Backups
The calendar's Action Pad now contains four options for backup:

- Database Backup into TSM: Back up to Tivoli Storage Manager.
- Database Backup to Device: Back up to a file system directory or tape device.
- Database Backup with Vendor Library: Back up to a different storage manager, such as Veritas NetBackup.
- Snapshot Backup: Back up using flash copy and split mirror technology.

You can schedule full backups, or a combination of delta, incremental, and full backups to satisfy your time and recovery requirements. All backups scheduled through the DBA Planning Calendar are now done online.
Archiving Log Files to a Tape Device


If archive logging is configured to a disk location in LOGARCHMETH1, the DB2 Tape Manager can be used to move those archived logs to a tape device. The Archive Log Files to Tape job can be scheduled periodically in the DBA Planning Calendar to perform this task. The default behavior of the Tape Manager is to copy log files to the specified tape device, and then remove the log files from the file system specified in the LOGARCHMETH1 database configuration parameter. Options can be specified in the calendar's job scheduler to archive each log file to two different locations on tape (the Double Store option), overwrite expired tapes, or eject the tape at the end of the operation. You should always keep two redundant copies of each archive log file. Therefore, either set LOGARCHMETH1 and LOGARCHMETH2 to different file systems, or use the Double Store option of the Tape Manager to keep two copies of each log file on tape.

Updating Statistics
By default, DB2 updates statistics automatically using its Real Time Statistics (DB2 9.5) and Automatic RUNSTATS features. Every two hours, a daemon process checks tables for change activity and updates the table statistics, if necessary. With DB2 9.5 Real Time Statistics, if the optimizer determines that statistics are too stale to provide acceptable query performance, it invokes statistics collection or estimation during query optimization. This removes almost any need for administrators to worry about table statistics. By default in SAP, both regular and distribution statistics are collected for all tables, and detailed statistics are collected for all indexes using sampling. This augments the regular table statistics with additional range histogram data for all columns of the tables, and collects detailed statistics for the indexes by sampling the individual index keys in each index. For SAP NetWeaver BW tables, table statistics are only collected on key columns. If a different statistics collection method is desired for certain tables, administrators can either update the statistics profile using the DB2 RUNSTATS command, or schedule the RUNSTATS and REORGCHK for Single Table job, which allows the RUNSTATS parameters to be tailored specifically for that RUNSTATS invocation.
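As a rough equivalent of the SAP default statistics collection described above, a manual invocation for a hypothetical table SAPSR3.MYTAB would look like this:

runstats on table sapsr3.mytab with distribution and sampled detailed indexes all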
Table Reorganization
There are several jobs and maintenance settings for reorganizing tables. Although there is an Automatic REORG job in the planning calendar, the native DB2 automatic REORG is recommended instead for tables smaller than 1 GB. This is explained further in Chapter 6. For larger tables, REORG jobs can be run on demand, or scheduled periodically through the calendar. Since larger tables are excluded from automatic REORG, a REORGCHK must be performed on these tables periodically, to determine when a REORG is required. This can be done by scheduling the REORGCHK for All Tables job. This job is also a prerequisite for the correct functioning of the Space → Tables and Indexes screen in the DBA Cockpit (formerly transaction DB02). Therefore, it must be scheduled to occur regularly in every SAP system; you should run it at least once weekly. The Action Pad then contains three additional table REORG jobs:

- REORG and RUNSTATS for Set of Tables: This job allows the administrator to enter a list of tables to be periodically reorganized. The REORG can be done in either offline (read-only) or online mode.
- REORG and RUNSTATS of Flagged Tables: This job reads the flagged table details from the REORGCHK for All Tables job, and generates a list of tables to be reorganized. Since the list of tables is generated when the job is scheduled, this job does not recur; it must be scheduled each time it is to be executed. The administrator can select all or part of the list for offline table reorganization, and specify an optional maximum runtime for this job.
- REORG of Tables in Tablespace(s): This job allows the administrator to select one or more table spaces for reorganization. An offline REORG will then run on all tables in those table spaces. The administrator can again specify a maximum runtime for this job.

Custom Job Scripts


Custom command line processor (CLP) scripts can be written, saved, and scheduled for recurring maintenance or administrative tasks not available in the Action Pad. Simply select the CLP Script job from the Action Pad, and you can write new scripts, load scripts from text files, or select predefined CLP scripts created from the SQL Script Maintenance dialog screen. These custom scripts can then be scheduled in the calendar in the same manner as any other job. It is convenient to create background-job CLP scripts in the SQL Script Maintenance screen and save these scripts within SAP. You can then select these scripts from a drop-down list when scheduling recurring CLP script jobs.

All jobs are listed on the DBA Planning Calendar with a color code to indicate status. Any entry on the calendar can be clicked to display its details. Future jobs can be modified. Completed jobs can only be viewed, and also contain a tab to display the Job Log. The Job Log contains the status messages and output produced by the background job. This provides a good first point of problem determination for jobs that do not complete successfully.

The DBA Log


The DBA Log, shown in Figure 4.3, displays a color-coded list of database background jobs run on the current system. It contains the start and end times, the action performed, and the return code status of the job.

Figure 4.3: The DBA Log displays the status of database background jobs.

The display defaults to the list of jobs executed during the current week. Previous weeks can be displayed by double-clicking dates in the calendar. The display can also be filtered by severity, by clicking the status icons in the Summary. This gives administrators a very easy way to view weekly job status and identify any jobs that did not complete successfully.

Back-end Configuration
The Change Back End Configuration screen, shown in Figure 4.4, provides an interface to control the execution of the DBA Planning Calendar's background jobs on different SAP systems. For each system, a unique background server, user, and job priority can be configured.

Figure 4.4: The Change Back End Configuration dialog configures the server, priority, and user for executing background database jobs.

SQL Script Maintenance


The SQL Script Maintenance screen, shown in Figure 4.5, provides an SAP repository for custom-coded DB2 CLP scripts. Administrators can add, modify, or delete custom SQL scripts. All saved scripts appear in a list on this screen, from where they can be selected and executed. The user is prompted for the SAP system on which to run the script, and the output is then displayed in the SAP GUI screen.

Figure 4.5: SQL Script Maintenance allows administrators to store frequently run SQL scripts within SAP, so they can be scheduled easily in the DBA Planning Calendar.

More commonly, the saved SQL scripts can be scheduled from the DBA Planning Calendar, by selecting the CLP Script job from the Action Pad. The saved scripts can be selected from a drop-down list, and then scheduled to recur as required. The status of these jobs is then displayed in the DBA Log. To view detailed results of a job, double-click it from the DBA Planning Calendar or from the Job Overview (transaction SM37).
Summary
The DBA Cockpit for DB2 contains all the functionality administrators need to easily schedule and monitor any type of recurring database task in SAP systems. Predefined jobs and centralized monitoring greatly simplify normal SAP database maintenance, and the custom repository provides the flexibility to easily define, maintain, and schedule more complex database maintenance tasks.

Chapter 5: Backup and Recovery - Reviewing Your Flight Logs


The DBA Cockpit Backup and Recovery option displays all the information that a DBA needs to verify the successful operation of the database backup and log archival processes.

SAP applications store their data in the underlying database, in this case DB2. Objects like application and technology tables, ABAP programs, and user data (customizations and transactional data) are all stored in DB2 database objects. Therefore, administrators need to protect the database, so it can be recovered in case of a problem (such as a user or application error) or a major catastrophe (such as a disk crash). Two components work in conjunction to protect the database: database backups and transaction log files. Backups can recover the database up to the point when the backup was taken. Log files can recover the remaining transactions that were committed after the backup.

Transaction log files work in circular mode by default, meaning that they are not archived; old transactions are overwritten by new ones. The first step in protecting the data is therefore to enable archival logging for the SAP database. To do that, the DBA needs to configure the LOGARCHMETH1, and optionally, LOGARCHMETH2 database configuration parameters and take a full offline backup of the database. When archival logging is enabled for the database, the transaction logs are automatically archived to the method(s) specified in these parameters, so no manual intervention is necessary.

The second step to protect the database is to take backups regularly, so that fewer transaction log files must be applied to recover the database to the latest consistent point in time. Backing up the database is a recurring task. The best way to program these backups is to schedule jobs in the DBA Cockpit. Finally, it is also crucial that the DBA validates the backup procedure by checking the message logs, and most importantly, through programmed restores on a test system. We recommend a restore test once every three months.
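As a minimal sketch, assuming a database named PRD and an archive file system /db2/PRD/log_archive (both placeholders), these steps would look like this from the command line:

update db cfg for PRD using LOGARCHMETH1 DISK:/db2/PRD/log_archive
update db cfg for PRD using LOGARCHMETH2 TSM
backup db PRD

The full offline backup at the end takes the database out of the backup pending state that switching from circular to archival logging puts it into.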

The Backup Strategy


The DBA needs to define a backup strategy for the SAP database based on different options and considerations:

- Frequent database backups: More backup images mean less dependency on transaction logs for recovery. However, backups consume resources like memory and I/O, so they can affect the performance of the database. Using the backup utility in throttled mode is one option to alleviate the performance impact.
- Few database backups: With few database backups, the DBA relies more on the transaction logs for a possible database recovery. The problem with this approach is that there could be many transaction logs to apply, so recovery could take longer.
- Use of full and incremental backups: A combination of full and incremental backups can be used in the backup strategy. Incremental backups tend to be faster and smaller than full backups, but do not contain all the data in one single backup image to restore the database completely.
Another fact worth noting is that the backup utility spawns multiple DB2 agents, to increase parallelism during the backup procedure. DB2 uses at most one agent per table space. In some cases, therefore, you might consider separating some tables (usually the largest ones) in their own table spaces, to increase parallelism during the backup of the database.

Utility Throttling
As a DB2 administrator, you must perform some regular maintenance tasks to keep the database running at optimal performance while protecting its data. Many of these maintenance tasks are performed through DB2 utilities (in both online and offline mode), such as these:

- BACKUP, the data backup utility covered in this section
- REORG, which defragments tables and indexes
- RUNSTATS, for statistics collection
- REBALANCE, which rebalances extents among all containers of a table space

The fact that these utilities must execute regularly causes a dilemma for the DBA, since they consume system resources and can affect the performance of the database. You can opt to run these utilities in an offline maintenance window, but in a 24x7 world, such windows are getting smaller or are even non-existent. Therefore, in most cases, these utilities must execute online with user transactions. Your challenge is to minimize their impact on the system.

DB2 provides a feature called adaptive utility throttling, which allows maintenance utilities to run concurrently with other user transactions, while keeping their system resource consumption within controlled limits. Before running utilities in throttled mode, the DBA has to set a database manager configuration parameter called UTIL_IMPACT_LIM. This parameter dictates the overall limit, at the instance level, on the impact that all utilities together can have. Values for this parameter range from one to 100, expressed as a percentage of allowable impact on the workload within this DB2 instance. For example, setting this parameter to 100 means that all utilities run in unthrottled mode.
Once this instance-wide limit is specified, you can run utilities in throttled mode when they are started or after they have started running. To run in throttled mode, a utility must also be invoked with a non-zero priority. For example, to run the backup utility in throttled mode, specify the following option when launching the BACKUP command:
backup database <SID> util_impact_priority 60

The UTIL_IMPACT_PRIORITY option accepts values between one and 100, with one representing the lowest priority, and 100 the highest. If the UTIL_IMPACT_PRIORITY keyword is specified with no priority, the backup will run with the default priority of 50. If UTIL_IMPACT_PRIORITY is not specified, the backup will run in unthrottled mode. If another utility were running at the same time in throttled mode (for example, a RUNSTATS with priority 50), both utilities combined should affect the system at a maximum limit of UTIL_IMPACT_LIM. The utility with the higher priority would get more of the available resources. The DBA also has the option to specify the backup priority directly in the DBA Cockpit, when the backup job is scheduled through the DBA Planning Calendar (described in the following section). Again, this will only have an effect if UTIL_IMPACT_LIM (the impact policy) has been set to a value other than 100. (SAP's standard configuration has this parameter set at ten percent.)
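A utility that is already running can also be throttled after the fact. As a sketch, first obtain the utility ID from LIST UTILITIES, and then change the priority (the ID 4 here is purely illustrative):

list utilities show detail
set util_impact_priority for 4 to 30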

Scheduling Backups in the DBA Cockpit


As described in Chapter 4, database tasks are scheduled through the DBA Planning Calendar. Four backup options are offered in the Action Pad:

- Database Backup into TSM: Back up to Tivoli Storage Manager.
- Database Backup to Device: Back up to a file system directory or tape device.
- Database Backup with Vendor Library: Back up to a different storage manager, such as Veritas NetBackup.
- Snapshot Backup: Back up using storage copy technology, such as FlashCopy and split mirror.

When one of these backup actions is dropped into the calendar, a new window pops up: Schedule a New Action, shown in Figure 5.1. Backup options are specified in this window.

Figure 5.1: Schedule a new backup action here.

In the Action Description area of this window, the DBA can redefine the action, date, and time. In the Action Parameters tab, you can choose different options for the backup, such as these for Backup Mode:

- Online: The database remains available to other applications, but the backup image is not consistent by itself. In case of recovery, log files must be applied to bring the database to a normal state.
- Online Include Logs: The database remains available to other applications, and the backup image produced contains all the information necessary to bring the database to a consistent state, should a recovery be necessary. There is no dependency on separate log files to bring the database to a normal state.

Note that online backups are only possible when log archival mode is enabled. (Archive logging ensures that log files are saved when they fill up, and are not reused.) There are three options on the tab for Backup Type:

- Full: The entire database is backed up.
- Incremental: Only changes since the last full backup are copied.
- Incremental Delta: Only changes since the last successful backup (whether full or not) are copied.

The TRACKMOD database configuration parameter must be set to YES to use the incremental backup options. Selecting the Compress checkbox in the tab means the backup image is created in compressed format. There are also several optimization parameters on the tab:

- Number of Buffers: The number of backup buffers used for the backup operation.
- Buffer Size: The size of each backup buffer.
- Parallelism: The number of table spaces that can be read in parallel by the backup utility.

These parameters are not mandatory; DB2 will choose optimal values for any parameters left unspecified. The remaining two parameters in the tab are as follows:

- Priority: The backup will run in throttled mode with the priority specified, where one represents the lowest priority and 100 the highest. Throttling allows the administrator to regulate the performance impact of the backup operation.
- Device/Directory: The device or directory path where the backup image will be stored.

Once the backup options are specified, add the task to the DBA Planning Calendar using the Add button. The backup can be monitored in the Job Log tab, shown in Figure 5.2.
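For reference, a roughly equivalent backup started directly from the command line might look like the following sketch, where the database name PRD and the target path are placeholders, and any omitted buffer and parallelism options are chosen by DB2:

backup db PRD online to /db2/backup compress util_impact_priority 50 include logs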

Figure 5.2: The Job Log screen can be used to monitor the progress of the backup.

If you have access to a terminal and can log into the machine where the database server resides (as db2<SID> or another user with the necessary authority), you can display more details about the backup job using the LIST UTILITIES command. The output of this command, shown in Figure 5.3, includes interesting information about all utilities that are running at the moment. For backups, some of the information presented includes the database name, description, state, throttling mode, and percentage complete.

Figure 5.3: The LIST UTILITIES command can also be used to monitor the progress of the backup.

With database backups, logs archived, and validation tests, you now provide the necessary protection to the SAP database. Should a recovery be needed, you would restore the database using one of the backup images, and then apply the logs up to the time of interest.

Multi-partition Databases
To handle a multi-partition database, the DBA Cockpit offers the option to back up each partition individually or to back up all of them in one single job. DB2 9.5 introduced a feature called single system view, in which a multi-partition database is managed similarly to a single-partition one in terms of backups and parameter configuration. To back up all partitions of a database in a single job, just select the option All in the Partition field (only available when the database is multi-partition) displayed in the Schedule a New Action window.

Advanced Backup Technology


Today, SAP databases often grow very large. It is common to see SAP systems on the order of tens of terabytes. With databases this large, instead of using the regular DB2 backup utility, you might substitute backups taken with storage technologies like FlashCopy. In this case, DB2 provides a means to suspend I/O operations on the database, so storage commands can be applied to copy the LUNs used by the SAP database. For recovery, DB2 provides the db2inidb command. On DB2 9.5, this type of backup is integrated into the backup utility, so less configuration is necessary. This backup option is available in the DBA Planning Calendar (Action Pad) as Snapshot Backup.
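For DB2 versions prior to 9.5, a hedged sketch of the manual split-mirror flow looks like this, where PRD is a placeholder database name and the storage-level copy itself happens outside DB2, between the suspend and the resume:

set write suspend for database
set write resume for database
db2inidb PRD as snapshot

The db2inidb command is run against the copied image to initialize it, as a snapshot in this example.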

The DB2 Recovery History File


Every DB2 database contains a history file. This file is used to record database activities, such as database and log file backups, database restores, table space management, table loads, and table reorganizations. The contents of this file can be displayed with the LIST HISTORY command. For example, this is the command to see information about all backups for a particular database:
list history backup all for db <SID>

The recovery history file can grow very quickly, so DBAs might have to prune some of the old information. This is done with the PRUNE HISTORY command. The num_db_backups and rec_his_retentn database configuration parameters can be used to manage the amount of information kept in the history file. The num_db_backups parameter specifies the number of backups to keep active in the history file. Once the number of backups exceeds this value, the oldest backups are marked as expired in the history file. The entries for these expired backups are then deleted from the history file when the next backup is performed. The rec_his_retentn parameter specifies the number of days to keep backup information in the history file. This configuration parameter should be set to a value compatible with the value of num_db_backups. For example, if num_db_backups is set to a large value, rec_his_retentn should be large enough to support that number of backups. The PRUNE HISTORY <timestamp> command removes entries from the history file that are older than the specified timestamp. If rec_his_retentn is set to -1, num_db_backups alone determines the expiration of history file entries. If the following command is run, the archived log files will also be removed from the archive storage location:
PRUNE HISTORY <timestamp> AND DELETE

However, the DB2 backup images themselves still need to be manually deleted after they expire. With DB2 9.5, DBAs can also set the database configuration parameter auto_del_rec_obj=on, which enables DB2 to automatically perform the following operations when either the PRUNE HISTORY AND DELETE or BACKUP commands are run:

- Delete expired backup images from the file system.
- Delete the corresponding expired archived log files from the archive media.
- Prune the expired entries from the history file.

Setting these parameters allows DB2 9.5 administrators to simply schedule normal backups. When those backups complete, DB2 will automatically maintain the required number of backups, archived logs, and history entries, and automatically delete anything that has expired.
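A hedged example of tying these parameters together for a placeholder database PRD (the retention values are illustrative, not recommendations):

update db cfg for PRD using NUM_DB_BACKUPS 3
update db cfg for PRD using REC_HIS_RETENTN 60
update db cfg for PRD using AUTO_DEL_REC_OBJ ON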
The Backup and Recovery Overview Screen


The Backup and Recovery Overview screen displays information about past backups and log archival activity. These two types of information are separated into two tabs: Database Backup and Archived Log Files.

The Database Backup Tab


The Database Backup tab, shown in Figure 5.4, contains information about backups that were scheduled and have been processed. By default, the DBA Cockpit displays backup information from the last 30 days. (This can be changed to see older information.) For details, double-click the backup execution of choice. The DBA Cockpit also categorizes the backup executions using a color scheme. An execution displayed in green finished successfully. If a backup execution is red, it was aborted with an error. The DBA can then diagnose the backup failure using the Diagnostics option of the DBA Cockpit (discussed in Chapter 8).

Figure 5.4: The execution status of previous database backups can be checked here.

The Archived Log Files Tab


The Archived Log Files tab, shown in Figure 5.5, displays information about transaction log files that were moved from the active log directory to the archive destination, such as a different directory or a storage manager (TSM, Veritas, Legato, etc.). Notice that the DBA Cockpit also displays the log chain to which a log file belongs. A log chain is a DB2 feature used to distinguish log files that have the same name but different contents. With this feature, the DBA doesn't need to manually track which logs to apply in a recovery scenario; DB2 manages this automatically.

Figure 5.5: The log files that have been archived are displayed here.

Logging Parameters
The Logging Parameters screen shows information about the transaction log files. Transaction log files are used to keep track of database transactions, and are needed in the following recovery scenarios:

- Crash recovery: The procedure to recover the database to a consistent state after it aborts in an abnormal way. Log files containing non-committed transactions, or transactions that have been committed but not yet applied to the table spaces, are used in this case. These logs are called active log files.
- Roll-forward recovery: This procedure is used when recovering the database from a backup image and log files. In this scenario, log files that have been archived are also used.

This screen is divided into multiple tabs: Log Directory, ARCHMETH1, and possibly ARCHMETH2. (The ARCHMETH2 tab is displayed only when you have enabled two methods to archive log files.) The archival methods are controlled by the database configuration parameters LOGARCHMETH1 and LOGARCHMETH2.

The Log Directory


The information displayed in the Log Directory tab, shown in Figure 5.6, relates to active and online archived log files. Some of the data provided here includes the directory name, the number of files and directories, the first active log file (for crash-recovery purposes), the size of the log files, and the number of primary and secondary logs. From here, you can also monitor the space used and available in the file system. This monitoring is necessary to avoid log full error messages when there is no more space available for new log files. You can set the blk_log_dsk_ful database configuration parameter so that the DB2 database manager will repeatedly attempt to create a new log file until the file is successfully created, instead of returning disk full errors. For performance reasons, the log directory should also be mounted on separate disks, preferably on RAID 10 LUNs.

Figure 5.6: The Log Directory tab displays information about log files, as well as log space usage.
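For example (PRD again being a placeholder database name), this protection is a one-line configuration change:

update db cfg for PRD using BLK_LOG_DSK_FUL YES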

The ARCHMETH1 Tab


The ARCHMETH1 tab, shown in Figure 5.7, displays information about the archival method specified in the database configuration (parameter LOGARCHMETH1). This parameter specifies the location of the archived transaction logs. LOGARCHOPT1 specifies log archive options, which are used when the log files are archived to Tivoli Storage Manager.

Figure 5.7: The ARCHMETH1 tab displays information about the logs saved by the log archive method specified.

Summary
Database backups and log file management are essential activities for protecting the SAP system against unplanned situations. Planned activities, such as system cloning, also rely on backups and log files. All of these activities can be easily scheduled and monitored through the DBA Cockpit, as described in this chapter.

Chapter 6: Configuration - Optimize Your Flight Patterns


The default DB2 configuration settings are optimized for normal SAP workloads and provide the best possible performance out of the box. The DBA Cockpit provides DB2 DBAs with an SAP interface to further tune their configurations for the unique workloads on their SAP systems.
Database configuration is one of the areas that greatly influences database performance. Therefore, it is a major area of database performance tuning. A well-configured database environment can ensure that the database manager runs smoothly in a multi-user system and responds to each user application quickly, using the resources available on the database server effectively and efficiently. With DB2 LUW, four areas of a database environment can be configured:

1. Operating system environment variables
2. DB2 profile registry variables
3. Database manager configuration parameters
4. Database configuration parameters

All of the variables and configuration parameters in these areas have default values supplied by DB2. However, the DB2 default values will usually not meet the performance requirements of SAP systems. Therefore, SAP provides its own set of default or recommended values for these variables and configuration parameters. SAP default values can be obtained from the following SAP notes, one for each supported DB2 version:

- SAP Note 584952, DB6: DB2 UDB ESE Version 8 Standard Parameter Settings
- SAP Note 899322, DB6: DB2 9 Standard Parameter Settings
- SAP Note 1086130, DB6: DB2 9.5 Standard Parameter Settings

In these notes, some parameter values are recommended by SAP and should not be changed. Other parameter values, though, are initial values that should be adjusted according to the particular system workload, as well as the hardware resources available. These SAP default values are also set automatically during the SAP installation.

On one hand, a large number of variables and configuration parameters can be tuned; this book only discusses the most important ones. Configuration parameters also vary between DB2 versions; our discussion is based on the latest DB2 version, 9.5. On the other hand, many configuration parameters can simply be set to AUTOMATIC so that DB2 can automatically set the parameter values, or tune them dynamically based on the system workload and resources. Autonomic computing is one of the strategic directions of the DB2 product. The ultimate goal for DB2 is to become self-configuring, self-healing, self-optimizing, and self-protecting; in short, a zero-administration database. By sensing and responding to situations that occur, autonomic computing shifts the burden of managing a database system from database administrators to DB2 technology. This greatly reduces the total cost of ownership (TCO).

In addition to these DB2 variables and configuration parameters, the DBA Cockpit also provides maintenance tools for other areas of database and system configuration. All of these tools are organized in the following sections under the Configuration menu:
- Overview
- Database Manager
- Database
- Registry Variables
- Parameter Changes
- Database Partition Groups
- Buffer Pools
- Special Table Regarding RUNSTATS
- File Systems
- Data Classes
- Monitoring Settings
- Automatic Maintenance Settings

The Overview Screen


Choose Configuration → Overview, and you will be able to view general information about the database and the operating system, as shown in Figure 6.1. The general information about the database includes the database name, the instance name, the database version, and the fix pack level. If the database is installed as a High Availability Disaster Recovery (HADR) database, detailed HADR status information will also be displayed here.

Figure 6.1: The Overview screen shows general information about the database and the operating system.

The Database Manager


A number of configuration parameters are defined at the database manager (or instance) level. These parameters control the database execution environment, database diagnostics and monitoring options, database security, system resource utilization (CPU and memory), network connectivity, and so on. For some database manager configuration parameters, the database manager must be stopped (db2stop) and restarted (db2start) for new parameter values to take effect. Other parameters can be changed online; these are called configurable online configuration parameters. Some parameters support the AUTOMATIC value, which means the database manager will tune the runtime value automatically, based on the current system workload and the system resources available.

Choose Configuration → Database Manager, and you will be able to view and maintain database manager configuration parameters. All parameters are nicely grouped in a tree structure, as shown in Figure 6.2. To view parameters belonging to a particular group, such as Memory, click its name to expand the tree. Each parameter has a short description, a technical name, the current value, and the deferred value. The current value is the active value stored in memory, while the deferred value is the value stored in the configuration file on disk, which will not take effect until the next time the database manager (or instance) is restarted.

Figure 6.2: View and maintain database manager configuration parameters here.

Note that some parameter values are associated with a unit. For example, the parameter INSTANCE_MEMORY is measured in units of 4 KB. If this parameter is set to 250,000, its actual value is 250,000 multiplied by 4 KB, i.e., 1,000,000 KB (roughly 1,000 MB). To change a parameter value, follow these steps:

1. Double-click the parameter that you want to change. Detailed information about this parameter is displayed in a new group box in the lower part of your screen, as shown in Figure 6.3.

Figure 6.3: Detailed information about a parameter is displayed here.

2. Click the Display <-> Change button, and enter the new configuration parameter value. Some configuration parameters are enabled for automatic value adjustment. In this case, the AUTOMATIC checkbox is displayed. If you select it, the value will be maintained automatically by DB2. You can also enter a new value, which will be used as the starting value for automatic adjustment.

3. Click the Execute Change button to confirm the change.

Table 6.1 lists some parameters that require tuning after the system is installed. For other parameter settings, please refer to the SAP notes mentioned earlier in this chapter.
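The same changes can, of course, be made from the DB2 command line. A sketch, with purely illustrative values:

update dbm cfg using INSTANCE_MEMORY 250000
update dbm cfg using MON_HEAP_SZ AUTOMATIC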

Table 6.1: Examples of Database Manager Configuration Parameters

INSTANCE_MEMORY
Description: This parameter specifies the maximum amount of memory that can be allocated for a database partition.
Recommended Value: <value> x 4 KB. The <value> should be the total amount of physical memory that is allowed to be consumed by the database manager. In a partitioned database, this is the memory allocated by a single partition.

SHEAPTHRES
Description: This parameter is an instance-wide soft limit on the total amount of memory that can be consumed by private sorts at any given time.
Recommended Value: 0. When this parameter is set to zero, no private sort will occur. All sort operations are done in the database shared memory, not in the agent private memory. Allocating the sort heap from the database shared memory allows the sort heap size to be tuned by DB2 automatically.

MON_HEAP_SZ
Description: This parameter determines the amount of memory, in pages, to allocate for database system monitor data.
Recommended Value: AUTOMATIC. The monitor heap can increase as needed until the INSTANCE_MEMORY limit is reached.

INTRA_PARALLEL
Description: This parameter specifies whether the database manager can use intra-partition parallelism.
Recommended Value: This parameter should only be turned ON in SAP NetWeaver BW and related systems, such as APO and SEM, based on a single-partition database.

MAX_QUERYDEGREE
Description: This parameter specifies the maximum degree of intra-partition parallelism that is used for any SQL statement executing on this instance of the database manager.
Recommended Value: 1 or <value>. If INTRA_PARALLEL = NO, this parameter should be one. Otherwise, use a value equal to the number of CPUs allocated to the database partition.
The Database
A large number of configuration parameters are defined at the database level. Some parameters are informational, as they show the database attributes (such as the database code page) and the database states (such as backup pending and roll-forward pending). Most of the other parameters are configurable; they are used to control system resource utilization (CPU, memory, and disk I/O), transaction logging, log file management, database automatic maintenance, database high availability, and so on.

In a partitioned database (DPF), each partition is an independent runtime environment, because DPF is based on a shared-nothing architecture. Therefore, each partition of the same database has its own set of configuration parameters. The value of the same database configuration parameter could vary from partition to partition, although it is recommended to maintain uniform parameter values among all partitions belonging to the same database.

Like the database manager configuration parameters (DBM CFG), most database parameters (DB CFG) are configurable online. In addition, many parameters can simply be set to AUTOMATIC so that DB2 will tune the values dynamically. In particular, starting in V9.1, DB2 introduced a new memory-tuning feature that simplifies the task of memory configuration by automatically setting values for several memory configuration parameters. This feature is called the Self-Tuning Memory Manager (STMM). When enabled, the memory tuner dynamically distributes available memory resources among several memory consumers, including the sort heap, the package cache, the lock list, and the buffer pools, in response to significant changes in workload characteristics.

Choose Configuration → Database, and you will be able to view and maintain database configuration parameters. As you can see in Figure 6.4, all parameters are grouped in a tree structure similar to the database manager configuration parameters. The same interface layout is used to view and modify the parameter values.

Figure 6.4: All parameters are grouped in a tree structure.

You might also notice the little Show Value History icon beside the configuration parameters in the Self-Tuning Memory Manager group. By clicking the icon, you will see the value change history for the corresponding parameter. The result for a parameter is displayed in a separate window. By default, the value history information is displayed as a chart, as shown in Figure 6.5. To switch to a tabular view, click the List button. To limit the history time frame, choose From date and/or To date.

Figure 6.5: Clicking the Show Value History icon for an STMM configuration parameter displays a chart of value history information.

In a multi-partitioned database environment, each database partition has its own set of database configuration parameters. In general, we recommend that all data partitions have the same parameter values if the workload and the system resources are the same on those partitions. With the DBA Cockpit, it is easy to compare the database configuration parameter settings of multiple partitions. On the Configuration: Database - Display screen, click the compare button. Select the partitions that you want to compare in the Select Partitions to Compare pop-up window, and then click Compare. A Configuration: Database - Compare Partitions screen will be displayed, as shown in Figure 6.6.

Figure 6.6: Clicking the Compare button on the DatabaseDisplay screen displays this comparison.

Table 6.2 highlights some important parameters related to database memory settings.

Table 6.2: STMM (Self-Tuning Memory Manager) Parameters

DATABASE_MEMORY
Description: Specifies the amount of memory that is reserved for the database shared memory region. If this amount is less than the amount calculated from the individual memory parameters (for example, LOCKLIST, utility heap, buffer pools, and so on), the larger amount will be used.
Recommended Value: AUTOMATIC. When it is set to AUTOMATIC, the initial database shared memory allocation is the configured size of all heaps and buffer pools defined for the database. The memory will be increased as needed.

LOCKLIST
Description: Indicates the amount of memory that is allocated to the lock list. There is one lock list per database, and it contains the locks held by all applications concurrently connected to the database.
Recommended Value: AUTOMATIC. This parameter is enabled for self-tuning. The value of LOCKLIST is tuned together with the MAXLOCKS parameter; therefore, enabling self-tuning of the LOCKLIST parameter automatically enables self-tuning of the MAXLOCKS parameter.

MAXLOCKS
Description: Defines the percentage of the lock list held by an application that must be filled before the database manager performs lock escalation.
Recommended Value: AUTOMATIC. This parameter is enabled for self-tuning.

PCKCACHESZ
Description: Used for caching sections of static and dynamic SQL statements on a database.
Recommended Value: AUTOMATIC. This parameter is enabled for self-tuning.

SORTHEAP
Description: Defines the maximum number of memory pages to be used for sorts.
Recommended Value: AUTOMATIC. This parameter is enabled for self-tuning. Self-tuning of SORTHEAP is allowed only when the sort heap is allocated from the database shared memory, i.e., shared sorts.

SHEAPTHRES_SHR
Description: Represents a soft limit on the total amount of database shared memory that can be used by sort memory consumers at any one time.
Recommended Value: AUTOMATIC. This parameter is enabled for self-tuning. See more details on the database manager configuration parameter SHEAPTHRES in Table 6.1.

SELF_TUNING_MEM
Description: Determines whether the memory tuner will dynamically distribute available memory resources, as required, between memory consumers that are enabled for self-tuning.
Recommended Value: ON. This parameter enables memory self-tuning. Because self-tuning redistributes memory between different memory areas, there must be at least two memory areas enabled for self-tuning to occur.
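If you prefer the command line over the DBA Cockpit screens, the same settings can be applied with the DB2 CLP. A minimal sketch, assuming a database named <DBSID>; the parameter names are the real DB2 parameters from Table 6.2:

# Enable the memory tuner and set the STMM-managed parameters to AUTOMATIC
db2 update db cfg for <DBSID> using SELF_TUNING_MEM ON
db2 update db cfg for <DBSID> using DATABASE_MEMORY AUTOMATIC
db2 update db cfg for <DBSID> using LOCKLIST AUTOMATIC MAXLOCKS AUTOMATIC
db2 update db cfg for <DBSID> using PCKCACHESZ AUTOMATIC
db2 update db cfg for <DBSID> using SORTHEAP AUTOMATIC SHEAPTHRES_SHR AUTOMATIC

# Verify which parameters are now flagged AUTOMATIC
db2 get db cfg for <DBSID> show detail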

Registry Variables
Two types of variables can be maintained in the Registry Variables section of the database configuration: operating system environment variables and DB2 profile registry variables. These variables control how the database manager is started and run. Only a handful of variables need to be set in the OS environment; most variables can now be set in the centrally controlled DB2 profile registry.

Environment Variables
In an SAP database instance, you will find some DB2-related OS environment variables defined in the db2<dbsid>, <sid>adm, and sap<sid> user profiles, such as these:
DB2INSTANCE=db2<dbsid>
INSTHOME=/db2/db2<dbsid>

These OS environment variables are defined automatically during the SAP instance installation, and will not be changed. Hence, no ongoing maintenance is required on the environment variables.

Registry Variables
Registry variables are centrally controlled by DB2 profiles. There are four profile registries:

- The DB2 Instance Level Profile Registry: Most of the DB2 environment variables are placed within this registry. The environment variable settings for a particular instance are kept here. Values defined at this level override their settings at the global level.
- The DB2 Global Level Profile Registry: If an environment variable is not set for a particular instance, this registry is used. This registry is visible to all instances pertaining to a particular copy of DB2 ESE. One global-level profile exists in the installation path.
- The DB2 Instance Node Level Profile Registry: This registry level contains variable settings specific to a database partition in a partitioned database environment. Values defined at this level override their settings at the instance and global levels.
- The DB2 Instance Profile Registry: This registry contains a list of all instance names associated with the current copy. Each installation has its own list. You can see the complete list of all the instances available on the system by running db2ilist.

DB2 configures the operating environment by checking for registry values and environment variables, and resolving them in the following order:

1. Environment variables set with the set command (or the export command on UNIX platforms).
2. Registry values set with the instance node level profile (using the db2set -i <instance name> <nodenum> command).
3. Registry values set with the instance level profile (using the db2set -i command).
4. Registry values set with the global level profile (using the db2set -g command).

Choose Configuration → Registry Variables, and you will be able to view these variables.
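The db2set command mentioned above is the tool for maintaining the DB2 profile registries. A few representative invocations, as a sketch (DB2_VARIABLE and the instance name are placeholders):

# List all registry variables that are currently set, with their scope
db2set -all

# Set a variable at the instance level
db2set -i db2<dbsid> DB2_VARIABLE=value

# Set a variable at the global level
db2set -g DB2_VARIABLE=value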

As you can see in Figure 6.7, the environment variables and the DB2 profile registry variables are displayed on the same screen. They are identified by different scopes.

Figure 6.7: Environment variables and the DB2 profile registry variables are displayed on the same screen.

You will notice that the first registry variable is DB2_WORKLOAD, which is an aggregate variable. An aggregate registry variable allows several registry variables to be grouped as a configuration that is identified by another registry variable name. As of DB2 9.5, the only valid aggregate registry variable is DB2_WORKLOAD. When DB2_WORKLOAD is set to the value SAP, the DB2 engine implicitly sets a list of registry variables, depending on the current DB2 version and fix pack, to values that are optimized for SAP systems. These variables, shown in Figure 6.8, can influence different areas of the database manager, such as the DB2 optimizer, locking behavior, table object creation, and MDC usage. These variables and their respective values are chosen by the SAP and IBM DB2 development teams to optimize the database manager for SAP applications, based on the teams' customer experience and knowledge of the SAP applications. They cannot be changed on the DBA Cockpit screen, because they are not intended to be tuned by customers. Some of these variables are even undocumented. The workload values can be superseded by explicitly setting these registry variables to different values. However, this should only be done on the advice of SAP global support or IBM DB2 support, to address a specific need. In general, SAP customers only need to ensure that DB2_WORKLOAD is set to SAP.
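Should DB2_WORKLOAD ever need to be set manually (for example, after a manual instance setup), it can be done with db2set as the instance owner; a sketch:

# Set the aggregate registry variable at the instance level
db2set DB2_WORKLOAD=SAP

# The setting takes effect at the next instance restart
db2stop
db2start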
Figure 6.8: Here is the DB2 workload optimized for SAP.

Parameter Changes
Choose Configuration → Parameter Changes, and you will be able to view the current and previous settings of the registry variables, the database manager configuration parameters, and the database configuration parameters. You can also view the date and time of each change. This feature helps DBAs keep track of the parameter change history.

The initial screen, shown in Figure 6.9, only displays the active values for the variables and configuration parameters. To see the change history, select History in the Parameter field. You can also specify the period of the change history, as well as the Parameter Type, which can be set to Registry Variables, DB Manager, or Database.
The parameter change history data is collected by a standard DBA job, Collection of DB/DBM Config History, on an hourly basis. The data collected is saved in an SAP table and can be displayed on this screen.

Figure 6.9: The initial Parameter Changes screen displays the active values for the variables and configuration parameters.

Database Partition Groups


In an SAP NetWeaver BW or BW-based system, you can use the DB2 database partitioning feature (DPF) to deploy the SAP database on multiple partitions. This supports the high performance and scalability required by a large data warehouse.

In a multi-partitioned database, you can use a partition group to define a set of partitions on which a table space can be created. A table created within that table space can be distributed across this group of partitions. Choose Configuration → Database Partition Groups, and you will be able to view and maintain database partition groups. By default, the SAP installation program (SAPinst) will only create a database with a single partition (partition number 0000). Therefore, all predefined partition groups will initially be defined on this partition, as shown in Figure 6.10.

Figure 6.10: All predefined partition groups will be initially defined on partition 0000.

After you add a new partition, you can use the Edit button on this screen to modify the existing partition group, or use the Add button to define a new partition group. You can also use the Delete button to remove a partition group on which no table space exists.
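Under the covers, the Add and Edit buttons issue regular DDL. A sketch of the equivalent SQL, with a hypothetical partition group name SAPGRP:

-- Create a partition group spanning partitions 1 and 2
CREATE DATABASE PARTITION GROUP SAPGRP ON DBPARTITIONNUMS (1, 2)

-- Extend the partition group to a newly added partition
ALTER DATABASE PARTITION GROUP SAPGRP ADD DBPARTITIONNUMS (3)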

Buffer Pools
A buffer pool is an area of main memory that has been allocated by the database manager for the purpose of caching table and index data as it is read from disk. A DB2 database can have one or multiple buffer pools.
Unlike other memory pools in the database, a buffer pool is considered a database object, and its size is not controlled by a configuration parameter. To create a new buffer pool, change the size of an existing buffer pool, or delete an existing buffer pool, choose Configuration → Buffer Pools. By default, the SAP installation program (SAPinst) creates a default buffer pool named IBMDEFAULTBP, with a 16K page size, as shown in Figure 6.11. Buffer pools usually take up the biggest portion of the database shared memory. You can set a buffer pool either to a fixed size or to AUTOMATIC. If the buffer pool size is set to AUTOMATIC, and STMM is enabled, the actual buffer pool size will be tuned by DB2 automatically, in response to workload requirements.

Figure 6.11: This is the default buffer pool.

When you create a new table space, you need to associate it with a buffer pool of the same page size. Therefore, if you have table spaces created with different page sizes, you have to create multiple buffer pools corresponding to those page sizes. In a partitioned database, a buffer pool will be created on all database partitions by default. However, you can also specify the partition group in which a buffer pool will be created. To view the buffer pool size, page size, associated partitions, and table spaces, double-click the buffer pool in the list shown in Figure 6.11. Detailed information about the buffer pool will be displayed, as shown in Figure 6.12.

Figure 6.12: The buffer pool's detailed information is displayed here.
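The equivalent SQL for these operations might look as follows; a sketch, where BP_DATA_8K is a hypothetical buffer pool name:

-- Create an additional buffer pool for table spaces with an 8K page size,
-- letting STMM manage its size
CREATE BUFFERPOOL BP_DATA_8K SIZE AUTOMATIC PAGESIZE 8K

-- Switch the default buffer pool to self-tuning as well
ALTER BUFFERPOOL IBMDEFAULTBP SIZE AUTOMATIC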

Special Tables Regarding RUNSTATS


Before a SQL statement can be executed by DB2, it needs to be compiled, so that an execution plan can be generated by the DB2 optimizer. To generate an efficient execution plan, the optimizer needs intimate knowledge about the tables involved in the SQL statement, such as the table size, table cardinality, and data distribution. This information is called table statistics. Table statistics change when table content is modified, for example, when new rows are inserted, or existing rows are updated or deleted. Hence, the table statistics need to be refreshed from time to time, so that the optimizer has up-to-date information on which to base the execution plan. Table statistics can be refreshed manually by using DB2's RUNSTATS command, or automatically by using DB2's automatic statistics feature. The updated statistics are stored in the database catalog tables.
We recommend that you enable DB2's automatic statistics feature for an SAP system. To do this, either update the database configuration parameter AUTO_RUNSTATS, or select Configuration → Automatic Maintenance Settings in the DBA Cockpit. There are some special tables whose cardinality and content can vary greatly at run time. These tables are called volatile tables. For volatile tables, statistics data collected by RUNSTATS often becomes inaccurate. Therefore, the statistics of these tables should not be collected and should not be used by the optimizer. Volatile tables are marked in the DB2 system catalog, so that the optimizer can identify them. The automatic statistics feature does not apply to these tables. To mark a table as volatile, use DB2's ALTER TABLE ... VOLATILE statement. Alternatively, in the DBA Cockpit, select Space → Single Table Analysis → Runstats Control. To see a list of volatile tables, choose Configuration → Special Tables Regarding RUNSTATS. A list similar to Figure 6.13 will be displayed.

Figure 6.13: A list of volatile tables is shown here.
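The statements behind these actions are standard DB2 commands; a sketch using hypothetical table names (SAPR3.ZQUEUE and SAPR3.ZTABLE):

# Mark a queue-like table as volatile so the optimizer ignores its statistics
db2 "ALTER TABLE SAPR3.ZQUEUE VOLATILE CARDINALITY"

# Refresh statistics manually for a regular table, with the options SAP uses
db2 runstats on table SAPR3.ZTABLE with distribution and sampled detailed indexes all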

File Systems
Choose Configuration → File Systems, and a list of file systems is displayed, as shown in Figure 6.14. The information displayed on this screen can help you determine how much free space is available in these file systems. (This function is not available for systems monitored using a remote database connection.)

Figure 6.14: The File Systems screen can help you to determine how much free space is available.

Data Classes
A data class is used by the SAP DDIC (Data Dictionary) to define the physical area of the database (i.e., the table space) in which a table should be created. On DB2 LUW databases, each data class is mapped to two table spaces, the Data Tablespace and the Index Tablespace. This function can be used to maintain the relationship between a data class and DB2 table spaces. It is only available for SAP ABAP systems. Choose Configuration → Data Classes. A list of SAP ABAP data classes and their corresponding DB2 table spaces is displayed, as shown in Figure 6.15. On this screen, you can click the Edit button to modify the mapping between a data class and its table spaces, the Add button to create a new data class and its association with table spaces, or the Delete button to drop a data class.
Figure 6.15: A list of SAP ABAP data classes and their corresponding DB2 table spaces is displayed here.

A table space must be created before it can be associated with a data class. To create a table space from the DBA Cockpit, select Space → Tablespaces. A new data class name must also conform to the SAP naming convention. (For details, see SAP Note 46272.)

Monitoring Settings
Choose Configuration → Monitoring Settings to set the path of the user-defined function (UDF) library, and to change the retention periods for the history data. A few DB2 UDFs were developed by SAP; they are required for monitoring remote DB2 database systems through the DBA Cockpit. These UDFs are packaged in a shared library file named db6pmudf, which is part of the SAP kernel. On the Configuration: Monitoring Settings screen, you need to set the path for this library, as shown in Figure 6.16. Normally, this path should be the standard SAP kernel path, /usr/sap/<SID>/D*/exe. To be sure, click the Test button to test the loading of the UDF library.
Figure 6.16: Set the path for the UDFs library here.

During the SAP installation, SAP defines a number of standard DBA jobs, such as Collection of DB Performance History, Collection of DB/DBM Config History, and Collection of Bufferpool History. The history data collected by these jobs will be saved to internal SAP tables. You can specify the retention period of history data on the screen shown in Figure 6.17.

Figure 6.17: Specify the retention period of history data here.

It is also a good practice to archive the DB2 diagnostic log file db2diag.log regularly, so that it will not grow to an unmanageable size. Do this by clicking the Switch Weekly checkbox for this file. The current db2diag.log will be saved under a new name with a timestamp, and a new db2diag.log file will be created automatically.

Automatic Maintenance Settings


DB2 provides automatic maintenance capabilities for performing database backups, keeping statistics current, and reorganizing tables and indexes as necessary, to reduce the cost of database administration. Performing maintenance activities on your databases is essential to ensure that they are optimized for performance and recoverability. These automatic maintenance capabilities are fully integrated into the SAP DBA Cockpit. To enable and configure them, use the functions provided by Configuration → Automatic Maintenance Settings.

Automatic Backups
Automatic database backups help to ensure that your database is backed up properly and regularly, so that you don't have to worry about when to back up or know the syntax of the DB2 BACKUP command. An automatic database backup can be either online or offline. It is triggered by predefined conditions, based on considerations of database recoverability and performance impact. Using the Starting Conditions area of the Automatic Backup tab shown in Figure 6.18, you can choose a predefined condition, or customize the condition by specifying the number of days and the amount of log space created since the last backup. You also need to specify the backup media.

Figure 6.18: Choose a predefined starting condition or customize the condition here.
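Behind the DBA Cockpit checkboxes are ordinary database configuration parameters. A sketch of enabling automatic backup from the CLP (<DBSID> is a placeholder):

# Turn on the automatic maintenance master switch plus the backup switch
db2 update db cfg for <DBSID> using AUTO_MAINT ON AUTO_DB_BACKUP ON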

In general, automatic backup should be enabled on a small database or on development and test systems. For a production database, schedule a backup job at a specified time and frequency through the DBA Cockpit's Jobs → DBA Planning Calendar.

Automatic RUNSTATS
Automatic statistics collection can improve the database performance by maintaining up-to-date table statistics. This feature is fully supported and works very well with SAP systems. Therefore, you should enable automatic RUNSTATS for all SAP systems. Automatic statistics collection is a background process that runs approximately every two hours. The process evaluates all active tables, to check whether or not tables require statistics to be updated. It then schedules RUNSTATS jobs for tables whose statistics are out of date. The background RUNSTATS jobs always run in online and throttled mode, which means they do not affect the normal access to the tables. By default, automatic RUNSTATS jobs collect the basic table statistics with distribution information and detailed index statistics using sampling. (The RUNSTATS command is issued, specifying the WITH DISTRIBUTION and SAMPLED DETAILED INDEXES ALL options.) You can customize the type of statistics collected by enabling statistics profiling, which uses information about previous database activity to determine which statistics are required by the database workload. You can also customize the type of statistics collected for a particular table by creating your own statistics profile for that table. As you can see in Figure 6.19, volatile tables are excluded from automatic RUNSTATS.

Figure 6.19: Volatile tables are excluded from automatic RUNSTATS.
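The corresponding database configuration parameters can also be set directly; a sketch (the parent switches must be ON for AUTO_RUNSTATS to take effect):

# Enable automatic statistics collection
db2 update db cfg for <DBSID> using AUTO_MAINT ON AUTO_TBL_MAINT ON AUTO_RUNSTATS ON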

Automatic REORG
Automatic reorganization determines the need for reorganization on tables and indexes by using the REORGCHK formulas. It periodically evaluates tables and indexes that have had their statistics updated, to see if reorganization is required. If so, it internally schedules a reorganization of the table or indexes. Automatic reorganization of a table is always performed in offline mode, which means any write access to the table currently being reorganized is not allowed. On the other hand, automatic reorganization of an index can be performed in either online or offline mode, which can be selected on the tab shown in Figure 6.20. Since the reorganization of large tables generally takes a long time, you should enable automatic reorganization only for small tables. SAP has defined a policy, based on table size, to select tables for automatic reorganization. The default table filter size is set to 1GB, although this can be changed on the Automatic REORG tab. A filter size of 1GB qualifies tables smaller than that for automatic reorganization. Larger tables need to be reorganized manually, using the DBA Cockpit's Jobs → DBA Planning Calendar or Space → Single Table Analysis. If you want to specify a more granular table filter policy, you need to use the DB2 Control Center tool.

Figure 6.20: Set automatic reorganization options here.
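A sketch of the CLP equivalents, with a hypothetical table name:

# Enable automatic reorganization
db2 update db cfg for <DBSID> using AUTO_REORG ON

# Evaluate the REORGCHK formulas manually for a single table
db2 reorgchk current statistics on table SAPR3.ZTABLE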

All automatic maintenance activities will only occur within a specified time period, called the maintenance window. An online maintenance window is used to specify the time period for performing online activities, such as automatic RUNSTATS, online automatic database backup, or online automatic index reorganization. An offline maintenance window is used to specify the time period for performing offline activities, such as offline automatic database backup and offline table reorganization. Both online and offline maintenance windows can be defined on the General tab of the Automatic Maintenance Settings screen, shown in Figure 6.21.

Figure 6.21: Set automatic maintenance settings here.

Summary
Database configuration is critical to system performance and smooth operations. In an SAP environment, the database configuration must be tuned to meet the demands of SAP applications, and to be consistent with the SAP system configuration, such as SAP Data Classes and the ABAP Dictionary (DDIC).
The SAP DBA Cockpit provides easy tools to help maintain every area of database configuration and the database-specific SAP configuration. The joint IBM-SAP development team has made a huge effort to optimize DB2 databases for SAP applications and to enhance the autonomic computing features of the DB2 database. The goal is to make DB2 a zero-administration database, so that DBAs can concentrate on higher-value work, thus lowering the total cost of ownership (TCO).

Chapter 7
The Alert Monitor: Avoiding Air Turbulence


The DBA Cockpit for DB2 allows SAP DB2 administrators to monitor their CCMS database alerts and thresholds in the same transaction used for database diagnostics and performance monitoring. Thus, the cockpit makes it easier to maintain the health of your SAP database.
The CCMS alert monitors for the DB2 database are integrated into the Alerts section of the DBA Cockpit. All database alert monitoring, the alert message history, and some alert configuration parameters are now easily accessible here. The monitors include thresholds for disk space consumption, memory utilization, buffer pool quality, locking, database backup, and log archival. If the database exceeds the defined thresholds, emails can automatically notify administrators, who can then implement corrections before the system is affected. First, however, background monitoring must be activated. Execute transaction RZ21 and choose Technical Infrastructure → Local Method Execution → Activate Background Dispatching. Then, return to RZ21; in the Methods section, select Method Definitions and click the Display Overview button. Search for, and double-click, either CCMS_OnAlert_Email or CCMS_OnAlert_Email_V2. Configure the Parameters tab with the proper email sender, recipients, subject, etc. The specified recipients will then be alerted via email when an alert threshold is crossed.
The CCMS system in SAP comes with pre-configured alert categories, parameters, and thresholds for the DB2 database. Experienced users may modify this configuration or change threshold values in transaction RZ21. In most cases, though, we recommend keeping the default values for these thresholds.

The Alert Monitor


The DBA Cockpit provides an overview of all database alert monitor elements under Alerts → Alert Monitor. This screen, shown in Figure 7.1, displays an easily readable, color-coded, hierarchical list of alert categories and monitors for all DB2 database partitions on the current SAP system. Elements operating in their normal range appear with green squares. The warning thresholds are flagged yellow, and the error thresholds are flagged red.

Figure 7.1: The Alert Monitor displays a clear overview of overall system health.

Administrators can drill down through the categories to the individual monitor elements, see status messages, and compare current values with the assigned threshold values. For more detail, load the CCMS Monitor Sets (transaction RZ20), and drill down through SAP CCMS Monitor Templates → Database → DB2 Universal Database for NT/UNIX. You will be able to view the complete monitor element details for the database.

The Alert Message Log


SAP saves the history of alert messages in the Alert Message Log, shown in Figure 7.2. By default, this screen displays all of the error and warning alerts from the previous week, ordered by date and time. The Summary section provides an overview of the number of alerts for each category and severity. The Current Selection provides the ability to filter alert logs based on Severity, Category, Object, and Attribute. Historical alert messages can be accessed for very specific objects, to identify any trends or recurring issues.

Figure 7.2: The Alert Message Log displays the history of alert messages.

Alert Configuration
The Alert Configuration screen provides access to the alert threshold properties from transaction RZ21. The main screen, shown in Figure 7.3, provides a list of all alert monitors and threshold values.

Figure 7.3: The Alert Configuration screen displays a list of database alert monitor elements from SAP CCMS.

Double-click any individual row to see the detailed information on that monitor element, including threshold value details and data collection schedules. Through this screen, shown in Figure 7.4, you can enable or disable email notification for certain monitor thresholds, and activate or deactivate monitor elements. For elements not related to performance (such as the backup elements), the alert thresholds can also be configured within the DBA Cockpit. However, for any of the elements related to performance, attribute and threshold value maintenance must be done within transaction RZ21.

Figure 7.4: Alert thresholds can be changed here for elements not related to database performance.

Summary
The integration of the SAP CCMS database monitor elements into the DBA Cockpit alert monitor simplifies the process of proactive problem analysis. Everything is easily visible within a single transaction, and automatic alert notification ensures that the proper people are notified as soon as warning and error thresholds are crossed. This allows problems to be caught and prevented before they affect the system.

Chapter 8
Database Diagnostics: Dealing with Air Turbulence


The database diagnostics in the DBA Cockpit for DB2 give DB2 database administrators an integrated set of powerful tools for problem determination, application optimization, and reference documentation.

One of the tasks that a DBA must perform every day is monitoring the health of the database to look for possible problems and inconsistencies. No database is perfect, and administrators will face a challenge sooner or later. What differentiates database managers from one another is the way they deal with challenges, based on the mechanisms and tools available. In that sense, DB2 and SAP offer a variety of tools that can help the DBA quickly identify, diagnose, and solve a problem. The deep integration of DB2 and SAP is showcased again in the DBA Cockpit's Diagnostics option. It is composed of many tools that you can use to troubleshoot diverse problems, such as database security, query performance, concurrency, and inconsistencies between ABAP and database objects.

The Audit Log


The Audit Log, shown in Figure 8.1, keeps track of actions executed against the database from the DBA Cockpit. Such actions include SQL statements (whether executed successfully or not); configuration changes made at the database manager (instance) and database level; and table space creation and deletion. Changes performed using native DB2 tools (the CLP, for instance) are not tracked here.

Figure 8.1: The Audit Log displays information about actions performed at the database level.

By default, changes that happened in the current week are displayed. However, the calendar can be used to choose a different week. DBAs can also change the number of days of messages displayed. The fields listed in the Audit Log are explained in Table 8.1.

Table 8.1: Audit Log Fields

Date: Start date of the action
Time: Start time of the action
System: System affected
Action: Type of action
Command: Command (SQL, add table space, delete table space, edit configuration)
Object: Object modified
User: SAP user who performed the action

The EXPLAIN Option


As described in Chapter 2, the DBA Cockpit offers many different views under the Performance options, which allow the DBA to quickly analyze whether the system is performing optimally. Under normal circumstances, the key database performance indicators displayed by ST04 give the DBA a very good idea of what needs to be tuned in the database to maintain good overall performance in the system. However, there are special situations that can bring the performance down for a particular application, or sometimes even affect the performance of the entire system. In such cases, the DBA must apply their knowledge to analyze and resolve the performance issue using diagnostic tools, historical data for comparison, and their best judgment. One of the most important parts of performance troubleshooting is isolating the problem. In many cases, a performance problem can be isolated to a poorly performing SQL statement. This is the granularity for which you should aim. Once the bad SQL is discovered, you can use diagnostic tools to analyze it deeply. The EXPLAIN option of the DBA Cockpit plays a major role here.

In DB2, every DML statement (select, insert, update, and delete) sent by an application goes through a compilation phase. One of the components involved in this phase is the DB2 cost-based optimizer. For query processing, one of the tasks performed by the optimizer is to develop diverse strategies, called access plans, to process the SQL statement. The optimizer attributes a certain cost (the optimizer's best estimate of the resource usage for a query) to each plan, using an arbitrary IBM unit called timerons. The optimizer then chooses the plan with the lowest cost, and follows its execution strategy. Of course, the optimizer chooses the plan based on the information available, so providing correct information is vital for a good optimizer decision. Some of the data used by the optimizer includes the following:

- Statistics in system catalog tables. (If statistics are not current, update them using the RUNSTATS command, or configure the AUTO RUNSTATS feature through the DBA Cockpit.)
- Configuration parameters.
- Bind options.
- The query optimization class.
- Available CPU and memory resources.

The execution strategy can include such factors as which objects will be used to execute the query (index or table scan), the join methods (nested loop, hash, merge, etc.), whether the query involves multiple tables, the access order of the objects, and the use of auxiliary tables.

The EXPLAIN option of the DBA Cockpit allows the administrator to generate the access plan used by the optimizer for a particular query. Based on this information, you can study the internal characteristics of the objects involved, and take the proper actions. Some of these actions can include the following:

- Statistics collection for objects included in the plan: Outdated statistics information might lead the optimizer to choose the wrong plan. For instance, it might decide to run a full table scan rather than using an index because it doesn't have the correct row count information.
- Table or index reorganization: Tables and indexes get fragmented over time. This might prevent the optimizer from selecting the optimal access plan. For example, suppose a query that has been performing well suddenly starts to take a long time to execute. By checking the EXPLAIN output, you conclude that the optimizer is not choosing the best index for the query. In this case, the optimizer might be picking a different index because the optimal one has become too fragmented and contains more levels than the current one. Reorganizing the original index will fix this problem.
- Creation of new indexes: By looking at the EXPLAIN output and analyzing the predicates involved in the query, you might find that a new index will improve execution time. The Index Advisor can also be used in these situations, which allows DB2 itself to offer recommendations about new indexes. (This is explained in more detail in the next section of this chapter.)

To access the EXPLAIN facility, choose Diagnostics → EXPLAIN. (Alternatively, you can call it from the Performance options, Performance → Applications and Performance → SQL Cache, or from transaction ST05.) Using Diagnostics → EXPLAIN, you can paste in a SQL statement, click the Explain button, and retrieve an access plan similar to the one shown in Figure 8.2.

Figure 8.2: The EXPLAIN option allows you to display the SQL access plan.
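Outside the DBA Cockpit, a comparable access plan can be produced with the standard DB2 explain tools. A sketch, assuming the explain tables have already been created (from sqllib/misc/EXPLAIN.DDL) and using a hypothetical query:

# Populate the explain tables without executing the statement
db2 "EXPLAIN PLAN FOR SELECT * FROM SAPR3.ZTABLE WHERE MANDT = '100'"

# Format the most recent explain output into a readable plan
db2exfmt -d <DBSID> -1 -o plan.txt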

Notice that the information in Figure 8.2 is displayed in a tree format, containing the operators and objects used in the query. The cost of the access plan is also displayed in timerons, as well as the optimization level and the degree of parallelism. A set of extra options is provided via buttons at the top of the screen. If you need to study the access plan in more detail, or if you need to collect data to send to SAP support, use these buttons, as follows:

- Details: When you click this button, you will see very detailed information about the query execution plan. CPU speed, buffer pool size, optimization level, optimized statement, and estimated number of rows are just some of the information displayed. If you select an operator, only information related to that operator is displayed.
- Optimizer: You might be able to change the access plan by specifying optimizer parameters, like the optimization level and query degree. These options can be accessed through the Optimizer button. Optimization levels range from zero to nine, and define how much optimization effort (use of resources) is necessary to generate the access plan. The higher the number, the more resources the optimizer uses to create the access plan. The default optimization level is five, which is adequate in most cases. Higher optimization levels might be used for very complex queries, but the compilation time and memory usage can increase significantly, as well. With this option, you might increase the optimization level and re-explain the query just to see if it would make a difference; no changes are actually made. Another parameter that can be changed for testing purposes is the query degree. A degree of one (the default) means that no intra-partition parallelism (parallelism inside the partition) is used. A value greater than that might activate intra-partition parallelism, provided that this functionality is activated at the database manager level.

- DB Catalog: When you select a database object like a table or an index and click this button, a window with the object's characteristics is displayed. This information is retrieved from the DB2 system catalog, which is a set of internal tables that contain metadata about the database objects. Some tables used in this option are SYSCAT.TABLES, SYSCAT.INDEXES, and SYSCAT.COLUMNS.
- Dictionary: This button displays the ABAP dictionary definition for the table chosen in the access plan.
- Test Execute: This button lets you execute a query using different optimizer options (which are set using the Optimizer button), so you can test the real execution time of the query. Other pieces of information, like buffer pool accesses and lock waits, are also provided.
- Tree Info: Additional information can be displayed or hidden in the access plan tree with this button.
- Edit: This button allows you to edit the original query and explain it again.
- Collect: Sometimes, even an experienced DBA might need to seek help diagnosing a poorly performing query. The DBA Cockpit provides a very convenient way to collect the necessary information to send to SAP support. By clicking just one button, you can collect information such as the DB2 version, configuration files, table structure, statistics, and the explain information. Each piece of information is copied to a file, so you can zip them up and quickly send them to SAP support. This type of functionality is also provided through the DB2 support tool, called db2support.

The New Version of EXPLAIN


The DBA Cockpit also offers a new version of the EXPLAIN facility, developed with Web Dynpro technology. In this case, the EXPLAIN output is displayed in a web browser, as shown in Figure 8.3. However, it contains basically the same options as the traditional version.

Figure 8.3: Here is an access plan displayed in the new version of EXPLAIN.

Missing Tables and Indexes


The DBA Cockpit also allows administrators to run a consistency check on the SAP system. In SAP, objects like tables, indexes, and views are defined in the ABAP dictionary, and then the necessary objects are created in the underlying database. There might be some situations, however, in which the ABAP dictionary is not in sync with the database. Some objects might be defined in the dictionary, but don't exist in DB2, and vice versa. The administrator can use the Diagnostics option of the DBA Cockpit to check whether there are any inconsistencies between the ABAP dictionary and the database.
Access this option by choosing Diagnostics → Missing Tables and Indexes. The results will look similar to Figure 8.4.

Figure 8.4: Discrepancies between the ABAP dictionary and the database are displayed here.

The information displayed in Figure 8.4 includes the following:

- Objects missing in the database: The ABAP dictionary might contain objects that do not exist in the database. This could be caused by an error in the database during creation of the object, or by somebody with enough privileges dropping an object after its creation. In this scenario, the DBA Cockpit allows the creation of the missing object in the database, thus avoiding transport errors.
- Unknown objects in the ABAP dictionary: Objects that are not known by the ABAP dictionary do not belong to SAP. These are objects created directly at the database level. For this reason, the list displayed here is purely informational; no action can be taken from the DBA Cockpit. (This usually should not occur in SAP systems, because all objects should always be created in the ABAP dictionary.)
- Inconsistent objects: The definitions of objects (tables, indexes, and views) in the ABAP dictionary might not be the same as in the database catalog, and vice versa. The database contains an internal catalog with the definitions of all database objects, and in some cases, this might not be in sync with the ABAP dictionary. The administrator should review these inconsistencies and take action.
- Other checks: Other consistency checks are performed, including checking whether the primary indexes of the tables defined in the ABAP dictionary were created exclusively in the database instance, and whether there are objects in the SAP base tables that cannot be described (or not described completely) in the ABAP dictionary.
- Optional indexes: This check is related to secondary indexes. It reports mismatches between the ABAP dictionary and the database regarding secondary indexes.

Click the Refresh button to run a new consistency check.

The Deadlock Monitor


DB2 uses internal memory structures called locks to provide concurrency control and prevent uncontrolled data access by multiple applications. Locks must be acquired by applications when they need to access or modify data. Occasionally, DBAs can face a situation called a deadlock, in which two or more applications are trying to acquire locks that are not compatible with those already held by other applications. Each application is waiting for a lock that is owned by a different application, while at the same time holding a lock that is wanted by others. In this situation, none of the applications can advance until one gives up a lock. Concurrency and performance problems will inevitably occur in systems with frequent deadlocks.

Although the consequence of many deadlocks is reflected in the database performance, the reason for their existence is mostly attributed to the applications that are accessing and modifying the database. There are application development guidelines that specifically deal with avoiding deadlocks, including these:

- Perform frequent commits so locks can be released.
- Avoid lock escalations by not locking too many rows.
- Use less-strict isolation levels.
- Avoid too many reads before a write.
- Modify tables in a certain order in all applications.

On SAP systems, most deadlocks are caused by customized programs (Z-programs), rather than standard SAP code. DB2 has mechanisms that monitor and resolve deadlock situations at specific intervals, dictated by the database configuration parameter DLCHKTIME. When a deadlock is detected, the database manager resolves the situation by randomly picking one of the participating applications (the victim) to roll back, which allows the other application to continue.

In a system with many deadlocks, it is important to understand what might be causing these undesirable situations. The Deadlock Monitor option in the DBA Cockpit can help analyze past deadlocks and study the SQL statements involved, so that corrective actions can be taken. This feature is especially useful when the deadlock can be reproduced. The DBA Cockpit displays the deadlock occurrences in a user-friendly way, making them very straightforward to analyze and diagnose. Here are the steps to follow to display the deadlocks that must be analyzed:

1. Create the Deadlock Monitor.
2. Enable the Deadlock Monitor.
3. Analyze the information collected by the Deadlock Monitor.
4. Stop the Deadlock Monitor.
5. Reset or drop the Deadlock Monitor.
Creating the Deadlock Monitor


It is necessary to create a Deadlock Monitor if one does not already exist in the system. Choose Diagnostics → Deadlock Monitor and click the Create Deadlock Monitor button. If the system does not have a monitor, a wizard will help you create one, as shown in Figure 8.5. The wizard asks for some inputs, like the buffer size for the monitor (28,000 pages is recommended) and the table space used for the tables written to by the monitor (a dedicated table space is recommended).

Figure 8.5: Use the wizard to create the Deadlock Monitor.
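The wizard essentially creates a DB2 deadlock event monitor that writes to tables. A rough sketch of the kind of DDL involved (the monitor name DLMON is hypothetical, and the real wizard adds table and table space options):

# Create a deadlock event monitor that writes its data to tables
db2 "CREATE EVENT MONITOR DLMON FOR DEADLOCKS WITH DETAILS HISTORY WRITE TO TABLE"

# Start it (the Start Monitor button does the equivalent)
db2 "SET EVENT MONITOR DLMON STATE 1"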

Enabling the Deadlock Monitor


Once the Deadlock Monitor is created, it needs to be started. To do this, click the Start Monitor button.

Analyzing the Information Collected


After running the system for a while, you can check whether deadlocks were detected in the system using the Performance option of the DBA Cockpit. (See Chapter 2 for more information.) If so, the information about these deadlocks can be analyzed using the historical information collected by the Deadlock Monitor. The information is displayed when the screen is refreshed or the monitor is stopped. As shown in Figure 8.6, the deadlocks recorded are displayed, and each occurrence can be expanded into more detail. Information on each occurrence is contained in a root folder called Deadlock Victim: <application that got rolled back>. Inside the folder, there is a summary of the agents involved in the deadlock. Information about the agents includes the client PID, host, authorization ID, and waiting lock information (table, type, mode, etc.). Special arrow buttons can be used to expand and collapse the detailed information.

Figure 8.6: This is an example of a deadlock situation captured by the Deadlock Monitor.

To find out about the SQL statements involved in the deadlock, click the Statements History button. This information can also be viewed separately, for each agent involved in the scenario. Click the Agent Details button, and the Agent Details window opens. This window has two tabs:
- Locks Held: This tab shows information about the locks held by the agent and the locks that are needed (waiting).
- Statement History: This tab lists the SQL statements executed by this particular agent.

The SQL statement history is one of the most important pieces of information for diagnosing a deadlock scenario. As you can see in Figure 8.7, it contains the full stack of SQL statements executed by the agent in the transaction involved in the deadlock. By looking at the statements involved, the administrator can easily find which ABAP program or report generated the SQL, and then talk to the developer of the application. The problem might not necessarily be caused by the program found here, but the developer and the administrator can work together to see if more commit points can be introduced so locks are released faster, or whether more significant changes need to be made.

Figure 8.7: The statement history information can also be viewed here.

Stopping the Deadlock Monitor


Constantly gathering deadlock information causes overhead in the system. Therefore, the administrator should only enable the Deadlock Monitor for the period of time needed to get information about deadlocks. To stop it, click the Stop Monitor button.

Resetting or Dropping the Deadlock Monitor


When a new study on deadlocks is necessary, the administrator can opt to delete the old information collected (assuming that the old occurrences have been solved) by clicking the Reset button. Afterwards, only the relevant, new information is displayed. The administrator can also drop the monitor by choosing the Monitor menu option and selecting Drop Monitor. If you drop the monitor, it will have to be created again if another deadlock analysis is necessary.

The SQL Command Line


DB2 offers a variety of tools that can be used to access the database. Probably the most frequently used of these tools is the Command Line Processor, also known as the CLP. The CLP is a character-based interface that accepts multiple commands, including SQL statements. The DBA Cockpit offers an interface to the CLP, which allows administrators to run SQL statements and some administrative commands. To access the interface, select Diagnostics → SQL Command Line. The administrative commands that can be executed through this interface are the ones supported by the ADMIN_CMD stored procedure. This procedure is used by applications to run administrative commands using the SQL CALL statement. Figure 8.8 shows an example of the commands that can be executed in this interface.

Figure 8.8: You can execute SQL and administrative commands in the SQL Command Line interface.

The administrative commands in the following list are supported by ADMIN_CMD; an example call follows the list. This interface will most commonly be used for SQL statements, since most administrative operations can be performed graphically using other options of the DBA Cockpit. SQL commands not supported by the ADMIN_CMD procedure will fail. SQL statements that modify data are not permitted, either.

ADD CONTACT
ADD CONTACTGROUP
AUTOCONFIGURE
BACKUP (online only)
DESCRIBE
DROP CONTACT
DROP CONTACTGROUP
EXPORT
FORCE APPLICATION
IMPORT
INITIALIZE TAPE
LOAD
PRUNE HISTORY/LOGFILE
QUIESCE DATABASE
QUIESCE TABLESPACES FOR TABLE
REDISTRIBUTE
REORG INDEXES/TABLE
RESET ALERT CONFIGURATION
RESET DATABASE CONFIGURATION
RESET DATABASE MANAGER CONFIGURATION
REWIND TAPE
RUNSTATS
SET TAPE POSITION
UNQUIESCE DATABASE
UPDATE ALERT CONFIGURATION
UPDATE CONTACT
UPDATE CONTACTGROUP
UPDATE DATABASE CONFIGURATION
UPDATE DATABASE MANAGER CONFIGURATION
UPDATE HEALTH NOTIFICATION CONTACT LIST
UPDATE HISTORY
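For example, a RUNSTATS can be pushed through the same stored procedure that this interface uses. A sketch, with a hypothetical table name:

# Run RUNSTATS through the ADMIN_CMD stored procedure
db2 "CALL SYSPROC.ADMIN_CMD('RUNSTATS ON TABLE SAPR3.ZTABLE WITH DISTRIBUTION AND SAMPLED DETAILED INDEXES ALL')"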

The Index Advisor


As explained earlier, the lack of proper indexes on a table can cause severe performance problems for a query or a set of queries. Depending on how heavily the table is accessed, the performance degradation can spread to the entire system. Thankfully, the DBA Cockpit comes to the rescue again. One of the most interesting features provided in the Diagnostics option is the Index Advisor. The Index Advisor is a subset of the DB2 Design Advisor. It is used to help you find better indexes to support your workload. You can use the Index Advisor to create virtual indexes, and to let DB2 recommend indexes for a SQL statement.
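The command-line counterpart of this facility is the DB2 Design Advisor, db2advis. A sketch, with a hypothetical query (<DBSID> is a placeholder):

# Ask the Design Advisor for index recommendations, limited to 5 minutes
db2advis -d <DBSID> -s "SELECT * FROM SAPR3.ZTABLE WHERE ZFIELD = 'X'" -t 5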
Indexes Recommended by DB2


Click the Recommend Indexes button in the Index Advisor to have DB2 recommend indexes for the SQL statement specified in the text field.

Creating Virtual Indexes


A virtual index is a user-defined index that exists only within the Index Advisor; it does not yet exist in the database. To create a virtual index in the Index Advisor, click the Add Virtual Index button. As shown in Figure 8.9, a new window pops up, in which you specify the schema, table, and columns that are part of the virtual index.

Figure 8.9: Use the Index Advisor to define a virtual index.

After defining virtual indexes, you can explain the query again and have the optimizer consider the virtual indexes, as well as the existing ones, when building the access plan. If the optimizer selects a virtual index (whether user-defined or recommended by the Index Advisor), you can create such an index in the database with the touch of a button. In Figure 8.10, for example, the Index Advisor is recommending one new index to support the execution of the query, and there is one user-defined virtual index.

Figure 8.10: The EXPLAIN option is shown here with existing, recommended, and user-defined indexes.

Right beneath the indexes is an EXPLAIN button, which can be selected to re-explain the query using different options:

- Only existing indexes
- Existing and recommended indexes
- Existing, recommended, and user-defined indexes

You can compare the plans and the costs (in timerons), and based on the EXPLAIN outputs, decide whether or not to create the indexes in the database and the ABAP dictionary. To do that, just click the appropriate button (the magic wand) next to the recommended index, and fill out the index description information.

The Cumulative SQL Trace


SAP business applications, such as ERP, CRM, and SRM, run on top of SAP NetWeaver, which is the SAP application platform. All SAP business applications are database-independent, so that access to the data is transparent. The core component of the SAP NetWeaver platform is the Web Application Server. It is the component that directly interfaces with the database. Therefore, it has a layer of code that abstracts the differences of the native databases, and provides a common database interface to higher layers of SAP code. This abstraction layer is called the Database Support Layer (DBSL). SQL statements coming from business applications go through the DBSL and are translated into native DB2 SQL (in most cases). For this reason, tracing the DBSL layer can give the DBA a good idea of the SQL statements that might be affecting the performance of the database. SAP provides a cumulative trace of the database interface, so that the information collected can be analyzed by the administrator later. To use the cumulative trace, you must first activate it. There are two different ways to do this:

- Activate the trace dynamically via a profile parameter. Run transaction RZ11 and set the profile parameter dbs/db6/dbsl_cstrace = 1.
- Activate the trace using an environment variable: DB6_DBSL_CSTRACE=1.

Note that you might have to restart the SAP system if all work processes are to be traced. Configuration through the profile parameter is dynamic, but not permanent.
All SAP processes that use the database interface, and are executed from then on, write trace data to the table DB6CSTRACE in the table space PSAPUSER1D. The data collected can be analyzed directly in the DBA Cockpit, by selecting Diagnostics → Cumulative SQL Trace. Alternatively, you can run report RSDB6CSTRACE using transaction SA38, and analyze the data from there.

No statements are displayed when the trace has never been activated. After the trace is activated and SQL statements are being logged, click the Refresh button to refresh the window.

The actions PREPARE, EXECUTE, and FETCH are summarized in tabs, as shown in Figure 8.11, and can be evaluated separately.

Figure 8.11: Trace information collected by the Cumulative SQL Trace facility helps DBAs in their performance monitoring activities.

To display more detailed information, double-click a line, or select a line and click the Details icon. The following information is displayed:
- Statement information: Information is provided about the SQL statement, the application server where the statement was executed, and all of the ABAP reports in which the statement can be found. The source code of the ABAP report can also be accessed from here.
- Time histograms: Histograms display the distribution of execution times for the selected SQL statement.

On this detailed view, the administrator also has the option to run the EXPLAIN facility; click the EXPLAIN button to display the access plan. (For more information on how to activate the cumulative SQL trace, refer to SAP Note 139286.)

The DBSL Trace Directory


SAP provides two other ways to trace the Database Support Layer (DBSL), in addition to the cumulative trace option: the sequential DBSL trace and the deadlock DBSL trace. The cumulative trace is suitable for performance analysis work performed over long periods of time (where information is aggregated). The sequential and deadlock traces are mostly used for shorter periods of time, when the problem has been isolated to a certain degree.

The Sequential DBSL Trace


The sequential DBSL trace logs all important function calls sent from the database interface in R/3 programs (for example, disp+work, tp, R3trans, and so on) to the database. Trace data is logged in log files at the operating system level. By default, the trace files are stored in the /tmp/TraceFiles directory on UNIX systems, or in the \usr\sap\TraceFiles folder on Windows systems. The sequential trace can be activated using three different methods:

- Using transaction SM50 for individual work processes (disp+work): From SM50, select the work process to be traced, and choose Process → Trace → Components. Then, select trace level 3 and the component database (DBSL). Run the transaction. The trace information will be logged in the work directory of the instance.
- For all processes of a LOGON session: This method allows tracing for different SAP processes, like disp+work, tp, and saplicense. The administrator enables a set of environment variables for the <sid>adm user. Some variables include DB6_DBSL_TRACE, DB6_DBSL_TRACE_DIR, DB6_DBSL_TRACE_FLUSH, and DB6_DBSL_TRACE_STRING (a sketch of this method follows the list).
- Using SAP profile parameters: The trace can be activated using the profile parameter dbs/db6/dbsl_trace = <tracelevel> (where 3 is the highest trace level). The remaining optional parameters are set with the above-mentioned environment variables.

Note that the trace directory must exist and be accessible for all of these methods. Refer to SAP Note 31707 for more details on how to activate the sequential DBSL trace.
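For the LOGON-session method, the setup might look as follows in the <sid>adm user's shell; the variable names are the ones listed above, and the values are just examples:

# Trace at the highest level, writing to the default trace directory
export DB6_DBSL_TRACE=3
export DB6_DBSL_TRACE_DIR=/tmp/TraceFiles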

The Deadlock Trace


As explained earlier, deadlocks can affect the concurrency and performance of a database. DB2 provides internal mechanisms to alleviate these unwanted scenarios, and the DBA Cockpit's Deadlock Monitor can help analyze the occurrences of deadlocks. SAP provides another way to track deadlocks. The DBSL deadlock trace can be enabled in the following ways:

- Dynamically activate the DBSL deadlock trace for all work processes via transaction RZ11, by changing the profile parameter dbs/db6/dbsl_trace_deadlock_time = <seconds>. SAP recommends a time interval of 20 to 26 seconds. The other parameter is dbs/db6/dbsl_trace_dir = <tracepath>.
- Activate the trace for all processes of a LOGON session. Set the following environment variables for user <sid>adm: DB6_DBSL_TRACE_DEADLOCK_TIME = <time in seconds> and DB6_DBSL_TRACE_DIR = <path>.

The default trace path is /tmp/TraceFiles for UNIX and \\sapmnt\TraceFiles for Windows. (Refer to SAP Note 175036 for more information about the DBSL deadlock trace.)

To access information on the sequential DBSL trace and the DBSL deadlock trace, choose Diagnostics → DBSL Trace Directory in the navigation frame of the DBA Cockpit. Figure 8.12 shows that the trace directory is set to the default, /tmp/TraceFiles. A subdirectory <SID> is created under the trace directory, which is where the trace files are generated. Notice that there are sequential trace files (TraceFile<Appl-ID>.txt) and deadlock trace files (DeadlockTrc<App-ID>.txt) in this directory, since both traces are using the default directory. To see the contents of a file directly from here, double-click it.

Figure 8.12: You can see the trace files generated here.


Trace Status
SAP provides three different ways to trace the Database Support Layer: the cumulative SQL trace, the sequential DBSL trace, and the deadlock trace. These traces work independently of each other, so one trace can be activated while the others are disabled, or all three can be active at the same time.

You can check whether the cumulative DBSL trace is active by checking whether new records are being inserted into table sap<SID>.DB6CSTRACE. For the sequential and deadlock traces, check whether files are being created or updated in the trace directory. You can also check the environment variables and profile parameters. None of this is really necessary, however, because the DBA Cockpit provides a very convenient way to check which DBSL traces are currently active. To access this information, just select Diagnostics > Trace Status. In the example in Figure 8.13, you can see that all three DBSL traces are currently activated. For the sequential trace, some options can be updated from this same screen.
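For reference, the manual checks described above might look like this on the database server. The SID PRD (and thus the schema sapprd and connect target) is purely illustrative:

    db2 connect to PRD
    db2 "SELECT COUNT(*) FROM sapprd.DB6CSTRACE"   # a growing count means the cumulative trace is writing
    ls -lt /tmp/TraceFiles/PRD                     # recently modified files indicate active sequential/deadlock traces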

Figure 8.13: All three DBSL traces are currently activated here.


Besides checking the status of the traces, you can also activate and deactivate traces dynamically from this window, using the corresponding icon. The sequential DBSL trace requires the Trace Level information before being activated, and the deadlock trace requires the Detection Interval value.

The Database Notification Log


So far, you have seen many different diagnostic tools that can help solve specific problems, like ABAP dictionary consistency, deadlocks, and performance. However, there are other areas of the database that can potentially report a warning or an error, and it would not be feasible to have an additional option in the Diagnostics folder for each one of them. Therefore, to get an overall look at the health of the database, you can use two diagnostic files.

The first one is the Database Notification Log (also known as the Administration Notification Log), which is located in the directory specified by the DIAGPATH database manager configuration parameter. The name of the file is <instance name>.nfy. Since it is an ASCII file, it can be opened directly on the database server machine, using an editor. The DB2 database manager writes the following kinds of information to the Administration Notification Log:

- The status of DB2 utilities, such as REORG and BACKUP
- Client application errors
- Service class changes
- Licensing activity
- Log file paths and storage problems
- Monitoring and indexing activities
- Table space problems

A database administrator can use this information to diagnose problems, tune the database, or simply monitor the database.

To access the Database Notification Log directly from the DBA Cockpit, choose Diagnostics > Database Notification Log. You can filter which messages get displayed by choosing the date and the starting time. You can also filter by the severity of the messages, which ranges from informational to error. The level of detail reported in the Database Notification Log is controlled by the NOTIFYLEVEL database manager configuration parameter, which ranges from zero to four. The default value of three is appropriate for most systems.
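On the database server, locating the notification log and checking its detail level might look like this. The path and instance name in the last line are assumptions for the example; your DIAGPATH setting will tell you the real location:

    db2 get dbm cfg | grep -iE 'DIAGPATH|NOTIFYLEVEL'   # where the log lives, and its detail level
    db2 update dbm cfg using NOTIFYLEVEL 3              # the default; 4 gives maximum detail
    tail -50 /db2/PRD/db2dump/db2prd.nfy                # file name is <instance name>.nfy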

The Database Diagnostic Log


The other diagnostic file used by DB2 is the Database Diagnostic Log (db2diag.log), which is probably the most important file for troubleshooting problems with the database. This file is also located in the directory specified by the DIAGPATH database manager configuration parameter, and its level of detail is controlled by the DIAGLEVEL database manager configuration parameter. DIAGLEVEL also accepts values from zero to four, and the default value of three is suitable for most systems. The highest level (four) should only be used for a very short period of time, and only when explicitly requested by SAP support, since at this level DB2 records many details and the file can grow very quickly.

The db2diag.log file can grow very large even at the default level, so from time to time the administrator should archive it. DB2 offers a tool for that, called db2diag. Using the -A option (db2diag -A), the current db2diag.log file gets a timestamp appended to its name, and a new log file is created.

To access the contents of the db2diag.log file directly from the DBA Cockpit, choose Diagnostics > Database Diag Log. You can also filter which messages to display; filters are available for date, time, and severity. The db2diag.log can also be read directly on the database server machine, since it is an ASCII file. We usually recommend this method, since you can use OS commands like grep (on UNIX/Linux systems) to apply other filters to the db2diag.log file. Alternatively, you can use the db2diag tool, which provides grep-like and tail-like functionality (among others), so more restrictive filters can be applied.
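A few common db2diag invocations, as a sketch (the option behavior is summarized from the DB2 documentation; verify the exact syntax with db2diag -h on your system):

    db2diag -A                    # archive: rename db2diag.log with a timestamp, start a new file
    db2diag -g level=Severe       # grep-like filtering on record fields
    db2diag -f                    # tail-like: follow new records as they arrive
    grep -i sqlcode db2diag.log   # plain OS tools work too, since the file is ASCII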

DB2 Logs
The DB2 Logs option, shown in Figure 8.14, is available with DB2 version 9.5. This option shows the combined information of the Database Notification Log, the Database Diagnostic Log, and the Statistics Log (information generated by the autonomic computing daemon, db2acd). There are several filters you can apply to display only a subset of the information:

- Log Facility: Choose from the main logs (Diagnostic and Notification), the Statistics Log, or all logs.
- Record Type: Choose from diagnostic records, event records, or all records.
- Minimum Impact Level: Choose from the options Critical, Immediate, Potential, Unlikely, None, or All.
- Messages From/To: Specify a range of date and time.

After applying the filters, press the Find button to refresh the messages.

Figure 8.14: Check the DB2 message logs here.


The Dump Directory


DB2 uses an internal mechanism to collect diagnostic information automatically when database errors occur. The term used for the set of diagnostic files collected is First Occurrence Data Capture, or simply FODC. The captured files are located in the directory specified by the DIAGPATH database manager configuration parameter. These are some of the files and directories collected:

- The Database Notification Log
- The Database Diagnostic Log (db2diag.log)
- DB2 event logs
- FODC packages
- The DB2DART directory
- The STMM log directory

To access these diagnostic files in the DBA Cockpit, select Diagnostics > Dump Directory. To open a specific file, just double-click it. As shown in Figure 8.15, the directory where these files are located (DIAGPATH) is also displayed. This gives you the option to log onto the machine where the database server resides and access the files directly.

Figure 8.15: You can view the DB2 diagnostic files here.


The DB2 Help Center


A lot of DB2 documentation is available online, including all the manuals, which can be downloaded in PDF format. However, you might spend a lot of time searching for specific information if each individual PDF file has to be opened and searched. For this reason, IBM has created an online tool called the DB2 Help Center (also known as the DB2 Information Center). The DB2 Help Center, shown in Figure 8.16, provides keyword searching across all the DB2 manuals, so minimal time is spent researching a command's syntax or finding information about a certain database feature. The DB2 Help Center can be accessed through a browser, or directly from the DBA Cockpit by choosing Diagnostics > DB2 Help Center.

Figure 8.16: The DB2 documentation can be viewed directly from the DBA Cockpit.

Summary
Just as a pilot must deal with air turbulence during a flight, a DBA must deal with problems that might occur in the database. The DBA Cockpit provides diverse tools to quickly diagnose the most common problems in an SAP database, such as ABAP dictionary inconsistencies, SQL performance, and concurrency. For other problems, the DBA Cockpit provides a convenient way to access the FODC information captured by DB2. Even novice DB2 administrators can easily access these vital files without needing to log onto the database server machine or know their locations. Troubleshooting database problems can be intimidating and time-consuming for most administrators, but the DBA Cockpit brings together the most important DB2 diagnostic tools in an easy-to-use graphical interface. This significantly simplifies problem analysis and reduces resolution time.


Chapter 9
New Features: Flying into the Future


SAP and IBM continue to jointly develop new technology and integrate new DB2 features into the DBA Cockpit. DB2 SAP users benefit from having timely access to the latest and greatest database technology, integrated seamlessly into their SAP applications.
New technology is continuously being developed by both SAP and IBM, and the partnership between the two companies allows SAP users to exploit it immediately. For example, the latest versions of SAP and DB2 give users more control over resource allocation and better support for performance tuning. The previous chapters have outlined the current benefits of the integrated SAP DBA Cockpit for DB2 LUW. In this chapter, you will see that these benefits continue to grow as the SAP-DB2 partnership matures.

Workload Management (WLM)


DB2 9.5 includes new Workload Management (WLM) features that can help ensure that Service Level Agreements (SLAs) for overall system performance are met. Using WLM, client requests are grouped into workloads defined in DB2, based on connection attributes. These workloads are mapped to different service classes, which define the resource limits, alert thresholds, and priorities of those workloads within the database.

SAP Enhancement Package 1 for SAP NetWeaver 7.0 integrates DB2 9.5 Workload Management into the SAP kernel. SAP delivers a predefined WLM configuration proposal, which defines workloads and service classes for each unique work process type. This basic configuration can then be enhanced by creating one additional workload and service class, which can prioritize work based on the SAP user, SAP transaction, or SAP application server.
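Under the covers, WLM workloads and service classes are ordinary DB2 objects created with SQL DDL. The following is only a generic sketch, with invented names and priority values; in an SAP system, the delivered WLM configuration proposal creates the actual objects for you:

    # Invented names and values, for illustration only
    db2 "CREATE SERVICE CLASS SAP_HIGH_PRIO AGENT PRIORITY -10 PREFETCH PRIORITY HIGH"
    db2 "CREATE WORKLOAD WL_DIALOG APPLNAME('disp+work') SERVICE CLASS SAP_HIGH_PRIO"
    db2 "GRANT USAGE ON WORKLOAD WL_DIALOG TO PUBLIC"   # allow connections to map to the workload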

Workloads and Service Classes


Configuration of the WLM settings is integrated into the DBA Cockpit, which launches the Workloads and Service Classes content area in a Web browser-based user interface, as shown in Figure 9.1. The Overview tab provides a graphical view of the workloads and their associated service classes. The Workloads tab displays the details of each workload, including service class mappings and workload status. The Service Classes tab (shown in Figure 9.1) displays the status of each service class, along with its agent and prefetch priorities.

Figure 9.1: Workloads and service classes from DB2 Workload Management are now integrated into SAP.


The service class priorities can be maintained within the General tab in the bottom half of the display. The Statistics tab contains detailed information and graphical histograms displaying performance characteristics of the applications that have run within that service class.

Critical Activities
The Critical Activities screen, shown in Figure 9.2, provides an administrative interface for the thresholds defined for WLM. There is one area to maintain and configure thresholds for various database activities, and another to view historical information on threshold violations.

Figure 9.2: Threshold violations can be viewed within the Critical Activities screen.

The thresholds define the Service Level Agreements for the system. The threshold violations allow administrators to quickly identify performance problems related to these SLAs, and then take measures to resolve any issues.


Finally, the SAP WLM Setup Status screen provides an overview of the WLM configuration, showing the status of the various WLM configuration steps and the areas of WLM that have been successfully set up.

BI Administration
DB2 provides several key, unique features to improve the performance and manageability of large SAP NetWeaver BW data warehouses. Here are two of these key features:

- Database Partitioning Feature (DPF): DPF allows large database tables to be distributed across multiple database partitions, perhaps on multiple physical servers. This can drastically improve performance and manageability, because each partition operates only on the portion of the data that is distributed to that partition.
- Multi-Dimensional Clustering (MDC): MDC allows the rows of SAP NetWeaver BW objects to be clustered on disk by multiple key columns. Each data page on disk contains rows for only one unique combination of MDC column values. Any query containing restrictions on any of these columns reads only the data pages containing relevant rows, and every row on those pages is relevant to the query. This drastically reduces I/O during large BW reports, and can improve performance by orders of magnitude.

Due to the importance of these features, SAP has integrated DB2 DPF and MDC tooling into the DBA Cockpit, within a folder named either BW Administration or Wizards, depending on the release of SAP being used.

BI Data Distribution
DB2 table spaces are created in partition groups. When an SAP NetWeaver BW system is installed on a partitioned DB2 database, the objects in the BW table spaces may be distributed across multiple database partitions. If a DBA changes the partition layout (usually by adding partitions to the BW partition groups), the data residing in those table spaces needs to be redistributed, so that the same amount of data resides on each partition. This ensures that each partition has nearly the same workload when processing large BW reports. For example, if a partition group with four partitions is altered to add two new partitions, the data previously distributed across the original four partitions must be redistributed across all six.

Data redistribution is an online utility, throttled by the UTIL_IMPACT_LIM database manager configuration parameter. However, it may require the movement of a large amount of data and can therefore take a very long time to run, so it is usually recommended that you redistribute data during a maintenance window or a period of low system usage.

There are several steps involved in changing the partitioning scheme of a database:

1. Alter the partition group(s) to add or remove partitions.
2. Create temporary table space containers on all new partitions.
3. Redistribute the existing data across the new partition layout.

The BW Data Distribution Wizard in the DBA Cockpit provides a very simple interface for this process, shown in Figure 9.3. First, you select the partitions for each partition group from a grid of checkboxes. Next, the wizard defines temporary table space containers, based on the default SAP container paths. Finally, you schedule the redistribution job to run during low system usage. The wizard immediately alters the partition groups, creates the temporary table space containers, and schedules the redistribution job in the DBA Planning Calendar. Once the redistribution job finishes, the partition layout changes are complete.


Figure 9.3: The BI Data Distribution wizard guides users through the steps required to repartition a DB2 SAP BW system.
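Behind the scenes, the wizard's actions correspond to DB2 commands along the lines of the sketch below. The partition group name and partition numbers are invented for illustration; between the two commands, the wizard also creates the temporary table space containers on the new partitions:

    db2 "ALTER DATABASE PARTITION GROUP NGRP_BW_FACT ADD DBPARTITIONNUMS (4 TO 5)"
    # ...create temporary table space containers on the new partitions 4 and 5...
    db2 "REDISTRIBUTE DATABASE PARTITION GROUP NGRP_BW_FACT UNIFORM"   # online; throttled by UTIL_IMPACT_LIM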

The MDC Advisor


As mentioned earlier, DB2 Multi-Dimensional Clustering is a type of clustered index that can be created on SAP NetWeaver BW objects. MDC indexes point to extents rather than rows, and every row within an MDC extent contains the same MDC column values. When large SAP NetWeaver BW reports are generated, MDC indexes can drastically reduce the number of pages read from disk, because each page only contains rows for the requested values of the MDC indexed columns. Creating proper MDC indexes can greatly improve SAP NetWeaver BW performance. However, finding the best columns for the MDC index on a BW object can be challenging. The MDC index will benefit performance most if its columns are frequently used as query restrictions in the WHERE clause of many large BW queries. Therefore, optimal MDC index selection requires you to search through the SQL cache for BW object queries, and identify the frequently used columns. Those columns will be the best candidates for MDC dimensions on that table.
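For context, an MDC table in DB2 is defined with an ORGANIZE BY DIMENSIONS clause naming the clustering columns. Here is a generic sketch with invented table and column names; for BW objects, the MDC dimensions are defined through SAP rather than by hand:

    # Two low-cardinality columns are chosen as MDC dimensions
    db2 "CREATE TABLE SALES_FACT (
           SALE_MONTH  INTEGER,
           REGION_ID   INTEGER,
           PRODUCT_ID  INTEGER,
           AMOUNT      DECIMAL(15,2))
         ORGANIZE BY DIMENSIONS (SALE_MONTH, REGION_ID)"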

Then, you must identify the cardinality (number of unique values) of each potential MDC dimension. High-cardinality columns might not be desirable, because DB2 will allocate one extent for each unique combination of MDC dimension values. If a unique column were included in the MDC index, each extent would contain only one row, resulting in wasted disk space. The best MDC index columns are therefore low-cardinality columns frequently used in query restrictions; these can improve performance without increasing table size.

DB2 contains several Advisors, which can assist you with some of the more intensive tasks. DB2 has both a traditional Index Advisor (discussed in the previous chapter) and an MDC Index Advisor. The MDC Index Advisor collects queries run on selected tables, analyzes their characteristics, and recommends optimal MDC indexes to improve performance without increasing disk consumption. It takes into account all queries run during the collection period, and greatly reduces the effort involved in MDC index creation.

SAP Enhancement Package 1 for SAP NetWeaver 7.0 includes a graphical interface to the DB2 MDC Index Advisor in the DBA Cockpit, under BI Administration > MDC Advisor. The Input tab, shown in Figure 9.4, contains methods for collecting and analyzing queries for specific BW objects (InfoCube FACT tables and the active tables of DataStore Objects).

Figure 9.4: Add InfoProviders to the MDC Advisor, and let DB2 recommend beneficial MDC indexes.


The steps for query analysis are as follows:

1. Click the Add InfoProvider button to input the BW object(s) you want to analyze.
2. Select one or more InfoProviders to analyze, and click the Start Collection button to begin collecting query information for those objects.
3. Execute the BW reports that run on the objects being analyzed. The queries that execute against the selected BW objects will be stored in database tables in the SYSTOOLS table space.
4. After the reports finish, select the BW object(s) with a status of RUNNING, and click the Stop Collection button to halt their collection processes.
5. Select the BW object(s) to analyze, and click the Analyze button to start the query analysis process. The analysis is scheduled as a background job, which can be monitored through the DBA Planning Calendar.

Once the analysis job completes, the MDC Advisor displays the results and deletes any saved BW temporary tables and query information. The MDC proposals can be viewed on the Result tab, shown in Figure 9.5. The recommended MDC index is listed beneath each analyzed InfoProvider, with estimates for performance and space consumption. The Estimated Improvement gives the overall performance improvement expected for all queries on that InfoProvider, while the Estimated Space Increase specifies the percentage by which the InfoProvider may grow in size. The MDC Advisor will only recommend MDC indexes with an estimated space increase of less than 10 percent. The proposed MDC indexes can then be implemented from transaction RSA1.


Figure 9.5: The Result tab contains the MDC indexes recommended by DB2.

Summary
These pages have presented countless examples of DB2 administration integrated into the core SAP NetWeaver technology. SAP DBAs can perform almost any DB2 administrative task through standard SAP transactions. This integration simplifies many SAP database administration tasks, and eases the transition from other relational databases to DB2. The partnership between IBM and SAP, and the complete integration of DB2 into the DBA Cockpit, are two of the many reasons why DB2 is the preferred and recommended database for SAP systems.

The DBA Cockpit provides SAP database administrators a single interface for almost all DB2 monitoring and administration, including the following:

- Monitoring key performance indicators
- Space management and administration tasks
- Analysis of backup and recovery processes
- Changing DB2 configuration parameters
- Defining database partitions for SAP NetWeaver BW and redistributing data
- Optimizing buffer pool memory allocation
- Configuring automatic RUNSTATS, REORGs, and backups
- Creating, scheduling, and monitoring both standard and custom database jobs in the DBA Planning Calendar
- Monitoring CCMS database health alerts and thresholds
- Auditing DBACOCKPIT activity
- Performing consistency checks between SAP and DB2 metadata
- Executing SQL commands
- Analyzing optimizer access plans
- Recommending and testing traditional and MDC indexes
- Tracing database calls
- Viewing diagnostic log files
- Accessing the online DB2 Help Center

As IBM releases new DB2 technology, new features are continually integrated into the SAP DBA Cockpit, enabling SAP database administrators to easily exploit the new technology in their SAP systems.

To close with one final airline pilot analogy: fly the latest and greatest jet. Select the cockpit that gives you the most control to perfect the performance of your aircraft. Pilot the best technology, integrated completely and optimized specifically for your cockpit. Launch your SAP business systems into the future on DB2.


SAP DBA COCKPIT


Flight Plans for DB2 LUW Database Administrators
DB2 is now the database most recommended for use with SAP applications, and DB2 skills are now critical for all SAP technical professionals. The most important tool within SAP for database administration is the SAP DBA Cockpit, which provides a more extensive administrative interface for DB2 than for any other database. This book steps through every aspect of the SAP DBA Cockpit for DB2. Readers will quickly learn how to use the SAP DBA Cockpit to perform powerful DB2 administration tasks and performance analysis. This book provides both DB2 beginners and experts an invaluable reference for the abundance of information accessible from within the SAP DBA Cockpit for DB2. It makes it easy for SAP NetWeaver administrators, consultants, and DBAs to understand the strengths of DB2 for SAP, and to leverage those strengths within their own unique application environments.

EDUARDO AKISUE: Certified DB2 9 Administrator; Certified Informix Administrator; Certified SAP Technology Consultant; SAP Certified OS/DB Migration Consultant

JEREMY BROUGHTON: SAP Certified Basis Consultant for DB2 on NetWeaver 2004; SAP Certified OS/DB Migration Consultant; IBM Certified DB2 9 Administrator

LIWEN YEOW: Certified Technology Associate - System Administration (DB2) for SAP NetWeaver 7.0; SAP Certified Technology Consultant for DB/OS Migration

PATRICK ZENG: Certified DB2 Solutions Expert; Certified SAP Technology Consultant

MC Press Online, LP 125 N. Woodland Trail Lewisville, TX 75077
