
Benchmark Methodology for Oracle Applications

Suresh James, Oracle Corporation
Steven Schuettinger, Sun Microsystems
Revised: 4 April, 1998


Introduction


When sizing hardware, some customers with unique workloads wish to conduct a benchmark so that they can size their hardware accurately and/or evaluate the hardware's performance. This paper discusses how this may be accomplished through planning, designing and implementing a valid application benchmark. The paper arises from work done at the Sun-Oracle Application Technology Center (SOATC); the SOATC may be reached via email at soatc@sun.com. This paper is divided into six sections:
1. Benchmark Process
2. OLTP with Batch Benchmark Structure
3. Batch-Only Benchmark Structure
4. Benchmark Development
5. Benchmark Tuning
6. Benchmark Reporting

1 Benchmark Process

What is Involved

When planning a benchmark, the following steps need to be followed so that the experiment is well defined, designed and executed in keeping with its objectives. They are:

Define the Acceptance Criteria: This step provides the objective and defines the scope of the benchmark.
Identify Test Scenarios: This step identifies what is being tested.
Identify Resources: This step identifies the resources needed to design, build and run the benchmark.
Checkpoint, consider the cost: From what is known, consider the cost and analyze the risks.
Benchmark Development: This is the development phase of the benchmark.
Instrumentation: This is the execution phase of the benchmark.

Define the Acceptance Criteria

There are a number of issues which should be addressed before any benchmark project is undertaken. Failure to address them may cause unanticipated delays in the delivery of a timely and meaningful benchmark report. As with any well-run project, success comes from thorough planning.

The first issue to be resolved is scope determination: is the experiment intended to provide a technical data point in a decision-making process, or are there strategic ramifications that need to be addressed? Strategic goals can typically be met with industry-standard benchmarks such as TPC results, while tactical benchmarks are intended to explore an application-specific set of objectives. If TPC results have been determined to be insufficient, then benchmark goals must be defined. Issues such as throughput and online response time need to be weighed as to their relative importance in the decision-making process. Batch-oriented benchmarks can only measure throughput, while OLTP benchmarks can measure both. If online response time is deemed critical, the network infrastructure and client-tier configuration alternatives should yield additional data points to be measured. This becomes significant where there is a large number of existing desktop machines or where network latency is likely to be significant. Note that network latency is particularly difficult to simulate because of the wide variety of networking hardware and software options that may be present in a particular network topology.

Once the benchmark objectives have been defined, it becomes necessary to describe the benchmark deliverables. Typically these are tabular reports with specific performance observations, combined with graphical representations of key data items.


Graphs are important because they highlight thresholds where bottlenecks are likely to occur. The critical success factors here are the ability to capture everything that can possibly be captured during benchmark instrumentation and to report on the key differentiators only. These items are discussed in detail in subsequent sections of this paper. Undoubtedly the most underestimated issue is the determination of success/failure criteria. Interestingly, this issue is almost never considered to the extent that it should be. A benchmark that is initiated without success/failure criteria typically ends up being an engineering exercise rather than a decision-supporting deliverable. Success/failure criteria should be defined as the minimum and maximum business-process throughput and end-user response time requirements that should be observed during the benchmark. The experiment should realistically represent all or part of the environment being simulated. The benchmark is complete when the success/failure criteria have been met. A very common mistake is to deviate from these criteria and continue after the benchmark should have ended; with very few exceptions, the subsequent data points will do little to serve the decision-making process, since they have no context.


Identify Test Scenarios


Once the scope and deliverables of the benchmark have been defined, the benchmark team needs to brainstorm and determine all of the possible test scenarios that could exist. One common mistake in this phase is to prematurely consider constraints such as people and hardware resources; the optimal approach is to let ideas flow freely. A typical test scenario would be to simulate a month-end close, or the daily transaction mix multiplied by a factor of n. The key component of test scenario identification is the accurate determination of all business transactions and their mixes. A typical transaction mix would look something like this: 200 users enter a sales order, query a customer balance, and print an invoice six times per hour. Experience tells us that these are particularly difficult metrics to determine. A reasonable approach is to ask a subject matter expert (typically a power user) for the number of orders processed in a particular period and the number of users who perform this process, and simply do the math. This procedure should be repeated for all business processes.
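To make "do the math" concrete, here is a minimal shell sketch; the volumes below are invented purely for illustration and happen to reproduce the six-per-hour rate quoted above.

#!/bin/sh
# Derive a per-user event rate from a subject matter expert's estimates.
# All figures are invented for illustration only.
ORDERS_PER_DAY=9600      # sales orders entered per business day (from the SME)
USERS=200                # clerks who perform this business process
HOURS=8                  # working hours over which the load is spread

RATE=`expr $ORDERS_PER_DAY / $HOURS / $USERS`
echo "$RATE sales orders per user per hour"    # 9600 / 8 / 200 = 6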

Identify Resources
Once all of the possible transaction rates and mixes have been determined, it becomes necessary to speculate on a reasonable hardware architecture to be used during the benchmark. (This is often required as part of a hardware vendor's benchmark engineering process.) If too much hardware is requested, the project may be delayed due to resource availability. A good rule of thumb is to ask for everything, including the database server, application servers, database disk requirements, networking hardware, tape drives, and emulation hardware and software. Remember that this environment is intended to emulate a particular combination of transaction mixes and may not represent a production environment. The final, and largely underestimated, resource required for any benchmark is the acquisition of the test database. Optimally, a copy of an existing database that contains representative volumes of data should be used. In the absence of such a resource, a reasonable set of data may need to be generated from a legacy database. The scope of this deliverable is almost always underestimated. (Anyone who has been involved with a data migration can tell you that this task alone can take months to complete.)

After all possible test scenarios have been identified and the benchmark resources have been determined, it is time for a reality check. With very few exceptions, the scope of the benchmark will be significantly larger than the resources available to create it. An unrealistic benchmark scenario would be: in one month, with no additional resources, create a benchmark which emulates the entire production environment. (As naive as this might sound, it is not an uncommon request.) The key success factor here is to prioritize the test scenarios by their perceived value in the decision-making process. Look at each test scenario's cost and payback (i.e., value). Concentrate on low-cost, high-return experiments; high-cost, low-return test scenarios should be dropped from any further consideration.



                 HIGH VALUE     LOW VALUE
   HIGH COST     Perhaps        No
   LOW COST      Yes            Perhaps

Test Scenario Considerations

At this point it should be readily apparent that there is virtually always a difference between what can be done and what should be done. This delta represents the marginal cost of the benchmark to the requesting organization.

Checkpoint: consider the cost!


From what has been said and done thus far, is this benchmark worth the effort? Are there existing benchmark results that could be used? Are there existing customers with similar configurations that may be used as references?


What is the risk involved if the benchmark is not developed?

Benchmark Development
The primary reason many benchmarks yield ambiguous results, and end up adding little or no value to the decision-making process, is that the project team tends to circumvent the definition and planning phases discussed in this paper. The actual development of a benchmark should be a mechanical process which can be accomplished in a reasonable amount of time, given that it was planned properly. The first phase of the development effort should be to determine the benchmark infrastructure, which should include all of the following considerations (a sample environment file illustrating several of these points follows the list):
- The benchmark machines (including database server, application servers, and emulation drivers) should be able to be rebooted arbitrarily.
- Arbitrary logical volume reconstruction.
- Arbitrary machine identification (i.e., never hard-code the names or addresses of benchmark machines in scripts).
- Arbitrary selection of CPU quantity.
- Arbitrary selection of filesystem type (i.e., raw or UNIX filesystem).
- A high degree of transaction granularity (i.e., transactions should be developed such that they may be combined into arbitrary mixes).
- An effective transaction pacing methodology (discussed in subsequent sections of this paper).
- The ability to refresh the database (typically accomplished by restoring an image copy).
- Arbitrary transaction think timing (discussed in subsequent sections of this paper).
- Arbitrary user factoring (i.e., the ability to easily increase the number of benchmark users).
- Arbitrary rate factoring (i.e., the ability to easily increase the rate at which benchmark transactions are executed).
- Arbitrary ramp-up, settle-down, steady-state, and shutdown phase-length determination.
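The sketch below shows one way to package these choices as a single environment file sourced by every driver script. Every name and value is an assumption for illustration; the point is simply that drivers read one file rather than hard-coding machine names, CPU counts, filesystem types or phase lengths.

#!/bin/sh
# benchmark.env -- sourced by every driver script:  . ./benchmark.env
# All names and values below are illustrative placeholders.
DB_HOST=dbserver1                 # database server; never embed host names in scripts
APP_HOSTS="appsrv1 appsrv2"       # application servers
DRIVER_HOSTS="driver1 driver2"    # emulation driver machines

ORACLE_SID=BENCH
DB_FILESYSTEM=raw                 # raw or ufs
CPU_COUNT=12                      # CPUs to leave online for this run

USER_FACTOR=1                     # multiplies the defined user population
RATE_FACTOR=1                     # multiplies the defined event rates

RAMP_UP=600                       # phase lengths, in seconds
SETTLE=300
STEADY_STATE=3600
RAMP_DOWN=300

export DB_HOST APP_HOSTS DRIVER_HOSTS ORACLE_SID DB_FILESYSTEM CPU_COUNT
export USER_FACTOR RATE_FACTOR RAMP_UP SETTLE STEADY_STATE RAMP_DOWN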

After the benchmark infrastructure has been defined, the development effort can finally begin. With transaction mixes having been defined and prioritized during the planning phase, the subsequent tasks are surprisingly simple. These tasks include:
- Creation of the benchmark database.
- Capturing the OLTP transactions. (This is facilitated by the emulation tool that has been selected. Typically this software listens on the network for activity between a client and a server and then generates a script.)
- Parameterization of the OLTP transactions. (Since one would typically want to vary the input data for each emulated user, it is necessary to parameterize the script.)
- Development of the scripts which drive the benchmark. (Shell scripts are preferred to executables because they tend to be less platform dependent.)
Development of a benchmark suite, like any software development effort, must be followed by a testing phase. A test plan should be created, the entire suite should be exercised, and the observed results should be compared with the expected results. If the benchmark must be executed on multiple hardware platforms and/or operating systems, these combinations must all be validated before taking the suite to the vendor. (Not surprisingly, this is seldom done.) The last task that needs to be completed prior to instrumentation is packaging: the benchmark suite should be easy to install and set up.


Failure to construct a benchmark that meets these criteria may compromise the instrumentation phase and thus the results of the benchmark. It is important to remember that a benchmark is an aid in a decision-making process, not merely an engineering exercise.

Instrumentation
The actual execution of the benchmark is known as instrumentation. A well-planned and constructed benchmark should be equally easy to instrument. However, there are a number of key points to consider:
- Documentation of the benchmark environment. Since a good benchmark is one that can be reproduced, it is critical to know everything about the environment under test. Include details such as exact hardware and software revisions and configuration, as well as all volume management considerations (i.e., striping and mirroring techniques). This can be automated and is often scripted during development.
- Transaction mix prioritization. More often than not, the time allocated to run a benchmark is shorter than the time it would take to run all test scenarios. Therefore, it is critical that the tests be prioritized as to their relative value to the decision-making process and executed in that order.
The basic flow of a well-constructed benchmark looks like this:
1. Restore the database.
2. Run a single test.
3. Evaluate, quantify, and document the results. (Statements like "It seems faster." just won't do!)
4. If the success criteria have been met, stop the test. Otherwise, tune one attribute and return to the first step.
A minimal sketch of this restore-run-evaluate loop follows. The remainder of this paper discusses the actual benchmark instrumentation process, including tuning and bottleneck resolution.
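This is a hedged sketch only: restore_db.sh, run_mix.sh, report_results.sh and check_criteria.sh are hypothetical helper scripts a suite might provide, and benchmark.env is the environment file sketched earlier.

#!/bin/sh
# Restore -> run -> evaluate loop (sketch; helper scripts are assumed).
. ./benchmark.env

RUN=1
while [ $RUN -le 5 ]
do
    ./restore_db.sh                      # restore the image copy of the database
    ./run_mix.sh daily_mix               # run one prioritized test scenario
    ./report_results.sh > run$RUN.out    # quantify and document the results

    if ./check_criteria.sh run$RUN.out   # compare against the success criteria
    then
        echo "Success criteria met on run $RUN; stopping."
        break
    fi
    echo "Criteria not met after run $RUN; tune ONE attribute and rerun."
    RUN=`expr $RUN + 1`
done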

2 OLTP with Batch Benchmark Structure

This section deals with a benchmark that has both OLTP and batch components.

Parameters

These are the knobs used to adjust the rate and speed of transaction execution. The most common are the number of concurrent users, the rate (number of events per user per unit of time), and the think time. These parameters control the amount of work the benchmark simulation is supposed to do. For example, we can specify 10 users, each entering a journal entry 12 times an hour; this translates to a total of 10 (users) * 12 (rate) = 120 journal entries per hour.

For each user executing an event, the rate is controlled by a pace function. The nature of the pace function is to spread a given load over a specified period of time. It is critical that the benchmark do exactly the amount of work it is supposed to do; transaction rates that are artificially high or low yield questionable results. For example, if the specified rate is 6 events/hour/user, the pace function will cause the user to execute the event every 10 minutes. If the execution time of an event is less than 10 minutes, the user sleeps until the allotted 10 minutes are up; if the event runs for more than 10 minutes, the user does not sleep at all but executes the next event immediately.

Think time is the time a typical user would delay, or think, between submitting commands. The most common think time variables are:
- think_avg, which specifies the average think duration, usually in milliseconds
- think_sd, which specifies the think time standard deviation, defining a range around the mean think time (think_avg)
Think time variables time the execution of an event by inserting a wait between commands submitted from the client to the server. This is typically done between SQL, HTTP or socket commands such as sqlexec, http_send, or sock_send requests.
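The fragment below is a minimal pace-loop sketch for one emulated user, written as a shell script rather than in any particular emulation tool's language. RATE, NUM_EVENTS and the enter_journal.sh transaction script are assumptions for the example, and date +%s is assumed to be available on the driver machine.

#!/bin/sh
# Pace-loop sketch for one emulated user (illustrative only).
RATE=${RATE:-6}                      # events per user per hour
NUM_EVENTS=${NUM_EVENTS:-10}         # events this user will execute
PACE=`expr 3600 / $RATE`             # pace interval: 6/hour -> 600 seconds

i=1
while [ $i -le $NUM_EVENTS ]
do
    START=`date +%s`
    ./enter_journal.sh $i            # the captured, parameterized transaction
    END=`date +%s`
    ELAPSED=`expr $END - $START`

    # Sleep only for the unused part of the pace interval; if the event ran
    # longer than the interval, start the next event immediately.
    if [ $ELAPSED -lt $PACE ]
    then
        sleep `expr $PACE - $ELAPSED`
    fi
    i=`expr $i + 1`
done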

Phases

A benchmark run is divided into the following five phases:
Setup: The related machines are rebooted, the database disk volumes may be rebuilt, and the database is restored and started. (This phase may also include pinning packages, procedures and functions in the shared pool.)
Ramp Up: All users are logged into the database. This phase may also include some sample transactions to warm up the database caches in memory.
Settle: Users begin to execute events after a random wait. This phase is particularly useful for dispersing user activity so that users are not in lock step with each other.


Steady State: The natural extension of the Settle phase, in which the Oracle and OS statistics are started. The statistics are stopped as soon as this phase ends. This is the critical period during which measurement is done.
Ramp Down: The statistics are stopped and report generation may start asynchronously. User processes begin to log off from the database. Any reports that need to be run from the database, such as batch completion reports from the fnd_concurrent_requests table, should be executed at this time, before the database is shut down.

[Figure: system load versus time across the Setup, Ramp-Up, Settle, Steady State, and Ramp-Down phases.]

Result Verification

Event response and/or completion times: These are defined in the acceptance criteria. Response time is the time taken to commit the data on a form. (There can be more than one response time measured per business event.) Completion time is the elapsed clock time an event takes to complete from beginning to end.
OLTP event completions: There must be a procedure to verify that the events actually completed and are committed in the database. This can be done by routines that query key tables for newly inserted rows.
Batch event completions: Batch event completions, along with the event status, are recorded in the fnd_concurrent_requests table in the database. Check for failed jobs. (A shell script that produces a batch completion time and throughput report is available from the SOATC upon request; please direct correspondence to soatc@sun.com. A minimal query sketch follows this list.)
System statistics: CPU, disk and network utilization guidelines should be met. Usually only CPU utilization guidelines are specified in the acceptance criteria (e.g., there should be an average of 30% (idle + wait-for-io) CPU during the steady state of the benchmark run).
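A minimal version of such a batch report is sketched below. The fnd_concurrent_requests column names are believed correct for the Applications releases of this era but should be verified against your installation, and the apps/apps connect string is only a placeholder.

#!/bin/sh
# Batch completion times and status summary from fnd_concurrent_requests (sketch).
sqlplus -s apps/apps <<'EOF'
set pagesize 60 linesize 132 feedback off
select request_id,
       status_code,
       round((actual_completion_date - actual_start_date) * 86400) elapsed_secs
from   fnd_concurrent_requests
where  actual_completion_date is not null
order  by elapsed_secs desc;

select status_code, count(*)          -- look for failed jobs here
from   fnd_concurrent_requests
group  by status_code;
EOF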

3 Batch-Only Benchmark Structure

Benchmarking only batch jobs is normally simpler than benchmarking OLTP events, since online transactions need not be captured and parameterized; batch jobs are simply submitted to the concurrent managers. Batch-only benchmarks, like OLTP experiments, yield results in terms of throughput and completion time. Throughput is typically throttled by changing the number of concurrent managers and/or the number of queues. Resource consumption attributable to batch-only simulations must be viewed in the context of a system's total utilization requirements.




4 Benchmark Development

Oracle Applications are now present in several forms: character-based, GUI-based, HTML-Web-based and Java-Web-based. Although programming OLTP transactions for each of these is different, certain components are similar:

1. Unique keys: Applications use unique keys that are generated through sequences. There are two types of keys: a plain unique number, and an alphanumeric key consisting of letters followed by a sequence number, e.g., SEQ_192832. The value of such a sequence is returned from a query and then used as a constant value in any number of places in the program. The sequence must be parameterized by a variable, and all instances of that number must be replaced by the variable, e.g., SEQ_+next_val, which concatenates the value of the variable next_val to the string SEQ_.
2. User data input: When the user inputs data into a form, the data appears in one or more queries of the transaction. During a benchmark run this data needs to be supplied at run time, and there are two options for obtaining it:
a. Query a table in the database to get a value. (This option is not advisable, as such database access adds artificial load to the benchmark workload.)
b. Get a value from a data file. (Queries access the database and spool values out to a data file before the benchmark starts; at run time only this file is accessed, and access may be shared.)
In either method, it is necessary that the data be chosen randomly from the valid superset.
3. Fetched data: There are cases where data fetched by a previous query is used by subsequent queries.

In the three cases just mentioned, parameters can be identified by the following technique: capture a given transaction with certain user input; capture the same transaction, following exactly the same forms and options, but with different user input; then compare the two transaction files using diff (UNIX). The differences in values indicate which values need parameterization.
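Option (b) might be implemented along the following lines. This is a sketch under assumptions: the connect string, the ra_customers table and customer_number column, and the customers.dat file name are illustrative placeholders and should be replaced with whatever the captured transaction actually needs.

#!/bin/sh
# Before the run: spool a superset of valid input values to a flat file.
sqlplus -s apps/apps <<'EOF'
set heading off feedback off pagesize 0
spool customers.dat
select customer_number from ra_customers where rownum <= 1000;
spool off
EOF

# At run time: each emulated user picks a random line from the shared file
# instead of querying the database and adding artificial load.
pick_random() {
    awk 'BEGIN { srand() } { v[NR] = $1 } END { print v[int(rand() * NR) + 1] }' $1
}

CUSTOMER=`pick_random customers.dat`
echo "using customer $CUSTOMER"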







5 Benchmark Tuning

OS Parameters


Use ISM (Intimate Shared Memory) if available. ISM is shared memory that is locked down in real physical memory, a feature that is especially useful for the Oracle buffer cache. On Solaris 2.5, make sure that the free swap available is at least two times the shared memory segment; if sufficient swap is not available, the shortfall is handled silently and ISM is not used. This requirement does not apply on Solaris 2.6. If the database server has a large amount of memory, it may be prudent to check the default system parameters that are derived as a percentage of total memory. An example is the minimum free memory of the system: on Solaris, desfree is derived from lotsfree, which is in turn derived from the total memory available, and this can result in a substantial loss of available memory on servers with large amounts of memory. A quick way to eyeball the swap requirement is sketched below.
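This is a hedged check only; the ipcs and swap options are believed correct for Solaris 2.x but should be verified on your release, and the comparison itself is left to the operator.

#!/bin/sh
# Compare the Oracle shared memory segment size against available swap.
# On Solaris 2.5, free swap should be at least twice the SGA for ISM to be used.
echo "--- shared memory segments (see the SEGSZ column) ---"
ipcs -mb
echo "--- swap summary ---"
swap -s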


Operating System Statistics


CPU: The ratio between (idle + wait-for-io) CPU and (system + user) CPU usage gives an indication of whether tuning is needed. If wait-for-io CPU usage is high, there may be a disk layout problem. Wait-for-io is counted as part of CPU idle time by many monitoring tools, because the CPU can be interrupted away from the job that is waiting on an I/O call. A high ratio of system to user CPU usage indicates system problems and is hardware dependent (i.e., the system is spending more time taking care of itself than doing user-related work).

Disk: Disk service times are the first thing to look for. The threshold for good service times depends on the disk vendor; service time is measured in milliseconds. Service times above the manufacturer-specified threshold indicate that more disks need to be added to the volume and striped. On already-striped volumes, check the service times on each disk of the stripe; dissimilar service times across the disks of a stripe indicate that the stripe width needs to be changed. Use the read and write blocks-per-transfer statistics to guide the stripe width. There is no single formula that fits all cases for disk stripe width, but the following are without doubt useful as a starting point. At the SOATC, we recommend:

Random access:     stripe width = db_block_size
Sequential access: stripe width = (db_block_size * multiblock_read_count) / (#spindles in stripe)

A typical transaction mix, utilizing a variety of OLTP and batch Oracle Applications products, will access datafiles in both random and sequential patterns. Thus a practical way to determine the initial stripe width is to calculate both values; these bound the range of stripe widths. Depending on the estimated ratio of random to sequential access on a given tablespace, choose a value along that range. (A sketch of this calculation follows this subsection.) Guidelines when creating volumes and stripes:
- Create a volume for each tablespace if UFS-based, or a volume for each datafile if using raw devices. (This makes it possible to monitor disks and striping at the tablespace level.)
- If possible, do not share disks between volumes.
- Avoid creating volumes across controllers; create volumes striped across different targets on a controller.
- Avoid mixing disks of different speeds in a disk pack. (The entire pack may be synced to run at the speed of the slowest drive.)
- Enable fast writes (i.e., read and/or write caching) on the controller if available.

Network: Check for collisions per second. (5% is considered too high.)
Memory: Check for swap-in activity. (Any swap-in activity indicates a memory shortage.)
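As a worked example of choosing a width along that range, the sketch below assumes an 8 KB block size, a multiblock read count of 8, a four-way stripe, and a tablespace estimated at 70% random access; all of these values are illustrative, and the linear interpolation is simply one reading of the guideline above, not a formula from this paper.

#!/bin/sh
# Stripe-width starting point: interpolate between the random and sequential
# recommendations according to the estimated access mix (values are examples).
DB_BLOCK_SIZE=8192
MULTIBLOCK_READ_COUNT=8
SPINDLES=4
RANDOM_PCT=70                      # estimated share of random access

RANDOM_WIDTH=$DB_BLOCK_SIZE
SEQ_WIDTH=`expr $DB_BLOCK_SIZE \* $MULTIBLOCK_READ_COUNT / $SPINDLES`

SEQ_PCT=`expr 100 - $RANDOM_PCT`
A=`expr $RANDOM_WIDTH \* $RANDOM_PCT`
B=`expr $SEQ_WIDTH \* $SEQ_PCT`
SUM=`expr $A + $B`
WIDTH=`expr $SUM / 100`

echo "random endpoint      : $RANDOM_WIDTH bytes"
echo "sequential endpoint  : $SEQ_WIDTH bytes"
echo "starting stripe width: $WIDTH bytes"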


Oracle Statistics (utlstats)


DB Cache: Check the buffer cache hit ratio and increase or decrease the db_block_buffers init.ora parameter if necessary.
Shared Pool: Check the library cache statistics for excessive reloads. There may be a few thousand reloads for SQL AREA and a few (usually < 10) reloads for TABLE/PROCEDURE; the other categories should have zero or near-zero reloads.
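The buffer cache hit ratio can also be computed directly from V$SYSSTAT between utlbstat/utlestat runs. The statistic names below are the standard ones, and the system/manager connect string is only a placeholder.

#!/bin/sh
# Buffer cache hit ratio = 1 - physical reads / (db block gets + consistent gets)
sqlplus -s system/manager <<'EOF'
set feedback off
select round(1 - phy.value / (cur.value + con.value), 4) hit_ratio
from   v$sysstat cur, v$sysstat con, v$sysstat phy
where  cur.name = 'db block gets'
and    con.name = 'consistent gets'
and    phy.name = 'physical reads';
EOF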


Latches: Check the latch free event count. This value can rise quite dramatically under larger loads and can limit scalability. Related to this are the library cache and shared pool latches, whose statistics can be seen in the latch statistics section; with a rise in latch free waits there is a fall in the library cache and shared pool hit ratios. (Watch the redo allocation latch statistics as well.)
Rollback Segments: In the rollback segment statistics section of the utlestat report, check the value of TRANS_TBL_WAITS. The values should be near zero; if they are not, consider adding rollback segments. How many? Consider the database parameter transactions_per_rollback_segment: a good value for reducing rollback segment header contention is something less than 10. From this value and the known number of active transactions at any given time, calculate the number of rollback segments required. E.g., with a load of 1000 active transactions and 10 transactions per rollback segment, we need 100 rollback segments. Create the segments with 512K or 1M initial and next extents, min extents set to 2, and pct increase set to 0. Do not set OPTIMAL. After running a benchmark, check dba_extents for each rollback segment and determine how large they are; calculate the average size and re-create the rollback segments with a new value for min extents. In batch-only benchmarks where parallelism is used, this exercise becomes a necessity. The use of multiple rollback segment tablespaces, along with good striping, will reduce rollback disk contention. Another technique to optimize the performance of batch-only benchmarks is to reduce transactions_per_rollback_segment to 1 and create one rollback segment per active transaction. (This may not be feasible for large OLTP situations.)
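As a worked version of the 1000-transaction example above, the sketch below generates the CREATE statements. The rbs_data tablespace name and the 1M extent size are assumptions, and the generated file should be reviewed before being run by a DBA.

#!/bin/sh
# Rollback segment sizing sketch: active transactions / transactions per segment.
ACTIVE_TRANSACTIONS=1000
TX_PER_RBS=10
NUM_RBS=`expr $ACTIVE_TRANSACTIONS / $TX_PER_RBS`     # 1000 / 10 = 100 segments

i=1
while [ $i -le $NUM_RBS ]
do
    echo "create rollback segment rbs$i tablespace rbs_data"
    echo "  storage (initial 1M next 1M minextents 2);"
    echo "alter rollback segment rbs$i online;"
    i=`expr $i + 1`
done > create_rbs.sql

echo "wrote `wc -l < create_rbs.sql` lines to create_rbs.sql"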

6 Benchmark Reporting
Reporting the results of a benchmark is necessary to validate the observed throughput and response times for a given hardware/software configuration or tuned attribute. Benchmark reporting is usually done for the steady state period only. The main items reported in an OLTP benchmark are:
- OS: CPU: (idle + wait-for-io) and (user + system) times. Disk: service times on individual disks as well as on volumes. Network: packets and collisions. Memory: swap-ins, which should be minimal.
- Oracle: The default utlstat scripts provided should suffice and will also document the Oracle parameters used.
- Event Completion Time & Throughput Report: Completed OLTP and batch events are listed with the number of completions per event and the minimum, maximum, average and 95th percentile completion/response times.
Below are three sample reports that may be used to tune and verify the benchmark:
1. Event Completion Report
2. OS Statistics Report
3. Disk Stripe Statistics

Event Completion Report


COMPLETION TIME & THROUGHPUT SUMMARY

*********** OLTP Events ***********
Date:       Fri Mar 7 01:45:28 PST 1997
Start Time: 00:28:40
Stop Time:  01:28:40
Duration:   3600 seconds
Mix:        1491 users
Unit:       seconds

                                      ------------- PERCENTILES -------------
COMMAND ID    NUM    MEAN  ST DEV     MIN    50th    70th    80th    90th    95th     MAX
E_accnt_in    582   19.10    0.49   17.96   19.05   19.25   19.43   19.69   19.97   21.38
E_accnt_re    578   16.46    0.44   15.47   16.41   16.65   16.79   17.06   17.25   18.65
E_add_item     63   26.96    1.76   23.77   26.88   28.12   28.75   29.12   29.65   31.31
E_asset_iq    281   25.35    0.73   23.81   25.30   25.71   25.94   26.24   26.44   29.29
E_asst_add     89   13.33    0.34   12.63   13.27   13.38   13.55   13.78   14.04   14.32
E_atp         293   17.50    0.50   16.63   17.43   17.66   17.89   18.16   18.35   19.73
E_bom_mult     63   21.86    8.52   17.38   19.06   20.69   21.90   28.30   36.30   74.05
E_bom_sing    279   27.22    2.41   23.17   26.92   29.02   29.37   30.38   30.92   33.25
E_call_log    277   27.97    0.94   26.41   27.84   28.31   28.63   29.09   29.59   33.00
E_cm_inq     1341    4.96    1.03    3.51    4.54    4.82    6.28    6.81    7.06    8.49
E_def_sch     130   17.46    0.50   16.44   17.42   17.66   17.82   18.08   18.19   19.95
E_e_fc_set    132   23.60    0.64   22.32   23.54   23.75   23.97   24.32   24.69   26.79
E_ent_inv2    126   54.98    1.71   51.89   54.94   55.77   56.33   57.28   57.70   61.29
E_ent_vend    128   37.63    1.44   35.03   37.55   38.37   38.76   39.49   40.10   42.96
E_enter_or    168  142.19    2.25  137.18  141.96  143.14  144.20  145.27  146.07  148.83
E_fc_dt_rp     86   26.37    0.62   24.80   26.30   26.68   26.92   27.18   27.49   27.91
E_fsg         175   14.88    0.45   13.98   14.86   15.08   15.25   15.42   15.76   16.18
E_i_onhand    605   11.84    1.30    9.88   11.50   12.20   12.79   13.47   14.07   19.11
E_i_subinv    298   12.59    1.41   10.86   12.14   13.05   13.51   14.11   15.04   18.62
E_ins_cust    275   30.49    0.90   28.51   30.36   30.82   31.25   31.75   32.05   33.60
E_inv_item    272   35.28    0.82   33.69   35.15   35.54   35.97   36.32   36.69   39.12
E_it_usage    297   11.41    2.30    8.81   10.66   12.08   12.99   14.22   16.30   22.60
E_itm_attr    389   13.77    1.17   11.34   13.47   14.16   14.63   15.32   15.86   19.22
E_itm_cost    286   22.68    1.37   20.40   22.35   23.07   23.55   24.67   25.49   29.02
E_jnl_inq     568   22.74    0.50   21.64   22.70   22.95   23.10   23.38   23.58   25.15
E_jour_ent    547   33.27    1.00   31.19   33.12   33.66   34.06   34.65   35.20   37.75
E_launch_m    129   37.28    0.87   35.52   37.27   37.82   38.00   38.43   38.79   39.20
E_load_mds    131   26.75    0.56   25.44   26.73   27.08   27.28   27.41   27.63   28.51
E_manl_add    268   38.47    1.15   36.45   38.23   39.11   39.56   40.00   40.23   42.18
E_o_cancel    345   31.89    7.12   21.91   30.14   35.76   37.89   41.95   45.46   53.20
E_oe_app      510   52.79    1.30   50.29   52.61   53.32   53.82   54.50   55.00   60.12
E_oe_maint     90   76.88    4.68   71.69   76.25   77.77   78.52   80.62   82.61  112.08
E_ord_resc     87   26.70    0.59   25.72   26.63   27.02   27.25   27.36   27.70   28.38
E_pay_appl    127   33.04    1.55   30.35   33.11   33.87   34.17   34.71   35.94   37.49
E_pay_inq     405    4.88    0.22    4.44    4.86    4.97    5.03    5.14    5.23    6.72
E_plan_ord     88   23.91    0.57   22.58   23.83   24.24   24.42   24.68   24.84   25.52
E_po_det_r     84   18.04    0.54   16.80   18.06   18.35   18.45   18.60   18.80   20.13
E_po_inq      276   29.88    1.69   26.88   29.60   30.42   30.94   32.49   33.71   34.95
E_prep_mas    284   26.91    0.97   25.20   26.77   27.45   27.83   28.28   28.73   29.57
E_prod_act     42   18.98    0.90   17.66   18.92   19.20   19.91   20.15   20.31   21.77
E_rpt_bom      87   30.43    0.65   29.21   30.37   30.81   30.93   31.30   31.45   32.16
E_sup_dmd     399    9.79    1.31    8.16    9.35   10.16   10.66   11.31   12.50   15.89
E_trial_ba    176   16.59    0.41   15.74   16.55   16.76   16.88   17.13   17.36   19.03
E_upd_cus     288   20.91    0.66   19.69   20.78   21.21   21.38   21.69   22.06   24.54
E_v_cycl      590   15.63    0.42   14.72   15.59   15.80   15.92   16.09   16.27   18.33
E_v_line      261   31.51   32.40   12.82   17.51   21.81   48.86   65.23   74.83  217.97
E_v_ordr      546   33.25    0.66   31.57   33.17   33.49   33.72   34.13   34.40   36.79
E_v_pick      591   14.11    1.53   12.83   13.58   13.83   14.03   16.50   17.17   22.87
E_v_ship      177   18.15    7.23   14.70   15.74   16.02   16.63   24.82   34.02   56.66
E_view_com    297   12.53    0.69   11.20   12.41   12.73   12.99   13.47   13.90   15.60
E_view_fc     130   15.29    0.43   14.32   15.27   15.48   15.60   15.86   16.04   17.03
E_view_inv    400    8.12    0.31    7.51    8.08    8.22    8.33    8.53    8.67    9.80
E_view_job    399    8.13    0.47    7.17    8.06    8.29    8.52    8.80    9.02    9.75
E_vw_bc_fc      1   18.08    0.00   18.08   18.08   18.08   18.08   18.08   18.08   18.08
E_vw_cust     371   22.84    0.65   21.64   22.75   23.09   23.37   23.67   24.02   25.60
E_wrk_ordr    294   14.63    0.41   13.58   14.60   14.78   14.90   15.17   15.38   15.99
----------  -----  ------  ------  ------  ------  ------  ------  ------  ------  ------
TOTAL       16201   22.23   17.83    3.51   17.89   26.17   31.31   36.07   51.90  217.97

************ Batch Events ************
Completion Time Unit: seconds

Function Type                     Total  Average      Min      Max  Std_Dev   90%ile
Account Analysis - Foreign Cur      150     7.08     5.79    11.57      .91     7.09
Asset Additions Report               25    95.60    63.66   271.99    52.81    96.62
Bill of Material Structure Rep       25    17.22    10.42    79.86    16.40    17.61
Check Event Alert                    95     3.58     1.16    10.42     1.56     3.55
Financial Statement Generator        55    10.10     3.47    19.68     3.34    10.07
Forecast Detail Report               30    36.34     4.63   180.56    41.81    38.84
Load/Copy/Merge MDS                   5     9.95     5.79    21.99     6.78     9.95
Order Reschedule Report              23    56.86     5.79    87.96    18.60    56.49
Planned Order Report                 27    52.04     4.63   130.79    46.30    52.81
Posting                             268     8.37     4.63    19.68     3.04     8.04
Process demand interface              2   955.44   908.57  1002.32    66.29   955.44
Production Activity Report           18     9.20     6.94    12.73     2.04     9.26
Purchase Order Detail Report         21    62.28    30.09   142.36    39.49    65.30
Trial Balance - Summary 1            66    20.76    10.42    42.82     8.09    20.36
Work Order Shortage With Reple       75    29.38     9.26   503.47    73.77    31.05
-------------------------------
Batch Throughput: 885
-------------------------------

OS Statistics
Hostid: 80800b78    Hostname: "fuji"    Version: 6.00    Command:

Elapsed Time Statistics
  3601.82  time (seconds)   100.00 %
    36.39  idle time          1.01 %
  2233.00  user time         62.00 %
   284.29  system time        7.89 %
  1048.14  wait time         29.10 %
Start time: Fri Mar 7 00:28:45 1997

CPU Stats   idle%   user%  system%   wait%  Total%  Total (secs)
CPU 0         0.9    63.7     10.4    25.0   100.0        3601.8
CPU 1         1.1    57.2     10.6    31.1   100.0        3601.8
CPU 2         1.2    56.2      8.4    34.3   100.0        3601.8
CPU 3         1.2    57.1      7.3    34.4   100.0        3601.8
CPU 4         0.9    66.1      6.2    26.8   100.0        3601.8
CPU 5         1.0    63.2      6.3    29.4   100.0        3601.8
CPU 6         0.8    68.1      5.7    25.3   100.0        3601.8
CPU 7         1.0    62.0      8.1    28.9   100.0        3601.8
CPU 8         0.8    68.7      5.8    24.7   100.0        3601.8
CPU 9         1.1    57.6     10.2    31.1   100.0        3601.8
Totals       10.1   620.0     78.9   291.0  1000.0       36018.2

Average Load Statistics 3600 secs - monitoring interval 2.87 avg jobs waiting on I/O 1.39 avg runnable processes 0.63 avg runque occupancy 0.00 avg swapped jobs 0.00 avg swap device occupancy Average Swap Statistics in Pages 2093326.25 avg freemem 461509.72 avg reserved swap 422726.66 avg allocated swap 1999505.38 avg unreserved swap 2038288.38 avg unallocated swap Sysinfo Statistics (per second) 0.00 phys block reads 0.69 phys block writes (sync+async) 1.95 logical block reads 2.32 logical block writes 314.97 raw I/O reads 11.20 raw I/O writes 6704.60 context switches 1231.01 traps 3595.66 device interrupts 5806.58 system calls 2042.29 read+readv syscalls 1800.32 write+writev syscalls 259651.56 rdwr bytes read 491656.26 rdwr bytes written 0.62 forks + vforks 0.32 execs 0.00 msgrcv()+msgsnd() calls 247.82 semop() calls 34.86 pathname lookups (namei) 1.14 ufs_iget() calls 0.00 inodes taken w/ attach pgs 0.00 inodes taken w/ no attach pgs 13.27 directory blocks read 0.00 inode table overflows 0.00 file table overflows 0.00 proc table overflows 100 % bread hits 2323.32 intrs as threads(below clock) 15.59 intrs blkd/released (swtch) 2300.98 times idle() ran (swtch) 0.03 rw reader fails (swtch) 0.78 rw writer fails (swtch) 1115.14 involuntary context switches 2241.17 xcalls to other cpus 0.71 thread_create()s 1430.76 cpu migrations by threads 1010.94 failed mutex enters 0.00 times module loaded 0.00 times module unloaded 0.63 physical block writes (async) Vminfo Statistics (per second) 0.04 page reclaims (w/ pageout) 0.01 page reclaims from free list 0.00 pageins 0.00 pages paged in 0.00 pageouts 0.04 pages paged out 0.00 swapins 0.00 pages swapped in 0.00 swapouts 0.00 pages swapped out 59.37 ZFOD pages 0.01 pgs freed by daemon/auto 0.00 pgs xmnd by pgout daemon 0.00 revs of page daemon hand 0.00 minor pgflts: hat_fault 164.01 minor pgflts: as_fault 0.00 major page faults 12.45 copy-on-write faults 17.97 protection faults 356.14 faults due to s/w locking req 0.32 kernel as as_flt()s 0.00 times pager scheduled Directory Name Cache Statistics (per second) 115.33 cache hits ( 93 %) 8.24 cache misses ( 6 %) 0.02 enters into cache 0.00 enters when already cached


0.00 long names tried to enter 0.00 long names tried to look up 0.00 LRU list empty 0.00 purges of cache Segment Map Operations (per second) 27.32 number of segmap_faults 0.00 number of segmap_faultas 45.43 number of segmap_getmaps 0.00 getmaps that reuse a map 37.73 getmaps that reclaim 7.70 getmaps reusing a slot 0.00 releases that are async 0.00 releases that write 0.02 releases that free 0.02 releases that abort 0.00 releases with dontneed set 0.00 releases with no other action 0.02 # of pagecreates Buffer Cache Statistics (per second) 1.98 total buf requests 1.98 buf cache hits 0.00 times buf was alloced 0.00 times had to sleep for buf 0.00 times buf locked by someone 0.00 times dup buf found Inode Cache Statistics (per second) 1.13 hits 0.00 misses 0.01 mallocs 0.00 frees 0.00 puts_at_frontlist 0.00 puts_at_backlist 0.00 dnlc_looks 0.00 dnlc_purges Char I/O Statistics (per second) 0.00 terminal input chars 0.06 terminal output chars Network Statistics (per second) Net Ipkts Ierrs Opkts Oerrs Colls Dfrs Rtryerr hme0 287 0 292 0 26 0 0 hme1 1 0 1 0 0 0 0 hme2 627 0 535 0 5 0 0 hme3 626 0 534 0 5 0 0 hme4 625 0 532 0 0 0 0 lo0 0 0 0 0 0 0 0 Disk I/O Statistics (per second) Disk util% xfer/s rds/s wrts/s rdb/xfr wrb/xfr wtqlen svqlen srv-ms sd0 0.5 0.3 0.0 0.3 4827 8911 0.00 0.03 89.0 sd1 0.2 0.1 0.0 0.1 4608 5277 0.00 0.00 19.9 sd2 0.2 0.1 0.0 0.1 5325 4651 0.00 0.00 29.5 sd3 0.2 0.1 0.0 0.1 6144 7393 0.00 0.00 30.6 sd4 0.2 0.1 0.0 0.1 6144 6080 0.00 0.00 25.3 ssd0 2.4 2.3 1.9 0.4 8192 8192 0.00 0.03 11.0 ssd1 0.9 0.8 0.5 0.3 8192 8192 0.00 0.01 12.4 ssd10 1.8 1.6 1.5 0.1 8192 8192 0.00 0.02 11.1 ssd100 0.0 0.0 0.0 0.0 0 0 0.00 0.00 0.0 ssd101 3.7 4.7 4.6 0.1 8192 8192 0.00 0.04 8.2 ssd102 17.3 30.3 29.9 0.4 9211 8192 0.00 0.21 7.0 ssd103 17.0 29.4 29.1 0.4 9453 8192 0.00 0.20 6.8 ssd104 8.9 9.5 9.3 0.2 8790 8192 0.00 0.10 10.2 ssd105 8.6 9.2 9.1 0.2 8811 8192 0.00 0.09 10.2 ssd106 2.0 2.4 2.2 0.2 8192 8192 0.00 0.02 8.4 ssd107 0.0 0.0 0.0 0.0 8192 8192 0.00 0.00 5.4 <etc>

Disk Stripe Statistics


-------------------------------------------------------------------------------
Volume: ar_idx

TY NAME         ASSOC      KSTATE   LENGTH   PLOFFS  STATE   TUTIL0  PUTIL0
v  ar_idx       fsgen      ENABLED  4096000  -       ACTIVE  -       -
pl ar_idx-01    ar_idx     ENABLED  4097920  -       ACTIVE  -       -
sd soatcd04-01  ar_idx-01  ENABLED  1024480  0       -       -       -
sd soatcd05-01  ar_idx-01  ENABLED  1024480  0       -       -       -
sd soatcd03-01  ar_idx-01  ENABLED  1024480  0       -       -       -
sd soatcd06-01  ar_idx-01  ENABLED  1024480  0       -       -       -

Disk I/O Statistics (per second)
Disk    util%  xfer/s  rds/s  wrts/s  rdb/xfr  wrb/xfr  wtqlen  svqlen  srv-ms
ssd1      0.9     0.8    0.5     0.3     8192     8192    0.00    0.01    12.4
ssd11     1.0     0.8    0.5     0.3     8192     8192    0.00    0.01    12.7
ssd20     1.0     0.8    0.6     0.2     8192     8192    0.00    0.01    12.3
ssd21     1.2     1.1    0.7     0.4     8192     8192    0.00    0.01    12.1

-------------------------------------------------------------------------------
Volume: ar_tbl

TY NAME         ASSOC      KSTATE   LENGTH   PLOFFS  STATE   TUTIL0  PUTIL0
v  ar_tbl       fsgen      ENABLED  4096000  -       ACTIVE  -       -
pl ar_tbl-01    ar_tbl     ENABLED  4097920  -       ACTIVE  -       -
sd soatcd07-01  ar_tbl-01  ENABLED  1024480  0       -       -       -
sd soatcd10-01  ar_tbl-01  ENABLED  1024480  0       -       -       -
sd soatcd08-01  ar_tbl-01  ENABLED  1024480  0       -       -       -
sd soatcd09-01  ar_tbl-01  ENABLED  1024480  0       -       -       -

Disk I/O Statistics (per second)
Disk    util%  xfer/s  rds/s  wrts/s  rdb/xfr  wrb/xfr  wtqlen  svqlen  srv-ms
ssd2      1.2     1.2    1.1     0.2     9158     8192    0.00    0.01     9.8
ssd3      1.2     1.3    1.1     0.2     9120     8192    0.00    0.01     9.8
ssd12     1.0     1.1    0.9     0.1     9081     8192    0.00    0.01     9.8
ssd22     1.0     1.1    0.9     0.1     9143     8192    0.00    0.01    10.0

-------------------------------------------------------------------------------
Volume: bom_idx

TY NAME         ASSOC       KSTATE   LENGTH   PLOFFS  STATE   TUTIL0  PUTIL0
v  bom_idx      fsgen       ENABLED  4096000  -       ACTIVE  -       -
pl bom_idx-01   bom_idx     ENABLED  4097920  -       ACTIVE  -       -
sd soatcd19-01  bom_idx-01  ENABLED  1024480  0       -       -       -
sd soatcd17-01  bom_idx-01  ENABLED  1024480  0       -       -       -
sd soatcd20-01  bom_idx-01  ENABLED  1024480  0       -       -       -
sd soatcd18-01  bom_idx-01  ENABLED  1024480  0       -       -       -

Disk I/O Statistics (per second)
Disk    util%  xfer/s  rds/s  wrts/s  rdb/xfr  wrb/xfr  wtqlen  svqlen  srv-ms
ssd6      1.5     1.3    1.2     0.0     8192     8192    0.00    0.01    11.6
ssd15     1.5     1.3    1.3     0.0     8192     8192    0.00    0.01    11.3
ssd16     1.5     1.4    1.3     0.0     8192     8192    0.00    0.02    11.2
ssd25     1.6     1.4    1.4     0.0     8192     8192    0.00    0.02    11.3

-------------------------------------------------------------------------------
Volume: bom_tbl

TY NAME         ASSOC       KSTATE   LENGTH   PLOFFS  STATE   TUTIL0  PUTIL0
v  bom_tbl      fsgen       ENABLED  4096000  -       ACTIVE  -       -
pl bom_tbl-01   bom_tbl     ENABLED  4097920  -       ACTIVE  -       -
sd soatcd22-01  bom_tbl-01  ENABLED  1024480  0       -       -       -
sd soatcd23-01  bom_tbl-01  ENABLED  1024480  0       -       -       -
sd soatcd21-01  bom_tbl-01  ENABLED  1024480  0       -       -       -
sd soatcd24-01  bom_tbl-01  ENABLED  1024480  0       -       -       -

Disk I/O Statistics (per second)
Disk    util%  xfer/s  rds/s  wrts/s  rdb/xfr  wrb/xfr  wtqlen  svqlen  srv-ms
ssd7      9.3    12.2   10.3     1.9    13510     8192    0.00    0.15    12.3
ssd17     9.4    12.3   10.5     1.8    13552     8192    0.00    0.15    11.9
ssd26     9.4    12.1   10.3     1.8    13705     8192    0.00    0.15    12.5
ssd27     9.4    12.2   10.3     1.9    13597     8192    0.00    0.15    12.6
-------------------------------------------------------------------------------
<etc>


