
Performance Measurement, Monitoring and Evaluation

Fakhar Uddin, Umer Farooq

We Will Cover

- Introduction
- Trends affecting performance issues
- Purpose of performance monitoring
- Evaluation in different phases
- Performance measures
- Performance evaluation techniques: timings, instruction mixes, kernels, analytic models, benchmarks, synthetic programs, simulation, and monitors (hardware & software)

Introduction

Because an operating system is primarily a resource manager, it is important for operating systems designers to be able to determine how effectively a particular system manages its resources. In the early years, performance evaluation focused only on hardware, because hardware was the dominant factor in price; today, hardware is less expensive than software. Software often causes poor performance, even on systems with very powerful hardware. Therefore it is important to monitor software performance as well.

Trends Affecting Performance Issues

- Hardware
- Software
- CPU utilization
- I/O device utilization
- Networking
- Distributed processing

Purpose of Performance Monitoring

Selection Evaluation:

Here the evaluator must decide whether obtaining a computer system from a particular vendor is appropriate.

Performance Projection:

Here the goal of the evaluator is to estimate the performance of a system that does not yet exist. It may be completely new computer hardware or software.

Performance Monitoring:

The evaluator accumulates the performance data on an existing system or component to be sure that the system is meeting its performance goals.

Evaluation in different Phases

Before the Development Phase:

- Predict the nature of the applications that will run on the system.
- Estimate the anticipated workloads those applications must handle.

During the Development Phase:

- Determine the best hardware organization.
- Choose resource management strategies.
- Determine whether the evolving system meets its objectives.

After the Development Phase:

- Concerned with obtaining optimal performance.
- System tuning.

Performance Measures
By performance we mean the manner in which, or the efficiency with which, a computer system meets its goals.

Absolute Measures:

e.g., the number of jobs processed per hour.

Relative Measures:

e.g., performance compared with that of a reference system.

Quantifiable Measures:

e.g., disk accesses per minute.

Non-Quantifiable Measures:

e.g., ease of use.

Performance Measures

Turnaround Time:

The time from when a job is submitted until it is returned to the user. It includes time spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.

Response Time:

The time from when a user presses the ENTER key until the system begins typing a response.

System Reaction Time:

The time from when a user presses the ENTER key until the first time slice of service is given to that user's request.
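A minimal sketch of how these three time measures relate, assuming per-job timestamps are available (the field names below are illustrative, not from the original):

```python
from dataclasses import dataclass

@dataclass
class Job:
    submitted: float      # time the job was submitted (seconds)
    first_service: float  # time the first CPU time slice was granted
    first_output: float   # time the system began producing a response
    completed: float      # time the job was returned to the user

def turnaround_time(job: Job) -> float:
    # Submission until the job is returned to the user.
    return job.completed - job.submitted

def response_time(job: Job) -> float:
    # ENTER pressed until the system begins typing a response.
    return job.first_output - job.submitted

def system_reaction_time(job: Job) -> float:
    # ENTER pressed until the first time slice is given to the request.
    return job.first_service - job.submitted

job = Job(submitted=0.0, first_service=0.4, first_output=1.1, completed=5.0)
print(turnaround_time(job))       # 5.0
print(response_time(job))         # 1.1
print(system_reaction_time(job))  # 0.4
```

Note that system reaction time is always at most the response time, which in turn is at most the turnaround time.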

Performance Measures

Variance of Response Times:

The variance of response times is a measure of dispersion. A small variance indicates that the various response times experienced by users are relatively close to the mean.

Throughput:

The amount of work performed per unit time.

Workload:

The measure of the amount of work that has been submitted to the system, which the system normally must process in order to function acceptably.

Capacity:

The measure of the maximum throughput a system may have.

Utilization:

The fraction of time that a resource is in use. Even though a high percentage of utilization may seem desirable, it may be the result of inefficient usage.
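A minimal sketch of how the mean and variance of response times, throughput, and utilization can be computed from observed data (all sample numbers are illustrative):

```python
import statistics

# Illustrative samples (seconds): response times observed for user requests.
response_times = [0.8, 1.1, 0.9, 1.0, 1.2, 0.9]

mean_rt = statistics.mean(response_times)
var_rt = statistics.variance(response_times)  # sample variance: dispersion about the mean

# Throughput: jobs completed per unit time over an observation window.
jobs_completed = 120
window_seconds = 60.0
throughput = jobs_completed / window_seconds  # jobs per second

# Utilization: fraction of the window during which the resource was busy.
busy_seconds = 48.0
utilization = busy_seconds / window_seconds

print(f"mean response time: {mean_rt:.3f}s, variance: {var_rt:.4f}")
print(f"throughput: {throughput:.1f} jobs/s, utilization: {utilization:.0%}")
```

A small `var_rt` here would indicate consistent, predictable response times, which users often value as much as a low mean.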

Performance Evaluation Techniques

There are a number of techniques used to evaluate the performance of a system.


- Timings
- Instruction mixes
- Kernel programs
- Analytic models
- Benchmarks
- Synthetic programs
- Simulation
- Performance monitoring

Performance Evaluation Techniques

Timings:

Timings are useful to perform quick comparisons of hardware. The number of additions per second has often been used for timing comparisons.
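As a rough sketch, an additions-per-second timing might be taken as follows (written in Python, a high-level language, so this mostly measures interpreter and loop overhead rather than raw hardware speed; the duration is illustrative):

```python
import time

def additions_per_second(duration: float = 0.25) -> float:
    """Count how many integer additions complete in roughly `duration` seconds."""
    count = 0
    total = 0
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        total += 1          # the addition being timed
        count += 1
    return count / duration

rate = additions_per_second()
print(f"~{rate:,.0f} additions per second")
```

Such a single-instruction rate is only useful for quick, coarse comparisons; the techniques that follow give progressively more representative results.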

Instruction Mixes:

Instruction mixes use a weighted average of various instruction timings, with the weights chosen to reflect the instruction frequencies of a particular application.
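As an illustration, a mix's weighted mean instruction time can be computed from per-instruction timings and workload weights (all numbers below are illustrative, not taken from any real machine):

```python
# Hypothetical instruction timings (microseconds) and workload weights
# in the style of a classic instruction mix; all values are illustrative.
timings_us = {"add": 0.1, "multiply": 0.4, "load_store": 0.2, "branch": 0.15}
weights =    {"add": 0.45, "multiply": 0.10, "load_store": 0.30, "branch": 0.15}

assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1

# Weighted mean instruction time for this mix.
mix_time_us = sum(weights[i] * timings_us[i] for i in timings_us)
print(f"weighted mean instruction time: {mix_time_us:.4f} microseconds")
print(f"implied speed: {1.0 / mix_time_us:.2f} MIPS")
```

Two machines can then be compared by evaluating the same mix against each machine's instruction timings.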

Kernel Programs:

A kernel program is a typical program that might be run at an installation. Using the manufacturer's instruction timing estimates, the kernel program is timed for a given machine. Comparisons between machines are then possible.

Performance Evaluation Techniques

Applicability of each evaluation technique to each purpose of evaluation (higher numbers indicate greater applicability; "-" means not applicable):

                      Selection Evaluation        Performance Projection       Performance Monitoring
                      (system exists elsewhere)   (system does not yet exist)  (system in operation)
  Evaluation          New        New              Design new   Design new      Reconfigure  Change
  technique           hardware   software         hardware     software        hardware     software
  ----------------------------------------------------------------------------------------------------
  Timings             1          -                1            -               -            -
  Mixes               1          -                1            -               -            -
  Kernels             2          1                2            1               -            -
  Models              2          1                2            1               2            -
  Benchmarks          3          3                -            2               2            2
  Synthetic programs  3          3                2            2               2            2
  Simulation          3          3                3            3               3            3
  Monitor (hardware
  and software)       2          2                2            2               3            3

Performance Evaluation Techniques

Analytic Models:

Analytic models are mathematical representations of computer systems or of components of computer systems. Models based on queueing theory and Markov processes are common examples.
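The classic M/M/1 single-server queue is one such analytic model; its standard formulas can be evaluated directly (the arrival and service rates below are illustrative):

```python
# M/M/1 queueing model: Poisson arrivals at rate lam, exponentially
# distributed service times at rate mu (both per second).
def mm1(lam: float, mu: float):
    assert lam < mu, "queue is unstable unless arrival rate < service rate"
    rho = lam / mu        # utilization of the server
    L = rho / (1 - rho)   # mean number of jobs in the system
    W = 1 / (mu - lam)    # mean time in system (response time)
    Wq = rho / (mu - lam) # mean waiting time in the queue
    return rho, L, W, Wq

rho, L, W, Wq = mm1(lam=8.0, mu=10.0)
print(f"utilization={rho:.2f}, jobs in system={L:.2f}, "
      f"response time={W:.2f}s, wait={Wq:.2f}s")
```

Note how rapidly the response time grows as the arrival rate approaches the service rate; this is the analytic counterpart of the saturation behavior described later.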

Benchmarks:

A benchmark is a real program that the evaluator actually submits for execution on the system being evaluated. The evaluator knows the performance characteristics of the benchmark on existing systems, so when it runs on a new system, the evaluator may draw meaningful conclusions.

Synthetic Programs:

Synthetic programs are real programs that have been custom designed to exercise specific features of a computer system. They are particularly useful when benchmarks exercising those features don't already exist.

Performance Evaluation Techniques

Simulation:

Simulation is a technique in which the evaluator develops a computerized model of the system being evaluated. The model is then run on a computer to reproduce the behavior of the system being evaluated.
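As a minimal sketch, a single-server FIFO queue can be simulated event by event and the result compared with the analytic prediction (the rates, job count, and seed below are illustrative):

```python
import random

def simulate_queue(lam: float, mu: float, n_jobs: int = 50_000, seed: int = 1):
    """Event-by-event simulation of a single-server FIFO queue
    with exponential interarrival and service times."""
    rng = random.Random(seed)
    clock = 0.0          # time of the current arrival
    server_free = 0.0    # time at which the server next becomes idle
    total_response = 0.0
    for _ in range(n_jobs):
        clock += rng.expovariate(lam)    # next arrival
        start = max(clock, server_free)  # wait if the server is busy
        service = rng.expovariate(mu)
        server_free = start + service
        total_response += server_free - clock
    return total_response / n_jobs       # mean time in system

mean_w = simulate_queue(lam=8.0, mu=10.0)
print(f"simulated mean response time: {mean_w:.3f}s "
      f"(M/M/1 theory predicts 0.500s)")
```

Agreement between the simulated mean and the analytic value builds confidence in the model before it is used to predict configurations that cannot be measured directly.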

Performance Monitoring:

Performance monitoring is the collection and analysis of information regarding system performance for existing systems. It is useful in determining how a system is performing in terms of throughput, response time, predictability, etc. Performance monitoring can locate bottlenecks easily and can help management decide how to improve performance.

Bottlenecks and Saturation


A resource becomes a bottleneck, limiting the overall performance of the system, when it cannot handle the work being routed to it. Resources operating near their capacity tend to become saturated; that is, processes competing for the attention of the resource begin to interfere with one another. How can bottlenecks be detected? Each resource's request queue should be monitored. When a queue begins to grow quickly, the arrival rate of requests at that resource must be larger than its service rate, and the resource has become saturated. Bottlenecks should be removed to increase performance.
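The queue-monitoring idea above can be sketched as a simple growth detector over periodic queue-length samples (the window size and growth threshold are illustrative):

```python
from collections import deque

def is_saturating(queue_lengths, window: int = 5, min_growth: int = 3) -> bool:
    """Flag a resource whose request-queue length grew steadily over the
    last `window` samples: steady growth means the arrival rate exceeds
    the service rate, i.e. the resource is saturating."""
    recent = list(queue_lengths)[-window:]
    if len(recent) < window:
        return False  # not enough samples yet
    steadily_growing = all(b >= a for a, b in zip(recent, recent[1:]))
    return steadily_growing and (recent[-1] - recent[0]) >= min_growth

# Periodic queue-length samples for one resource (illustrative values).
samples = deque([2, 3, 5, 7, 10], maxlen=32)
print(is_saturating(samples))  # True: the queue grew from 2 to 10
```

In practice a monitor would keep one such sample window per resource and raise an alert, or trigger rebalancing, for any resource that trips the detector.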


Questions?
