The time-centric view focuses on minimizing the end-to-end response time.

The resource-oriented approach attempts to minimize resource consumption, so that throughput and the number of concurrent users can be maximized.

The overall objective of performance optimization is to find the optimal compromise between fast run time and low resource consumption.

Resources relevant for performance include:
• CPU time
• DB time
• Memory
• Network bandwidth
• Disk space

Means …
Users judge system performance subjectively and often perceive it differently from the objective performance.

Depends on:
• What users see
  – From current experience (observation)
• What users know
  – From prior experience (expectations)
  – From system feedback (information)
• How soon users can continue with their work
  – How soon they can start reading
  – How soon they can start interacting (clicking, typing, …)
• How successful users are in completing their tasks
• How well the interaction conforms to events in the real world
  – Continuity: smooth movements, appearances/disappearances of objects, transitions, …

Conclusions
• Improving perceived performance may even lead to worse objective performance.
• But: good perceived performance leads to higher user motivation and thus better overall performance.
• Tolerance for waiting times can be increased by applying perceived-performance tricks, for example by providing timely feedback (see the sketch below).
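As an illustration of the last point, the following is a minimal, hypothetical Java Swing sketch; it is not part of the course material, and the class, button, and label names are invented for the example. It shows the basic pattern: acknowledge the user's action immediately (well within the 0.1–1 sec range), then run the slow work off the Event Dispatch Thread so the UI stays responsive.

import javax.swing.*;

// Hypothetical example: give immediate feedback, then do the slow work in the background.
public class TimelyFeedbackDemo {

    static void onSearchClicked(JLabel status, JButton searchButton) {
        searchButton.setEnabled(false);
        status.setText("Searching...");          // timely feedback, shown immediately

        new SwingWorker<String, Void>() {
            @Override
            protected String doInBackground() throws Exception {
                Thread.sleep(3000);              // stands in for a slow back-end call
                return "42 results found";
            }

            @Override
            protected void done() {
                try {
                    status.setText(get());       // final result replaces the interim feedback
                } catch (Exception e) {
                    status.setText("Search failed");
                }
                searchButton.setEnabled(true);
            }
        }.execute();
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Timely feedback");
            JButton search = new JButton("Search");
            JLabel status = new JLabel("Ready");
            search.addActionListener(e -> onSearchClicked(status, search));
            JPanel panel = new JPanel();
            panel.add(search);
            panel.add(status);
            frame.add(panel);
            frame.pack();
            frame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
            frame.setVisible(true);
        });
    }
}

Even though the back-end call still takes three seconds, the user sees a reaction at once, which is exactly the kind of trick that increases tolerance for waiting times.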
About the Graphic
This graphic combines references from many authors, most of which refer to the original source by Robertson, Card, and Mackinlay.
Originally, only the time ranges 0.1 sec, 1 sec, and 10 sec were referred to. The PeP team added the 3 sec and the >15 sec ranges (data from Shneiderman & Plaisant).
The time ranges are only meant as easy-to-remember heuristics. In some cases, ranges are also given. However, the ranges are not connected.
Extension for PeP Evaluation
For the PeP evaluation, we extended the time ranges so that they are connected. This allows us to assign observed response time data to the time ranges, which is the basis for the PeP evaluation (see the sketch after the lists below).
Task Level
At the task level, different types of tasks (or corresponding UI events) are given for reference.
User Level
At the user level, user reactions that are critical for evaluating the system's performance are listed:
• 0.1 sec: directness, perception of cause-and-effect relations, animations, etc.
• 1 sec: dialog, feeling of appropriateness
• 3 sec: perception of slowness, focus starts to wane
• 10 sec: focus gets lost
• 15 sec: users get annoyed
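How observed response times can be assigned to connected ranges is sketched below in Java. This is a hypothetical illustration only: the class name, the range names, and the exact boundaries are assumptions derived from the heuristics above, not the official PeP definition.

// Hypothetical sketch: classify an observed response time into one of the
// contiguous time ranges so that it can be counted for the evaluation.
final class PepRangeClassifier {

    enum PepRange { UP_TO_0_1_SEC, UP_TO_1_SEC, UP_TO_3_SEC, UP_TO_10_SEC, UP_TO_15_SEC, ABOVE_15_SEC }

    static PepRange classify(double responseTimeSec) {
        if (responseTimeSec <= 0.1)  return PepRange.UP_TO_0_1_SEC;  // directness, cause-and-effect
        if (responseTimeSec <= 1.0)  return PepRange.UP_TO_1_SEC;    // dialog, feels appropriate
        if (responseTimeSec <= 3.0)  return PepRange.UP_TO_3_SEC;    // slowness becomes noticeable
        if (responseTimeSec <= 10.0) return PepRange.UP_TO_10_SEC;   // focus starts to get lost
        if (responseTimeSec <= 15.0) return PepRange.UP_TO_15_SEC;   // focus is lost
        return PepRange.ABOVE_15_SEC;                                 // users get annoyed
    }

    public static void main(String[] args) {
        double[] observedSeconds = {0.08, 0.7, 2.4, 7.5, 12.0, 22.0};  // sample measured response times
        for (double t : observedSeconds) {
            System.out.printf("%.2f s -> %s%n", t, classify(t));
        }
    }
}

Because the extended ranges are connected, every observed response time falls into exactly one range, which is what makes the counting for the PeP evaluation possible.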
Requirements for the Product Standard Performance include:
• Architecture Requirements
Information on how best to adhere to the architecture requirements (programming guidelines) defined by the product standard Performance is available in training class ZDS407.
• Quality Requirements
Adherence to the performance measurement KPIs defined by the standard is assessed by comparing the measurement results of your current release with equivalent results of a previous release.

To enable comparisons between releases, the Performance Result Repository (PRR) contains three tab strips:
• Current measurement results
Collects the measurement results for each test case of the scenario or component release being evaluated.
• Results of previous release
Holds the measurement results of the equivalent test cases of a previous release. (This worksheet is optional for first releases.)
• Diff. between both releases %
On this tab strip, the difference (in percent) between the values entered in the "Current measurement results" and "Results of previous release" tab strips is calculated. It also indicates whether a possible performance degradation of the current release, when compared with the previous release, exceeds the limits set by the Board. (A small sketch of this calculation follows below.)
Training class ZDS410 describes the Performance Result Repository in more detail.
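To make the arithmetic behind the third tab strip concrete, here is a minimal, hypothetical Java sketch. The KPI names, values, and the 10 % limit are invented placeholders; the actual limits are set by the Board, and the PRR itself is a repository of results, not a program.

import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the "Diff. between both releases %" calculation.
final class ReleaseComparison {

    static final double DEGRADATION_LIMIT_PERCENT = 10.0;  // assumed limit, for illustration only

    // Percentage change of the current value relative to the previous release.
    static double diffPercent(double current, double previous) {
        return (current - previous) / previous * 100.0;
    }

    public static void main(String[] args) {
        // KPI name -> { current release value, previous release value } (invented sample data)
        Map<String, double[]> kpis = new LinkedHashMap<>();
        kpis.put("CPU time (ms)", new double[]{540.0, 500.0});
        kpis.put("DB time (ms)",  new double[]{210.0, 230.0});
        kpis.put("Memory (MB)",   new double[]{128.0, 120.0});

        for (Map.Entry<String, double[]> kpi : kpis.entrySet()) {
            double diff = diffPercent(kpi.getValue()[0], kpi.getValue()[1]);
            boolean exceedsLimit = diff > DEGRADATION_LIMIT_PERCENT;
            System.out.printf("%-15s %+7.2f %% %s%n",
                    kpi.getKey(), diff, exceedsLimit ? "-> exceeds limit" : "");
        }
    }
}

A positive difference means the current release consumes more than the previous one; only differences above the configured limit are flagged as possible degradations.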

To obtain the KPI values mandated by the quality requirements and to check adherence to the
architecture requirements, several tools are available.

There are many tools that can partially or fully collect the performance counters required by the SAP Performance Standard. Different tools apply to different product versions, and some are specific to either a Java server environment or a Java standalone program.

