Figure 1.1 Design metric competition among Power, Performance, Size and NRE cost: improving one may worsen others.
Time-to-market and ease of use are among the metrics that affect cost
and price. Sometimes a combined cost-performance metric is more important
than cost or performance considered separately (see the short worked example
after this list).
3. Power Consumption Metrics: Metrics of this group measure the power
consumption of the system. These metrics are gaining importance in
many fields as battery-powered mobile systems become prevalent and
energy conservation becomes more significant.
4. System Effectiveness Metrics: In many applications, such as military
applications, how effectively the system achieves its intended purpose is
more important than its cost. Reliability, maintainability, serviceability,
design adequacy and flexibility are typical metrics of this group.
5. Others: This group includes metrics that may guide the designer in
selecting from the many off-the-shelf components that can do the job.
Ease of use, software support, safety and the availability of second-source
suppliers are some of the metrics of this group.
Definitions:
Latency or Response time: The time between the start and the end of a task's
execution. For example, producing one car takes 4 hours.
Throughput: The number of tasks that can be processed per unit time. For
example, an assembly line may be able to produce 6 cars per day.
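Note that throughput is not simply the reciprocal of latency, because tasks
may overlap. Using the figures above (and assuming, for illustration, a
12-hour working day):

Latency of one car = 4 hours
Throughput = 6 cars per 12-hour day = 0.5 cars per hour

If cars were built strictly one after another, the throughput would be only
1/4 = 0.25 cars per hour; the assembly line doubles the throughput by working
on several cars at once, without changing the 4-hour latency of any single car.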
The main concern in both cases, throughput and response time, is time.
The computer that performs the same amount of work in the least time is the
fastest. If we are speaking of a single task, we are speaking of response
time; if we are speaking of executing many tasks, we are speaking of
throughput. The latency metric is directly related to the execution time,
while throughput measures the rate of completing a given task. We can
expect many throughput metrics, depending on how the task is defined: the
task may be an instruction, as in the case of MIPS (see 1.3.2.2), or a
floating-point operation, as in the case of MFLOPS (see 1.3.2.3), or any
other unit of work. Besides execution time and rate metrics, there is a wide
variety of more specialised metrics used as indices of computer system
performance. Unfortunately, as we shall see later, many of these metrics are
often used but interpreted incorrectly.
There are many different metrics that have been used to describe the per-
formance of a computer system. Some of these metrics are commonly used
throughout the field, such as MIPS and MFLOPS (which are defined later in
this chapter), whereas others are introduced by manufacturers and/or designers
for new situations as they are needed. Experience has shown that not all of
these metrics are ‘good’ in the sense that sometimes using a particular metric
out of context can lead to erroneous or misleading conclusions. Consequently,
it is useful to understand the characteristics of a ‘good’ performance metric.
This understanding will help when deciding which of the existing perfor-
mance metrics to use for a particular situation and when developing a new
performance metric.
A performance metric that satisfies all of the following requirements is
generally useful to a performance analyst in allowing accurate and detailed
comparisons of different measurements.
Many measures have been devised in an attempt to create standard, easy-to-use
measures of computer performance. One consequence has been that simple
metrics, valid only in a limited context, have been heavily misused, often
producing misleading conclusions, distorted results and incorrect
interpretations. Clock rate, MIPS and MFLOPS are the best-known examples
of such simple performance metrics; using any of them can lead to misleading
and sometimes incorrect conclusions. These three metrics belong to the same
family of performance metrics, which measure performance as the rate of
occurrence of an event. In Section 1.3.2.8 we give an example that
highlights the danger of using the wrong metric (mainly means-based metrics,
or using a rate as the measure) to reach a conclusion about computer
performance. In most cases it is better to use metrics that take the
execution time as the basis for measuring performance.
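A small worked sketch (with invented figures) shows how clock rate alone can
mislead. Suppose machines A and B run the same programme of 100 × 10^6
instructions:

Machine A: clock rate = 4 GHz, average CPI = 2, so CPU time = (100 × 10^6 × 2)/(4 × 10^9) = 0.05 s
Machine B: clock rate = 2.5 GHz, average CPI = 1, so CPU time = (100 × 10^6 × 1)/(2.5 × 10^9) = 0.04 s

Machine A has the higher clock rate, yet machine B finishes the same work
sooner; the rate metric points to the wrong winner, while execution time
does not.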
Consider, for example, calculating MIPS for two computers with the same
instruction set, where one has special hardware to execute floating-point
operations and the other uses software routines to execute them. The
floating-point hardware needs more clock cycles to implement one floating-point
operation than are needed to implement an integer operation. This increases
the average value of the CPI (cycles per instruction) of the machine, which
in turn, according to equation (1.1), results in a lower MIPS rating. On the
other hand, the software routines that were needed to execute a floating-point
operation consist of many simple instructions, now replaced by a single
hardware instruction that executes much faster. Hence, the inclusion of
floating-point hardware results in a machine that has a lower MIPS rating
but can do more work, highlighting the drawback of MIPS as a metric.
Example 1.4 further illustrates this effect.
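To make the effect concrete, here is a sketch with invented figures. Suppose
a programme contains 10 × 10^6 floating-point operations and the clock rate
is 1 GHz:

Without FP hardware: each operation is replaced by a routine of 20 simple instructions of CPI 1, giving IC = 200 × 10^6, CPU time = 0.2 s, and MIPS = 200 × 10^6/(0.2 × 10^6) = 1000
With FP hardware: each operation is one instruction of CPI 10, giving IC = 10 × 10^6, CPU time = 0.1 s, and MIPS = 10 × 10^6/(0.1 × 10^6) = 100

The machine with floating-point hardware completes the programme in half the
time, yet its MIPS rating is ten times lower.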
The benchmark programmes themselves are also vulnerable to tinkering. For
example, many compiler developers have used these benchmarks as practice
programmes, tuning their optimisations to the characteristics of this
collection of applications. As a result, the execution times of the
programmes in the SPEC suite can be quite sensitive to the particular
optimisation flags chosen when a programme is compiled. Also, the selection
of the specific programmes that comprise the SPEC suite is determined by a
committee of representatives from the manufacturers within the cooperative.
This committee is subject to numerous outside pressures, since each
manufacturer has a strong interest in advocating application programmes that
will perform well on its machines. Thus, while SPEC is a significant step in
the right direction towards defining a good performance metric, it still
falls short of the goal.
1.3.2.5 Comments
As mentioned before, any performance metric must be reliable. The majority
of the above-mentioned metrics are not reliable, mainly because they measure
what was done, whether or not it was useful. Such metrics are called
means-based metrics, and their use may lead to wrong conclusions concerning
the performance of the system.
To avoid such problems, we must use metrics that are based on the definition
of performance, i.e. the execution time. Such metrics are ends-based metrics
and measure what is actually accomplished. The difference between the two
classes of performance metrics is highlighted in Section 1.3.2.8 and
sketched briefly below.
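A small sketch (with invented figures) makes the contrast concrete. Suppose
two compilers translate the same source programme for the same 1 GHz machine:

Compiler A: IC = 12 × 10^6, average CPI = 1, so CPU time = 0.012 s and MIPS = 1000
Compiler B: IC = 6 × 10^6 (fewer but more complex instructions), average CPI = 1.5, so CPU time = 0.009 s and MIPS ≈ 667

The means-based metric (MIPS) ranks compiler A higher because it executes
more instructions per second, but compiler B actually accomplishes the task
sooner; the ends-based metric, execution time, identifies the genuinely
faster result.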
CPU time = CPU clock cycles for a programme × Clock cycle time
For any processor the clock cycle time (or clock rate) is known, and it is
possible to measure the CPU clock cycles. CPU time can also be expressed in
terms of the number of instructions executed (called the instruction count,
IC) and the average number of clock cycles per instruction (CPI):
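CPU time = IC × CPI × Clock cycle time

As a brief worked example (with invented figures), a programme with
IC = 50 × 10^6 running on a 2 GHz processor (clock cycle time = 0.5 ns) with
an average CPI of 2 takes

CPU time = 50 × 10^6 × 2 × 0.5 × 10^-9 = 0.05 s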