
ISO/IEC JTC1/SC7

Software Engineering

Secretariat: CANADA (SCC)

ISO/IEC JTC1/SC7 N2419R


Date: 2002-03-14

Reference number of document: ISO/IEC TR 9126-2

Committee identification: ISO/IEC JTC1/SC7/WG6

Secretariat: Japan

Software engineering – Product quality – Part 2: External metrics


Document type: International technical report


Document stage: (40) Enquiry
Document language: E



8.1 Functionality metrics

Table 8.1.1 Suitability metrics

External suitability metrics

Metric name: Functional adequacy
Purpose of the metric: How adequate are the evaluated functions?
Method of application: Compare the number of functions that are suitable for performing the specified tasks to the number of functions evaluated.
Measurement, formula and data element computations: X = 1 - A/B, where A = number of functions in which problems are detected in evaluation, and B = number of functions evaluated.
Interpretation of measured value: 0 <= X <= 1; the closer to 1.0, the more adequate.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: Requirement specification (Req. Spec.); Evaluation report
ISO/IEC 12207 SLCP reference: 6.5 Validation; 6.3 Quality Assurance; 5.3 Qualification testing
Target audience: Developer, SQA

Metric name: Functional implementation completeness
Purpose of the metric: How complete is the implementation according to the requirement specifications?
Method of application: Do functional (black-box) tests of the system according to the requirement specifications. Count the number of missing functions detected in evaluation and compare it with the number of functions described in the requirement specifications.
Measurement, formula and data element computations: X = 1 - A/B, where A = number of missing functions detected in evaluation, and B = number of functions described in the requirement specifications.
Interpretation of measured value: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: Requirement specification; Evaluation report
ISO/IEC 12207 SLCP reference: 6.5 Validation; 6.3 Quality Assurance; 5.3 Qualification testing
Target audience: Developer, SQA

NOTE 1: Input to the measurement process is the updated requirement specification. Any changes identified during the life cycle must be applied to the requirement specification before it is used in the measurement process.
NOTE 2: This metric is suggested for experimental use.
NOTE 3: A missing function cannot be examined by testing, because it is not implemented. For detecting missing functions, it is suggested that each function stated in the requirement specification be tested one by one during functional testing; such results become input to the "Functional implementation completeness" metric. For detecting functions which are implemented but inadequate, it is suggested that each function be tested for multiple specified tasks; such results become input to the "Functional adequacy" metric. Users of these metrics are therefore advised to apply both during functional testing.
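The two suitability metrics above are simple ratios over functional test results. The following Python sketch is illustrative only and is not part of this Technical Report; the container type, field names and sample counts are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class FunctionalTestResults:
    """Hypothetical container for black-box functional test results."""
    functions_evaluated: int      # B for functional adequacy
    functions_with_problems: int  # A for functional adequacy
    functions_specified: int      # B for implementation completeness
    missing_functions: int        # A for implementation completeness

def functional_adequacy(r: FunctionalTestResults) -> float:
    """X = 1 - A/B; the closer to 1.0, the more adequate."""
    return 1.0 - r.functions_with_problems / r.functions_evaluated

def implementation_completeness(r: FunctionalTestResults) -> float:
    """X = 1 - A/B; the closer to 1.0, the fewer missing functions."""
    return 1.0 - r.missing_functions / r.functions_specified

# Hypothetical example: 3 problem functions out of 40 evaluated;
# 2 of the 50 specified functions found missing.
r = FunctionalTestResults(40, 3, 50, 2)
print(f"Functional adequacy:         {functional_adequacy(r):.3f}")        # 0.925
print(f"Implementation completeness: {implementation_completeness(r):.3f}")  # 0.960
```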
Metric name: Functional implementation coverage
Purpose of the metric: How correct is the functional implementation?
Method of application: Do functional (black-box) tests of the system according to the requirement specifications. Count the number of incorrectly implemented or missing functions detected in evaluation and compare it with the total number of functions described in the requirement specifications; that is, count the number of functions that are complete versus the ones that are not.
Measurement, formula and data element computations: X = 1 - A/B, where A = number of incorrectly implemented or missing functions detected in evaluation, and B = number of functions described in the requirement specifications.
Interpretation of measured value: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: Requirement specification; Evaluation report
ISO/IEC 12207 SLCP reference: 6.5 Validation; 6.3 Quality Assurance; 5.3 Qualification testing
Target audience: Developer, SQA

NOTE 1: Input to the measurement process is the updated requirement specification. Any changes identified during the life cycle must be applied to the requirement specification before it is used in the measurement process.
NOTE 2: This measure represents a binary gate check for the presence of a feature.

Metric name: Functional specification stability (volatility)
Purpose of the metric: How stable is the functional specification after entering operation?
Method of application: Count the number of functions described in the functional specifications that had to be changed after the system was put into operation and compare it with the total number of functions described in the requirement specifications.
Measurement, formula and data element computations: X = 1 - A/B, where A = number of functions changed after entering operation, and B = number of functions described in the requirement specifications.
Interpretation of measured value: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: Requirement specification; Problem report; Evaluation report
ISO/IEC 12207 SLCP reference: 6.8 Problem Resolution; 5.4 Operation
Target audience: Maintainer, SQA

NOTE: This metric is suggested for experimental use.
Table 8.1.2 Accuracy metrics

External accuracy metrics

Metric name: Accuracy to expectation
Purpose of the metric: Are differences between the actual and the reasonably expected results acceptable?
Method of application: Do input-versus-output test cases and compare the output to reasonable expected results. Count the number of cases encountered by the users with an unacceptable difference from reasonable expected results.
Measurement, formula and data element computations: X = A/T, where A = number of cases encountered by the users in which the difference from reasonable expected results is beyond the allowable range, and T = operation time.
Interpretation of measured value: 0 <= X; the closer to 0, the better.
Metric scale type: Ratio
Measure type: A = Count; T = Time; X = Count/Time
Input to measurement: Requirement specification; User operation manual; Hearing of users; Test report
ISO/IEC 12207 SLCP reference: 6.5 Validation; 6.3 Quality Assurance
Target audience: Developer, User, Quality Assurance

NOTE: Reasonable expected results might be identified in a requirement specification, a user operation manual, or users' expectations.

Metric name: Computational accuracy
Purpose of the metric: How often do the end users encounter inaccurate results?
Method of application: Record the number of inaccurate computations based on the specifications.
Measurement, formula and data element computations: X = A/T, where A = number of inaccurate computations encountered by users, and T = operation time.
Interpretation of measured value: 0 <= X; the closer to 0, the better.
Metric scale type: Ratio
Measure type: A = Count; T = Time; X = Count/Time
Input to measurement: Requirement specification; Test report
ISO/IEC 12207 SLCP reference: 6.5 Validation; 6.3 Quality Assurance
Target audience: Developer, User, Quality Assurance

Metric name: Precision
Purpose of the metric: How often do the end users encounter results with inadequate precision?
Method of application: Record the number of results with inadequate precision.
Measurement, formula and data element computations: X = A/T, where A = number of results encountered by the users with a level of precision different from that required, and T = operation time.
Interpretation of measured value: 0 <= X; the closer to 0, the better.
Metric scale type: Ratio
Measure type: A = Count; T = Time; X = Count/Time
Input to measurement: Requirement specification; Test report
ISO/IEC 12207 SLCP reference: 6.5 Validation; 6.3 Quality Assurance
Target audience: Developer, User, Quality Assurance

NOTE: Data elements for the computation of external metrics are designed to use externally accessible information, because this makes the external metrics usable by end users, operators, maintainers and acquirers. Time-based metrics therefore appear often among the external metrics, in contrast to the corresponding internal ones.
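All three accuracy metrics share the same time-based form X = A/T. The following Python sketch is illustrative only and is not part of this Technical Report; the sample figures are hypothetical:

```python
def rate_per_operation_time(event_count: int, operation_hours: float) -> float:
    """X = A/T: problem events per hour of operation; the closer to 0, the better."""
    return event_count / operation_hours

# Hypothetical example: 4 inaccurate computations observed over 200 hours
# of operation gives X = 0.02 events per hour.
print(rate_per_operation_time(4, 200.0))
```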
Table 8.1.3 Interoperability metrics

External interoperability metrics

Metric name: Data exchangeability (data format based)
Purpose of the metric: How correctly have the exchange interface functions for specified data transfer been implemented?
Method of application: Test each downstream interface function's output record format of the system against the data field specifications. Count the number of data formats that are approved to be exchanged with other software or systems during data exchange testing and compare it with the total number.
Measurement, formula and data element computations: X = A/B, where A = number of data formats which are approved to be exchanged successfully with other software or systems during data exchange testing, and B = total number of data formats to be exchanged.
Interpretation of measured value: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: Requirement specification (User manual); Test report
ISO/IEC 12207 SLCP reference: 6.5 Validation
Target audience: Developer

NOTE: It is recommended to test the specified data transactions.

Metric name: Data exchangeability (user's success attempt based)
Purpose of the metric: How often does the end user fail to exchange data between the target software and other software? How often are data transfers between the target software and other software successful? Can the user usually succeed in exchanging data?
Method of application: Count the number of cases in which interface functions were used and failed.
Measurement, formula and data element computations:
a) X = 1 - A/B, where A = number of cases in which the user failed to exchange data with other software or systems, and B = number of cases in which the user attempted to exchange data.
b) Y = A/T, where T = period of operation time.
Interpretation of measured value: a) 0 <= X <= 1; the closer to 1.0, the better. b) 0 <= Y; the closer to 0, the better.
Metric scale type: a) Absolute; b) Ratio
Measure type: a) A = Count; B = Count; X = Count/Count. b) T = Time; Y = Count/Time
Input to measurement: Requirement specification (User manual); Test report
ISO/IEC 12207 SLCP reference: 5.4 Operation
Target audience: Maintainer
Table 8.1.4 Security metrics

External security metrics

Metric name: Access auditability
Purpose of the metric: How complete is the audit trail concerning user access to the system and data?
Method of application: Evaluate the number of accesses that the system has recorded in the access history database.
Measurement, formula and data element computations: X = A/B, where A = number of "user accesses to the system and data" recorded in the access history database, and B = number of "user accesses to the system and data" performed during evaluation.
Interpretation of measured value: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: Test specification; Test report
ISO/IEC 12207 SLCP reference: 6.5 Validation
Target audience: Developer

NOTE 1: Accesses to data may be measurable only with testing activities.
NOTE 2: This metric is suggested for experimental use.
NOTE 3: It is recommended that penetration tests be performed to simulate attacks, because such security attacks do not normally occur in usual testing. Real security metrics may only be taken in a "real life system environment", that is, "quality in use".
NOTE 4: The "user access to the system and data" record may include a "virus detection record" for virus protection. The aim of computer virus protection is to create suitable safeguards by which the occurrence of computer viruses in systems can be prevented or detected as early as possible.

Metric name: Access controllability
Purpose of the metric: How controllable is access to the system?
Method of application: Count the number of detected illegal operations and compare it with the number of types of illegal operations given in the specification.
Measurement, formula and data element computations: X = A/B, where A = number of different types of illegal operations detected, and B = number of types of illegal operations as in the specification.
Interpretation of measured value: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: Test specification; Test report; Operation report
ISO/IEC 12207 SLCP reference: 6.5 Validation; 6.3 Quality Assurance
Target audience: Developer

NOTE 1: If it is necessary to complement the detection of unexpected illegal operations, additional intensive abnormal-operation testing should be conducted.
NOTE 2: It is recommended that penetration tests be performed to simulate attacks, because such security attacks do not normally occur in usual testing. Real security metrics may only be taken in a "real life system environment", that is, "quality in use".
NOTE 3: Functions should prevent unauthorized persons from creating, deleting or modifying programs or information; it is therefore suggested to include such illegal operation types in the test cases.
Metric name: Data corruption prevention
Purpose of the metric: What is the frequency of data corruption events?
Method of application: Count the occurrences of major and minor data corruption events.
Measurement, formula and data element computations:
a) X = 1 - A/N, where A = number of times a major data corruption event occurred, and N = number of test cases that tried to cause a data corruption event.
b) Y = 1 - B/N, where B = number of times a minor data corruption event occurred.
c) Z = A/T or B/T, where T = period of operation time (during operation testing).
Interpretation of measured value: a) 0 <= X <= 1; the closer to 1.0, the better. b) 0 <= Y <= 1; the closer to 1.0, the better. c) 0 <= Z; the closer to 0, the better.
Metric scale type: a) Absolute; b) Absolute; c) Ratio
Measure type: a) A = Count; N = Count; X = Count/Count. b) B = Count; Y = Count/Count. c) T = Time; Z = Count/Time
Input to measurement: Test specification; Test report; Operation report
ISO/IEC 12207 SLCP reference: 6.5 Validation; 5.3 Qualification testing; 5.4 Operation
Target audience: Maintainer, Developer

NOTE 1: Intensive abnormal-operation testing is needed to provoke minor and major data corruption events.
NOTE 2: It is recommended to grade the impact of data corruption events, for example:
Major (fatal) data corruption event:
- reproduction and recovery impossible;
- secondary effects distributed too widely;
- importance of the data itself.
Minor data corruption event:
- reproduction or recovery possible;
- no distribution of secondary effects;
- importance of the data itself.
NOTE 3: Data elements for the computation of external metrics are designed to use externally accessible information, because this makes the external metrics usable by end users, operators, maintainers and acquirers. The events and times counted here therefore differ from those of the corresponding internal metric.
NOTE 4: It is recommended that penetration tests be performed to simulate attacks, because such security attacks do not normally occur in usual testing. Real security metrics may only be taken in a "real life system environment", that is, "quality in use".
NOTE 5: This metric is suggested for experimental use.
NOTE 6: Data backup is one of the effective ways to prevent data corruption. The creation of backups ensures that necessary data can be restored quickly in the event that parts of the operative data are lost. However, data backup is regarded as part of the reliability metrics in this report.
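Because data corruption prevention combines two closer-to-1.0 ratios with a closer-to-0 time rate, a compact sketch may help. The following Python code is illustrative only and is not part of this Technical Report; parameter names and figures are hypothetical:

```python
def data_corruption_prevention(major: int, minor: int, test_cases: int,
                               operation_hours: float) -> dict:
    """Three measures, as in Table 8.1.4:
    a) X = 1 - A/N and b) Y = 1 - B/N (closer to 1.0 is better);
    c) Z = A/T or B/T               (closer to 0 is better)."""
    return {
        "X_major": 1.0 - major / test_cases,
        "Y_minor": 1.0 - minor / test_cases,
        "Z_major_rate": major / operation_hours,
        "Z_minor_rate": minor / operation_hours,
    }

# Hypothetical example: 1 major and 3 minor corruption events over 50
# provoking test cases and 120 hours of operation testing.
print(data_corruption_prevention(1, 3, 50, 120.0))
```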
Table 8.1.5 Functionality compliance metrics

External functionality compliance metrics

Metric name: Functional compliance
Purpose of the metric: How compliant is the functionality of the product with applicable regulations, standards and conventions?
Method of application: Count the number of items requiring compliance that have been met and compare it with the number of items requiring compliance in the specification. Design test cases in accordance with the compliance items, conduct functional testing for these test cases, and count the number of compliance items that have been satisfied.
Measurement, formula and data element computations: X = 1 - A/B, where A = number of specified functionality compliance items found not to be implemented during testing, and B = total number of functionality compliance items specified.
Interpretation of measured value: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: Product description (user manual or specification) of compliance and related standards, conventions or regulations; Test specification and report
ISO/IEC 12207 SLCP reference: 5.3 Qualification testing; 6.5 Validation
Target audience: Supplier, User

NOTE 1: It may be useful to collect several measured values over time, to analyse the trend of increasingly satisfied compliance items and to determine whether they are fully satisfied or not.
NOTE 2: It is suggested to count the number of failures, because problem detection is an objective of effective testing and is also suitable for counting and recording.

Metric name: Interface standard compliance
Purpose of the metric: How compliant are the interfaces with applicable regulations, standards and conventions?
Method of application: Count the number of interfaces that meet the required compliance and compare it with the number of interfaces requiring compliance as in the specifications. (All specified attributes of a standard must be tested.)
Measurement, formula and data element computations: X = A/B, where A = number of correctly implemented interfaces as specified, and B = total number of interfaces requiring compliance.
Interpretation of measured value: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: Product description of compliance and related standards, conventions or regulations; Test specification and report
ISO/IEC 12207 SLCP reference: 6.5 Validation
Target audience: Developer
8.2 Reliability metrics

Table 8.2.1 Maturity metrics

External maturity metrics

Metric name: Estimated latent fault density
Purpose of the metric: How many problems still exist that may emerge as future faults?
Method of application: Count the number of faults detected during a defined trial period and predict the potential number of future faults using a reliability growth estimation model.
Measurement, formula and data element computations: X = ABS(A1 - A2) / B (X: estimated residual latent fault density), where ABS() = absolute value, A1 = total number of predicted latent faults in the software product, A2 = total number of actually detected faults, and B = product size.
Interpretation of measured value: 0 <= X. The target depends on the stage of testing; at the later stages, smaller is better.
Metric scale type: Absolute
Measure type: A1 = Count; A2 = Count; B = Size; X = Count/Size
Input to measurement: Test report; Operation report; Problem report
ISO/IEC 12207 SLCP reference: 5.3 Integration; 5.3 Qualification testing; 5.4 Operation; 6.5 Validation; 6.3 Quality Assurance
Target audience: Developer, Tester, SQA, User

NOTE 1: When the total number of actually detected faults becomes larger than the total number of predicted latent faults, it is recommended to repeat the prediction and estimate a larger number. The larger estimates are intended to predict the latent failures reasonably, not to make the product look better.
NOTE 2: It is recommended to use several reliability growth estimation models, choose the most suitable one, and repeat the prediction while monitoring the detected faults.
NOTE 3: It may be helpful to predict upper and lower bounds for the number of latent faults.
NOTE 4: It is necessary to convert this value (X) to the <0,1> interval when summarising characteristics.
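NOTE 2 above leaves the choice of reliability growth model open. The following Python sketch is illustrative only and is not part of this Technical Report: it assumes, as one possible model, the Goel-Okumoto NHPP mean value function m(t) = a(1 - exp(-bt)) fitted with SciPy; the fault counts and the size unit are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Mean cumulative faults m(t) = a * (1 - exp(-b*t)); the fitted
    parameter 'a' estimates the total (latent) fault content A1."""
    return a * (1.0 - np.exp(-b * t))

# Hypothetical trial data: cumulative faults at the end of each test week.
weeks = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
cum_faults = np.array([12, 21, 28, 33, 37, 39, 41, 42], dtype=float)

(a_hat, b_hat), _ = curve_fit(goel_okumoto, weeks, cum_faults, p0=[50.0, 0.3])

A1 = a_hat           # predicted latent faults
A2 = cum_faults[-1]  # actually detected faults so far
B = 25.0             # product size, e.g. in KLOC (assumed unit)
X = abs(A1 - A2) / B # estimated residual latent fault density
print(f"A1 = {A1:.1f}, A2 = {A2:.0f}, X = {X:.3f} faults/KLOC")
```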
Metric name: Failure density against test cases
Purpose of the metric: How many failures were detected during the defined trial period?
Method of application: Count the number of detected failures and the number of performed test cases.
Measurement, formula and data element computations: X = A1/A2, where A1 = number of detected failures, and A2 = number of performed test cases.
Interpretation of measured value: 0 <= X. The target depends on the stage of testing; at the later stages, smaller is better.
Metric scale type: Absolute
Measure type: A1 = Count; A2 = Count; X = Count/Count
Input to measurement: Test report; Operation report; Problem report
ISO/IEC 12207 SLCP reference: 5.3 Integration; 5.3 Qualification testing; 5.4 Operation; 6.3 Quality Assurance
Target audience: Developer, Tester, SQA

NOTE 1: Larger is better in the early stage of testing; on the contrary, smaller is better in the later stage of testing or in operation. It is recommended to monitor the trend of this measure over time.
NOTE 2: This metric depends so strongly on the adequacy of the test cases that they should be designed to include appropriate cases: e.g., normal, exceptional and abnormal cases.
NOTE 3: It is necessary to convert this value (X) to the <0,1> interval when summarising characteristics.

Metric name: Failure resolution
Purpose of the metric: How many failure conditions are resolved?
Method of application: Count the number of failures that did not reoccur during the defined trial period under similar conditions. Maintain a problem resolution report describing the status of all failures.
Measurement, formula and data element computations: X = A1/A2, where A1 = number of resolved failures, and A2 = total number of actually detected failures.
Interpretation of measured value: 0 <= X <= 1; the closer to 1.0 the better, as more failures are resolved.
Metric scale type: Absolute
Measure type: A1 = Count; A2 = Count; A3 = Count; X = Count/Count
Input to measurement: Test report; Operation (test) report
ISO/IEC 12207 SLCP reference: 5.3 Integration; 5.3 Qualification testing; 5.4 Operation
Target audience: User, SQA, Maintainer

NOTE 1: It is recommended to monitor the trend when using this measure.
NOTE 2: The total number of predicted latent failures might be estimated using reliability growth models adjusted with actual historical data relating to similar software products. In such a case, the numbers of actual and predicted failures become comparable and the number of residual unresolved failures becomes measurable.
Metric name: Fault density
Purpose of the metric: How many faults were detected during the defined trial period?
Method of application: Count the number of detected faults and compute the density.
Measurement, formula and data element computations: X = A/B, where A = number of detected faults, and B = product size.
Interpretation of measured value: 0 <= X. The target depends on the stage of testing; at the later stages, smaller is better.
Metric scale type: Absolute
Measure type: A = Count; B = Size; X = Count/Size
Input to measurement: Test report; Operation report; Problem report
ISO/IEC 12207 SLCP reference: 5.3 Integration; 5.3 Qualification testing; 5.4 Operation; 6.3 Quality Assurance
Target audience: Developer, Tester, SQA

NOTE 1: Larger is better in the early stage of testing; on the contrary, smaller is better in the later stage of testing or in operation. It is recommended to monitor the trend of this measure over time.
NOTE 2: The number of detected faults divided by the number of test cases indicates the effectiveness of the test cases.
NOTE 3: It is necessary to convert this value (X) to the <0,1> interval when summarising characteristics.
NOTE 4: When counting faults, pay attention to the following:
- the possibility of duplication, because multiple reports may contain the same faults;
- the possibility of entries other than faults, because users or testers may not be able to tell whether their problems are operation errors, environmental errors or software failures.
Metric name: Fault removal
Purpose of the metric: How many faults have been corrected?
Method of application: Count the number of faults removed during testing and compare it with the total number of faults detected and the total number of faults predicted.
Measurement, formula and data element computations:
a) X = A1/A2, where A1 = number of corrected faults, and A2 = total number of actually detected faults.
b) Y = A1/A3, where A3 = total number of predicted latent faults in the software product.
Interpretation of measured value: a) 0 <= X <= 1; the closer to 1.0 the better, as fewer faults remain. b) 0 <= Y; the closer to 1.0 the better, as fewer faults remain.
Metric scale type: a) Absolute; b) Absolute
Measure type: A1 = Count; A2 = Count; A3 = Count; X = Count/Count; Y = Count/Count
Input to measurement: Test report; Organization database
ISO/IEC 12207 SLCP reference: 5.3 Integration; 5.3 Qualification testing; 6.5 Validation; 6.3 Quality Assurance
Target audience: Developer, SQA, Maintainer

NOTE 1: It is recommended to monitor the trend during a defined period of time.
NOTE 2: The total number of predicted latent faults may be estimated using reliability growth models adjusted with actual historical data relating to similar software products.
NOTE 3: It is recommended to monitor the estimated fault resolution ratio Y. If Y > 1, investigate whether this is because more faults were detected early or because the software product contains an unusually large number of faults; if Y < 1, investigate whether this is because the product contains fewer defects than usual or because the testing was not adequate to detect all possible faults.
NOTE 4: It is necessary to convert this value (Y) to the <0,1> interval when summarising characteristics.
NOTE 5: When counting faults, pay attention to the possibility of duplication, because multiple reports may contain the same faults.
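The two fault removal ratios differ only in their denominator. The following Python sketch is illustrative only and is not part of this Technical Report; the counts are hypothetical (the predicted latent count could come from a growth model fit such as the earlier sketch):

```python
def fault_removal(corrected: int, detected: int, predicted_latent: int):
    """a) X = A1/A2 against actually detected faults;
    b) Y = A1/A3 against the predicted latent fault content
    (see NOTE 3 above when Y deviates from 1)."""
    return corrected / detected, corrected / predicted_latent

# Hypothetical example: 38 faults corrected, 42 detected, 44 predicted.
X, Y = fault_removal(corrected=38, detected=42, predicted_latent=44)
print(f"X = {X:.2f} (of detected), Y = {Y:.2f} (of predicted latent)")
```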
Metric name: Mean time between failures (MTBF)
Purpose of the metric: How frequently does the software fail in operation?
Method of application: Count the number of failures that occurred during a defined period of operation and compute the average interval between the failures.
Measurement, formula and data element computations:
a) X = T1/A
b) Y = T2/A
where T1 = operation time, T2 = sum of the time intervals between consecutive failure occurrences, and A = total number of actually detected failures (failures occurring during the observed operation time).
Interpretation of measured value: 0 < X, Y; the longer the better, as a longer time can be expected between failures.
Metric scale type: a) Ratio; b) Ratio
Measure type: A = Count; T1 = Time; T2 = Time; X = Time/Count; Y = Time/Count
Input to measurement: Test report; Operation (test) report
ISO/IEC 12207 SLCP reference: 5.3 Integration; 5.3 Qualification testing; 5.4 Operation
Target audience: Maintainer, User

NOTE 1: The following investigations may be helpful:
- the distribution of the time intervals between failure occurrences;
- the change of the mean time over the operation time period;
- the distribution indicating which functions have frequent failure occurrences, because of function and use dependency.
NOTE 2: A failure rate or hazard rate calculation may be used as an alternative.
NOTE 3: It is necessary to convert these values (X, Y) to the <0,1> interval when summarising characteristics.
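The two MTBF variants can be computed from a list of failure time stamps. The following Python sketch is illustrative only and is not part of this Technical Report; the time stamps and trial duration are hypothetical:

```python
def mtbf(failure_times_h: list[float], operation_time_h: float):
    """a) X = T1/A using the total operation time;
    b) Y = T2/A using the sum of intervals between consecutive failures.
    failure_times_h holds elapsed-time stamps (hours) of each failure."""
    a = len(failure_times_h)
    x = operation_time_h / a
    t2 = sum(t1 - t0 for t0, t1 in zip(failure_times_h, failure_times_h[1:]))
    y = t2 / a
    return x, y

# Hypothetical example: 4 failures at 100 h, 240 h, 450 h and 700 h
# during an 800 h trial gives X = 200.0 and Y = 150.0 hours.
print(mtbf([100.0, 240.0, 450.0, 700.0], 800.0))
```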
Metric name: Test coverage (specified operation scenario testing coverage)
Purpose of the metric: How much of the required set of test cases has been executed during testing?
Method of application: Count the number of test cases performed during testing and compare it with the number of test cases required to obtain adequate test coverage.
Measurement, formula and data element computations: X = A/B, where A = number of actually performed test cases representing operation scenarios during testing, and B = number of test cases to be performed to cover the requirements.
Interpretation of measured value: 0 <= X <= 1; the closer to 1.0, the better the test coverage.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: Requirement specification; Test specification or user manual; Test report; Operation report
ISO/IEC 12207 SLCP reference: 5.3 Qualification testing; 6.5 Validation; 6.3 Quality Assurance
Target audience: Developer, Tester, SQA

NOTE: Test cases may be normalised by software size, that is: test density coverage Y = A/C, where C = size of the product to be tested. The larger Y is, the better. Size may be a functional size that the user can measure.
Metric name: Test maturity
Purpose of the metric: Is the product well tested? (NOTE: This is intended to predict the success rate the product will achieve in future testing.)
Method of application: Count the number of passed test cases that have actually been executed and compare it with the total number of test cases to be performed per the requirements.
Measurement, formula and data element computations: X = A/B, where A = number of test cases passed during testing or operation, and B = number of test cases to be performed to cover the requirements.
Interpretation of measured value: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: Requirement specification; Test specification or user manual; Test report; Operation report
ISO/IEC 12207 SLCP reference: 5.3 Qualification testing; 6.3 Quality Assurance
Target audience: Developer, Tester, SQA

NOTE 1: It is recommended to perform stress testing using live historical data, especially from peak periods. It is also recommended to ensure that the following test types are executed and passed successfully:
- user operation scenarios;
- peak stress;
- overloaded data input.
NOTE 2: Passed test cases may be normalised by software size, that is: passed test case density Y = A/C, where C = size of the product to be tested. The larger Y is, the better. Size may be a functional size that the user can measure.
Table 8.2.2 Fault tolerance metrics

External fault tolerance metrics

Metric name: Breakdown avoidance
Purpose of the metric: How often does the software product cause the breakdown of the total production environment?
Method of application: Count the number of breakdown occurrences with respect to the number of failures. If the product is in operation, analyse the log of the user operation history.
Measurement, formula and data element computations: X = 1 - A/B, where A = number of breakdowns, and B = number of failures.
Interpretation of measured value: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: Test report; Operation report
ISO/IEC 12207 SLCP reference: 5.3 Integration; 5.3 Qualification testing; 5.4 Operation
Target audience: User, Maintainer

NOTE 1: A breakdown means that the execution of any user task is suspended until the system is restarted, or that control is lost until the system is forced to shut down.
NOTE 2: When no or few failures are observed, the time between breakdowns may be more suitable.

Metric name: Failure avoidance
Purpose of the metric: How many fault patterns were brought under control to avoid critical and serious failures?
Method of application: Count the number of avoided fault patterns and compare it with the number of fault patterns to be considered.
Measurement, formula and data element computations: X = A/B, where A = number of avoided critical and serious failure occurrences against the test cases of fault patterns, and B = number of executed test cases of fault patterns (almost causing failure) during testing.
Interpretation of measured value: 0 <= X <= 1; the closer to 1.0 the better, as the user can more often avoid critical or serious failures.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: Test report; Operation report
ISO/IEC 12207 SLCP reference: 5.3 Integration; 5.3 Qualification testing; 5.4 Operation; 6.5 Validation
Target audience: User, Maintainer

NOTE 1: It is recommended to categorise failure avoidance levels, i.e. the extent to which the impact of faults is mitigated, for example:
- Critical: the entire system stops, or there is serious database destruction;
- Serious: important functions become inoperable with no alternative way of operating (no workaround);
- Average: most functions are still available, but limited performance occurs with limited or alternative operation (workaround);
- Small: a few functions experience limited performance with limited operation;
- None: the impact does not reach the end user.
NOTE 2: Failure avoidance levels may be based on a risk matrix composed of severity of consequence and frequency of occurrence, as provided by ISO/IEC 15026 (System and software integrity).
NOTE 3: Fault pattern examples: out-of-range data, deadlock. The fault tree analysis technique may be used to detect fault patterns.
NOTE 4: Test cases may include incorrect human operation.
Metric name: Incorrect operation avoidance
Purpose of the metric: How many functions are implemented with the capability to avoid incorrect operations?
Method of application: Count the number of test cases of incorrect operations in which critical and serious failures were avoided, and compare it with the number of executed test cases of incorrect operation patterns to be considered.
Measurement, formula and data element computations: X = A/B, where A = number of avoided critical and serious failure occurrences, and B = number of executed test cases of incorrect operation patterns (almost causing failure) during testing.
Interpretation of measured value: 0 <= X <= 1; the closer to 1.0 the better, as more incorrect user operation is avoided.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: Test report; Operation report
ISO/IEC 12207 SLCP reference: 5.3 Integration; 5.3 Qualification testing; 5.4 Operation
Target audience: User, Maintainer

NOTE 1: Consider data damage in addition to system failure.
NOTE 2: Incorrect operation patterns include:
- incorrect data types as parameters;
- incorrect sequence of data input;
- incorrect sequence of operation.
NOTE 3: The fault tree analysis technique may be used to detect incorrect operation patterns.
NOTE 4: This metric may be used experimentally.
Table 8.2.3 Recoverability metrics

External recoverability metrics

Metric name: Availability
Purpose of the metric: How available is the system for use during the specified period of time?
Method of application: Test the system in a production-like environment for a specified period of time, performing all user operations. Measure the repair time each time the system was unavailable during the trial. Compute the mean time to repair.
Measurement, formula and data element computations:
a) X = To / (To + Tr), where To = operation time, and Tr = time to repair.
b) Y = A1/A2, where A1 = total number of cases in which the user successfully used the software when attempting to use it, and A2 = total number of cases in which the user attempted to use the software during the observation time. This is from the viewpoint of the user operating callable functions.
Interpretation of measured value: a) 0 <= X <= 1; the larger and closer to 1.0 the better, as the user can use the software for more of the time. b) 0 <= Y <= 1; the larger and closer to 1.0, the better.
Metric scale type: a) Absolute; b) Absolute
Measure type: To = Time; Tr = Time; X = Time/Time; A1 = Count; A2 = Count; Y = Count/Count
Input to measurement: Test report; Operation report
ISO/IEC 12207 SLCP reference: 5.3 Integration; 5.3 Qualification testing; 5.4 Operation
Target audience: User, Maintainer

NOTE: It is recommended that this metric include only the automatic recovery provided by the software and exclude human maintenance work.
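Both availability measures are straightforward to compute once the trial data are collected. The following Python sketch is illustrative only and is not part of this Technical Report; the figures are hypothetical:

```python
def availability(operation_time_h: float, repair_time_h: float,
                 successful_uses: int, attempted_uses: int):
    """a) X = To/(To + Tr); b) Y = A1/A2 from the user operation view.
    Both are closer-to-1.0-is-better measures."""
    x = operation_time_h / (operation_time_h + repair_time_h)
    y = successful_uses / attempted_uses
    return x, y

# Hypothetical example: 720 h of operation with 8 h of total repair time;
# 980 of 1000 use attempts succeeded -> X ~ 0.989, Y = 0.98.
print(availability(720.0, 8.0, 980, 1000))
```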
Metric name: Mean down time
Purpose of the metric: What is the average time the system stays unavailable when a failure occurs, before gradual start-up?
Method of application: Measure the down time each time the system is unavailable during a specified trial period and compute the mean time. The worst case or the distribution of the down time should also be measured.
Measurement, formula and data element computations: X = T/N, where T = total down time, and N = number of observed breakdowns.
Interpretation of measured value: 0 < X; the smaller the better, as the system will be down for a shorter time.
Metric scale type: Ratio
Measure type: T = Time; N = Count; X = Time/Count
Input to measurement: Test report; Operation report
ISO/IEC 12207 SLCP reference: 5.3 Integration; 5.3 Qualification testing; 5.4 Operation; 6.5 Validation
Target audience: User, Maintainer

NOTE 1: It is recommended that this recoverability metric include only the automatic recovery provided by the software and exclude human maintenance work.
NOTE 2: It is necessary to convert this value (X) to the <0,1> interval when summarising characteristics.
Metric name: Mean recovery time
Purpose of the metric: What is the average time the system takes to complete recovery, starting from its initial partial recovery?
Method of application: Measure the full recovery time each time the system was brought down during the specified trial period and compute the mean time.
Measurement, formula and data element computations: X = Sum(T)/N, where T = time to recover the downed software system at each opportunity, and N = number of cases in which the software system was observed entering recovery.
Interpretation of measured value: 0 < X; the smaller, the better.
Metric scale type: Ratio
Measure type: T = Time; N = Count; X = Time/Count
Input to measurement: Test report; Operation report
ISO/IEC 12207 SLCP reference: 5.3 Integration; 5.3 Qualification testing; 5.4 Operation; 6.5 Validation
Target audience: User, Maintainer

NOTE 1: It is recommended to measure the maximum time of the worst case, or the distribution of the recovery time over many cases.
NOTE 2: It is recommended that this recoverability metric include only the automatic recovery provided by the software and exclude human maintenance work.
NOTE 3: It is recommended to distinguish grades of recovery difficulty; for example, recovery of a destroyed database is more difficult than recovery of a destroyed transaction.
NOTE 4: It is necessary to convert this value (X) to the <0,1> interval when summarising characteristics.
Metric name: Restartability
Purpose of the metric: How often can the system restart and provide service to users within the required time?
Method of application: Count the number of times the system restarts and provides service to users within the target required time, and compare it with the total number of restarts, when the system was brought down during the specified trial period.
Measurement, formula and data element computations: X = A/B, where A = number of restarts which met the required time during testing or user operation support, and B = total number of restarts during testing or user operation support.
Interpretation of measured value: 0 <= X <= 1; the larger and closer to 1.0 the better, as the user can restart easily.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: Test report; Operation report
ISO/IEC 12207 SLCP reference: 5.3 Integration; 5.3 Qualification testing; 5.4 Operation; 6.5 Validation
Target audience: User, Maintainer

NOTE 1: It is recommended to estimate different restart times corresponding to the severity level of the inoperability, such as database destruction, a lost multi-transaction, a lost single transaction, or temporary data destruction.
NOTE 2: It is recommended that this recoverability metric include only the automatic recovery provided by the software and exclude human maintenance work.
Metric name: Restorability
Purpose of the metric: How capable is the product of restoring itself after an abnormal event or on request?
Method of application: Count the number of successful restorations and compare it with the number of tested restorations required in the specifications. Restoration requirement examples: database checkpoint, transaction checkpoint, redo function, undo function, etc.
Measurement, formula and data element computations: X = A/B, where A = number of restoration cases successfully done, and B = number of restoration cases tested as per the requirements.
Interpretation of measured value: 0 <= X <= 1; the larger and closer to 1.0 the better, as the product is more capable of restoring itself in the defined cases.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: Requirement specification; Test specification or user manual; Test report; Operation report
ISO/IEC 12207 SLCP reference: 5.3 Integration; 5.3 Qualification testing; 5.4 Operation; 6.5 Validation
Target audience: User, Maintainer

NOTE: It is recommended that this metric include only the automatic recovery provided by the software and exclude human maintenance work.
Metric name: Restore effectiveness
Purpose of the metric: How effective is the restoration capability?
Method of application: Count the number of tested restorations meeting the target restoration time and compare it with the number of restorations required with a specified target time.
Measurement, formula and data element computations: X = A/B, where A = number of cases successfully restored within the target restore time, and B = number of cases performed.
Interpretation of measured value: 0 <= X <= 1; the larger and closer to 1.0 the better, as the restoration process in the product is more effective.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: Test report; Operation report
ISO/IEC 12207 SLCP reference: 5.3 Integration; 5.3 Qualification testing; 5.4 Operation; 6.5 Validation
Target audience: User, Maintainer

NOTE: It is recommended that this metric include only the automatic recovery provided by the software and exclude human maintenance work.
Table 8.2.4 Reliability compliance metrics

External reliability compliance metrics

Metric name: Reliability compliance
Purpose of the metric: How compliant is the reliability of the product with applicable regulations, standards and conventions?
Method of application: Count the number of items requiring compliance that have been met and compare it with the number of items requiring compliance in the specification.
Measurement, formula and data element computations: X = 1 - A/B, where A = number of specified reliability compliance items found not to be implemented during testing, and B = total number of reliability compliance items specified.
Interpretation of measured value: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: Product description (user manual or specification) of compliance and related standards, conventions or regulations; Test specification and report
ISO/IEC 12207 SLCP reference: 5.3 Qualification testing; 6.5 Validation
Target audience: Supplier, User

NOTE: It may be useful to collect several measured values over time, to analyse the trend of increasingly satisfied compliance items and to determine whether they are fully satisfied or not.
8.3 Usability metrics

Table 8.3.1 Understandability metrics

External understandability metrics

Metric name: Completeness of description
Purpose of the metric: What proportion of functions (or types of functions) is understood after reading the product description?
Method of application: Conduct a user test and interview users with questionnaires, or observe user behaviour. Count the number of functions which are adequately understood and compare it with the total number of functions in the product.
Measurement, formula and data element computations: X = A/B, where A = number of functions (or types of functions) understood, and B = total number of functions (or types of functions).
Interpretation of measured value: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: User manual; Operation (test) report
ISO/IEC 12207 SLCP reference: 5.3 Qualification testing; 5.4 Operation
Target audience: User, Maintainer

NOTE: This metric indicates whether potential users understand the capability of the product after reading the product description.
Metric name: Demonstration accessibility
Purpose of the metric: What proportion of the demonstrations/tutorials can the user access?
Method of application: Conduct a user test and observe user behaviour. Count the number of functions that are adequately demonstrable and compare it with the total number of functions requiring demonstration capability.
Measurement, formula and data element computations: X = A/B, where A = number of demonstrations/tutorials that the user successfully accesses, and B = number of demonstrations/tutorials available.
Interpretation of measured value: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: User manual; Operation (test) report
ISO/IEC 12207 SLCP reference: 5.3 Qualification testing; 5.4 Operation
Target audience: User, Maintainer

NOTE: This metric indicates whether users can find the demonstrations and/or tutorials.
Metric name: Demonstration accessibility in use
Purpose of the metric: What proportion of the demonstrations/tutorials can the user access whenever the user actually needs them during operation?
Method of application: Observe the behaviour of a user who is trying to see a demonstration/tutorial. The observation may employ a human cognitive action monitoring approach with a video camera.
Measurement, formula and data element computations: X = A/B, where A = number of cases in which the user successfully sees a demonstration when attempting to do so, and B = number of cases in which the user attempts to see a demonstration during the observation period.
Interpretation of measured value: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: User manual; Operation (test) report; User monitoring record
ISO/IEC 12207 SLCP reference: 5.3 Qualification testing; 5.4 Operation
Target audience: User, Maintainer

NOTE: This metric indicates whether users can find the demonstrations and/or tutorials while using the product.
Metric name: Demonstration effectiveness
Purpose of the metric: What proportion of functions can the user operate successfully after a demonstration or tutorial?
Method of application: Observe the behaviour of a user who is trying to see a demonstration/tutorial. The observation may employ a human cognitive action monitoring approach with a video camera.
Measurement, formula and data element computations: X = A/B, where A = number of functions operated successfully, and B = number of demonstrations/tutorials accessed.
Interpretation of measured value: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: User manual; Operation (test) report
ISO/IEC 12207 SLCP reference: 5.3 Qualification testing; 5.4 Operation
Target audience: User, Maintainer

NOTE: This metric indicates whether users can operate functions successfully after an online demonstration or tutorial.

Metric name: Evident functions
Purpose of the metric: What proportion of functions (or types of functions) can be identified by the user based upon start-up conditions?
Method of application: Conduct a user test and interview users with questionnaires, or observe user behaviour. Count the number of functions that are evident to the user and compare it with the total number of functions.
Measurement, formula and data element computations: X = A/B, where A = number of functions (or types of functions) identified by the user, and B = total number of actual functions (or types of functions).
Interpretation of measured value: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: User manual; Operation (test) report
ISO/IEC 12207 SLCP reference: 5.3 Qualification testing; 5.4 Operation
Target audience: User, Maintainer

NOTE: This metric indicates whether users are able to locate functions by exploring the interface (e.g. by inspecting the menus).

Metric name: Function understandability
Purpose of the metric: What proportion of the product's functions will the user be able to understand correctly?
Method of application: Conduct a user test and interview users with questionnaires. Count the number of user interface functions whose purposes are easily understood by the user and compare it with the number of functions available to the user.
Measurement, formula and data element computations: X = A/B, where A = number of interface functions whose purpose is correctly described by the user, and B = number of functions available from the interface.
Interpretation of measured value: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: User manual; Operation (test) report
ISO/IEC 12207 SLCP reference: 5.3 Qualification testing; 5.4 Operation
Target audience: User, Maintainer

NOTE: This metric indicates whether users are able to understand functions by exploring the interface (e.g. by inspecting the menus).
Metric name: Understandable input and output
Purpose of the metric: Can users understand what is required as input data and what is provided as output by the software system?
Method of application: Conduct a user test and interview users with questionnaires, or observe user behaviour. Count the number of input and output data items understood by the user and compare it with the total number of data items available to the user.
Measurement, formula and data element computations: X = A/B, where A = number of input and output data items which the user successfully understands, and B = number of input and output data items available from the interface.
Interpretation of measured value: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: User manual; Operation (test) report
ISO/IEC 12207 SLCP reference: 6.5 Validation; 5.3 Qualification testing; 5.4 Operation
Target audience: User, Maintainer

NOTE: This metric indicates whether users can understand the format in which data should be input and correctly identify the meaning of output data.
Table 8.3.2 Learnability metrics

External learnability metrics

Metric name: Ease of function learning
Purpose of the metric: How long does the user take to learn to use a function?
Method of application: Conduct a user test and observe user behaviour.
Measurement, formula and data element computations: T = mean time taken to learn to use a function correctly.
Interpretation of measured value: 0 < T; the shorter, the better.
Metric scale type: Ratio
Measure type: T = Time
Input to measurement: Operation (test) report; User monitoring record
ISO/IEC 12207 SLCP reference: 6.5 Validation; 5.3 Qualification testing; 5.4 Operation
Target audience: User, Maintainer

NOTE: This metric is generally used as one of the experienced and justified metrics.


Metric name: Ease of learning to perform a task in use
Purpose of the metric: How long does the user take to learn how to perform the specified task efficiently?
Method of application: Observe user behaviour from when users start to learn until they begin to operate efficiently.
Measurement, formula and data element computations: T = sum of user operation time until the user achieves performing the specified task within a short time.
Interpretation of measured value: 0 < T; the shorter, the better.
Metric scale type: Ratio
Measure type: T = Time
Input to measurement: Operation (test) report; User monitoring record
ISO/IEC 12207 SLCP reference: 6.5 Validation; 5.3 Qualification testing; 5.4 Operation
Target audience: User, Maintainer

NOTE 1: It is recommended to determine an expected user operating time to serve as the "short time". Such an operating time may be a threshold, for example 70% of the time taken at first use.
NOTE 2: Effort, in person-hours, may alternatively be used in place of time.
Metric name: Effectiveness of the user documentation and/or help system
Purpose of the metric: What proportion of tasks can be completed correctly after using the user documentation and/or help system?
Method of application: Conduct a user test and observe user behaviour. Count the number of tasks successfully completed after accessing the online help and/or documentation and compare it with the total number of tasks tested.
Measurement, formula and data element computations: X = A/B, where A = number of tasks successfully completed after accessing online help and/or documentation, and B = total number of tasks tested.
Interpretation of measured value: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: Operation (test) report; User monitoring record
ISO/IEC 12207 SLCP reference: 6.5 Validation; 5.3 Qualification testing; 5.4 Operation
Target audience: User, Human interface designer

NOTE: Three metrics are possible: completeness of the documentation, completeness of the help facility, or completeness of the help and documentation used in combination.
Metric name: Effectiveness of user documentation and/or help systems in use
Purpose of the metric: What proportion of functions can be used correctly after reading the documentation or using the help systems?
Method of application: Observe user behaviour. Count the number of functions used correctly after reading the documentation or using the help systems and compare it with the total number of functions.
Measurement, formula and data element computations: X = A/B, where A = number of functions that can be used correctly, and B = total number of functions provided.
Interpretation of measured value: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: User manual; Operation (test) report; User monitoring record
ISO/IEC 12207 SLCP reference: 6.5 Validation; 5.3 Qualification testing; 5.4 Operation
Target audience: User, Human interface designer

NOTE: This metric is generally used as one of the experienced and justified metrics, rather than the others.
Metric name: Help accessibility
Purpose of the metric: What proportion of the help topics can the user locate?
Method of application: Conduct a user test and observe user behaviour. Count the number of tasks for which correct online help is located and compare it with the total number of tasks tested.
Measurement, formula and data element computations: X = A/B, where A = number of tasks for which correct online help is located, and B = total number of tasks tested.
Interpretation of measured value: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: Operation (test) report; User monitoring record
ISO/IEC 12207 SLCP reference: 6.5 Validation; 5.3 Qualification testing; 5.4 Operation
Target audience: User, Human interface designer

Metric name: Help frequency
Purpose of the metric: How frequently does a user have to access help to learn an operation needed to complete his/her work task?
Method of application: Conduct a user test and observe user behaviour. Count the number of times a user accesses help to complete his/her task.
Measurement, formula and data element computations: X = A, where A = number of accesses to help until a user completes his/her task.
Interpretation of measured value: 0 <= X; the closer to 0, the better.
Metric scale type: Absolute
Measure type: A = Count; X = Count
Input to measurement: Operation (test) report; User monitoring record
ISO/IEC 12207 SLCP reference: 6.5 Validation; 5.3 Qualification testing; 5.4 Operation
Target audience: User, Human interface designer
Table 8.3.3 Operability metrics a) Conforms with operational user expectations

External operability metrics a) Conforms with operational user expectations

Metric name: Operational consistency in use
Purpose of the metric: How consistent are the components of the user interface?
Method of application: Observe the behaviour of the user and ask for the user's opinion.
Measurement, formula and data element computations:
a) X = 1 - A/B, where A = number of messages or functions which the user found unacceptably inconsistent with his/her expectation, and B = number of messages or functions.
b) Y = N/UOT, where N = number of operations which the user found unacceptably inconsistent with his/her expectation, and UOT = user operating time (during the observation period).
Interpretation of measured value: a) 0 <= X <= 1; the closer to 1.0, the better. b) 0 <= Y; the smaller and closer to 0.0, the better.
Metric scale type: a) Absolute; b) Ratio
Measure type: a) A = Count; B = Count; X = Count/Count. b) N = Count; UOT = Time; Y = Count/Time
Input to measurement: Operation (test) report; User monitoring record
ISO/IEC 12207 SLCP reference: 6.5 Validation; 5.3 Qualification testing; 5.4 Operation
Target audience: User, Human interface designer

NOTE 1: A user's operating experience is usually helpful for recognising the operation patterns from which the user's expectations derive.
NOTE 2: Both "input predictability" and "output predictability" contribute to operational consistency.
NOTE 3: This metric may be used to measure "ease of deriving an operation" and "smooth communication".
Table 8.3.3 Operability metrics b) Controllable

External operability metrics b) Controllable

Metric name: Error correction
Purpose of the metric: Can the user easily correct errors in tasks?
Method of application: Conduct a user test and observe user behaviour.
Measurement, formula and data element computations: T = Tc - Ts, where Tc = time at which correction of the specified type of error in the performed task is completed, and Ts = time at which correction of that error is started.
Interpretation of measured value: 0 < T; the shorter, the better.
Metric scale type: Ratio
Measure type: Ts, Tc = Time; T = Time
Input to measurement: Operation (test) report; User monitoring record
ISO/IEC 12207 SLCP reference: 6.5 Validation; 5.3 Qualification testing; 5.4 Operation
Target audience: User, Human interface designer

NOTE: Users of this metric are advised to specify the types of errors for the test cases by considering, for example, severity (displaying an error or destroying data), the type of input/output error (input text error, output data error to a database, or graphical error on the display), or the type of operational situation in which the error occurs (interactive use or emergency operation).
Metric name: Error correction in use
Purpose of the metric: Can the user easily recover from his/her errors or retry tasks? Can the user easily recover his/her input?
Method of application: Observe the behaviour of the user who is operating the software.
Measurement, formula and data element computations:
a) X = A/UOT, where A = number of times the user succeeds in cancelling an erroneous operation, and UOT = user operating time during the observation period.
b) X = A/B, where A = number of screens or forms where the input data were successfully modified or changed before being processed, and B = number of screens or forms where the user tried to modify or change the input data during the observed user operating time.
Interpretation of measured value: a) 0 <= X; the higher, the better. b) 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: a) Ratio; b) Absolute
Measure type: a) A = Count; UOT = Time; X = Count/Time. b) A = Count; B = Count; X = Count/Count
Input to measurement: Operation (test) report; User monitoring record
ISO/IEC 12207 SLCP reference: 6.5 Validation; 5.3 Qualification testing; 5.4 Operation
Target audience: User, Human interface designer

NOTE: When the functions are tested one by one, the ratio of the number of functions for which the user succeeds in cancelling his/her operation to all functions can also be calculated.
Table 8.3.3 Operability metrics c) Suitable for the task operation

External operability metrics c) Suitable for the task operation

Metric name: Default value availability in use
Purpose of the metric: Can the user easily select parameter values for convenient operation?
Method of application: Observe the behaviour of the user who is operating the software. Count how many times the user attempts to establish or select parameter values and fails (because the user cannot use the default values provided by the software).
Measurement, formula and data element computations: X = 1 - A/B, where A = number of times the user fails to establish or select parameter values within a short period (because the user cannot use the default values provided by the software), and B = total number of times the user attempts to establish or select parameter values.
Interpretation of measured value: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: Operation (test) report; User monitoring record
ISO/IEC 12207 SLCP reference: 6.5 Validation; 5.3 Qualification testing; 5.4 Operation
Target audience: User, Human interface designer

NOTE 1: It is recommended to observe and record the operator's behaviour and to decide how long a period is allowable for selecting parameter values as the "short period".
NOTE 2: When the parameter-setting functions are tested one by one, the ratio of allowable functions can also be calculated.
NOTE 3: It is recommended to conduct functional testing that covers the parameter-setting functions.
Table 8.3.3 Operability metrics d) Self-descriptive (guiding)

External operability metrics d) Self-descriptive (guiding)

Metric name: Message understandability in use
Purpose of the metric: Can the user easily understand the messages from the software system? Is there any message that delays the user's understanding before starting the next action? Can the user easily memorise important messages?
Method of application: Observe the behaviour of the user who is operating the software.
Measurement, formula and data element computations: X = A/UOT, where A = number of times the user pauses for a long period, or successively and repeatedly fails at the same operation, because of a lack of message comprehension, and UOT = user operating time (observation period).
Interpretation of measured value: 0 <= X; the smaller and closer to 0.0, the better.
Metric scale type: Ratio
Measure type: A = Count; UOT = Time; X = Count/Time
Input to measurement: Operation (test) report; User monitoring record
ISO/IEC 12207 SLCP reference: 6.5 Validation; 5.3 Qualification testing; 5.4 Operation
Target audience: User, Human interface designer

NOTE 1: The extent of the ease of message comprehension is represented by how long a message delays the user's understanding before the next action is started. It is therefore recommended to observe and record the operator's behaviour and to decide what length of pause is considered a "long period".
NOTE 2: It is recommended to investigate the following as possible causes of problems with the user's message comprehension:
a) Attentiveness: attentiveness implies that the user successfully recognises important messages presenting information such as guidance on the next user action, the names of data items to be looked at, and warnings about careful operation.
- Does the user ever fail to notice important messages when encountering them?
- Can the user avoid mistakes in operation because of recognising important messages?
b) Memorability: memorability implies that the user remembers important messages presenting information such as guidance on the next user action, the names of data items to be looked at, and warnings about careful operation.
- Can the user easily remember important messages?
- Is remembering important messages helpful to the user?
- Is the user required to remember only a few important messages and not too many?
NOTE 3: When the messages are tested one by one, the ratio of comprehended messages to the total can also be calculated.
NOTE 4: When several users who are participants in operational testing are observed, the ratio of users who comprehended the messages to all users can be calculated.
Self- In what proportion of Conduct user test and X= A / B 0 <= X <= 1 Absolute X =Count/ Operation 6.5 User
explanatory error conditions does observe user behaviour. The closer to Count (test) Validation
the user propose the A =Number of error conditions for which the 1.0 is the A =Count report 5.3 Human
error messages
correct recovery user proposes the correct recovery action better. B =Count Qualifica- interface
action? B =Number of error conditions tested User tion testing designer
monitoring 5.4
record Operation
NOTE: This metric should generally be used on the basis of experience and with justification.
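As an informative illustration (not part of this Technical Report), the sketch below shows how the “Message understandability in use” measure X = A / UOT defined earlier in this table could be computed in Python. The unit of time and the way pause events are detected are hypothetical assumptions.

    # Informative sketch: message understandability in use, X = A / UOT.
    # 'pause_events' (A) counts the times the user paused for a "long
    # period" or repeatedly failed at the same operation because a message
    # was not understood; 'operating_minutes' is the observation period UOT.

    def message_understandability(pause_events, operating_minutes):
        if operating_minutes <= 0:
            raise ValueError("user operating time must be positive")
        return pause_events / operating_minutes  # smaller, closer to 0.0, is better

    print(message_understandability(pause_events=3, operating_minutes=120))  # 0.025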
Table 8.3.3 Operability metrics e) Operational error tolerant (Human error free)
External operability metrics e) Operational error tolerant (Human error free)
Metric name Purpose of the Method of application Measurement, formula and Interpretation Metric Measure Input to ISO/IEC Target
metrics data element computations of measured scale type measure- 12207 audience
value type ment SLCP
Reference
Operational Can user easily Observe the behaviour of X=1 - A/B 0<=X<= 1 Absolute A= Count, Operation 6.5 User
error recover from a the user who is operating The closer to B= Count (test) Validation
recoverability in worsened situation? software. A= Number of unsuccessfully recovered 1.0 is the X= Count/ report 5.3 Human
use situations (after a user error or change) in better. Count Qualifica- interface
which the user was not informed about a risk User tion testing designer
by the system monitoring 5.4
B= Number of user errors or changes record Operation
NOTE: The formula above is representative of the worst case. User of this metric may take account of the combination of 1) the number of errors where the user is / is not warned by the
software system and 2) the number of occasions where the user successfully / unsuccessfully recovers the situation.
Time between Can user operate the Observe the behaviour of X = T / N (at time t during [ t-T, t] ) 0<X Ratio T = Time Operation 6.5 User
human error software long enough the user who is operating The higher is N = Count (test) Validation
without human error? software T = operation time period during observation the better. X= report 5.3 Human
operations in
( or The sum of operating time between Time / Qualifica- interface
use user’s human error operations ) Count User tion testing designer
N= number of occurrences of user’s human monitoring 5.4
error operation record Operation
NOTE:
1. Human error operations may be detected by counting the following user behaviours:
a) Simple human error (slips): the number of times that the user simply makes errors in an input operation;
b) Intentional error (mistakes): the number of times that the user repeatedly fails at the same operation through misunderstanding during the observation period;
c) Operation hesitation pause: the number of times that the user pauses for a long period with hesitation during the observation period.
The user of this metric is suggested to measure separately for each type listed above.
2. An operation pause implies a user’s hesitation. Whether a pause is considered a long period or not depends on the function, the operation procedure, the application domain and the user. Therefore, the evaluator is requested to take these into account and determine a reasonable threshold time; for an interactive operation, a “long period” threshold in the range of 1 min. to 3 min. may be used.
External operability metrics e) Operational error tolerant (Human error free)
Metric name Purpose of the Method of application Measurement, formula and Interpretation Metric Measure Input to ISO/IEC Target
metrics data element computations of measured scale type measure- 12207 audience
value type ment SLCP
Reference
Undoability How frequently does Conduct user test and a) 0<=X<=1 a) A= Count Operation 6.5 User
(User error the user successfully observe user behaviour. X= A / B The closer to Absolute B= Count (test) Validation
correct input errors? A= Number of input errors which the user 1.0 is the X= Count/ report 5.3 Human
correction)
successfully corrects better. Count Qualifica- interface
B= Number of attempts to correct input User tion testing designer
errors monitoring 5.4
record Operation
How frequently does Conduct user test and b) 0 <= Y <= 1 b) A= Count Operation 6.5 User
the user correctly observe user behaviour. Y= A / B The closer to Absolute B= Count (test) Validation
undo errors? A= Number of error conditions which the 1.0 is the Y= Count/ report 5.3 Human
user successfully corrects better. Count Qualifica- interface
B= Total number of error conditions tested User tion testing designer
monitoring 5.4
record Operation
NOTE: This metric should generally be used on the basis of experience and with justification.
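Informative sketch (not part of this Technical Report) of the two “Undoability” ratios above; both are plain success proportions, so a single hypothetical helper serves for X (input-error correction) and Y (undoing tested error conditions).

    # Informative sketch: undoability, X = A / B and Y = A / B.
    def correction_ratio(successful, attempted):
        # Shared helper for both measures; closer to 1.0 is better.
        if attempted == 0:
            raise ValueError("no attempts / error conditions observed")
        return successful / attempted

    x = correction_ratio(successful=18, attempted=20)  # input errors corrected
    y = correction_ratio(successful=7, attempted=10)   # error conditions undone
    print(x, y)                                        # 0.9 0.7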
Table 8.3.3 Operability metrics f) Suitable for individualisation
External operability metrics f) Suitable for individualisation
Metric name Purpose of the Method of application Measurement, formula and Interpretation Metric Measure Input to ISO/IEC Target
metrics data element computations of measured scale type measure- 12207 audience
value type ment SLCP
Reference
Customisability Can user easily Conduct user test and X= A / B 0 <= X <= 1 Absolute A= Count User 6.5 User
customise operation observe user behaviour. The closer to B= Count manual Validation
procedures for his/her A= Number of functions successfully 1.0 is the X= Count/ 5.3 Human
convenience? customised better. Count Operation Qualifica- interface
B= Number of attempts to customise (test) tion testing designer
Can a user, who report 5.4
instructs end users, Operation
easily set customised User
operation procedure monitoring
templates for record
preventing their
errors?
What proportion of
functions can be
customised?
NOTE:
1. Ratio of user’s failures to customise may be measured.
Y = 1 - (C / D)
C = Number of cases in which a user fails to customise operation
D = Total number of cases in which a user attempted to customise operation for his/her convenience.
0<=Y<= 1, The closer to 1.0 is the better.
2. It is recommended to regard the following as variations of customising operations:
- choosing an alternative operation, such as using menu selection instead of command input;
- combining the user’s operation procedures, such as recording and editing operation procedures;
- setting a constrained template operation, such as programming procedures or making a template for input guidance.

3. This metric should generally be used on the basis of experience and with justification.
Operation Can user easily Count user’s strokes for X=1- A/B 0<=X< 1 Absolute A= Count Operation 6.5 User
procedure reduce operation specified operation and The closer to B= Count (test) Validation
procedures for his/her compare them between A = Number of reduced operation 1.0 is the X= Count/ report 5.3 Human
reduction
convenience? before and after procedures after customising operation better. Count Qualifica- interface
customising operation. B = Number of operation procedures before User tion testing designer
customising operation monitoring 5.4
record Operation
NOTE: 1. It is recommended to take samples for each different user task and to distinguish between operators who are skilled users and those who are beginners.
2. The number of operation procedures may be represented by counting operation strokes such as clicks, drags, key touches, screen touches, etc.
3. This includes keyboard shortcuts.
External operability metrics f) Suitable for individualisation
Metric name Purpose of the Method of application Measurement, formula and Interpretation Metric Measure Input to ISO/IEC Target
metrics data element computations of measured scale type measure- 12207 audience
value type ment SLCP
Reference
Physical What proportion of Conduct user test and X= A / B 0 <= X <= 1 Absolute A= Count Operation 6.5 User
accessibility functions can be observe user behaviour. The closer to B= Count (test) Validation
accessed by users A= Number of functions successfully 1.0 is the X= Count/ report 5.3 Human
with physical accessed better. Count Qualifica- interface
handicaps? B= Number of functions User tion testing designer
monitoring 5.4
record Operation
NOTE: Examples of physical handicaps causing inaccessibility are the inability to use a mouse, and blindness.
Table 8.3.4 Attractiveness metrics
External attractiveness metrics
Metric name Purpose of the Method of application Measurement, formula and Interpretation Metric Measure Input to ISO/IEC Target
metrics data element computations of measured scale type measure- 12207 audience
value type ment SLCP
Reference
Attractive How attractive is the Questionnaire to users Questionnaire to assess the attractiveness Depends on its Absolute Count Questionn 6.5 User
interaction interface to the user? of the interface to users, after experience of questionnaire aire result Validation
usage scoring 5.3 Human
method. Qualifica- interface
tion testing designer
5.4
Operation
Interface What proportion of Conduct user test and X= A / B 0 <= X <= 1 Absolute A= Count Users’ 6.5 User
appearance interface elements observe user behaviour. The closer to B= Count requests Validation
can be customised in A= Number of interface elements 1.0 is the X= Count/ 5.3 Human
customisability
appearance to the customised in appearance to user’s better. Count Operation Qualifica- interface
user’s satisfaction? satisfaction (test) tion testing designer
B= Number of interface elements that the report 5.4
user wishes to customise Operation
NOTE: This metric should generally be used on the basis of experience and with justification.
Table 8.3.5 Usability compliance metrics
External usability compliance metrics
Metric name Purpose Method of application Measurement, formula and Interpretation Metric Measure Input to ISO/IEC Target
data element computations of measured scale type measure- 12207 audience
value type ment SLCP
Reference
Usability How completely does Specify required X=1- A/B 0<= X <=1 Absolute A= Count Product 5.3 Supplier
compliance the software adhere compliance items based The closer to B= Count description Qualifica-
to the standards, on standards, conventions, A= Number of usability compliance items 1.0 is the X= Count/ (User tion testing User
conventions, style style guides or regulations specified that have not been implemented better. Count manual or
guides or regulations relating to usability. during testing Specificati 6.5
relating to usability? on) of Validation
Design test cases in B= Total number of usability compliance complian-
accordance with items specified ce and
compliance items. related
standards,
Conduct functional testing conven-
for these test cases. tions, style
guides or
regulations
Test
specifica-
tion and
report
NOTE:
It may be useful to collect several measured values along time, to analyse the trend of increasingly satisfied compliance items and to determine whether they are fully satisfied or not.
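Following the NOTE above, compliance values are worth collecting at several points in time. The informative Python sketch below (not part of this Technical Report) computes X = 1 - A / B per test campaign so the trend can be inspected; the dates and counts are fabricated.

    # Informative sketch: usability compliance, X = 1 - A / B, sampled
    # over successive test campaigns to show the trend of satisfied items.
    def compliance(unimplemented_items, total_items):
        if total_items == 0:
            raise ValueError("no compliance items specified")
        return 1.0 - unimplemented_items / total_items

    history = [("2002-01", 12, 40), ("2002-02", 6, 40), ("2002-03", 1, 40)]
    for date, a, b in history:
        print(date, round(compliance(a, b), 3))  # increasing values indicate progress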
8.4 Efficiency metrics
Table 8.4.1 Time behaviour metrics a) Response time
External time behaviour metrics a) Response time
Metric name Purpose of the Method of application Measurement, formula and Interpretation Metric Measure Input to ISO/IEC Target
metrics data element computations of measured scale type measure- 12207 audience
value type ment SLCP
Reference
Response time What is the time Start a specified task. T = ( time of gaining the result) 0<T Ratio T= Time Testing 5.3 User
taken to complete a Measure the time it takes - ( time of command entry finished) The sooner is report Sys./Sw.
specified task? for the sample to complete the better. Integration Developer
its operation. Operation 5.3
How long does it take Keep a record of each report Qualifica- Maintainer
before the system attempt. showing tion testing
responds to a elapsed 5.4 SQA
specified operation? time Operation
5.5 Mainte-
nance
NOTE: It is recommended to take account of time bandwidth and to use statistical analysis with measures for many tasks (sample shots), not for only one task.
Response time What is the average Execute a number of X = Tmean / TXmean 0 <= X Absolute Tmean= Testing 5.3 User
(Mean time to wait time the user scenarios of concurrent The nearer to Time report Sys./Sw.
response) experiences after tasks. Tmean = Sum(Ti) / N, (for i=1 to N) 1.0 and less TXmean= Integration Developer
issuing a request until Measure the time it takes TXmean = required mean response time than 1.0 is the Time Operation 5.3
the request is to complete the selected better. Ti= Time report Qualifica- Maintainer
completed within a operation(s). Ti= response time for i-th evaluation (shot) N= Count showing tion testing
specified system load Keep a record of each N= number of evaluations (sampled shots) X= Time/ elapse 5.4 SQA
in terms of concurrent attempt and compute the Time time Operation
tasks and system mean time for each 5.5 Mainte-
utilisation? scenario. nance
NOTE: Required mean response time can be derived from the specification of required real-time processing, user expectation of business needs, or observation of user reactions. Users’
cognitive aspects of human ergonomics might also be considered.
External time behaviour metrics a) Response time
Metric name Purpose of the Method of application Measurement, formula and Interpretation Metric Measure Input to ISO/IEC Target
metrics data element computations of measured scale type measure- 12207 audience
value type ment SLCP
Reference
Response time What is the absolute Calibrate the test. X= Tmax / Rmax 0<X Absolute Tmax= Testing 5.3 User
(Worst case limit on time required Emulate a condition whereby The nearer to Time report Sys./Sw.
response time in fulfilling a function? the system reaches a Tmax= MAX(Ti) (for i=1 to N) 1 and less Rmax= Integration Developer
ratio) maximum load situation. Rmax = required maximum response time than 1 is the Time Operation 5.3
In the worst case, can Run application and monitor better. Ti= Time report Qualifica- Maintainer
user still get response result(s). MAX(Ti)= maximum response time among N= Count showing tion testing
within the specified evaluations X= Time/ elapsed 5.4 SQA
time limit? N= number of evaluations (sampled shots) Time time Operation
Ti= response time for i-th evaluation (shot) 5.5 Mainte-
In the worst case, can nance
user still get a reply NOTE:
from the software 1. Distribution may be calculated as
within a time short illustrated below.
enough to be Statistical maximal ratio Y= Tdev / Rmax
tolerable for the user?
Tdev = Tmean + K ( DEV )
Tdev is the time deviated from the mean time
by a particular amount: e.g. 2 or 3 times the
standard deviation.
K: coefficient (2 or 3)
DEV= SQRT{ Sum( (Ti-Tmean) **2 ) / (N-1) } (for
i=1 to N)
Tmean = Sum(Ti) / N, (for i=1 to N)
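The worst-case ratio and the statistical variant in the NOTE above lend themselves to direct computation. The following informative Python sketch (not part of this Technical Report) evaluates X = Tmax / Rmax and Y = Tdev / Rmax with Tdev = Tmean + K(DEV); the sample response times are fabricated.

    # Informative sketch: worst-case response time ratio and the
    # statistical maximal ratio from the NOTE (K = 2 or 3).
    import math

    def worst_case_response(times, r_max, k=2):
        n = len(times)                       # N: sampled shots (n >= 2)
        t_mean = sum(times) / n              # Tmean = Sum(Ti) / N
        dev = math.sqrt(sum((t - t_mean) ** 2 for t in times) / (n - 1))
        t_dev = t_mean + k * dev             # Tdev = Tmean + K(DEV)
        return {"X": max(times) / r_max,     # nearer to 1, below 1, is better
                "Y": t_dev / r_max}          # statistical maximal ratio

    print(worst_case_response([0.8, 1.1, 0.9, 1.4, 1.0], r_max=2.0))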
Table 8.4.1 Time behaviour metrics b) Throughput
External time behaviour metrics b) Throughput
Metric name Purpose of the Method of application Measurement, formula and Interpretation Metric Measure Input to ISO/IEC Target
metrics data element computations of measured scale type measure- 12207 audience
value type ment SLCP
Reference
Throughput How many tasks can Calibrate each task X =A/T 0<X Ratio A= Count Testing 5.3 User
be successfully according to the intended A = number of completed tasks The larger is T= Time report Sys./Sw.
performed over a priority given. T = observation time period the better. X= Integration Developer
given period of time? Start several job tasks. Count/ Operation 5.3
Measure the time it takes Time report Qualifica- Maintainer
for the measured task to showing tion testing
complete its operation. elapse 5.4 SQA
Keep a record of each time Operation
attempt. 5.5 Mainte-
nance
Throughput What is the average Calibrate each task X = Xmean / Rmean 0<X Absolute Xmean= Testing 5.4 User
(Mean amount number of concurrent according to intended The larger is Count report Operation
of throughput) tasks the system can priority. Xmean = Sum(Xi)/N the better. Rmean= 5.5 Mainte- Developer
handle over a set unit Execute a number of Rmean = required mean throughput Count Operation nance
of time? concurrent tasks. Ai= Count report Maintainer
Measure the time it takes Xi = Ai / Ti Ti= Time showing
to complete the selected Ai = number of concurrent tasks observed Xi= Count/ elapse SQA
task in the given traffic. over set period of time for i-th evaluation Time time
Keep a record of each Ti = set period of time for i-th evaluation N= Count
attempt. N = number of evaluations X = Count/
Count
External time behaviour metrics b) Throughput
Metric name Purpose of the Method of application Measurement, formula and Interpretation Metric Measure Input to ISO/IEC Target
metrics data element computations of measured scale type measure- 12207 audience
value type ment SLCP
Reference
Throughput What is the absolute Calibrate the test. X = Xmax / Rmax 0<X Absolute Xmax= Testing 5.4 User
(Worst case limit on the system in Emulate the condition The larger is Count report Operation
throughput terms of the number whereby the system Xmax = MAX(Xi) (for i = 1 to N) the better. Rmax= 5.5 Mainte- Developer
ratio) and handling of reaches a situation of Rmax = required maximum throughput Count Operation nance
concurrent tasks as maximum load. Run job MAX(Xi) = maximum number of job tasks Ai= Count report Maintainer
throughput? tasks concurrently and among evaluations Ti= Time showing
monitor result(s). Xi= Count/ elapsed SQA
Xi = Ai / Ti Time time
Ai = number of concurrent tasks observed N= Count
over set period of time for i-th evaluation
Ti = set period of time for i-th evaluation Xdev=
N= number of evaluations Count

NOTE: X= Count/
1. Distribution may be calculated as Count
illustrated below.
Statistical maximal ratio Y= Xdev / Xmax
Xdev = Xmean + K ( DEV )
Xdev is the throughput deviated from the mean
value by a particular amount: e.g. 2 or 3 times
the standard deviation.
K: coefficient (2 or 3)
DEV= SQRT{ Sum( (Xi-Xmean) **2 ) / (N-1) } (for
i=1 to N)
Xmean = Sum(Xi) / N
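As an informative illustration (not part of this Technical Report), the sketch below computes Xi = Ai / Ti per evaluation, then the mean-throughput ratio Xmean / Rmean and the worst-case ratio Xmax / Rmax defined above; the sample evaluations are fabricated.

    # Informative sketch: mean and worst-case throughput ratios.
    def throughput_ratios(evaluations, r_mean, r_max):
        xs = [a / t for a, t in evaluations]       # Xi = Ai / Ti per evaluation
        x_mean = sum(xs) / len(xs)                 # Xmean = Sum(Xi) / N
        return {"mean_ratio": x_mean / r_mean,     # X = Xmean / Rmean
                "worst_ratio": max(xs) / r_max}    # X = Xmax / Rmax

    # Hypothetical evaluations: (tasks completed Ai, period Ti in seconds)
    print(throughput_ratios([(120, 60), (90, 60), (150, 60)],
                            r_mean=1.5, r_max=3.0))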
Table 8.4.1 Time behaviour metrics c) Turnaround time
External time behaviour metrics c) Turnaround time
Metric name Purpose of the Method of application Measurement, formula and Interpretation Metric Measure Input to ISO/IEC Target
metrics data element computations of measured scale type measure- 12207 audience
value type ment SLCP
Reference
Turnaround What is the wait time Calibrate the test T = Time between user’s finishing the 0<T Ratio T= Time Testing 5.3 User
time the user experiences accordingly. request and user’s finishing getting output results The shorter report Sys./Sw.
after issuing an Start the job task. Measure the better. Integration Developer
instruction to start a the time it takes for the job NOTE: It is recommended to take account of Operation 5.3
group of related tasks task to complete its time bandwidth and to use statistical report Qualifica- Maintainer
and their completion? operation. analysis with measures for many tasks showing tion testing
Keep a record of each (sample shots), not only one task (shot). elapse 5.4 SQA
attempt. time Operation
5.5 Mainte-
nance
Turnaround What is the average Calibrate the test. X = Tmean/TXmean 0<X Absolute Tmean= Testing 5.3 User
time (Mean time wait time the user Emulate a condition The shorter is Time report Sys./Sw.
for turnaround) experiences after where a load is placed on Tmean = Sum(Ti)/N, (for i=1 to N) the better. TXmean= Integration Developer
issuing an instruction the system by executing a TXmean = required mean turnaround time Time Operation 5.3
to start a group of number of concurrent Ti = turnaround time for i-th evaluation (shot) Ti= Time report Qualifica- Maintainer
related tasks and tasks (sampled shots). N = number of evaluations (sampled shots) N= Count showing tion testing
their completion Measure the time it takes elapse 5.4 SQA
within a specified to complete the selected X= Time/ time Operation
system load in terms job task in the given Time 5.5 Mainte-
of concurrent tasks traffic. nance
and system Keep a record of each
utilisation? attempt.
External time behaviour metrics c) Turnaround time
Metric name Purpose of the Method of application Measurement, formula and Interpretation Metric Measure Input to ISO/IEC Target
metrics data element computations of measured scale type measure- 12207 audience
value type ment SLCP
Reference
Turnaround What is the absolute Calibrate the test. X= Tmax / Rmax 0<X Absolute X= Time/ Testing 5.4 User
time (Worst limit on time required Emulate a condition where- The nearer to Time report Operation
case in fulfilling a job task? by the system reaches Tmax= MAX(Ti) (for i=1 to N) 1.0 and less 5.5 Mainte- Developer
turnaround time maximum load in terms of Rmax = required maximum turnaround time than 1.0 is the Tmax= Operation nance
ratio) In the worst case, tasks performed. Run the better. Time report Maintainer
how long does it take selected job task and MAX(Ti)= maximum turnaround time among Rmax= showing
for the software system monitor result(s). evaluations Time elapsed SQA
to perform specified N= number of evaluations (sampled shots) Ti= Time time
tasks? Ti= turnaround time for i-th evaluation (shot) N= Count
Tdev= Time
NOTE:
1. Distribution may be calculated as illustrated below.
Statistical maximal ratio Y= Tdev / Rmax
Tdev = Tmean + K ( DEV )
Tdev is the time deviated from the mean time by a particular amount: e.g. 2 or 3 times the standard deviation.
K: coefficient (2 or 3)
DEV= SQRT{ Sum( (Ti-Tmean) **2 ) / (N-1) } (for i=1 to N)
Tmean = Sum(Ti) / N, (for i=1 to N)
Waiting time What proportion of Execute a number of 0<= X Absolute Ta= Time Testing 5.3 User
the time do users scenarios of concurrent X = Ta / Tb The smaller Tb= Time report Sys./Sw.
spend waiting for the tasks. the better. X= Time/ Integration Developer
system to respond? Measure the time it takes Ta = total time spent waiting Time Operation 5.3
to complete the selected Tb = task time report Qualifica- Maintainer
operation(s). showing tion testing
Keep a record of each elapse 5.4 SQA
attempt and compute the time Operation
mean time for each 5.5 Mainte-
scenario. nance
NOTE: If the tasks can be partially completed, the Task efficiency metric should be used when making comparisons.
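Informative sketch (not part of this Technical Report) of the “Waiting time” proportion X = Ta / Tb above; the wait intervals would in practice come from an instrumented test harness, which is assumed here.

    # Informative sketch: waiting time, X = Ta / Tb.
    def waiting_time_ratio(wait_intervals, task_time):
        ta = sum(wait_intervals)             # Ta: total time spent waiting
        if task_time <= 0 or ta > task_time:
            raise ValueError("task time must be positive and cover the waits")
        return ta / task_time                # smaller is better

    print(waiting_time_ratio([2.0, 1.5, 0.5], task_time=60.0))  # ~0.067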
Table 8.4.2 Resource utilisation metrics a) I/O devices resource utilisation
External resource utilisation metrics a) I/O devices resource utilisation
Metric name Purpose of the Method of application Measurement, formula and Interpretation Metric Measure Input to ISO/IEC Target
metrics data element computations of measured scale type measure- 12207 audience
value type ment SLCP
Reference
I/O devices Is the I/O device Execute concurrently a X=A/B 0 <= X <= 1 Absolute A= Time Testing 5.3 Developer
utilisation utilisation too high, large number of tasks, A = time of I/O devices occupied B= Time report Qualifica-
causing record I/O device B = specified time which is designed to Less than, and X= Time/ tion testing Maintainer
inefficiencies? utilisation, and compare occupy I/O devices nearer to, 1.0 Time Operation 5.4
with the design objectives. is the better. report Operation SQA
Mainte-
nance
I/O loading What is the absolute Calibrate the test X = Amax / Rmax 0<= X Absolute Amax = Testing 5.3 User
limits limit on I/O utilisation condition. Emulate a The smaller is Count report Qualifica-
in fulfilling a function? condition whereby the Amax = MAX(Ai), (for i = 1 to N) the better. Rmax = tion testing Developer
system reaches a situation Rmax = required maximum I/O messages Count Operation 5.4
of maximum load. Run MAX(Ai) = Maximum number of I/O Ai = Count report Operation Maintainer
application and monitor messages from 1st to i-th evaluation. N= Count showing 5.5 Mainte-
result(s). N= number of evaluations. X = Count/ elapse nance SQA
Count time
I/O related How often does the Calibrate the test X=A/T 0 <= X Ratio A = Count Testing 5.3 User
errors user encounter conditions. Emulate a A = number of warning messages or system The smaller is T = Time report Qualifica-
problems in I/O condition whereby the failures the better. X = Count/ tion testing Maintainer
device related system reaches a situation T = User operating time during user Time Operation 5.4
operations? of maximum I/O load. Run observation report Operation SQA
the application and record showing 5.5 Mainte-
number of errors due to elapse nance
I/O failure and warnings. time
Mean I/O What is the average Calibrate the test X = Amean / Rmean 0<= X Absolute Amean = Testing 5.3 User
fulfillment ratio number of I/O related condition. Emulate a The smaller is Count report Qualifica-
error messages and condition whereby the Amean = (Ai)/N the better. Rmean = tion testing Developer
failures over a system reaches a situation Rmean = required mean number of I/O Count Operation 5.4
specified length of of maximum load. Run the messages Ai = Count report Operation Maintainer
time and specified application and record Ai = number of I/O error messages for i-th N= Count showing 5.5 Mainte-
utilisation? number of errors due to evaluation X = Count/ elapse nance SQA
I/O failure and warnings. N = number of evaluations Count time
User waiting What is the impact of Execute concurrently a T = Time spent waiting for I/O devices 0<T Ratio T= Time Testing 5.3 User
time of I/O I/O device utilisation large number of tasks and to finish operation report Qualifica-
devices on the user wait measure the user wait The shorter is tion testing Developer
utilisation times? times as a result of I/O NOTE: It is recommended that the maximal the better. Operation 5.4
device operation. and distributed times be investigated report Operation Maintainer
for several cases of testing or operation, 5.5 Mainte-
because the measures tend to fluctuate nance SQA
with the conditions of use.
Table 8.4.2 Resource utilisation metrics b) Memory resource utilisation
External resource utilisation metrics b) Memory resource utilisation
Metric name Purpose of the Method of application Measurement, formula and Interpretation Metric Measure Input to ISO/IEC Target
metrics data element computations of measured scale type measure- 12207 audience
value type ment SLCP
Reference
Maximum What is the absolute Calibrate the test X = Amax / Rmax 0<= X Absolute Amax= Testing 5.3 User
memory limit on memory condition. Emulate a The smaller is Count report Qualifica-
utilisation required in fulfilling a condition whereby the Amax = MAX(Ai), (for i = 1 to N) the better. Rmax= tion testing Developer
function? system reaches a situation Rmax = required maximum memory related Count Operation 5.4
of maximum load. Run error messages Ai= Count report Operation Maintainer
application and monitor MAX(Ai) = Maximum number of memory N= Count showing 5.5 Mainte-
result(s) related error messages from 1st to i-th X = Count/ elapse nance SQA
evaluation Count time
N= number of evaluations
Mean What is the average Calibrate the test X = Amean / Rmean 0<= X Absolute Amean= Testing 5.3 User
occurrence of number of memory condition. Emulate a The smaller is Count report Qualifica-
memory error related error condition whereby the Amean = Sum(Ai)/N the better. Rmean= tion testing Developer
messages and system reaches a situation Rmean = required mean number of memory Count Operation 5.4
failures over a of maximum load. Run the related error messages Ai= Count report Operation Maintainer
specified length of application and record Ai = number of memory related error N= Count showing 5.5 Mainte-
time and a specified number of errors due to messages for i-th evaluation X = Count/ elapse nance SQA
load on the system? memory failure and N = number of evaluations Count time
warnings.
Ratio of How many memory Calibrate the test X=A/T 0 <= X Ratio A = Count Testing 5.3 User
memory errors were conditions. report Qualifica-
experienced over a The smaller is T = Time tion testing Maintainer
error/time
set period of time and Emulate a condition the better. 5.4
specified resource whereby the system A = number of warning messages or system X = Count/ Operation SQA
utilisation? reaches a situation of failures Time Operation 5.5 Mainte-
maximum load. report nance
T = User operating time during user showing
Run the application and observation elapse
record number of errors time
due to memory failure and
warnings.
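The memory resource utilisation measures above are simple count ratios and rates. The following informative Python sketch (not part of this Technical Report) computes the mean occurrence ratio X = Amean / Rmean and the error/time ratio X = A / T; all inputs are fabricated.

    # Informative sketch: memory resource utilisation measures.
    def mean_memory_error_ratio(error_counts, required_mean):
        a_mean = sum(error_counts) / len(error_counts)  # Amean = Sum(Ai) / N
        return a_mean / required_mean                   # smaller is better

    def memory_error_rate(error_count, operating_time):
        return error_count / operating_time             # X = A / T, smaller is better

    print(mean_memory_error_ratio([3, 5, 2, 4], required_mean=10.0))  # 0.35
    print(memory_error_rate(6, operating_time=8.0))                   # 0.75 per hour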
Table 8.4.2 Resource utilisation metrics c) Transmission resource utilisation
External resource utilisation metrics c) Transmission resource utilisation
Metric name Purpose of the Method of application Measurement, formula and Interpretation Metric Measure Input to ISO/IEC Target
metrics data element computations of measured scale type measure- 12207 audience
value type ment SLCP
Reference
Maximum What is the absolute Evaluate what is required X = Amax / Rmax 0<= X Absolute Amax = Testing 5.3 User
transmission limit of transmissions for the system to reach a Amax = MAX(Ai), (for i = 1 to N) The smaller is Count report Qualifica-
required to fulfil a situation of maximum load. Rmax = required maximum number of the better. Rmax = tion testing Developer
utilisation
function? Emulate this condition. transmission related error messages and Count Operation 5.4
Run application and failures Ai = Count report Operation Maintainer
monitor result(s) . MAX(Ai) = Maximum number of transmission N= Count showing 5.5 Mainte-
related error messages and failures from 1st elapse nance SQA
to i-th evaluation. X = Count/ time
N= number of evaluations Count
Media device What is the degree of Calibrate the test X = SyncTime/T The smaller is Ratio SyncTime Testing 5.3 User
utilisation synchronisation conditions. Emulate a the better. = Time report Qualifica-
balancing between different condition whereby the SyncTime = Time devoted to a continuous T = Time Operation tion testing Maintainer
media over a set system reaches a situation resource X= report 5.4
period of time? of maximum transmission T = required time period during which Time/Time showing Operation SQA
load. Run the application dissimilar media are expected to finish their elapse 5.5 Mainte-
and record the delay in the tasks with synchronisation time nance
processing of different
media types.
Mean What is the average Calibrate the test X = Amean / Rmean 0<= X Absolute Amean= Testing 5.3 User
occurrence of number of condition. Emulate a The smaller is Count report Qualifica-
transmission transmission related condition whereby the Amean = Sum(Ai)/N the better. Rmean= tion testing Developer
error error messages and system reaches a situation Rmean = required mean number of Count Operation 5.4
failures over a of maximum load. Run the transmission related error messages and Ai= Count report Operation Maintainer
specified length of application and record failures N= Count showing 5.5 Mainte-
time and specified number of errors due to X = Count/ elapse nance SQA
utilisation? transmission failure and Ai = Number of transmission related error Count time
warnings. messages and failures for i-th evaluation
N = number of evaluations
Mean of How many Calibrate the test X=A/T 0 <= X Ratio A = Count Testing 5.3 User
transmission transmission-related conditions. Emulate a The smaller is T = Time report Qualifica-
error per time error messages were condition whereby the A = number of warning messages or system the better. X = Count/ tion testing Maintainer
experienced over a system reaches a situation failures Time Operation 5.4
set period of time and of maximum transmission T = User operating time during user report Operation SQA
specified resource load. Run the application observation showing 5.5 Mainte-
utilisation? and record number of elapse nance
errors due to transmission time
failure and warnings.
External resource utilisation metrics c) Transmission resource utilisation
Metric name Purpose of the Method of application Measurement, formula and Interpretation Metric Measure Input to ISO/IEC Target
metrics data element computations of measured scale type measure- 12207 audience
value type ment SLCP
Reference
Transmission Is software system Execute concurrently X=A/B 0 <= X <= 1 Absolute A= Size Testing 5.3 Developer
capacity capable of performing specified tasks with A = transmission capacity Less than, and B= Size report Qualifica-
utilisation tasks within expected multiple users, observe B = specified transmission capacity which is nearer to, 1.0 X= Size/ tion testing Maintainer
transmission transmission capacity and designed to be used by the software during is the better. Size Operation 5.4
capacity? compare with the execution report Operation SQA
specified one. 5.5 Mainte-
nance
NOTE: It is recommended to measure the
dynamic peak value with multiple users.
Table 8.4.3 Efficiency compliance metrics
Efficiency compliance metrics
Metric name Purpose of the Method of application Measurement, formula and Interpretation Metric Measure Input to ISO/IEC Target
metrics data element computations of measured scale type measure- 12207 audience
value type ment SLCP
Reference
Efficiency How compliant is the Count the number of items X = 1 - A / B 0<= X <=1 Absolute A= Count Product 5.3 Supplier
Compliance efficiency of the requiring compliance that (X: Ratio of satisfied compliance items The closer to B= Count description Qualifica-
product to applicable have been met and relating to efficiency) 1.0 is the X= Count/ (User tion testing User
regulations, compare with the number better. Count manual
standards and of items requiring A= Number of efficiency compliance items or 6.5
conventions? compliance in the specified that have not been implemented Specificati Validation
specification. during testing on) of
complian-
B= Total number of efficiency compliance ce and
items specified related
standards,
NOTE: It may be useful to collect several conven-
measured values along time, to analyse the tions or
trend of increasing satisfied compliance regulations
items and to determine whether they are
fully satisfied or not. Test
specifica-
tion and
report
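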
8.5 Maintainability metrics
Table 8.5.1 Analysability metrics
External analysability metrics
Metric name Purpose of the Method of application Measurement, formula and Interpretation Metric Measure Input to ISO/IEC Target
metrics data element computations of measured scale type measure- 12207 audience
value type ment SLCP
Reference
Audit trail Can user identify Observe behaviour of user X= A / B 0<=X<=1 Absolute A= Count Problem 5.3 Developer
capability specific operation or maintainer who is trying The closer to B= Count resolution Qualifica-
which caused failure? to resolve failures. A= Number of data actually recorded during 1.0 is the X= Count/ report tion testing Maintainer
operation better. Count 5.4
Can maintainer easily B= Number of data planned to be recorded Operation Operation Operator
find specific enough to monitor status of software during report 5.5 Mainte-
operation which operation nance
caused failure?
Diagnostic How capable are the Observe behaviour of user X= A / B 0<=X<= 1 Absolute A= Count Problem 5.3 Developer
function diagnostic functions or maintainer who is trying The closer to B= Count resolution Qualifica-
support in supporting causal to resolve failures using A= Number of failures which maintainer can 1.0 is the X= Count/ report tion testing Maintainer
analysis? diagnostics functions. diagnose (using the diagnostics function) to better. Count 5.4
understand the cause-effect relationship Operation Operation Operator
Can user identify the report 5.5 Mainte-
specific operation B= Total number of registered failures nance
which caused failure?
(User may be able to
avoid falling into the
same failure
occurrence again with
alternative operation.)
Can maintainer easily
find cause of failure?
Failure analysis Can user identify Observe behaviour of user X=1- A / B 0<=X<= 1 Absolute A= Count Problem 5.3 User
capability specific operation or maintainer who is trying The closer to B= Count resolution Qualifica-
which caused failure? to resolve failures. A= Number of failures of which causes are 1.0 is the X= Count/ report tion testing Developer
still not found better. Count 5.4
Can maintainer easily B= Total number of registered failures Operation Operation Maintainer
find cause of failure? report 5.5
Mainte- Operator
nance
External analysability metrics
Metric name Purpose of the Method of application Measurement, formula and Interpretation Metric Measure Input to ISO/IEC Target
metrics data element computations of measured scale type measure- 12207 audience
value type ment SLCP
Reference
Failure analysis Can user efficiently Observe behaviour of user X= Sum(T) / N 0<=X Ratio T= Time Problem 5.3 Developer
efficiency analyse cause of or maintainer who is trying Tin,Tout= resolution Qualifica-
failure? to resolve failures. T= Tout - Tin The shorter is Time report tion testing Maintainer
(User sometimes Tout = Time at which the causes of failure the better. N= Count
performs are found out ( or reported back to user) Operation 5.4 Operator
maintenance by Tin = Time at which the failure report is X= Time/ report Operation
setting parameters.) received Count
Can maintainer N= Number of registered failures 5.5
easily find cause of Mainte-
failure? nance
How easy is it to analyse
the cause of failure?
NOTE: 1. It is recommended to measure the maximal time of the worst case and the time duration (bandwidth) to represent deviation.
2. It is recommended to exclude failures of which the causes are not yet found when the measurement is done. However, the ratio of such obscure failures should also be measured and presented together.
3. From the individual user’s point of view, time is of concern, while effort may also be of concern from the maintainer’s point of view. Therefore, person-hours may be used instead of time.
Status Can user identify Observe behaviour of user X= 1- A / B 0<=X<= 1 Absolute A= Count Problem 5.3 User
monitoring specific operation or maintainer who is trying The closer to B= Count resolution Qualifica-
capability which caused failure to get monitored data A= Number of cases which maintainer (or 1.0 is the X= Count/ report tion testing Developer
by getting monitored recording status of user) failed to get monitor data better. Count 5.4
data during software during operation. Operation Operation Maintainer
operation? B= Number of cases which maintainer (or report 5.5
user) attempted to get monitor data Mainte- Operator
Can maintainer easily recording status of software during operation nance
find cause of failure
by getting monitored
data during
operation?
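Informative sketch (not part of this Technical Report) of “Failure analysis efficiency”, X = Sum(T) / N with T = Tout - Tin. As the NOTE above recommends, failures whose causes are not yet found are excluded from the mean and reported as a separate ratio; the timestamp layout is hypothetical.

    # Informative sketch: failure analysis efficiency.
    def failure_analysis_efficiency(failures):
        # failures: list of (t_in, t_out) in hours; t_out is None while the
        # cause of the failure has not yet been found.
        resolved = [t_out - t_in for t_in, t_out in failures if t_out is not None]
        if not resolved:
            raise ValueError("no analysed failures to average")
        obscure_ratio = 1 - len(resolved) / len(failures)
        return sum(resolved) / len(resolved), obscure_ratio

    print(failure_analysis_efficiency([(0.0, 4.0), (1.0, 3.0), (2.0, None)]))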
Table 8.5.2 Changeability metrics
External changeability metrics
Metric name Purpose of the Method of application Measurement, formula and Interpretation Metric Measure Input to ISO/IEC Target
metrics data element computations of measured scale type measure- 12207 audience
value type ment SLCP
Reference
Change cycle Can the user's Monitor interaction Average Time : Tav = Sum(Tu) / N 0<Tav Ratio Tu= Time Problem 5.3 User
efficiency problem be solved to between user and supplier. The shorter is Trc, resolution Qualifica-
his satisfaction within Record the time taken Tu= Trc - Tsn the better, Tsn= Time report tion testing Maintainer
an acceptable time from the initial user's except when 5.4
scale? request to the resolution of Tsn= Time at which the user finished the number of N= Count Mainten- Operation Operator
the problem. sending the request for maintenance to revised ance 5.5 Mainte-
the supplier with the problem report versions was Tav= Time report nance
large.
Trc= Time at which the user received the Operation
revised version release (or status report) report

N= Number of revised versions
Change Can the maintainer Observe the behaviour of Average Time : Tav = Sum(Tm) / N 0<Tav Ratio Tm= Time Problem 5.3 Developer
implementation easily change the the user and maintainer resolution Qualifica-
elapse time software to resolve while trying to change the Tm= Tout - Tin The shorter is Tin, report tion testing Maintainer
the failure problem? software. the better, Tout= Time
Otherwise, investigate the Tout= Time at which the causes of failure except when Mainten- 5.4 Operator
problem resolution report are removed by changing the software (or the number of ance Operation
or maintenance report. the status is reported back to the user) failures was Tav= Time report
large. 5.5 Mainte-
Tin= Time at which the causes of failures are Operation nance
found out report

N= Number of registered and removed failures
NOTE: 1. It is recommended to measure the maximal time of the worst case and the time bandwidth to represent deviation.
2. It is recommended to exclude the failures for which causes have not yet been found when the measurement is done. However, the ratio of such obscure failures should also be measured and presented together.
3. From the individual user’s point of view, time is of concern, while effort may also be of concern from the maintainer’s point of view. Therefore, person-hours may be used instead of time.
External changeability metrics
Metric name Purpose of the Method of application Measurement, formula and Interpretation Metric Measure Input to ISO/IEC Target
metrics data element computations of measured scale type measure- 12207 audience
value type ment SLCP
Reference
Modification Can the maintainer Observe behaviour of T = Sum (A / B) / N 0<T Ratio A= Time Problem 5.3 Developer
complexity easily change the maintainer who is trying to The shorter is B= Size resolution Qualifica-
software to resolve change the software. A= Work time spent to make a change the better, N= Count report tion testing Maintainer
the problem? Otherwise, investigate B= Size of the software change unless the T= Time
problem resolution report N= Number of changes required Mainten- 5.4 Operator
or maintenance report and number of ance Operation
product description. NOTE: changes was report
The size of a software change may be excessive. 5.5 Mainte-
measured in changed executable statements Operation nance
of program code, the number of changed report
items of the requirements specification, or
changed pages of documents, etc.
Parameterised Can the user or the Observe behaviour of the X=1- A / B 0<=X<= 1 Absolute A= Count Problem 5.3 Developer
modifiability maintainer easily user or the maintainer The closer to B= Count resolution Qualifica-
change parameters to while trying to change the A= Number of cases in which maintainer fails 1.0 is the X= Count/ report tion testing Maintainer
change software and software. to change software by using a parameter better. Count 5.4
resolve problems? Otherwise, investigate B= Number of cases in which maintainer Mainten- Operation Operator
problem resolution report attempts to change software by using ance 5.5 Mainte-
or maintenance report. parameter report nance User
Operation
report
Software Can the user easily Observe the behaviour of X= A / B 0<=X<= 1 Absolute A= Count User 5.3 Developer
change control identify revised user or maintainer while The closer to B= Count manual or Qualifica-
capability versions? trying to change the A= Number of change log data actually 1.0 is the X= Count/ specificatio tion testing Maintainer
software. recorded better or the Count n 5.4
Can the maintainer Otherwise, investigate B= Number of change log data planned to closer to 0 the Problem Operation Operator
easily change the problem resolution report be recorded enough to trace software fewer resolution 5.5 Mainte-
software to resolve or maintenance report. changes changes have report nance
problems? taken place.
Mainten-
ance
report
Operation
report
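As an informative illustration (not part of this Technical Report), the sketch below computes “Modification complexity”, T = Sum(A / B) / N, from hypothetical maintenance records; per the NOTE above, the size B may be counted in changed executable statements, changed specification items or changed document pages.

    # Informative sketch: modification complexity, T = Sum(Ai / Bi) / N.
    def modification_complexity(changes):
        # changes: list of (work_time_hours Ai, size_of_change Bi) pairs
        ratios = [a / b for a, b in changes if b > 0]
        if not ratios:
            raise ValueError("no sized changes recorded")
        return sum(ratios) / len(ratios)     # shorter (smaller) is better

    print(modification_complexity([(8.0, 40), (3.0, 10), (5.0, 25)]))  # ~0.233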
Table 8.5.3 Stability metrics
External stability metrics
Metric name Purpose of the Method of application Measurement, formula and Interpretation Metric Measure Input to ISO/IEC Target
metrics data element computations of measured scale type measure- 12207 audience
value type ment SLCP
Reference
Change Can user operate Observe behaviour of user X= Na / Ta 0<=X,Y Ratio Problem 5.3 Developer
success ratio software system or maintainer who is The smaller Na, Nb= resolution Qualifica-
without failures after operating software system Y = { (Na / Ta) / (Nb / Tb) } and closer to Count report tion testing Maintainer
maintenance? after maintenance. 0 is the better. Ta,Tb=
Na = Number of cases which user Time Mainten- 5.4 Operator
Can maintainer easily Count failures which user encounters failures during operation after ance Operation
mitigate failures or maintainer encountered software was changed X= Count/ report
caused by during operating software Nb = Number of cases which user Time 5.5 Mainte-
maintenance side before and after encounters failures during operation before Operation nance
effects? maintenance. software is changed Y=[(Count/ report
Ta = Operation time during specified Time) /
Otherwise, investigate observation period after software is changed (Count/
problem resolution report, Tb = Operation time during specified Time)]
operation report or observation period before software is
maintenance report. changed
NOTE: 1. X and Y imply “frequency of encountering failures after change” and “fluctuation of the frequency of encountering failures before/after change”.
2. The user may need a specified period to determine the side effects of software changes, when a revision of the software is introduced for resolving problems.
3. It is recommended to compare this frequency before and after the change.
4. If the changed function is identified, it is recommended to determine whether encountered failures are detected in the changed function itself or in the other ones. The extent of impacts may be rated for each failure.
Modification Can user operate Count failure occurrences X= A / N 0<=X Absolute A= Count Problem 5.3 Developer
impact software system after change, which chain The smaller N= Count resolution Qualifica-
localisation without failures after from one another and are A= Number of failures emerging after a and closer to X= Count/ report tion testing Maintainer
( Emerging maintenance? affected by the change. failure is resolved by change during the 0 is the better. Count
failure after specified period Operation 5.4 Operator
change) Can maintainer easily N= Number of resolved failures report Operation
mitigate failures
caused by 5.5 Mainte-
maintenance side nance
effects?
NOTE: X implies “chaining failure emergence per resolved failure”. It is recommended to obtain a precise measure by checking, where possible, whether the cause of the current failure is
attributable to the change made for a previous failure resolution.
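Informative sketch (not part of this Technical Report) of the two stability measures above: the change success ratio X = Na / Ta with its before/after comparison Y = (Na / Ta) / (Nb / Tb), and the modification impact localisation ratio X = A / N. All counts and times are fabricated.

    # Informative sketch: stability measures.
    def change_success(na, ta, nb, tb):
        after, before = na / ta, nb / tb           # failure frequencies after/before
        return {"X": after, "Y": after / before}   # smaller, closer to 0, is better

    def modification_impact(emerged_failures, resolved_failures):
        return emerged_failures / resolved_failures  # X = A / N

    print(change_success(na=2, ta=100.0, nb=5, tb=100.0))                 # Y = 0.4
    print(modification_impact(emerged_failures=1, resolved_failures=10))  # 0.1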
Table 8.5.4 Testability metrics
External testability metrics
Metric name Purpose of the Method of application Measurement, formula and Interpretation Metric Measure Input to ISO/IEC Target
metrics data element computations of measured scale type measure- 12207 audience
value type ment SLCP
Reference
Availability of Can user and Observe behaviour of user X= A / B 0 <= X <=1 Absolute A= Count Problem 5.3 Developer
built-in test maintainer easily or maintainer who is The larger B= Count resolution Qualifica-
function perform operational testing software system A= Number of cases in which maintainer can and the closer X= Count/ report tion testing Maintainer
testing without after maintenance. suitably use the built-in test function to 1.0 is the Count 5.4
additional test facility B= Number of cases of test opportunities better. Operation Operation Operator
preparation? report 5.5 Mainte-
nance
NOTE: Examples of built-in test functions include a simulation function, a pre-check function for readiness for use, etc.
Re-test Can user and Observe behaviour of user X= Sum(T) / N 0<X Ratio T= Time Problem 5.3 Developer
efficiency maintainer easily or maintainer who is The smaller is N= Count resolution Qualifica-
perform operational testing software system T= Time spent to test to make sure whether the better. X= Time report tion testing Maintainer
testing and determine after maintenance. reported failure was resolved or not /Count 5.4
whether the software N= Number of resolved failures Operation Operation Operator
is ready for operation report 5.5 Mainte-
or not? nance
NOTE: X implies “average time (effort) to test after failure resolution”. If failures are not resolved or fixed, exclude them and separately measure ratio of such failures.
Test Can user and Observe behaviour of user X = A / B 0 <= X <=1 Absolute A= Count Problem 5.3 Developer
restartability maintainer easily or maintainer who is The larger B= Count resolution Qualifica-
perform operational testing software system A = Number of cases in which maintainer and the closer X= Count/ report tion testing Maintainer
testing with check after maintenance. can pause and restart executing test run at to 1.0 is the Count 5.4
points after desired points to check step by step better. Operation Operation Operator
maintenance? B= Number of cases of pause of executing report 5.5 Mainte-
test run nance
Table 8.5.5 Maintainability compliance metrics
External maintainability compliance metrics
Metric name Purpose of the Method of application Measurement, formula and Interpretation Metric Measure Input to ISO/IEC Target
metrics data element computations of measured scale type measure- 12207 audience
value type ment SLCP
Reference
Maintainability How compliant is the Count the number of items X = 1- A / B 0<= X <=1 Absolute A= Count Product 5.3 Supplier
compliance maintainability of the requiring compliance that The closer to B= Count description Qualifica-
product to applicable have been met and A= Number of maintainability compliance 1.0 is the X= Count/ (User tion testing User
regulations, compare with the number items specified that have not been better. Count manual or
standards and of items requiring implemented during testing Specifica- 6.5
conventions? compliance in the tion) of Validation
specification. B= Total number of maintainability complian-
compliance items specified ce and
related
standards,
conven-
tions or
regulations
Test
specifica-
tion and
report
NOTE:
It may be useful to collect several measured values along time, to analyse the trend of increasing satisfied compliance items and to determine whether they are fully satisfied.
8.6 Portability metrics
Table 8.6.1 Adaptability metrics
External adaptability metrics
Metric name Purpose of the Method of application Measurement, formula and Interpretation Metric Measure Input to ISO/IEC Target
metrics data element computations of measured scale type measure- 12207 audience
value type ment SLCP
Reference
Adaptability of Can user or Observe user’s or X= A/B 0<=X<=1 Absolute A= Count Problem 5.3 Developer
data structures maintainer easily maintainer’s behaviour B= Count resolution Qualifica-
adapt software to when the user is trying to A = The number of data which are operable The larger X= Count/ report tion testing Maintainer
data sets in a new adapt the software to the but were not observed, due to incomplete and closer to Count 5.4
environment? operation environment. operations caused by adaptation limitations 1.0 is the Operation Operation Operator
B= The number of data which are expected better. report 5.5 Mainte-
to be operable in the environment to which nance
the software is adapted
NOTE: These data mainly include types of data such as data files, data tuples or databases to be adapted to different data volumes, data items or data structures. A and B of the formula
must count the same types of data. Such an adaptation may be required when, for example, the business scope is extended.
Hardware Can user or Observe user’s or X= 1 - A / B 0<=X<=1 Absolute A= Count Problem 5.3 Developer
environmental maintainer easily maintainer’s behaviour B= Count resolution Qualifica-
adaptability adapt software to when user is trying to A= Number of operational functions whose The larger is X= Count/ report tion testing Maintainer
environment? adapt software to the tasks were not completed, or whose results the better. Count 5.4
(adaptability to Is software system operation environment. did not meet adequate levels, during Operation Operation Operator
hardware capable enough to combined operational testing with the report 5.5 Mainte-
devices and adapt itself to the environmental hardware nance
network facilities) operation B= Total number of functions which were
environment? tested
NOTE: It is recommended to conduct overload combination testing with hardware environmental configurations which may possibly be operationally combined in a variety of user
operational environments.
Organisational Can user or Observe user’s or X= 1 - A / B 0<=X<=1 Absolute A= Count Problem 5.3 Developer
environment maintainer easily maintainer’s behaviour B= Count resolution Qualifica-
adaptability adapt software to when user is trying to A= Number of operated functions whose The larger is X= Count/ report tion testing Maintainer
environment? adapt software to the tasks were not completed or whose results the better. Count 5.4
(Organisation operation environment. did not meet adequate levels during Operation Operation Operator
adaptability to Is software system operational testing with user’s business report 5.5 Mainte-
infrastructure of capable enough to environment nance
organisation) adapt itself to the B= Total number of functions which were
operational tested
environment?
NOTE: 1. It is recommended to conduct testing which takes account of the varieties of combinations of infrastructure components of possible users’ business environments.
2. “Organisational environment adaptability” is concerned with the environment of the business operation of the user’s organisation. “System software environmental adaptability” is concerned with the environment of the technical operation of systems. Therefore, there is a clear distinction.
External adaptability metrics
Metric name Purpose of the Method of application Measurement, formula and Interpretation Metric Measure Input to ISO/IEC Target
metrics data element computations of measured scale type measure- 12207 audience
value type ment SLCP
Reference
Porting user Can user or Observe user’s or T= Sum of user operating time spent to 0<T Ratio T=Time Problem 5.3 Developer
friendliness maintainer easily maintainer’s behaviour complete adaptation of the software to user’s The shorter is resolution Qualifica-
adapt software to when user is trying to environment, when user attempts to install or the better. report tion testing Maintainer
environment? adapt software to the change setup 5.4
operational environment. Operation Operation Operator
NOTE: T implies “user effort required to report 5.5 Mainte-
adapt to user’s environment”. Person-hour nance
may be used instead of time.
System Can user or Observe user’s or X= 1 - A / B 0<=X<=1 Absolute A= Count Problem 5.3 Developer
software maintainer easily maintainer’s behaviour B= Count resolution Qualifica-
environmental adapt software to when user is trying to A= Number of operational functions whose The larger is X= Count/ report tion testing Maintainer
adaptability environment? adapt software to the tasks were not completed or whose results the better. Count 5.4
operation environment. did not meet an adequate level Operation Operation Operator
(adaptability to Is software system during combined operating testing with report 5.5 Mainte-
OS, network capable enough to operating system software or concurrent nance
software and co- adapt itself to application software
operated operation B= Total number of functions which were
application environment? tested
software)
NOTE: 1. It is recommended to conduct overload combination testing with operating system software or concurrent application software which may possibly be operated in combination in a variety of user operational environments.
2. “Organisational environment adaptability” is concerned with the environment for the business operation of the user’s organisation. “System software environmental adaptability” is concerned with the environment for technical operations on systems. Therefore, there is a clear distinction.
Table 8.6.2 Installability metrics
External installability metrics
Metric name Purpose of the Method of application Measurement, formula and Interpretation Metric Measure Input to ISO/IEC Target
metrics data element computations of measured scale type measure- 12207 audience
value type ment SLCP
Reference
Metric name: Ease of installation
Purpose: Can the user or maintainer easily install the software in the operational environment?
Method of application: Observe the user’s or maintainer’s behaviour while the user is installing the software in the operational environment.
Measurement, formula: X = A / B, where A = number of cases in which a user succeeded in changing the install operation for his or her convenience, and B = total number of cases in which a user attempted to change the install operation for his or her convenience.
Interpretation: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute. Measure type: A = Count, B = Count, X = Count/Count.
Input to measurement: Problem resolution report; Operation report.
ISO/IEC 12207 SLCP reference: 5.3 Qualification testing; 5.4 Operation; 5.5 Maintenance.
Target audience: Developer, Maintainer, Operator.
NOTE: 1. This metric is suggested for experimental use. 2. When a time-based metric is required, the time spent on installation may be measured.
Metric name: Ease of setup re-try
Purpose: Can the user or maintainer easily re-try the set-up installation of the software?
Method of application: Observe the user’s or maintainer’s behaviour while the user is re-trying the set-up installation of the software.
Measurement, formula: X = 1 - A / B, where A = number of cases in which the user fails in re-trying the set-up during set-up operation, and B = total number of cases in which the user attempted to re-try the set-up during set-up operation.
Interpretation: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute. Measure type: A = Count, B = Count, X = Count/Count.
Input to measurement: Problem resolution report; Operation report.
ISO/IEC 12207 SLCP reference: 5.3 Qualification testing; 5.4 Operation; 5.5 Maintenance.
Target audience: Developer, Maintainer, Operator.
NOTE: This metric is suggested for experimental use.
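The two installability metrics above are simple ratios over observed installation cases. A minimal Python sketch follows, with hypothetical counts; the function names are illustrative and not defined by this Technical Report.

# Illustrative sketch of the two installability measures.
def ease_of_installation(succeeded: int, attempted: int) -> float:
    """X = A / B: A = cases in which the user succeeded in changing the
    install operation, B = cases in which the user attempted to change it."""
    return succeeded / attempted             # closer to 1.0 is better

def ease_of_setup_retry(failed: int, attempted: int) -> float:
    """X = 1 - A / B: A = failed set-up re-tries, B = attempted re-tries."""
    return 1 - failed / attempted            # closer to 1.0 is better

print(ease_of_installation(9, 10))   # 0.9
print(ease_of_setup_retry(1, 8))     # 0.875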
NOTE: The following complementary metrics may be used.
1. Effortless installation. User’s manual actions for installation: X = A, where A = the number of user’s manual actions needed for installation. 0 < X; the smaller, the better.
2. Installation ease. Installation supporting level: X = A, where A is rated with, for example: only executing the installation program, where nothing more is needed (excellent); an instructional guide for installation (good); the source code of the program needs modification for installation (poor). X = direct interpretation of the measured value.
3. Operational installation effort reduction. User install operation procedure reduction ratio: X = 1 - A / B, where A = number of install operation procedures which a user had to perform after procedure reduction, and B = number of install operation procedures normally required. 0 <= X <= 1; the closer to 1.0, the better.
4. Ease of user’s manual install operation. Easiness level of user’s manual install operation: X = score of the easiness level of the user’s manual operation. Examples of easiness levels are the following: [very easy] requiring only that the user start the install or set-up functions and then observe the installation; [easy] requiring only that the user answer questions from the install or set-up functions; [not easy] requiring the user to look up parameters from tables or fill in boxes; [complicated] requiring the user to search parameter files, look up parameters from the files to be changed, and write them. X = direct interpretation of the measured value.
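These complementary metrics can likewise be computed mechanically. In the Python sketch below, the numeric scores assigned to the easiness levels are an assumption made for illustration, since the text above defines only ordinal labels.

# Illustrative sketch of complementary metrics 3 and 4 above.
def procedure_reduction_ratio(after_reduction: int, normally: int) -> float:
    """X = 1 - A / B: A = install procedures the user still had to perform
    after procedure reduction; B = install procedures normally required."""
    return 1 - after_reduction / normally    # closer to 1.0 is better

# Hypothetical numeric scores for the ordinal easiness levels:
EASINESS_SCORES = {"very easy": 4, "easy": 3, "not easy": 2, "complicated": 1}

print(procedure_reduction_ratio(3, 12))      # 0.75
print(EASINESS_SCORES["easy"])               # X = 3, by direct interpretation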
Table 8.6.3 Co-existence metrics
External co-existence metrics
(The metrics below are set out with the same fields as the tables above.)
Metric name: Available co-existence
Purpose: How often does the user encounter constraints or unexpected failures when operating concurrently with other software?
Method of application: Use the evaluated software concurrently with other software that the user often uses.
Measurement, formula: X = A / T, where A = number of constraints or unexpected failures which the user encounters while operating concurrently with other software, and T = time duration of concurrent operation with the other software.
Interpretation: 0 <= X; the closer to 0, the better.
Metric scale type: Ratio. Measure type: A = Count, T = Time, X = Count/Time.
Input to measurement: Problem resolution report; Operation report.
ISO/IEC 12207 SLCP reference: 5.3 Qualification testing; 5.4 Operation; 5.5 Maintenance.
Target audience: Developer, Maintainer, SQA, Operator.
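Unlike the proportion-based metrics, available co-existence is a rate on a Ratio scale. A brief Python sketch with hypothetical figures, offered as an informal illustration only:

# Illustrative sketch: X = A / T, constraints or unexpected failures per
# unit time of concurrent operation with other software.
failures = 2            # A: failures observed during concurrent operation
hours_concurrent = 40   # T: duration of concurrent operation, in hours

x = failures / hours_concurrent
print(f"X = {x:.3f} failures/hour")  # the closer to 0, the better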
Table 8.6.4 Replaceability metrics
External replaceability metrics
(The metrics below are set out with the same fields as the tables above.)
Metric name: Continued use of data
Purpose: Can the user or maintainer easily continue to use the same data after replacing the previous software with this one? Is the software system migration proceeding successfully?
Method of application: Observe the user’s or maintainer’s behaviour while the user is replacing the previous software.
Measurement, formula: X = A / B, where A = number of data items which are used in the software to be replaced and are confirmed to be continuously usable, and B = number of data items which are used in the software to be replaced and are planned to be continuously reusable.
Interpretation: 0 <= X <= 1; the larger, the better.
Metric scale type: Absolute. Measure type: A = Count, B = Count, X = Count/Count.
Input to measurement: Problem resolution report; Operation report.
ISO/IEC 12207 SLCP reference: 5.3 Qualification testing; 5.4 Operation; 5.5 Maintenance.
Target audience: Developer, Maintainer, Operator.
NOTE: This metric can be applied both when replacing entirely different software and when replacing a previous version of the same software series.
Metric name: Function inclusiveness
Purpose: Can the user or maintainer easily continue to use similar functions after replacing the previous software with this one? Is the software system migration proceeding successfully?
Method of application: Observe the user’s or maintainer’s behaviour while the user is replacing the previous software.
Measurement, formula: X = A / B, where A = number of functions which produce results similar to those previously produced and for which changes have not been required, and B = number of tested functions which are similar to functions provided by the software to be replaced.
Interpretation: 0 <= X <= 1; the larger, the better.
Metric scale type: Absolute. Measure type: A = Count, B = Count, X = Count/Count.
Input to measurement: Problem resolution report; Operation report.
ISO/IEC 12207 SLCP reference: 5.3 Qualification testing; 5.4 Operation; 5.5 Maintenance.
Target audience: Developer, Maintainer, Operator.
NOTE: This metric can be applied both when replacing entirely different software and when replacing a previous version of the same software series.
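Both replaceability metrics above are A / B proportions over items carried across from the software being replaced. An informal Python sketch with hypothetical counts (the function names are assumptions for the example):

# Illustrative sketch of the two replaceability measures.
def continued_use_of_data(confirmed_usable: int, planned_reusable: int) -> float:
    """X = A / B: A = data items confirmed continuously usable after
    replacement; B = data items planned to be continuously reusable."""
    return confirmed_usable / planned_reusable   # larger is better

def function_inclusiveness(unchanged_similar: int, similar_tested: int) -> float:
    """X = A / B: A = functions producing similar results without required
    changes; B = tested functions similar to those of the replaced software."""
    return unchanged_similar / similar_tested    # larger is better

print(continued_use_of_data(45, 50))    # 0.9
print(function_inclusiveness(18, 20))   # 0.9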
Metric name: User support functional consistency
Purpose: How consistent are the new components with the existing user interface?
Method of application: Observe the behaviour of the user and ask for the user’s opinion.
Measurement, formula: X = 1 - A / B, where A = number of new functions which the user found unacceptably inconsistent with the user’s expectations, and B = number of new functions.
Interpretation: 0 <= X <= 1; the larger, the better.
Metric scale type: Absolute. Measure type: A = Count, B = Count, X = Count/Count.
Input to measurement: Test report; Operation report.
ISO/IEC 12207 SLCP reference: 5.3 Integration; 5.3 Qualification testing; 5.4 Operation; 6.3 Quality Assurance.
Target audience: User, User interface designer, Maintainer, Developer, Tester, SQA.
NOTE: 1. In the case that different software is introduced to replace previous software, the new software can be identified as the current version. 2. In the case that the pattern of interaction is changed to improve the user interface in a new version, it is suggested to observe the user’s behaviour and to count the number of cases in which the user fails to access functions because of unacceptable inconsistency with expectations derived from the previous version.
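As an informal illustration of the consistency measure above, X = 1 - A / B can be derived from per-function observation records; the record format below is an assumption made for the example.

# Illustrative sketch: each record pairs a new function with whether the
# user found it unacceptably inconsistent with his or her expectations.
observations = [
    ("search", False), ("export", True), ("print", False), ("login", False),
]

a = sum(1 for _, inconsistent in observations if inconsistent)  # A
b = len(observations)                                           # B
print(1 - a / b)  # X = 0.75; the larger, the better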
Table 8.6.5 Portability compliance metrics
External portability compliance metrics
(The metrics below are set out with the same fields as the tables above.)
Metric name: Portability compliance
Purpose: How compliant is the portability of the product with applicable regulations, standards and conventions?
Method of application: Count the number of items requiring compliance that have been met and compare it with the number of items requiring compliance in the specification.
Measurement, formula: X = 1 - A / B, where A = number of specified portability compliance items that have not been implemented during testing, and B = total number of portability compliance items specified.
Interpretation: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute. Measure type: A = Count, B = Count, X = Count/Count.
Input to measurement: Product description (user manual or specification) of compliance and related standards, conventions or regulations; Test specification and report.
ISO/IEC 12207 SLCP reference: 5.3 Qualification testing; 6.5 Validation.
Target audience: Supplier, User.
NOTE: It may be useful to collect several measured values over time, to analyse the trend of increasingly satisfied compliance items, and to determine whether they are fully satisfied.
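Following the NOTE above, the Python sketch below collects X = 1 - A / B over successive evaluations and checks that the trend of satisfied compliance items is increasing. All figures are hypothetical, and the sketch is an informal illustration only.

# Illustrative sketch: track portability compliance over successive tests.
def compliance(unimplemented: int, specified: int) -> float:
    """X = 1 - A / B: A = specified compliance items not implemented during
    testing; B = total portability compliance items specified."""
    return 1 - unimplemented / specified

# Hypothetical (A, B) pairs from three successive test campaigns:
history = [compliance(a, b) for a, b in [(6, 20), (3, 20), (1, 20)]]
print(history)                                    # [0.7, 0.85, 0.95]
print(all(x2 >= x1 for x1, x2 in zip(history, history[1:])))  # True: improving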