ISO/IEC 9126-2 - Product Quality - External Metrics
Software Engineering
Secretariat: Japan
Functional adequacy
Purpose: How adequate are the evaluated functions?
Method of application: Count the number of functions that are suitable for performing the specified tasks and compare it with the number of functions evaluated.
Measurement, formula: X = 1 - A/B, where A = number of functions in which problems are detected in evaluation and B = number of functions evaluated.
Interpretation: 0 <= X <= 1; the closer to 1.0, the more adequate.
Metric scale type: Absolute. Measure type: A = Count, B = Count, X = Count/Count.
Input to measurement: Requirement specification; evaluation report.
ISO/IEC 12207 SLCP reference: 6.5 Validation; 6.3 Quality Assurance; 5.3 Qualification testing.
Target audience: Developer, SQA.
Functional implementation completeness
Purpose: How complete is the implementation according to the requirement specifications?
Method of application: Perform functional (black-box) tests of the system according to the requirement specifications. Count the number of missing functions detected in evaluation and compare it with the number of functions described in the requirement specifications.
Measurement, formula: X = 1 - A/B, where A = number of missing functions detected in evaluation and B = number of functions described in the requirement specifications.
Interpretation: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute. Measure type: A = Count, B = Count, X = Count/Count.
Input to measurement: Requirement specification; evaluation report.
ISO/IEC 12207 SLCP reference: 6.5 Validation; 6.3 Quality Assurance; 5.3 Qualification testing.
Target audience: Developer, SQA.
NOTE: 1. Input to the measurement process is the updated requirement specification. Any changes identified during the life cycle must be applied to the requirement specification before it is used in the measurement process.
2. This metric is suggested for experimental use.
NOTE: Any missing function cannot be examined by testing because it is not implemented. To detect missing functions, it is suggested that each function stated in the requirement specification be tested one by one during functional testing; such results become input to the “Functional implementation completeness” metric. To detect functions that are implemented but inadequate, it is suggested that each function be tested against multiple specified tasks; such results become input to the “Functional adequacy” metric. Users of these metrics are therefore advised to apply both during functional testing.
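Both metrics above reduce to the same formula, X = 1 - A/B. A minimal sketch of the computation (the function name and example counts are illustrative, not taken from the standard):

```python
def inverse_ratio_metric(a, b):
    """Compute X = 1 - A/B, the form shared by the functional adequacy
    and functional implementation completeness metrics.
    0 <= X <= 1; the closer to 1.0, the better."""
    if b <= 0:
        raise ValueError("B (functions evaluated/specified) must be positive")
    if not 0 <= a <= b:
        raise ValueError("A must lie between 0 and B")
    return 1 - a / b

# Functional adequacy: 4 of 50 evaluated functions showed problems
adequacy = inverse_ratio_metric(4, 50)        # -> 0.92
# Completeness: 2 of 40 specified functions were found missing
completeness = inverse_ratio_metric(2, 40)    # -> 0.95
```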
External suitability metrics
Each metric below is described by the following fields: metric name; purpose of the metric; method of application; measurement, formula and data element computations; interpretation of the measured value; metric scale type; measure type; sources of input to measurement; ISO/IEC 12207 SLCP reference; and target audience.
Functional implementation coverage
Purpose: How correct is the functional implementation?
Method of application: Perform functional (black-box) tests of the system according to the requirement specifications. Count the number of incorrectly implemented or missing functions detected in evaluation and compare it with the total number of functions described in the requirement specifications, i.e. count the functions that are complete versus those that are not.
Measurement, formula: X = 1 - A/B, where A = number of incorrectly implemented or missing functions detected in evaluation and B = number of functions described in the requirement specifications.
Interpretation: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute. Measure type: A = Count, B = Count, X = Count/Count.
Input to measurement: Requirement specification; evaluation report.
ISO/IEC 12207 SLCP reference: 6.5 Validation; 6.3 Quality Assurance; 5.3 Qualification testing.
Target audience: Developer, SQA.
NOTE: 1. Input to the measurement process is the updated requirement specification. Any changes identified during the life cycle must be applied to the requirement specification before it is used in the measurement process.
2. This measure represents a binary gate check determining the presence of a feature.
Functional specification stability (volatility)
Purpose: How stable is the functional specification after entering operation?
Method of application: Count the number of functions described in the functional specifications that had to be changed after the system was put into operation, and compare it with the total number of functions described in the requirement specifications.
Measurement, formula: X = 1 - A/B, where A = number of functions changed after entering operation and B = number of functions described in the requirement specifications.
Interpretation: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute. Measure type: A = Count, B = Count, X = Count/Count.
Input to measurement: Requirement specification; problem report; evaluation report.
ISO/IEC 12207 SLCP reference: 6.8 Problem Resolution; 5.4 Operation.
Target audience: Maintainer, SQA.
NOTE: This metric is suggested for experimental use.
Table 8.1.2 Accuracy metrics
External accuracy metrics
Accuracy to expectation
Purpose: Are differences between the actual results and the reasonable expected results acceptable?
Method of application: Execute input-versus-output test cases and compare the output with reasonable expected results. Count the number of cases encountered by users with an unacceptable difference from reasonable expected results.
Measurement, formula: X = A/T, where A = number of cases encountered by users in which the difference from reasonable expected results is beyond the allowable limit, and T = operation time.
Interpretation: 0 <= X; the closer to 0, the better.
Metric scale type: Ratio. Measure type: A = Count, T = Time, X = Count/Time.
Input to measurement: Requirement specification; user operation manual; hearings with users; test report.
ISO/IEC 12207 SLCP reference: 6.5 Validation; 6.3 Quality Assurance.
Target audience: Developer, User, Quality Assurance.
NOTE: Reasonable expected results might be identified in a requirement specification, a user operation manual, or users’ expectations.
Computational accuracy
Purpose: How often do end users encounter inaccurate results?
Method of application: Record the number of inaccurate computations based on the specifications.
Measurement, formula: X = A/T, where A = number of inaccurate computations encountered by users and T = operation time.
Interpretation: 0 <= X; the closer to 0, the better.
Metric scale type: Ratio. Measure type: A = Count, T = Time, X = Count/Time.
Input to measurement: Requirement specification; test report.
ISO/IEC 12207 SLCP reference: 6.5 Validation; 6.3 Quality Assurance.
Target audience: Developer, User, Quality Assurance.
Precision
Purpose: How often do end users encounter results with inadequate precision?
Method of application: Record the number of results with inadequate precision.
Measurement, formula: X = A/T, where A = number of results encountered by users whose level of precision differs from the required one, and T = operation time.
Interpretation: 0 <= X; the closer to 0, the better.
Metric scale type: Ratio. Measure type: A = Count, T = Time, X = Count/Time.
Input to measurement: Requirement specification; test report.
ISO/IEC 12207 SLCP reference: 6.5 Validation; 6.3 Quality Assurance.
Target audience: Developer, User, Quality Assurance.
NOTE: Data elements for the computation of external metrics are designed to use externally accessible information, because this makes the metrics useful to end users, operators, maintainers and acquirers. For this reason, time-based metrics appear frequently among external metrics, unlike internal ones.
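The three accuracy metrics above all take the form X = A/T, an event count per unit of operation time. A minimal sketch (the example counts and hours are illustrative assumptions):

```python
def rate_metric(count, operation_time):
    """X = A/T for the accuracy metrics (accuracy to expectation,
    computational accuracy, precision): events per unit of operation
    time. 0 <= X; the closer to 0, the better."""
    if operation_time <= 0:
        raise ValueError("operation time must be positive")
    return count / operation_time

# Example: 3 inaccurate computations observed over 600 hours of operation
x = rate_metric(3, 600.0)   # -> 0.005 events per hour
```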
Table 8.1.3 Interoperability metrics
External interoperability metrics
Data exchangeability (data format based)
Purpose: How correctly have the exchange interface functions for specified data transfer been implemented?
Method of application: Test each downstream interface function output record format of the system against the data field specifications. Count the number of data formats that are approved to be exchanged with other software or systems during data exchange testing, and compare it with the total number.
Measurement, formula: X = A/B, where A = number of data formats approved to be exchanged successfully with other software or systems during data exchange testing, and B = total number of data formats to be exchanged.
Interpretation: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute. Measure type: A = Count, B = Count, X = Count/Count.
Input to measurement: Requirement specification (user manual); test report.
ISO/IEC 12207 SLCP reference: 6.5 Validation.
Target audience: Developer.
NOTE: It is recommended to test the specified data transactions.
Data exchangeability (user's success attempt based)
Purpose: How often does the end user fail to exchange data between the target software and other software? How often are data transfers between the target software and other software successful?
Method of application: Count the number of cases in which interface functions were used and failed.
Measurement, formula: a) X = 1 - A/B, where A = number of cases in which the user failed to exchange data with other software or systems, and B = number of cases in which the user attempted to exchange data. b) Y = A/T, where T = period of operation time.
Interpretation: a) 0 <= X <= 1; the closer to 1.0, the better. b) 0 <= Y; the closer to 0, the better.
Metric scale type: a) Absolute; b) Ratio. Measure type: A = Count, B = Count, T = Time, X = Count/Count, Y = Count/Time.
Input to measurement: Requirement specification (user manual); test report.
ISO/IEC 12207 SLCP reference: 5.4 Operation.
Target audience: Maintainer.
Access controllability
Purpose: How controllable is access to the system?
Method of application: Count the number of detected illegal operations and compare it with the number of types of illegal operations in the specification.
Measurement, formula: X = A/B, where A = number of different types of illegal operations detected, and B = number of types of illegal operations listed in the specification.
Interpretation: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute. Measure type: A = Count, B = Count, X = Count/Count.
Input to measurement: Test specification; test report; operation report.
ISO/IEC 12207 SLCP reference: 6.5 Validation; 6.3 Quality Assurance.
Target audience: Developer.
Data corruption prevention
Purpose: What is the frequency of data corruption events?
Method of application: Count the occurrences of major and minor data corruption events.
Measurement, formula: a) X = 1 - A/N, where A = number of major data corruption events and N = number of test cases that tried to cause a data corruption event. b) Y = 1 - B/N, where B = number of minor data corruption events. c) Z = A/T or B/T, where T = period of operation time (during operation testing).
Interpretation: a), b) 0 <= X, Y <= 1; the closer to 1.0, the better. c) 0 <= Z; the closer to 0, the better.
Metric scale type: a), b) Absolute; c) Ratio. Measure type: A, B, N = Count, T = Time, X, Y = Count/Count, Z = Count/Time.
Input to measurement: Test specification; test report; operation report.
ISO/IEC 12207 SLCP reference: 6.5 Validation; 5.3 Qualification testing; 5.4 Operation.
Target audience: Maintainer, Developer.
NOTE: 1. Intensive abnormal-operation testing is needed to observe minor and major data corruption events.
2. It is recommended to grade the impact of data corruption events, for example:
Major (fatal) data corruption event: reproduction and recovery impossible; secondary effects widely distributed; data itself important.
Minor data corruption event: reproduction or recovery possible and operative; no secondary effects; data itself less important.
3. Data elements for the computation of external metrics are designed to use externally accessible information, because this makes the metrics useful to end users, operators, maintainers and acquirers. Therefore, the events and times counted here differ from those of the corresponding internal metric.
4. It is recommended that penetration tests be performed to simulate attacks, because security attacks do not normally occur during usual testing. Real security measurements may only be taken in a real-life system environment, that is, as "quality in use".
5. This metric is suggested for experimental use.
6. Data backup is one of the effective ways to prevent data corruption: creating backups ensures that necessary data can be restored quickly in the event that parts of the operative data are lost. However, data backup is regarded as part of the reliability metrics in this report.
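The three data corruption prevention formulae can be computed together. A minimal sketch (the event counts and hours are illustrative, and Z is reported here only for major events):

```python
def corruption_prevention(major, minor, test_cases, operation_time=None):
    """Data corruption prevention: X = 1 - A/N (major events),
    Y = 1 - B/N (minor events), and optionally Z = A/T (major events
    per unit of operation time). Returns (X, Y, Z); Z is None when
    no operation time is supplied."""
    if test_cases <= 0:
        raise ValueError("N (test cases) must be positive")
    x = 1 - major / test_cases
    y = 1 - minor / test_cases
    z = major / operation_time if operation_time else None
    return x, y, z

# Example: 1 major and 5 minor events in 50 test cases, 200 hours observed
x, y, z = corruption_prevention(major=1, minor=5, test_cases=50,
                                operation_time=200.0)
# x -> 0.98, y -> 0.9, z -> 0.005
```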
Table 8.1.5 Functionality compliance metrics
External functionality compliance metrics
Functional compliance
Purpose: How compliant is the functionality of the product with applicable regulations, standards and conventions?
Method of application: Count the number of items requiring compliance that have been met and compare it with the number of items requiring compliance in the specification. Design test cases in accordance with the compliance items and conduct functional testing for these test cases.
Measurement, formula: X = 1 - A/B, where A = number of specified functionality compliance items found not to be implemented during testing, and B = total number of specified functionality compliance items.
Interpretation: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute. Measure type: A = Count, B = Count, X = Count/Count.
Input to measurement: Product description (user manual or specification) of compliance, and the related standards, conventions or regulations.
ISO/IEC 12207 SLCP reference: 5.3 Qualification testing; 6.5 Validation.
Target audience: Supplier, User.
Estimated latent fault density
Purpose: How many problems still exist that may emerge as future faults?
Method of application: Count the number of faults detected during a defined trial period and predict the potential number of future faults using a reliability growth estimation model.
Measurement, formula: X = ABS(A1 - A2)/B (X: estimated residual latent fault density), where ABS() = absolute value, A1 = total number of predicted latent faults in the software product, A2 = total number of actually detected faults, and B = product size.
Interpretation: 0 <= X; the interpretation depends on the stage of testing — at later stages, smaller is better.
Metric scale type: Absolute. Measure type: A1 = Count, A2 = Count, B = Size, X = Count/Size.
Input to measurement: Test report; operation report; problem report.
ISO/IEC 12207 SLCP reference: 5.3 Integration; 5.3 Qualification testing; 5.4 Operation; 6.5 Validation; 6.3 Quality Assurance.
Target audience: Developer, Tester, SQA, User.
NOTE: 1. When the total number of actually detected faults becomes larger than the total number of predicted latent faults, it is recommended to re-predict and estimate a larger number. Estimated larger numbers are intended to predict the latent failures reasonably, not to make the product look better.
2. It is recommended to use several reliability growth estimation models, choose the most suitable one, and repeat the prediction while monitoring detected faults.
3. It may be helpful to predict upper and lower bounds on the number of latent faults.
4. It is necessary to convert this value (X) to the <0,1> interval when summarising characteristics.
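The latent fault density formula above is a straightforward computation once a reliability growth model has supplied the prediction A1. A minimal sketch (the counts and size are illustrative):

```python
def latent_fault_density(predicted, detected, size):
    """X = ABS(A1 - A2)/B, the estimated residual latent fault density:
    A1 = predicted latent faults (from a reliability growth model),
    A2 = actually detected faults, B = product size (e.g. KLOC or
    function points)."""
    if size <= 0:
        raise ValueError("product size must be positive")
    return abs(predicted - detected) / size

# Example: model predicts 120 latent faults, 95 detected, size 50 KLOC
x = latent_fault_density(predicted=120, detected=95, size=50.0)  # -> 0.5
```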
External maturity metrics
Failure density against test cases
Purpose: How many failures were detected during the defined trial period?
Method of application: Count the number of detected failures and the number of performed test cases.
Measurement, formula: X = A1/A2, where A1 = number of detected failures and A2 = number of performed test cases.
Interpretation: 0 <= X; the interpretation depends on the stage of testing — at later stages, smaller is better.
Metric scale type: Absolute. Measure type: A1 = Count, A2 = Count, X = Count/Count.
Input to measurement: Test report; operation report; problem report.
ISO/IEC 12207 SLCP reference: 5.3 Integration; 5.3 Qualification testing; 5.4 Operation; 6.3 Quality Assurance.
Target audience: Developer, Tester, SQA.
NOTE: 1. Larger is better in the early stages of testing; on the contrary, smaller is better in the later stages of testing or in operation. It is recommended to monitor the trend of this measure over time.
2. This metric depends so highly on the adequacy of the test cases that they should be designed to include appropriate cases, e.g. normal, exceptional and abnormal cases.
3. It is necessary to convert this value (X) to the <0,1> interval when summarising characteristics.
Failure resolution
Purpose: How many failure conditions are resolved?
Method of application: Count the number of failures that did not reoccur during the defined trial period under similar conditions. Maintain a problem resolution report describing the status of all failures.
Measurement, formula: X = A1/A2, where A1 = number of resolved failures and A2 = total number of actually detected failures.
Interpretation: 0 <= X <= 1; the closer to 1.0, the better, as more failures are resolved.
Metric scale type: Absolute. Measure type: A1 = Count, A2 = Count, A3 = Count, X = Count/Count.
Input to measurement: Test report; operation (test) report.
ISO/IEC 12207 SLCP reference: 5.3 Integration; 5.3 Qualification testing; 5.4 Operation.
Target audience: User, SQA, Maintainer.
NOTE:
1. It is recommended to monitor the trend when using this measure.
2. The total number of predicted latent failures might be estimated using reliability growth models adjusted with actual historical data relating to similar software products. In such a case, the numbers of actual and predicted failures become comparable and the number of residual unresolved failures becomes measurable.
Fault density
Purpose: How many faults were detected during the defined trial period?
Method of application: Count the number of detected faults and compute the density.
Measurement, formula: X = A/B, where A = number of detected faults and B = product size.
Interpretation: 0 <= X; the interpretation depends on the stage of testing — at later stages, smaller is better.
Metric scale type: Absolute. Measure type: A = Count, B = Size, X = Count/Size.
Input to measurement: Test report; operation report; problem report.
ISO/IEC 12207 SLCP reference: 5.3 Integration; 5.3 Qualification testing; 5.4 Operation; 6.3 Quality Assurance.
Target audience: Developer, Tester, SQA.
NOTE: 1. Larger is better in the early stages of testing; on the contrary, smaller is better in the later stages of testing or in operation. It is recommended to monitor the trend of this measure over time.
2. The number of detected faults divided by the number of test cases indicates the effectiveness of the test cases.
3. It is necessary to convert this value (X) to the <0,1> interval when summarising characteristics.
4. When counting faults, pay attention to the following:
- possible duplication, because multiple reports may contain the same faults;
- reports of problems other than faults, because users or testers may not be able to tell whether their problems are operation errors, environmental errors or software failures.
Fault removal
Purpose: How many faults have been corrected?
Method of application: Count the number of faults removed during testing and compare it with the total number of faults detected and the total number of faults predicted.
Measurement, formula: a) X = A1/A2, where A1 = number of corrected faults and A2 = total number of actually detected faults. b) Y = A1/A3, where A3 = total number of predicted latent faults in the software product.
Interpretation: a) 0 <= X <= 1; the closer to 1.0, the better, as fewer faults remain. b) 0 <= Y; the closer to 1.0, the better, as fewer faults remain.
Metric scale type: a) Absolute; b) Absolute. Measure type: A1, A2, A3 = Count, X = Count/Count, Y = Count/Count.
Input to measurement: Test report; organization database.
ISO/IEC 12207 SLCP reference: 5.3 Integration; 5.3 Qualification testing; 6.5 Validation; 6.3 Quality Assurance.
Target audience: Developer, SQA, Maintainer.
NOTE:
1. It is recommended to monitor the trend during a defined period of time.
2. The total number of predicted latent faults may be estimated using reliability growth models adjusted with actual historical data relating to similar software products.
3. It is recommended to monitor the estimated fault resolution ratio Y: when Y > 1, investigate whether this is because more faults have been detected early or because the software product contains an unusual number of faults; when Y < 1, investigate whether this is because the software product contains fewer defects than usual or because the testing was not adequate to detect all possible faults.
4. It is necessary to convert this value (Y) to the <0,1> interval when summarising characteristics.
5. When counting faults, pay attention to possible duplication, because multiple reports may contain the same faults.
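Both fault removal ratios can be computed together, with Y feeding the investigations described in the NOTE. A minimal sketch (counts are illustrative):

```python
def fault_removal(corrected, detected, predicted):
    """Fault removal: X = A1/A2 (share of detected faults corrected)
    and Y = A1/A3 (corrected faults against predicted latent faults)."""
    if detected <= 0 or predicted <= 0:
        raise ValueError("detected and predicted counts must be positive")
    return corrected / detected, corrected / predicted

# Example: 90 of 100 detected faults corrected; 120 latent faults predicted
x, y = fault_removal(corrected=90, detected=100, predicted=120)
# x -> 0.9, y -> 0.75; a Y far from 1.0 triggers the NOTE's investigations
```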
Mean time between failures (MTBF)
Purpose: How frequently does the software fail in operation?
Method of application: Count the number of failures occurring during a defined period of operation and compute the average interval between failures.
Measurement, formula: a) X = T1/A; b) Y = T2/A, where T1 = operation time, T2 = sum of the time intervals between consecutive failure occurrences, and A = total number of actually detected failures (failures occurring during the observed operation time).
Interpretation: 0 < X, Y; the longer, the better, as a longer time between failures can be expected.
Metric scale type: a) Ratio; b) Ratio. Measure type: A = Count, T1, T2 = Time, X = Time/Count, Y = Time/Count.
Input to measurement: Test report; operation (test) report.
ISO/IEC 12207 SLCP reference: 5.3 Integration; 5.3 Qualification testing; 5.4 Operation testing; 5.4 Operation.
Target audience: Maintainer, User.
NOTE: 1. The following investigations may be helpful: the distribution of time intervals between failure occurrences; the change of the mean time over the operation period; the distribution showing which functions have frequent failure occurrences, because of function and use dependency.
2. Failure rate or hazard rate calculation may alternatively be used.
3. It is necessary to convert these values (X, Y) to the <0,1> interval when summarising characteristics.
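The two MTBF variants differ only in the numerator: total operation time versus the summed intervals between consecutive failures. A minimal sketch from failure timestamps (the timestamps and hours are illustrative):

```python
def mtbf(operation_time, failure_times):
    """Mean time between failures: X = T1/A uses total operation time,
    Y = T2/A uses only the summed intervals between consecutive
    failure occurrences; A = number of observed failures."""
    a = len(failure_times)
    if a == 0:
        raise ValueError("no failures observed; MTBF is undefined here")
    # Sum of intervals between consecutive failure timestamps
    t2 = sum(b - a_ for a_, b in zip(failure_times, failure_times[1:]))
    return operation_time / a, t2 / a

# Failures at hours 100, 250 and 400 during 600 hours of operation
x, y = mtbf(600.0, [100.0, 250.0, 400.0])   # x -> 200.0, y -> 100.0
```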
Test coverage (specified operation scenario testing coverage)
Purpose: How many of the required test cases have been executed during testing?
Method of application: Count the number of test cases performed during testing and compare it with the number of test cases required to obtain adequate test coverage.
Measurement, formula: X = A/B, where A = number of actually performed test cases representing the operation scenario during testing, and B = number of test cases to be performed to cover the requirements.
Interpretation: 0 <= X <= 1; the closer to 1.0, the better the test coverage.
Metric scale type: Absolute. Measure type: A = Count, B = Count, X = Count/Count.
Input to measurement: Requirement specification, test specification or user manual; test report; operation report.
ISO/IEC 12207 SLCP reference: 5.3 Qualification testing; 6.5 Validation; 6.3 Quality Assurance.
Target audience: Developer, Tester, SQA.
NOTE: Test cases may be normalised by software size, that is: test density coverage Y = A/C, where C = size of the product to be tested. The larger Y, the better. Size may be a functional size that the user can measure.
Test maturity
Purpose: Is the product well tested? (NOTE: this is intended to predict the success rate the product will achieve in future testing.)
Method of application: Count the number of passed test cases that have actually been executed and compare it with the total number of test cases to be performed as per the requirements.
Measurement, formula: X = A/B, where A = number of test cases passed during testing or operation, and B = number of test cases to be performed to cover the requirements.
Interpretation: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute. Measure type: A = Count, B = Count, X = Count/Count.
Input to measurement: Requirement specification, test specification or user manual; test report; operation report.
ISO/IEC 12207 SLCP reference: 5.3 Qualification testing; 6.3 Quality Assurance.
Target audience: Developer, Tester, SQA.
NOTE: 1. It is recommended to perform stress testing using live historical data, especially from peak periods. It is also recommended to ensure that the following test types are executed and passed successfully:
- user operation scenarios;
- peak stress;
- overloaded data input.
2. Passed test cases may be normalised by software size, that is: passed test case density Y = A/C, where C = size of the product to be tested. The larger Y, the better. Size may be a functional size that the user can measure.
Table 8.2.2 Fault tolerance metrics
External fault tolerance metrics
Breakdown avoidance
Purpose: How often does the software product cause the breakdown of the total production environment?
Method of application: Count the number of breakdown occurrences with respect to the number of failures. If the system is in operation, analyse the log of user operation history.
Measurement, formula: X = 1 - A/B, where A = number of breakdowns and B = number of failures.
Interpretation: 0 <= X <= 1; the closer to 1.0, the better.
Metric scale type: Absolute. Measure type: A = Count, B = Count, X = Count/Count.
Input to measurement: Test report; operation report.
ISO/IEC 12207 SLCP reference: 5.3 Integration; 5.3 Qualification testing; 5.4 Operation.
Target audience: User, Maintainer.
NOTE: 1. A breakdown means that the execution of any user task is suspended until the system is restarted, or that control is lost until the system is forcibly shut down.
2. When no or few failures are observed, the time between breakdowns may be more suitable.
Failure avoidance
Purpose: How many fault patterns were brought under control to avoid critical and serious failures?
Method of application: Count the number of avoided fault patterns and compare it with the number of fault patterns to be considered.
Measurement, formula: X = A/B, where A = number of avoided critical and serious failure occurrences against test cases of fault patterns, and B = number of executed test cases of fault patterns (almost causing failure) during testing.
Interpretation: 0 <= X <= 1; the closer to 1.0, the better, as the user can more often avoid critical or serious failures.
Metric scale type: Absolute. Measure type: A = Count, B = Count, X = Count/Count.
Input to measurement: Test report; operation report.
ISO/IEC 12207 SLCP reference: 5.3 Integration; 5.3 Qualification testing; 5.4 Operation; 6.5 Validation.
Target audience: User, Maintainer.
NOTE:
1. It is recommended to categorise failure avoidance levels, i.e. the extent to which the impact of faults is mitigated, for example:
- Critical: the entire system stops, or there is serious database destruction;
- Serious: important functions become inoperable with no alternative way of operating (no workaround);
- Average: most functions are still available, but limited performance occurs with limited or alternate operation (workaround);
- Small: a few functions experience limited performance with limited operation;
- None: the impact does not reach the end user.
2. Failure avoidance levels may be based on a risk matrix composed of severity of consequence and frequency of occurrence, as provided by ISO/IEC 15026 System and software integrity.
3. Fault pattern examples: out-of-range data; deadlock. The fault tree analysis technique may be used to detect fault patterns.
4. Test cases can include incorrect human operation.
Incorrect operation avoidance
Purpose: How many functions are implemented with the capability to avoid incorrect operations?
Method of application: Count the number of test cases of incorrect operations in which critical and serious failures were avoided, and compare it with the number of executed test cases of incorrect operation patterns to be considered.
Measurement, formula: X = A/B, where A = number of avoided critical and serious failure occurrences, and B = number of executed test cases of incorrect operation patterns (almost causing failure) during testing.
Interpretation: 0 <= X <= 1; the closer to 1.0, the better, as more incorrect user operation is avoided.
Metric scale type: Absolute. Measure type: A = Count, B = Count, X = Count/Count.
Input to measurement: Test report; operation report.
ISO/IEC 12207 SLCP reference: 5.3 Integration; 5.3 Qualification testing; 5.4 Operation.
Target audience: User, Maintainer.
NOTE:
1. Consider data damage in addition to system failure.
2. Incorrect operation patterns: incorrect data types as parameters; incorrect sequence of data input; incorrect sequence of operation.
3. The fault tree analysis technique may be used to detect incorrect operation patterns.
4. This metric may be used experimentally.
Table 8.2.3 Recoverability metrics
External recoverability metrics
Availability
Purpose: How available is the system for use during the specified period of time?
Method of application: Test the system in a production-like environment for a specified period of time, performing all user operations. Measure the repair time each time the system was unavailable during the trial, and compute the mean time to repair.
Measurement, formula: a) X = To / (To + Tr), where To = operation time and Tr = time to repair. b) Y = A1/A2, where A1 = total number of cases of the user's successful software use when the user attempts to use the software, and A2 = total number of cases of user attempts to use the software during the observation time. Variant b) takes the view of user-callable function operation.
Interpretation: a) 0 <= X <= 1; the larger and closer to 1.0, the better, as the user can use the software for more time. b) 0 <= Y <= 1; the larger and closer to 1.0, the better.
Metric scale type: a), b) Absolute. Measure type: To, Tr = Time, A1, A2 = Count, X = Time/Time, Y = Count/Count.
Input to measurement: Test report; operation report.
ISO/IEC 12207 SLCP reference: 5.3 Integration; 5.3 Qualification testing; 5.4 Operation.
Target audience: User, Maintainer.
NOTE: It is recommended that this metric include only the automatic recovery provided by the software and exclude human maintenance work.
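Both availability variants can be sketched directly from their definitions (the hours and counts below are illustrative):

```python
def availability(operation_time, repair_time):
    """Variant a): X = To / (To + Tr)."""
    if operation_time < 0 or repair_time < 0:
        raise ValueError("times must be non-negative")
    total = operation_time + repair_time
    if total == 0:
        raise ValueError("no observation time")
    return operation_time / total

def usage_availability(successful_uses, attempted_uses):
    """Variant b): Y = A1/A2, successful uses over attempted uses."""
    if attempted_uses <= 0:
        raise ValueError("A2 (attempts) must be positive")
    return successful_uses / attempted_uses

# Example: 980 hours up, 20 hours under repair; 490 of 500 use attempts succeed
x = availability(980.0, 20.0)        # -> 0.98
y = usage_availability(490, 500)     # -> 0.98
```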
Mean down time
Purpose: What is the average time the system stays unavailable when a failure occurs before gradual start-up?
Method of application: Measure the down time each time the system is unavailable during a specified trial period and compute the mean time. The worst case or the distribution of down time should also be measured.
Measurement, formula: X = T/N, where T = total down time and N = number of observed breakdowns.
Interpretation: 0 < X; the smaller, the better, as the system is down for a shorter time.
Metric scale type: Ratio. Measure type: T = Time, N = Count, X = Time/Count.
Input to measurement: Test report; operation report.
ISO/IEC 12207 SLCP reference: 5.3 Integration; 5.3 Qualification testing; 5.4 Operation; 6.5 Validation.
Target audience: User, Maintainer.
NOTE:
1. It is recommended that this recoverability metric include only the automatic recovery provided by the software and exclude human maintenance work.
2. It is necessary to convert this value (X) to the <0,1> interval when summarising characteristics.
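Mean down time is the average of the observed down periods, with the worst case reported alongside as the metric recommends. A minimal sketch (the down periods, in hours, are illustrative):

```python
def mean_down_time(down_periods):
    """Mean down time: X = T/N, total down time over the number of
    observed breakdowns; also returns the worst-case down time.
    Smaller is better."""
    if not down_periods:
        raise ValueError("no breakdowns observed; metric is undefined")
    return sum(down_periods) / len(down_periods), max(down_periods)

# Example: three breakdowns lasting 0.5, 2.0 and 1.0 hours
mean_dt, worst_dt = mean_down_time([0.5, 2.0, 1.0])
# mean ≈ 1.17 hours, worst case 2.0 hours
```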
Mean recovery time
Purpose: What is the average time the system takes to complete recovery from initial partial recovery?
Method of application: Measure the full recovery time for each occasion on which the system was brought down during the specified trial period and compute the mean time.
Measurement, formula: X = Sum(T) / N, where T = time to recover the downed software system at each opportunity, and N = number of cases in which the software system was observed to enter recovery.
Interpretation: 0 < X; the smaller, the better.
Metric scale type: Ratio. Measure type: T = Time, N = Count, X = Time/Count.
Input to measurement: Test report; operation report.
ISO/IEC 12207 SLCP reference: 5.3 Integration; 5.3 Qualification testing; 5.4 Operation; 6.5 Validation.
Target audience: User, Maintainer.
NOTE:
1. It is recommended to measure the maximum recovery time of the worst case, or the distribution of recovery times over many cases.
2. It is recommended that this recoverability metric include only the automatic recovery provided by the software and exclude human maintenance work.
3. It is recommended to distinguish grades of recovery difficulty; for example, recovery of a destroyed database is more difficult than recovery of a destroyed transaction.
4. It is necessary to convert this value (X) to the <0,1> interval when summarising characteristics.
Restartability
Purpose: How often can the system restart and provide service to users within a required time?
Method of application: Count the number of times the system restarts and provides service to users within a target required time, and compare it with the total number of restarts when the system was brought down during the specified trial period.
Measurement, formula: X = A/B, where A = number of restarts that met the required time during testing or user operation support, and B = total number of restarts during testing or user operation support.
Interpretation: 0 <= X <= 1; the larger and closer to 1.0, the better, as the user can restart easily.
Metric scale type: Absolute. Measure type: A = Count, B = Count, X = Count/Count.
Input to measurement: Test report; operation report.
ISO/IEC 12207 SLCP reference: 5.3 Integration; 5.3 Qualification testing; 5.4 Operation; 6.5 Validation.
Target audience: User, Maintainer.
NOTE:
1. It is recommended to estimate different restart times corresponding to the severity level of inoperability, such as database destruction, lost multiple transactions, a lost single transaction, or temporary data destruction.
2. It is recommended that this recoverability metric include only the automatic recovery provided by the software and exclude human maintenance work.
Restorability
  Purpose: How capable is the product of restoring itself after an abnormal event or on request?
  Method of application: Count the number of successful restorations and compare it with the number of tested restorations required in the specifications. Restoration requirement examples: database checkpoint, transaction checkpoint, redo function, undo function, etc.
  Measurement, formula: X = A / B
    A = Number of restoration cases successfully done
    B = Number of restoration cases tested as per requirements
  Interpretation: 0 <= X <= 1; the larger and closer to 1.0, the better, as the product is more capable of restoring itself in the defined cases.
  Metric scale type: Absolute
  Measure type: A = Count, B = Count, X = Count/Count
  Input to measurement: Req. spec., Test spec. or user manual; Test report; Operation report
  ISO/IEC 12207 SLCP reference: 5.3 Integration, 5.3 Qualification testing, 5.4 Operation, 6.5 Validation
  Target audience: User, Maintainer
NOTE: It is recommended that this metric include only the automatic recovery provided by the software and exclude maintenance work performed by humans.
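Most metrics in these tables are simple count ratios. As an illustrative sketch only (not part of the standard; the function and variable names are invented here), the restorability formula X = A / B can be computed as follows:

```python
def restorability(successful: int, tested: int) -> float:
    """Restorability X = A / B.

    A (successful): restoration cases successfully done.
    B (tested): restoration cases tested as per requirements.
    The result lies in [0, 1]; larger and closer to 1.0 is better.
    """
    if tested <= 0:
        raise ValueError("at least one restoration case must be tested")
    if not 0 <= successful <= tested:
        raise ValueError("successful cases must lie between 0 and tested cases")
    return successful / tested

# Example: 18 of 20 tested restoration cases succeeded.
print(restorability(18, 20))  # -> 0.9
```

The same shape applies to the other A/B metrics in this clause; only the definitions of A and B change.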
Restore effectiveness
  Purpose: How effective is the restoration capability?
  Method of application: Count the number of tested restorations meeting the target restoration time and compare it with the number of restorations required with a specified target time.
  Measurement, formula: X = A / B
    A = Number of cases successfully restored meeting the target restore time
    B = Number of cases performed
  Interpretation: 0 <= X <= 1; the larger and closer to 1.0, the better, as the restoration process in the product is more effective.
  Metric scale type: Absolute
  Measure type: A = Count, B = Count, X = Count/Count
  Input to measurement: Test report, Operation report
  ISO/IEC 12207 SLCP reference: 5.3 Integration, 5.3 Qualification testing, 5.4 Operation, 6.5 Validation
  Target audience: User, Maintainer
NOTE: It is recommended that this metric include only the automatic recovery provided by the software and exclude maintenance work performed by humans.
Table 8.2.4 Reliability compliance metrics
External reliability compliance metrics
Columns: Metric name | Purpose of the metrics | Method of application | Measurement, formula and data element computations | Interpretation of measured value | Metric scale type | Measure type | Input to measurement | ISO/IEC 12207 SLCP reference | Target audience
Reliability compliance
  Purpose: How compliant is the reliability of the product to applicable regulations, standards and conventions?
  Method of application: Count the number of items requiring compliance that have been met and compare it with the number of items requiring compliance as given in the specification.
  Measurement, formula: X = 1 - A / B
    A = Number of reliability compliance items specified that have not been implemented during testing
    B = Total number of reliability compliance items specified
  Interpretation: 0 <= X <= 1; the closer to 1.0, the better.
  Metric scale type: Absolute
  Measure type: A = Count, B = Count, X = Count/Count
  Input to measurement: Product description (user manual or specification) of compliance and related standards, conventions or regulations; Test specification and report
  ISO/IEC 12207 SLCP reference: 5.3 Qualification testing, 6.5 Validation
  Target audience: Supplier, User
NOTE: It may be useful to collect several measured values over time, to analyse the trend of increasingly satisfied compliance items and to determine whether they are fully satisfied or not.
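The compliance metrics in this standard (reliability, usability and maintainability compliance) all share the X = 1 - A/B form. A minimal sketch, with invented names, under the assumption that the compliance item counts have already been taken from the test report:

```python
def compliance(unimplemented: int, specified: int) -> float:
    """Compliance X = 1 - A / B.

    A (unimplemented): compliance items specified that have not been
    implemented during testing.
    B (specified): total number of compliance items specified.
    The closer to 1.0, the better.
    """
    if specified <= 0:
        raise ValueError("no compliance items specified")
    if not 0 <= unimplemented <= specified:
        raise ValueError("unimplemented items must lie between 0 and specified")
    return 1 - unimplemented / specified

# Example: 2 of 10 specified compliance items were found unimplemented.
print(compliance(2, 10))  # -> 0.8
```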
8.3 Usability Metrics
NOTE: This indicates whether users can find the demonstrations and/or tutorials while using the product.
External understandability metrics
Columns: Metric name | Purpose of the metrics | Method of application | Measurement, formula and data element computations | Interpretation of measured value | Metric scale type | Measure type | Input to measurement | ISO/IEC 12207 SLCP reference | Target audience
Demonstration effectiveness
  Purpose: What proportion of functions can the user operate successfully after a demonstration or tutorial?
  Method of application: Observe the behaviour of the user who is trying to see the demonstration/tutorial. Observation may employ a human cognitive action monitoring approach with a video camera.
  Measurement, formula: X = A / B
    A = Number of functions operated successfully
    B = Number of demonstrations/tutorials accessed
  Interpretation: 0 <= X <= 1; the closer to 1.0, the better.
  Metric scale type: Absolute
  Measure type: A = Count, B = Count, X = Count/Count
  Input to measurement: User manual, Operation (test) report
  ISO/IEC 12207 SLCP reference: 5.3 Qualification testing, 5.4 Operation
  Target audience: User, Maintainer
NOTE: This indicates whether users can operate functions successfully after an online demonstration or tutorial.
Evident functions
  Purpose: What proportion of functions (or types of function) can be identified by the user based upon start-up conditions?
  Method of application: Conduct a user test and interview users with questionnaires, or observe user behaviour. Count the number of functions that are evident to the user and compare it with the total number of functions.
  Measurement, formula: X = A / B
    A = Number of functions (or types of functions) identified by the user
    B = Total number of actual functions (or types of functions)
  Interpretation: 0 <= X <= 1; the closer to 1.0, the better.
  Metric scale type: Absolute
  Measure type: A = Count, B = Count, X = Count/Count
  Input to measurement: User manual, Operation (test) report
  ISO/IEC 12207 SLCP reference: 5.3 Qualification testing, 5.4 Operation
  Target audience: User, Maintainer
NOTE: This indicates whether users are able to locate functions by exploring the interface (e.g. by inspecting the menus).
Function understandability
  Purpose: What proportion of the product functions will the user be able to understand correctly?
  Method of application: Conduct a user test and interview users with questionnaires. Count the number of user interface functions whose purposes are easily understood by the user and compare it with the number of functions available to the user.
  Measurement, formula: X = A / B
    A = Number of interface functions whose purpose is correctly described by the user
    B = Number of functions available from the interface
  Interpretation: 0 <= X <= 1; the closer to 1.0, the better.
  Metric scale type: Absolute
  Measure type: A = Count, B = Count, X = Count/Count
  Input to measurement: User manual, Operation (test) report
  ISO/IEC 12207 SLCP reference: 5.3 Qualification testing, 5.4 Operation
  Target audience: User, Maintainer
NOTE: This indicates whether users are able to understand functions by exploring the interface (e.g. by inspecting the menus).
External understandability metrics (continued)
Understandable input and output
  Purpose: Can users understand what is required as input data and what is provided as output by the software system?
  Method of application: Conduct a user test and interview users with questionnaires, or observe user behaviour. Count the number of input and output data items understood by the user and compare it with the total number of data items available to the user.
  Measurement, formula: X = A / B
    A = Number of input and output data items which the user successfully understands
    B = Number of input and output data items available from the interface
  Interpretation: 0 <= X <= 1; the closer to 1.0, the better.
  Metric scale type: Absolute
  Measure type: A = Count, B = Count, X = Count/Count
  Input to measurement: User manual, Operation (test) report
  ISO/IEC 12207 SLCP reference: 6.5 Validation, 5.3 Qualification testing, 5.4 Operation
  Target audience: User, Maintainer
NOTE: This indicates whether users can understand the format in which data should be input and correctly identify the meaning of output data.
Table 8.3.2 Learnability metrics
External learnability metrics
Columns: Metric name | Purpose of the metrics | Method of application | Measurement, formula and data element computations | Interpretation of measured value | Metric scale type | Measure type | Input to measurement | ISO/IEC 12207 SLCP reference | Target audience
Ease of function learning
  Purpose: How long does the user take to learn to use a function?
  Method of application: Conduct a user test and observe user behaviour.
  Measurement, formula: T = Mean time taken to learn to use a function correctly
  Interpretation: 0 < T; the shorter, the better.
  Metric scale type: Ratio
  Measure type: T = Time
  Input to measurement: Operation (test) report, User monitoring record
  ISO/IEC 12207 SLCP reference: 6.5 Validation, 5.3 Qualification testing, 5.4 Operation
  Target audience: User, Maintainer
Help frequency
  Purpose: How frequently does a user have to access help to learn the operation needed to complete his/her work task?
  Method of application: Conduct a user test and observe user behaviour. Count the number of cases in which a user accesses help to complete his/her task.
  Measurement, formula: X = A
    A = Number of accesses to help until a user completes his/her task
  Interpretation: 0 <= X; the closer to 0, the better.
  Metric scale type: Absolute
  Measure type: A = Count, X = Count
  Input to measurement: Operation (test) report, User monitoring record
  ISO/IEC 12207 SLCP reference: 6.5 Validation, 5.3 Qualification testing, 5.4 Operation
  Target audience: User, Human interface designer
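Ease of function learning is a mean over observed learning times. As a hedged illustration (not part of the standard; it assumes the per-user learning times have already been extracted from the user monitoring record):

```python
from statistics import mean

def ease_of_function_learning(learning_times_min: list[float]) -> float:
    """T = mean time taken to learn to use a function correctly.

    Ratio scale, T > 0; the shorter, the better.
    """
    if not learning_times_min:
        raise ValueError("need at least one observed learning time")
    return mean(learning_times_min)

# Example: three observed users took 4, 6 and 5 minutes.
print(ease_of_function_learning([4.0, 6.0, 5.0]))  # -> 5.0
```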
Table 8.3.3 Operability metrics a) Conforms with operational user expectations
External Operability metrics a) Conforms with operational user expectations
Columns: Metric name | Purpose of the metrics | Method of application | Measurement, formula and data element computations | Interpretation of measured value | Metric scale type | Measure type | Input to measurement | ISO/IEC 12207 SLCP reference | Target audience
Operational consistency in use
  Purpose: How consistent are the components of the user interface?
  Method of application: Observe the behaviour of the user and ask for their opinion.
  Measurement, formula:
    a) X = 1 - A / B
       A = Number of messages or functions which the user found unacceptably inconsistent with the user's expectation
       B = Number of messages or functions
       Interpretation: 0 <= X <= 1; the closer to 1.0, the better. Scale: Absolute. Measure type: A = Count, B = Count, X = Count/Count
    b) Y = N / UOT
       N = Number of operations which the user found unacceptably inconsistent with the user's expectation
       UOT = User operating time (during the observation period)
       Interpretation: 0 <= Y; the smaller and closer to 0.0, the better. Scale: Ratio. Measure type: N = Count, UOT = Time, Y = Count/Time
  Input to measurement: Operation (test) report, User monitoring record
  ISO/IEC 12207 SLCP reference: 6.5 Validation, 5.3 Qualification testing, 5.4 Operation
  Target audience: User, Human interface designer
NOTE: 1. The user's experience of operation is usually helpful in recognising several operation patterns, from which the user's expectations derive.
2. Both "input predictability" and "output predictability" contribute to operational consistency.
3. This metric may be used to measure "easy to derive operation" and "smooth communication".
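Both forms of operational consistency can be sketched directly from the formulas above. The following is illustrative only (names invented; it assumes the counts and operating time come from the user monitoring record):

```python
def operational_consistency(inconsistent: int, total: int) -> float:
    """a) X = 1 - A / B; the closer to 1.0, the better."""
    if total <= 0:
        raise ValueError("no messages or functions were observed")
    return 1 - inconsistent / total

def inconsistency_rate(inconsistent_ops: int, uot_hours: float) -> float:
    """b) Y = N / UOT; the smaller and closer to 0.0, the better."""
    if uot_hours <= 0:
        raise ValueError("user operating time must be positive")
    return inconsistent_ops / uot_hours

# Example: 3 of 30 observed messages/functions were judged inconsistent,
# and 3 inconsistent operations occurred over 2 hours of operation.
print(operational_consistency(3, 30))  # -> 0.9
print(inconsistency_rate(3, 2.0))      # -> 1.5
```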
Table 8.3.3 Operability metrics b) Controllable
External Operability metrics b) Controllable
Columns: Metric name | Purpose of the metrics | Method of application | Measurement, formula and data element computations | Interpretation of measured value | Metric scale type | Measure type | Input to measurement | ISO/IEC 12207 SLCP reference | Target audience
Error correction
  Purpose: Can the user easily correct errors on tasks?
  Method of application: Conduct a user test and observe user behaviour.
  Measurement, formula: T = Tc - Ts
    Tc = Time of completing correction of specified types of errors of the performed task
    Ts = Time of starting correction of specified types of errors of the performed task
  Interpretation: 0 < T; the shorter, the better.
  Metric scale type: Ratio
  Measure type: Ts, Tc = Time, T = Time
  Input to measurement: Operation (test) report, User monitoring record
  ISO/IEC 12207 SLCP reference: 6.5 Validation, 5.3 Qualification testing, 5.4 Operation
  Target audience: User, Human interface designer
NOTE: Users of this metric are suggested to specify the types of errors for test cases by considering, for example, severity (displaying an error or destroying data), type of input/output error (input text error, output data error to a database, or graphical error on a display), or type of error operational situation (interactive use or emergent operation).
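The error-correction measurement is a simple time difference. A minimal sketch (not part of the standard), assuming the start and completion instants have been recorded as seconds from a common reference:

```python
def error_correction_time(ts: float, tc: float) -> float:
    """T = Tc - Ts.

    ts: time of starting correction of the specified type of error.
    tc: time of completing the correction.
    Ratio scale, T > 0; the shorter, the better.
    """
    if tc < ts:
        raise ValueError("completion cannot precede start")
    return tc - ts

# Example: correction started at t = 10 s and completed at t = 55 s.
print(error_correction_time(10.0, 55.0))  # -> 45.0
```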
Error correction in use
  Purpose: Can the user easily recover from his/her error, or retry tasks?
  Method of application: Observe the behaviour of the user who is operating the software.
  Measurement, formula: a) X = A / UOT
    A = Number of times that the user succeeds in cancelling their erroneous operation
    UOT = User operating time during the observation period
  Interpretation: 0 <= X; the higher, the better.
  Metric scale type: Ratio
  Measure type: A = Count, UOT = Time, X = Count/Time
  Input to measurement: Operation (test) report, User monitoring record
  ISO/IEC 12207 SLCP reference: 6.5 Validation, 5.3 Qualification testing, 5.4 Operation
  Target audience: User, Human interface designer
NOTE: When functions are tested one by one, a ratio can also be calculated: the number of functions for which the user succeeds in cancelling his/her operation divided by all functions.
  Purpose: Can the user easily recover his/her input?
  Method of application: Observe the behaviour of the user who is operating the software.
  Measurement, formula: b) X = A / B
    A = Number of screens or forms where the input data were successfully modified or changed before being processed
    B = Number of screens or forms where the user tried to modify or change the input data during the observed user operating time
  Interpretation: 0 <= X <= 1; the closer to 1.0, the better.
  Metric scale type: Absolute
  Measure type: A = Count, B = Count, X = Count/Count
  Input to measurement: Operation (test) report, User monitoring record
  ISO/IEC 12207 SLCP reference: 6.5 Validation, 5.3 Qualification testing, 5.4 Operation
  Target audience: User, Human interface designer
Table 8.3.3 Operability metrics c) Suitable for the task operation
External Operability metrics c) Suitable for the task operation
Columns: Metric name | Purpose of the metrics | Method of application | Measurement, formula and data element computations | Interpretation of measured value | Metric scale type | Measure type | Input to measurement | ISO/IEC 12207 SLCP reference | Target audience
Default value availability in use
  Purpose: Can the user easily select parameter values for convenient operation?
  Method of application: Observe the behaviour of the user who is operating the software. Count how many times the user attempts to establish or select parameter values and fails (because the user cannot use default values provided by the software).
  Measurement, formula: X = 1 - A / B
    A = Number of times that the user fails to establish or select parameter values in a short period (because the user cannot use default values provided by the software)
    B = Total number of times that the user attempts to establish or select parameter values
  Interpretation: 0 <= X <= 1; the closer to 1.0, the better.
  Metric scale type: Absolute
  Measure type: A = Count, B = Count, X = Count/Count
  Input to measurement: Operation (test) report, User monitoring record
  ISO/IEC 12207 SLCP reference: 6.5 Validation, 5.3 Qualification testing, 5.4 Operation
  Target audience: User, Human interface designer
NOTE: 1. It is recommended to observe and record the operator's behaviour and to decide how long a period for selecting parameter values is allowable as a "short period".
2. When the parameter-setting function is tested function by function, the ratio of allowable functions can also be calculated.
3. It is recommended to conduct functional testing that covers the parameter-setting functions.
Table 8.3.3 Operability metrics d) Self descriptive (Guiding)
External Operability metrics d) Self descriptive (Guiding)
Columns: Metric name | Purpose of the metrics | Method of application | Measurement, formula and data element computations | Interpretation of measured value | Metric scale type | Measure type | Input to measurement | ISO/IEC 12207 SLCP reference | Target audience
Message understandability in use
  Purpose: Can the user easily understand messages from the software system? Is there any message which caused the user a delay in understanding before starting the next action? Can the user easily memorise important messages?
  Method of application: Observe the behaviour of the user who is operating the software.
  Measurement, formula: X = A / UOT
    A = Number of times that the user pauses for a long period, or successively and repeatedly fails at the same operation, because of a lack of message comprehension
    UOT = User operating time (observation period)
  Interpretation: 0 <= X; the smaller and closer to 0.0, the better.
  Metric scale type: Ratio
  Measure type: A = Count, UOT = Time, X = Count/Time
  Input to measurement: Operation (test) report, User monitoring record
  ISO/IEC 12207 SLCP reference: 6.5 Validation, 5.3 Qualification testing, 5.4 Operation
  Target audience: User, Human interface designer
NOTE:
1. The extent of ease of message comprehension is represented by how long a message caused a delay in the user's understanding before starting the next action. It is therefore recommended to observe and record the operator's behaviour and to decide what length of pause is considered a "long period".
2. It is recommended to investigate the following as possible causes of problems in the user's message comprehension:
a) Attentiveness: Attentiveness implies that the user successfully recognises important messages presenting information such as guidance on the next user action, names of data items to be looked at, and warnings of careful operation.
- Does the user ever fail to watch when encountering important messages?
- Can the user avoid mistakes in operation because of recognising important messages?
b) Memorability: Memorability implies that the user remembers important messages presenting information such as guidance on the next user action, names of data items to be looked at, and warnings of careful operation.
- Can the user easily remember important messages?
- Is remembering important messages helpful to the user?
- Is the user required to remember only a few important messages, and not too many?
3. When messages are tested one by one, the ratio of comprehended messages to the total can also be calculated.
4. When several users who are participants in operational testing are observed, the ratio of users who comprehended messages to all users can be calculated.
Self-explanatory error messages
  Purpose: In what proportion of error conditions does the user propose the correct recovery action?
  Method of application: Conduct a user test and observe user behaviour.
  Measurement, formula: X = A / B
    A = Number of error conditions for which the user proposes the correct recovery action
    B = Number of error conditions tested
  Interpretation: 0 <= X <= 1; the closer to 1.0, the better.
  Metric scale type: Absolute
  Measure type: A = Count, B = Count, X = Count/Count
  Input to measurement: Operation (test) report, User monitoring record
  ISO/IEC 12207 SLCP reference: 6.5 Validation, 5.3 Qualification testing, 5.4 Operation
  Target audience: User, Human interface designer
NOTE: This metric is generally used as one that has been tried and justified by experience.
Table 8.3.3 Operability metrics e) Operational error tolerant (Human error free)
External operability metrics e) Operational error tolerant (Human error free)
Columns: Metric name | Purpose of the metrics | Method of application | Measurement, formula and data element computations | Interpretation of measured value | Metric scale type | Measure type | Input to measurement | ISO/IEC 12207 SLCP reference | Target audience
Operational error recoverability in use
  Purpose: Can the user easily recover from a worsened situation?
  Method of application: Observe the behaviour of the user who is operating the software.
  Measurement, formula: X = 1 - A / B
    A = Number of unsuccessfully recovered situations (after a user error or change) in which the user was not informed about a risk by the system
    B = Number of user errors or changes
  Interpretation: 0 <= X <= 1; the closer to 1.0, the better.
  Metric scale type: Absolute
  Measure type: A = Count, B = Count, X = Count/Count
  Input to measurement: Operation (test) report, User monitoring record
  ISO/IEC 12207 SLCP reference: 6.5 Validation, 5.3 Qualification testing, 5.4 Operation
  Target audience: User, Human interface designer
NOTE: The formula above represents the worst case. Users of this metric may take account of the combination of 1) the number of errors where the user is / is not warned by the software system and 2) the number of occasions where the user successfully / unsuccessfully recovers the situation.
Time between human error operations in use
  Purpose: Can the user operate the software long enough without human error?
  Method of application: Observe the behaviour of the user who is operating the software.
  Measurement, formula: X = T / N (at time t during [t-T, t])
    T = Operation time period during observation (or the sum of operating time between the user's human error operations)
    N = Number of occurrences of the user's human error operations
  Interpretation: 0 < X; the higher, the better.
  Metric scale type: Ratio
  Measure type: T = Time, N = Count, X = Time/Count
  Input to measurement: Operation (test) report, User monitoring record
  ISO/IEC 12207 SLCP reference: 6.5 Validation, 5.3 Qualification testing, 5.4 Operation
  Target audience: User, Human interface designer
NOTE:
1. Human error operations may be detected by counting the following user behaviours:
a) Simple human error (slips): the number of times that the user simply makes errors in input operation;
b) Intentional error (mistakes): the number of times that the user repeatedly fails at the same operation through misunderstanding during the observation period;
c) Operation hesitation pause: the number of times that the user pauses for a long period with hesitation during the observation period.
Users of this metric are suggested to measure each type listed above separately.
2. An operation pause implies a user's hesitation. Whether a pause is considered a long period or not depends on the function, operation procedure, application domain, and user. The evaluator is therefore requested to take these into account and determine a reasonable threshold time. For an interactive operation, a "long period" threshold typically ranges from 1 min. to 3 min.
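The time-between-errors formula behaves like a mean-time-between-failures computed over human error operations. A hedged sketch (illustrative names; it assumes the observation period and error count come from the user monitoring record):

```python
def time_between_human_errors(operation_time_h: float, error_count: int) -> float:
    """X = T / N.

    operation_time_h: operation time period during observation (hours).
    error_count: number of occurrences of the user's human error operations.
    Ratio scale; the higher, the better.
    """
    if operation_time_h <= 0:
        raise ValueError("operation time must be positive")
    if error_count <= 0:
        raise ValueError("metric is defined only when errors were observed")
    return operation_time_h / error_count

# Example: 4 human error operations over an 8-hour observation period.
print(time_between_human_errors(8.0, 4))  # -> 2.0
```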
External operability metrics e) Operational error tolerant (Human error free) (continued)
Undoability (User error correction)
  Purpose: a) How frequently does the user successfully correct input errors?
  Method of application: Conduct a user test and observe user behaviour.
  Measurement, formula: a) X = A / B
    A = Number of input errors which the user successfully corrects
    B = Number of attempts to correct input errors
  Interpretation: 0 <= X <= 1; the closer to 1.0, the better. Scale: Absolute. Measure type: A = Count, B = Count, X = Count/Count
  Purpose: b) How frequently does the user correctly undo errors?
  Method of application: Conduct a user test and observe user behaviour.
  Measurement, formula: b) Y = A / B
    A = Number of error conditions which the user successfully corrects
    B = Total number of error conditions tested
  Interpretation: 0 <= Y <= 1; the closer to 1.0, the better. Scale: Absolute. Measure type: A = Count, B = Count, Y = Count/Count
  Input to measurement: Operation (test) report, User monitoring record
  ISO/IEC 12207 SLCP reference: 6.5 Validation, 5.3 Qualification testing, 5.4 Operation
  Target audience: User, Human interface designer
NOTE: This metric is generally used as one that has been tried and justified by experience.
Table 8.3.3 Operability metrics f) Suitable for individualisation
External operability metrics f) Suitable for individualisation
Columns: Metric name | Purpose of the metrics | Method of application | Measurement, formula and data element computations | Interpretation of measured value | Metric scale type | Measure type | Input to measurement | ISO/IEC 12207 SLCP reference | Target audience
Customisability
  Purpose: Can the user easily customise operation procedures for his/her convenience? Can a user who instructs end users easily set customised operation procedure templates for preventing their errors? What proportion of functions can be customised?
  Method of application: Conduct a user test and observe user behaviour.
  Measurement, formula: X = A / B
    A = Number of functions successfully customised
    B = Number of attempts to customise
  Interpretation: 0 <= X <= 1; the closer to 1.0, the better.
  Metric scale type: Absolute
  Measure type: A = Count, B = Count, X = Count/Count
  Input to measurement: User manual, Operation (test) report, User monitoring record
  ISO/IEC 12207 SLCP reference: 6.5 Validation, 5.3 Qualification testing, 5.4 Operation
  Target audience: User, Human interface designer
NOTE:
1. The ratio of the user's failures to customise may also be measured:
Y = 1 - (C / D)
C = Number of cases in which a user fails to customise operation
D = Total number of cases in which a user attempted to customise operation for his/her convenience
0 <= Y <= 1; the closer to 1.0, the better.
Operation procedure reduction
  Purpose: Can the user easily reduce operation procedures for his/her convenience?
  Method of application: Count the user's strokes for a specified operation and compare them before and after customising the operation.
  Measurement, formula: X = 1 - A / B
    A = Number of reduced operation procedures after customising operation
    B = Number of operation procedures before customising operation
  Interpretation: 0 <= X < 1; the closer to 1.0, the better.
  Metric scale type: Absolute
  Measure type: A = Count, B = Count, X = Count/Count
  Input to measurement: Operation (test) report, User monitoring record
  ISO/IEC 12207 SLCP reference: 6.5 Validation, 5.3 Qualification testing, 5.4 Operation
  Target audience: User, Human interface designer
NOTE: 1. It is recommended to take samples for each different user task and to distinguish between operators who are skilled users and those who are beginners.
2. The number of operation procedures may be represented by counting operation strokes such as clicks, drags, key touches, screen touches, etc.
3. This includes keyboard shortcuts.
External operability metrics f) Suitable for individualisation (continued)
Physical accessibility
  Purpose: What proportion of functions can be accessed by users with physical handicaps?
  Method of application: Conduct a user test and observe user behaviour.
  Measurement, formula: X = A / B
    A = Number of functions successfully accessed
    B = Number of functions
  Interpretation: 0 <= X <= 1; the closer to 1.0, the better.
  Metric scale type: Absolute
  Measure type: A = Count, B = Count, X = Count/Count
  Input to measurement: Operation (test) report, User monitoring record
  ISO/IEC 12207 SLCP reference: 6.5 Validation, 5.3 Qualification testing, 5.4 Operation
  Target audience: User, Human interface designer
NOTE: Examples of physical inaccessibility are inability to use a mouse and blindness.
Table 8.3.4 Attractiveness metrics
External attractiveness metrics
Columns: Metric name | Purpose of the metrics | Method of application | Measurement, formula and data element computations | Interpretation of measured value | Metric scale type | Measure type | Input to measurement | ISO/IEC 12207 SLCP reference | Target audience
Attractive interaction
  Purpose: How attractive is the interface to the user?
  Method of application: Questionnaire to users.
  Measurement: Questionnaire to assess the attractiveness of the interface to users, after experience of usage.
  Interpretation: Depends on the questionnaire scoring method.
  Metric scale type: Absolute
  Measure type: Count
  Input to measurement: Questionnaire results
  ISO/IEC 12207 SLCP reference: 6.5 Validation, 5.3 Qualification testing, 5.4 Operation
  Target audience: User, Human interface designer
Interface appearance customisability
  Purpose: What proportion of interface elements can be customised in appearance to the user's satisfaction?
  Method of application: Conduct a user test and observe user behaviour.
  Measurement, formula: X = A / B
    A = Number of interface elements customised in appearance to the user's satisfaction
    B = Number of interface elements that the user wishes to customise
  Interpretation: 0 <= X <= 1; the closer to 1.0, the better.
  Metric scale type: Absolute
  Measure type: A = Count, B = Count, X = Count/Count
  Input to measurement: Users' requests, Operation (test) report
  ISO/IEC 12207 SLCP reference: 6.5 Validation, 5.3 Qualification testing, 5.4 Operation
  Target audience: User, Human interface designer
NOTE: This metric is generally used as one that has been tried and justified by experience.
Table 8.3.5 Usability compliance metrics
External usability compliance metrics
Columns: Metric name | Purpose | Method of application | Measurement, formula and data element computations | Interpretation of measured value | Metric scale type | Measure type | Input to measurement | ISO/IEC 12207 SLCP reference | Target audience
Usability compliance
  Purpose: How completely does the software adhere to the standards, conventions, style guides or regulations relating to usability?
  Method of application: Specify required compliance items based on standards, conventions, style guides or regulations relating to usability. Design test cases in accordance with the compliance items. Conduct functional testing for these test cases.
  Measurement, formula: X = 1 - A / B
    A = Number of usability compliance items specified that have not been implemented during testing
    B = Total number of usability compliance items specified
  Interpretation: 0 <= X <= 1; the closer to 1.0, the better.
  Metric scale type: Absolute
  Measure type: A = Count, B = Count, X = Count/Count
  Input to measurement: Product description (user manual or specification) of compliance and related standards, conventions, style guides or regulations; Test specification and report
  ISO/IEC 12207 SLCP reference: 5.3 Qualification testing, 6.5 Validation
  Target audience: Supplier, User
NOTE: It may be useful to collect several measured values over time, to analyse the trend of increasingly satisfied compliance items and to determine whether they are fully satisfied or not.
8.4 Efficiency metrics
NOTE: 1. Distribution may be calculated as illustrated below (X = Count/Count).
Statistical maximal ratio: Y = Xdev / Xmax
(Input: Operation report)
Software change control capability
  Purpose: Can the user easily identify revised versions? Can the maintainer easily change the software to resolve problems?
  Method of application: Observe the behaviour of the user or maintainer while trying to change the software. Otherwise, investigate the problem resolution report or maintenance report.
  Measurement, formula: X = A / B
    A = Number of change log data actually recorded
    B = Number of change log data planned to be recorded, enough to trace software changes
  Interpretation: 0 <= X <= 1; the closer to 1.0, the better, or the closer to 0, the fewer changes have taken place.
  Metric scale type: Absolute
  Measure type: A = Count, B = Count, X = Count/Count
  Input to measurement: User manual or specification; Problem resolution report; Maintenance report; Operation report
  ISO/IEC 12207 SLCP reference: 5.3 Qualification testing, 5.4 Operation, 5.5 Maintenance
  Target audience: Developer, Maintainer, Operator
Table 8.5.3 Stability metrics
External stability metrics
Columns: Metric name | Purpose of the metrics | Method of application | Measurement, formula and data element computations | Interpretation of measured value | Metric scale type | Measure type | Input to measurement | ISO/IEC 12207 SLCP reference | Target audience
Change success ratio
  Purpose: Can the user operate the software system without failures after maintenance? Can the maintainer easily mitigate failures caused by maintenance side effects?
  Method of application: Observe the behaviour of the user or maintainer who is operating the software system after maintenance. Count the failures which the user or maintainer encountered while operating the software before and after maintenance. Otherwise, investigate the problem resolution report, operation report or maintenance report.
  Measurement, formula:
    X = Na / Ta
    Y = (Na / Ta) / (Nb / Tb)
    Na = Number of cases in which the user encounters failures during operation after the software was changed
    Nb = Number of cases in which the user encounters failures during operation before the software was changed
    Ta = Operation time during the specified observation period after the software was changed
    Tb = Operation time during the specified observation period before the software was changed
  Interpretation: 0 <= X, Y; the smaller and closer to 0, the better.
  Metric scale type: Ratio
  Measure type: Na, Nb = Count; Ta, Tb = Time; X = Count/Time; Y = (Count/Time) / (Count/Time)
  Input to measurement: Problem resolution report, Maintenance report, Operation report
  ISO/IEC 12207 SLCP reference: 5.3 Qualification testing, 5.4 Operation, 5.5 Maintenance
  Target audience: Developer, Maintainer, Operator
NOTE: 1. X and Y imply the "frequency of encountering failures after change" and the "fluctuated frequency of encountering failures before/after change", respectively.
2. The user may need a specified period to determine the side effects of software changes when a software revision is introduced to resolve problems.
3. It is recommended to compare this frequency before and after the change.
4. If the changed function is identified, it is recommended to determine whether encountered failures are detected in the changed function itself or in other functions. The extent of the impacts may be rated for each failure.
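The change success ratio combines a failure frequency after the change with the before/after ratio. The following sketch is illustrative only (names invented; it assumes failure counts and operation times come from the operation and maintenance reports):

```python
def change_success_ratio(na: int, ta: float, nb: int, tb: float) -> tuple[float, float]:
    """X = Na / Ta and Y = (Na / Ta) / (Nb / Tb).

    na, ta: failures encountered and operation time after the change.
    nb, tb: failures encountered and operation time before the change.
    The smaller and closer to 0, the better.
    """
    if ta <= 0 or tb <= 0:
        raise ValueError("operation times must be positive")
    x_after = na / ta
    x_before = nb / tb
    if x_before == 0:
        raise ValueError("Y is undefined when no failures occurred before the change")
    return x_after, x_after / x_before

# Example: 2 failures in 100 h after the change vs. 4 failures in 100 h before.
x, y = change_success_ratio(na=2, ta=100.0, nb=4, tb=100.0)
print(x, y)  # -> 0.02 0.5
```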
Modification impact localisation (Emerging failure after change)
  Purpose: Can the user operate the software system without failures after maintenance? Can the maintainer easily mitigate failures caused by maintenance side effects?
  Method of application: Count occurrences of failures after the change which are mutually chained and affected by the change.
  Measurement, formula: X = A / N
    A = Number of failures emerging after a failure is resolved by a change during the specified period
    N = Number of resolved failures
  Interpretation: 0 <= X; the smaller and closer to 0, the better.
  Metric scale type: Absolute
  Measure type: A = Count, N = Count, X = Count/Count
  Input to measurement: Problem resolution report, Operation report
  ISO/IEC 12207 SLCP reference: 5.3 Qualification testing, 5.4 Operation, 5.5 Maintenance
  Target audience: Developer, Maintainer, Operator
NOTE: X implies "chained failures emerging per resolved failure". It is recommended to obtain a precise measure by checking, where possible, whether the cause of the current failure is attributable to the change made for a previous failure resolution.
Table 8.5.4 Testability metrics
External testability metrics
Columns: Metric name | Purpose of the metrics | Method of application | Measurement, formula and data element computations | Interpretation of measured value | Metric scale type | Measure type | Input to measurement | ISO/IEC 12207 SLCP reference | Target audience
Availability of built-in test function
  Purpose: Can the user and maintainer easily perform operational testing without additional test facility preparation?
  Method of application: Observe the behaviour of the user or maintainer who is testing the software system after maintenance.
  Measurement, formula: X = A / B
    A = Number of cases in which the maintainer can use a suitable built-in test function
    B = Number of cases of test opportunities
  Interpretation: 0 <= X <= 1; the larger and closer to 1.0, the better.
  Metric scale type: Absolute
  Measure type: A = Count, B = Count, X = Count/Count
  Input to measurement: Problem resolution report, Operation report
  ISO/IEC 12207 SLCP reference: 5.3 Qualification testing, 5.4 Operation, 5.5 Maintenance
  Target audience: Developer, Maintainer, Operator
NOTE: Examples of built-in test functions include simulation function, pre-check function for ready to use, etc.
Re-test efficiency
  Purpose: Can the user and maintainer easily perform operational testing and determine whether the software is ready for operation or not?
  Method of application: Observe the behaviour of the user or maintainer who is testing the software system after maintenance.
  Measurement, formula: X = Sum(T) / N
    T = Time spent testing to make sure whether a reported failure was resolved or not
    N = Number of resolved failures
  Interpretation: 0 < X; the smaller, the better.
  Metric scale type: Ratio
  Measure type: T = Time, N = Count, X = Time/Count
  Input to measurement: Problem resolution report, Operation report
  ISO/IEC 12207 SLCP reference: 5.3 Qualification testing, 5.4 Operation, 5.5 Maintenance
  Target audience: Developer, Maintainer, Operator
NOTE: X implies "average time (effort) to test after failure resolution". If failures are not resolved or fixed, exclude them and measure the ratio of such failures separately.
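Re-test efficiency is an average over the confirmation-test times of resolved failures. A minimal sketch under that reading (names invented; it assumes the per-failure test times are taken from the problem resolution report):

```python
def retest_efficiency(test_times_h: list[float]) -> float:
    """X = Sum(T) / N: mean time spent confirming that each resolved
    failure is actually fixed. The smaller, the better."""
    if not test_times_h:
        raise ValueError("no resolved failures were re-tested")
    return sum(test_times_h) / len(test_times_h)

# Example: three resolved failures took 0.5 h, 1.5 h and 1.0 h to re-test.
print(retest_efficiency([0.5, 1.5, 1.0]))  # -> 1.0
```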
Test restartability
  Purpose: Can the user and maintainer easily perform operational testing with checkpoints after maintenance?
  Method of application: Observe the behaviour of the user or maintainer who is testing the software system after maintenance.
  Measurement, formula: X = A / B
    A = Number of cases in which the maintainer can pause and restart an executing test run at desired points to check step by step
    B = Number of cases of pausing an executing test run
  Interpretation: 0 <= X <= 1; the larger and closer to 1.0, the better.
  Metric scale type: Absolute
  Measure type: A = Count, B = Count, X = Count/Count
  Input to measurement: Problem resolution report, Operation report
  ISO/IEC 12207 SLCP reference: 5.3 Qualification testing, 5.4 Operation, 5.5 Maintenance
  Target audience: Developer, Maintainer, Operator
Table 8.5.5 Maintainability compliance metrics
External maintainability compliance metrics
Metric name | Purpose of the metric | Method of application | Measurement, formula and data element computations | Interpretation of measured value | Metric scale type | Measure type | Input to measurement | ISO/IEC 12207 SLCP Reference | Target audience
Metric name: Maintainability compliance
Purpose of the metric: How compliant is the maintainability of the product to applicable regulations, standards and conventions?
Method of application: Count the number of items requiring compliance that have been met and compare with the number of items requiring compliance in the specification.
Measurement, formula and data element computations: X = 1 - A / B, where A = number of specified maintainability compliance items that have not been implemented during testing, and B = total number of maintainability compliance items specified.
Interpretation of measured value: 0 <= X <= 1. The closer to 1.0, the better.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: Product description (user manual or specification) of compliance and related standards, conventions or regulations; Test specification and report
ISO/IEC 12207 SLCP Reference: 5.3 Qualification testing; 6.5 Validation
Target audience: Supplier; User
NOTE: It may be useful to collect several measured values over time, to analyse the trend of increasing satisfied compliance items, and to determine whether they are fully satisfied.
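As a sketch of the formula above, with hypothetical counts and an illustrative function name:

```python
def maintainability_compliance(unmet_items: int, total_items: int) -> float:
    """X = 1 - A / B, where A is the number of specified maintainability
    compliance items found not implemented during testing and B is the
    total number of compliance items specified. Closer to 1.0 is better."""
    return 1 - unmet_items / total_items

# 2 of 20 specified compliance items were found unimplemented:
print(maintainability_compliance(2, 20))  # 0.9
```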
8.6 Portability metrics
NOTE: 1. This metric is suggested for experimental use. 2. When a time-based metric is required, the time spent on installation may be measured.
Metric name: Ease of set-up re-try
Purpose of the metric: Can user or maintainer easily re-try set-up installation of software?
Method of application: Observe the user's or maintainer's behaviour when the user is re-trying set-up installation of the software.
Measurement, formula and data element computations: X = 1 - A / B, where A = number of cases in which the user fails when re-trying set-up during set-up operation, and B = total number of cases in which the user attempts to re-try set-up during set-up operation.
Interpretation of measured value: 0 <= X <= 1. The closer to 1.0, the better.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: Problem resolution report; Operation report
ISO/IEC 12207 SLCP Reference: 5.3 Qualification testing; 5.4 Operation; 5.5 Maintenance
Target audience: Developer; Maintainer; Operator
NOTE: 1. This metric is suggested for experimental use.
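A minimal sketch of the set-up re-try formula, assuming hypothetical observation counts:

```python
def setup_retry_ease(failed_retries: int, total_retries: int) -> float:
    """X = 1 - A / B, where A is the number of cases in which the user
    fails when re-trying set-up and B is the total number of re-try
    attempts observed. Closer to 1.0 is better."""
    return 1 - failed_retries / total_retries

# 1 failure across 8 observed set-up re-try attempts:
print(setup_retry_ease(1, 8))  # 0.875
```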
NOTE: The following complementary metrics may be used.
1. Effortless installation
User's manual actions for installation: X = A, where A = the number of user's manual actions needed for installation.
0 < X. The smaller, the better.
2. Installation ease
Installation supporting level: X = A, where A is rated with, for example:
- only executing the installation program, where nothing more is needed (excellent);
- instructional guide for installation (good);
- source code of the program needs modification for installation (poor).
X = direct interpretation of the measured value.
3. Operational installation effort reduction
User install operation procedure reduction ratio: X = 1 - A / B, where A = number of install operation procedures which a user had to perform after procedure reduction, and B = number of install operation procedures normally required.
0 <= X <= 1. The closer to 1.0, the better.
4. Ease of user's manual install operation
Easiness level of user's manual install operation: X = score of the easiness level of the user's manual operation.
Examples of easiness levels are the following:
[very easy] requiring only the user's starting of install or set-up functions and then observing the installation;
[easy] requiring only the user's answering of questions from install or set-up functions;
[not easy] requiring the user's looking up of parameters from tables or filling in of boxes;
[complicated] requiring the user's searching of parameter files, looking up of parameters from files to be changed, and writing of them.
X = direct interpretation of the measured value.
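The complementary metrics above can be sketched as follows; the names and the numeric easiness scores are illustrative assumptions, not standardized values:

```python
def manual_actions_for_installation(actions: int) -> int:
    """Effortless installation: X = A, the number of user's manual
    actions needed. 0 < X; the smaller, the better."""
    return actions

def procedure_reduction_ratio(after_reduction: int, normal: int) -> float:
    """Operational installation effort reduction: X = 1 - A / B, comparing
    procedures required after reduction (A) with the normal number (B)."""
    return 1 - after_reduction / normal

# Ease of user's manual install operation: an assumed ordinal scoring of
# the four easiness levels (higher score = easier).
EASINESS_SCORE = {"very easy": 3, "easy": 2, "not easy": 1, "complicated": 0}

print(manual_actions_for_installation(4))  # 4
print(procedure_reduction_ratio(2, 8))     # 0.75
print(EASINESS_SCORE["easy"])              # 2
```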
Table 8.6.3 Co-existence metrics
External co-existence metrics
Metric name | Purpose of the metric | Method of application | Measurement, formula and data element computations | Interpretation of measured value | Metric scale type | Measure type | Input to measurement | ISO/IEC 12207 SLCP Reference | Target audience
Metric name: Available co-existence
Purpose of the metric: How often does the user encounter constraints or unexpected failures when operating concurrently with other software?
Method of application: Use the evaluated software concurrently with other software which the user often uses.
Measurement, formula and data element computations: X = A / T, where A = number of constraints or unexpected failures which the user encounters while operating concurrently with other software, and T = time duration of concurrent operation with other software.
Interpretation of measured value: 0 <= X. The closer to 0, the better.
Metric scale type: Ratio
Measure type: A = Count; T = Time; X = Count/Time
Input to measurement: Problem resolution report; Operation report
ISO/IEC 12207 SLCP Reference: 5.3 Qualification testing; 5.4 Operation; 5.5 Maintenance
Target audience: Developer; Maintainer; SQA; Operator
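The co-existence rate is a count over a duration. A minimal sketch, with an assumed observation period in hours:

```python
def coexistence_failure_rate(incidents: int, hours: float) -> float:
    """X = A / T: constraints or unexpected failures encountered per hour
    of operating concurrently with other software. Closer to 0 is better."""
    return incidents / hours

# 3 incidents over 120 hours of concurrent operation:
print(coexistence_failure_rate(3, 120.0))  # 0.025
```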
Table 8.6.4 Replaceability metrics
External replaceability metrics
Metric name | Purpose of the metric | Method of application | Measurement, formula and data element computations | Interpretation of measured value | Metric scale type | Measure type | Input to measurement | ISO/IEC 12207 SLCP Reference | Target audience
Metric name: Continued use of data
Purpose of the metric: Can user or maintainer easily continue to use the same data after this software has replaced the previous one? Is the software system migration going on successfully?
Method of application: Observe the user's or maintainer's behaviour when the user is replacing the previous software with this one.
Measurement, formula and data element computations: X = A / B, where A = number of data items which are used in the software to be replaced and are confirmed to be continuously usable, and B = number of data items which are used in the software to be replaced and are planned to be continuously reusable.
Interpretation of measured value: 0 <= X <= 1. The larger, the better.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: Problem resolution report; Operation report
ISO/IEC 12207 SLCP Reference: 5.3 Qualification testing; 5.4 Operation; 5.5 Maintenance
Target audience: Developer; Maintainer; Operator
NOTE: This metric can be applied both when replacing entirely different software and when replacing a previous version of the same software series.
Metric name: Function inclusiveness
Purpose of the metric: Can user or maintainer easily continue to use similar functions after this software has replaced the previous one? Is the software system migration going on successfully?
Method of application: Observe the user's or maintainer's behaviour when the user is replacing the previous software with this one.
Measurement, formula and data element computations: X = A / B, where A = number of functions which produce results similar to those previously produced and for which changes have not been required, and B = number of tested functions which are similar to functions provided by the software to be replaced.
Interpretation of measured value: 0 <= X <= 1. The larger, the better.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: Problem resolution report; Operation report
ISO/IEC 12207 SLCP Reference: 5.3 Qualification testing; 5.4 Operation; 5.5 Maintenance
Target audience: Developer; Maintainer; Operator
NOTE: This metric can be applied both when replacing entirely different software and when replacing a previous version of the same software series.
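Both replaceability ratios above divide a confirmed count by a planned or tested count. A minimal sketch with hypothetical migration figures:

```python
def continued_data_use(confirmed_usable: int, planned_reusable: int) -> float:
    """Continued use of data: X = A / B, the share of data items from the
    replaced software confirmed to remain usable. Larger is better."""
    return confirmed_usable / planned_reusable

def function_inclusiveness(unchanged_similar: int, tested_similar: int) -> float:
    """Function inclusiveness: X = A / B, the share of tested similar
    functions producing similar results without requiring changes."""
    return unchanged_similar / tested_similar

print(continued_data_use(45, 50))      # 0.9
print(function_inclusiveness(18, 24))  # 0.75
```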
Metric name: User support functional consistency
Purpose of the metric: How consistent are the new components with the existing user interface?
Method of application: Observe the behaviour of the user and ask for the user's opinion.
Measurement, formula and data element computations: X = 1 - A / B, where A = number of new functions which the user found unacceptably inconsistent with the user's expectation, and B = number of new functions.
Interpretation of measured value: 0 <= X <= 1. The larger, the better.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: Test report; Operation report
ISO/IEC 12207 SLCP Reference: 5.3 Integration; 5.3 Qualification testing; 5.4 Operation; 6.3 Quality Assurance
Target audience: User; User interface designer; Maintainer; Developer; Tester; SQA
NOTE: 1. In the case that different software is introduced to replace previous software, the new software can be identified as the current version. 2. In the case that the pattern of interaction is changed to improve the user interface in a new version, it is suggested to observe the user's behaviour and to count the number of cases in which the user fails to access functions because of unacceptable nonconformity with the user's expectation derived from the previous version.
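The consistency formula can be sketched as follows, with an illustrative function name and assumed counts:

```python
def functional_consistency(inconsistent_new: int, total_new: int) -> float:
    """X = 1 - A / B, where A is the number of new functions the user found
    unacceptably inconsistent with expectations and B is the total number
    of new functions. Larger (closer to 1.0) is better."""
    return 1 - inconsistent_new / total_new

# 3 of 30 new functions judged unacceptably inconsistent:
print(functional_consistency(3, 30))  # 0.9
```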
Table 8.6.5 Portability compliance metrics
External portability compliance metrics
Metric name | Purpose of the metric | Method of application | Measurement, formula and data element computations | Interpretation of measured value | Metric scale type | Measure type | Input to measurement | ISO/IEC 12207 SLCP Reference | Target audience
Metric name: Portability compliance
Purpose of the metric: How compliant is the portability of the product to applicable regulations, standards and conventions?
Method of application: Count the number of items requiring compliance that have been met and compare with the number of items requiring compliance in the specification.
Measurement, formula and data element computations: X = 1 - A / B, where A = number of specified portability compliance items that have not been implemented during testing, and B = total number of portability compliance items specified.
Interpretation of measured value: 0 <= X <= 1. The closer to 1.0, the better.
Metric scale type: Absolute
Measure type: A = Count; B = Count; X = Count/Count
Input to measurement: Product description (user manual or specification) of compliance and related standards, conventions or regulations; Test specification and report
ISO/IEC 12207 SLCP Reference: 5.3 Qualification testing; 6.5 Validation
Target audience: Supplier; User
NOTE: It may be useful to collect several measured values over time, to analyse the trend of increasing satisfied compliance items, and to determine whether they are fully satisfied.
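As the NOTE suggests collecting values over successive test cycles, a sketch that computes the compliance ratio and checks that the trend is non-decreasing; the function name and sample counts are assumptions:

```python
def portability_compliance(unmet_items: int, total_items: int) -> float:
    """X = 1 - A / B: share of specified portability compliance items
    found implemented during testing. Closer to 1.0 is better."""
    return 1 - unmet_items / total_items

# Unmet-item counts from four successive test cycles, out of 40 items:
history = [portability_compliance(a, 40) for a in (10, 6, 2, 0)]
improving = all(x <= y for x, y in zip(history, history[1:]))
print(history, improving)
```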