
Hardware and Software Readiness

A Systems Approach

Jack Olivieri
The MITRE Corporation
202 Burlington Road
Bedford, MA 01730
Phone: 781-225-9333
jolivieri@mitre.org

Abstract— This paper describes a new approach to systems readiness by identifying quantitative hardware and software criteria to improve objective and effective decision-making at product shipment. The proposed method expands on previous work done in the software area by including and organizing hardware data. It streamlines various and diverse criteria obtained from existing quality and reliability data into simple metrics and visualizations. This allows for a systems approach to product development by defining a Systems Readiness Index (SRI) that is applicable across products and releases.

This approach combines two ideas in quantitative terms: product and process parameters that have been adequately represented to formalize the readiness index. Parameters from all aspects of the hardware and software development life cycle (e.g., requirements, project management & resources, development & testing, audits & assessments, stability and reliability, and technical documentation) that could impact the readiness index are considered.

Keywords: System; Subsystem; Readiness; Reliability; Hardware; Software
I. INTRODUCTION

Many of the current approaches used to determine the criteria (or readiness) for shipment of a system are either one-dimensional or relatively simplistic. For hardware, the decision process is usually based on whether the manufacturing costs, marketing schedule and functional performance meet the requirements. For software, the schedule and the estimated number of defects remaining are usually the principal factors in the decision. In most cases, the schedule dominates both processes, while cost is a close second. This can lead to added expenses, increased field failures and, ultimately, customer dissatisfaction.

This paper presents a multidimensional approach to the process of determining system readiness. In this context, system readiness is not a one- or two-factor decision, but includes many process and product related factors (or vectors) during the planning and development process, including requirements, project management and resources, development and testing, audits and assessments, reliability and stability, technical documentation, and maintainability of both the hardware and software. It includes both planning (pre-development) and tracking (development and beyond) activities.

II. CURRENT APPROACHES AND LIMITATIONS

A. Hardware

For hardware, the current approach typically starts with the generation and review of predictions that are calculated using a military (MIL-HDBK-217) or industry standard (Telcordia SR-332). Occasionally, the predictions will be factored by lab or field data. In many cases, especially for Commercial Off The Shelf (COTS) products, the source of the prediction is not even questioned. Unfortunately, many calculated or field reliability metrics (including maintainability or availability) are not requested, nor is weight given to their source or validity when they are received.

Hardware Verification Testing (HVT), Highly Accelerated Life Testing (HALT) and Environmental Stress Screening (ESS) are sometimes employed. However, HALT and ESS are meeting resistance from some companies and organizations as being too expensive. In military systems, Reliability Growth Planning and Reliability Growth Tracking are now required activities [1]. Requirements specifications are also an area that has received too little attention. These activities do not always show the value they deserve, but should figure into any readiness decision.

B. Software

For software, current approaches vary since there are no firmly established standards for assessing software readiness. Several methods use qualitative and sometimes subjective measures that focus on process metrics rather than product metrics. One example of this is the Carnegie Mellon Capability Maturity Model Integration (CMMI) [2]. This model assesses process quality on a scale from 1 to 5 and is used as a basis for software quality and reliability by measuring and quantifying the process of developing software.

The Keene method uses the number of thousands of lines of source code (KSLOC), the CMMI level and several other estimates, such as use hours, months to maturity, MTTR, etc., to predict the software reliability before coding is completed or tested [3].




In 2008, the IEEE published a recommended practice on software reliability [4]. It provides the information necessary for applying software reliability (SR) measurement to a project, lays a foundation for building consistent methods, and establishes the basic principles for collecting the data needed to assess and predict the reliability of software. It focuses on failure and defect rates but not their causes. It does include a good discussion of the various software reliability growth models with explanations of their use.

A more informal approach used by many organizations estimates the number of defects remaining in the code by determining the rate at which defects are removed during test.
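One very simple way to turn that removal-rate idea into a number, offered here only as an illustration (not the procedure of any particular organization), is to assume that the count of new defects found in each equal test interval decays roughly geometrically and to extrapolate the tail of that series:

# Illustrative only: estimate remaining defects from the defect-removal trend,
# assuming discoveries per equal test interval decay roughly geometrically.
def estimate_remaining_defects(found_per_interval):
    # found_per_interval: e.g., defects found in each week of system test
    ratios = [b / a for a, b in zip(found_per_interval, found_per_interval[1:]) if a > 0]
    r = sum(ratios) / len(ratios)        # average interval-to-interval decay ratio
    if r >= 1.0:
        raise ValueError("discovery rate is not decaying; cannot extrapolate")
    last = found_per_interval[-1]
    return last * r / (1.0 - r)          # geometric tail: last*r + last*r^2 + ...

# Hypothetical weekly discovery counts during test
print(round(estimate_remaining_defects([50, 34, 25, 16]), 1))

In practice, organizations that rely on this approach more often fit one of the software reliability growth models discussed in [4] rather than a bare geometric extrapolation.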
An Air Force instruction on software maturity uses a score based on severity levels and weights [5]. A system abort rates a 30, while a degraded system (with no workaround) rates a 15. A degraded system (with a workaround) rates a 5.
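As an illustration of that weighting scheme only (the instruction itself defines the complete scoring procedure), a minimal sketch that tallies a score from hypothetical counts of open problems at each severity level:

# Severity weights taken from the text: system abort = 30,
# degraded with no workaround = 15, degraded with a workaround = 5.
SEVERITY_WEIGHTS = {
    "system_abort": 30,
    "degraded_no_workaround": 15,
    "degraded_with_workaround": 5,
}

def maturity_score(open_problem_counts):
    # Weighted sum over hypothetical open problem reports; the counts are illustrative.
    return sum(SEVERITY_WEIGHTS[severity] * count
               for severity, count in open_problem_counts.items())

print(maturity_score({"system_abort": 1,
                      "degraded_no_workaround": 2,
                      "degraded_with_workaround": 4}))   # 30 + 30 + 20 = 80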
The conclusion from a survey of the existing approaches was that there is a need for consistent, quantitative, requirements-driven criteria for deciding readiness.
III. THE READINESS INDEX

A. Goals and Requirements

The primary reason for measuring and specifying HW and SW readiness is to support technical and business decisions while promoting customer satisfaction. That is, we want to be able to use a Systems Readiness Index (SRI) to improve control over:

a) General Availability (GA) or Low Rate Initial Production (LRIP)
b) Customer expectations and satisfaction
c) Cost, schedule and quality

This paper describes an easy to understand, quantitative approach to hardware and software readiness using an index that has the following advantages:

1) Allows pass/fail criteria from product requirements
2) Is easy to calculate from existing data
3) Offers a meaningful visualization
4) Can be used for release-to-release comparisons
5) Can be used for product-to-product comparisons
6) Complements HW and SW modeling
7) Can be scaled up or down with development activities
8) Supports quantitative "green (go) / yellow (go with conditions) / red (no go)" criteria, derived from product requirements
9) Is computable from test data when available
10) Applies to most (or all) products
11) Allows criteria to be validated with field data, and hence closes the loop

What the index does not consider is business metrics such as cost, manufacturing materials, resources and schedule.

B. Strategy

The overall strategy for implementing the index revolves around the use of reasonably orthogonal vectors. Most of these vectors have sub-vectors which provide additional detail and fidelity to the model. They are listed and identified in an Excel spreadsheet along with thresholds, weighting factors and a composite score. In some cases, the number of metrics identified can come as a shock to some of the organizations required to provide the data. For this reason, a "basic" and an "enhanced" version of the metrics list is provided. The basic version can be used as recommended here, or a version can be implemented which includes those metrics already tracked, collected and reported by the organization. The important idea is to start the process of reviewing readiness in a comprehensive, quantitative way using an index.

IV. HARDWARE READINESS

We will begin with hardware. The HW readiness index is implemented using four vectors. Listed below is the basic version of the metrics with a minimal set of sub-vectors; an illustrative data-structure sketch of this basic list follows Fig. 1.

A. Design
1) Requirements
2) Design Guide

B. Design Testing
1) Hardware Verification Testing
2) Maintainability Demo

C. Manufacturing/Ops Testing
1) Environmental Stress Screening
2) Supplier Qualification

D. Reliability
1) RMA Predictions
2) RMA Modeling/Analysis

The fishbone diagram in Fig. 1 is a representation of these four vectors, but with vectors from the enhanced version of the model. This includes all the sub-vectors which contribute to each vector.

Figure 1. Hardware readiness fishbone diagram (enhanced version of the vectors and sub-vectors).
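As a sketch only of how the basic hardware vectors and sub-vectors listed above might be captured outside the Excel spreadsheet, the structure below pairs each sub-vector with a weight and green/red thresholds; all numeric values are hypothetical placeholders, not data from the paper:

# Hypothetical representation of the basic HW readiness vectors and sub-vectors.
# Each sub-vector carries a weight (relative to its siblings) and green/red thresholds.
HW_READINESS_MODEL = {
    "Design": {
        "Requirements": {"weight": 0.6, "green": 0.98, "red": 0.90},
        "Design Guide": {"weight": 0.4, "green": 0.95, "red": 0.85},
    },
    "Design Testing": {
        "Hardware Verification Testing": {"weight": 0.7, "green": 0.98, "red": 0.90},
        "Maintainability Demo": {"weight": 0.3, "green": 0.95, "red": 0.85},
    },
    "Manufacturing/Ops Testing": {
        "Environmental Stress Screening": {"weight": 0.5, "green": 0.99, "red": 0.95},
        "Supplier Qualification": {"weight": 0.5, "green": 0.95, "red": 0.85},
    },
    "Reliability": {
        "RMA Predictions": {"weight": 0.5, "green": 0.95, "red": 0.85},
        "RMA Modeling/Analysis": {"weight": 0.5, "green": 0.95, "red": 0.85},
    },
}

The same shape extends naturally to the enhanced version of the model by adding further sub-vector entries under each vector.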
It is important to note that:

• Each vector is computed as a weighted sum of its constituent variables; the magnitude of each vector is normalized to 1.0 (a minimal sketch of this calculation follows this list).
• The output is a graphical representation of the integrated result of readiness based on the four dimensions or vectors.
• The framework relates discovery, reporting, and measurement of defects.
• Rules for creating unambiguous and explicit definitions or specifications of hardware problem and defect measurements are defined.
• The details of constructing the index table are provided below, but are customizable.
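A minimal sketch of the first point, assuming each sub-vector has already been reduced to an index between 0 and 1; the names and values are placeholders, and weights are renormalized so the vector stays on a 0-1 scale (a weight of 0 simply drops an unavailable sub-vector, as Section VII describes):

# Illustrative only: a vector index as the weighted sum of its sub-vector indexes,
# with the weights renormalized so the result remains between 0 and 1.
def vector_index(sub_vector_indexes, weights):
    # A weight of 0 drops a sub-vector that is unavailable or not applicable.
    total_weight = sum(w for w in weights.values() if w > 0)
    weighted = sum(sub_vector_indexes[name] * w
                   for name, w in weights.items() if w > 0)
    return weighted / total_weight

# Hypothetical "Design" vector with two sub-vectors
indexes = {"Requirements": 0.96, "Design Guide": 0.90}
weights = {"Requirements": 0.6, "Design Guide": 0.4}
print(round(vector_index(indexes, weights), 3))   # 0.936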

V. INDEX TABLE STRUCTURE

The structure of the index table consists of the measurements and metrics related to the individual vectors and sub-vectors. Both the vectors and the sub-vectors are assigned weights, depending on the importance that is attributed to each. In order to provide the greatest clarity and to avoid ambiguity, the following columns populate the spreadsheet:

• Product Quality Parameter – name of the vector (e.g., Design) and sub-vector (e.g., Requirements) to be tracked
• Acronym – short identifier for the metric (e.g., DREQ)
• Details
  o Definition – clear definition of the metric
  o Responsibility – who provides the metric
  o Frequency – when and how often it is provided
  o Source – data source (database/department) for the metric
  o Trigger – when tracking starts
• Metrics – equation/method of calculation
• Green Target – value above or below which the metric is green
• Red Target – value above or below which the metric is red
• Green Range – range of values for green
• Yellow Range – range of values for yellow
• Red Range – range of values for red
• Actual Value – the actual value of the metric
• Comments – notes regarding the values
• Actual Index – actual value, or the complement of the actual value
• Green Index – actual index for green, or its complement
• Red Index – actual index for red, or its complement
• Sub-Vector Weight – weight of the sub-vector compared to the other sub-vectors within the vector
• Vector Weight – weight of the vector compared to the other vectors
• Measurements – raw data
• Comments – notes, exceptions, etc. about the measurements or actual measurements

An example of a single vector (row) in the data collection template is shown in Fig. 2:

  Product Quality Parameter: Design – Requirements
  Acronym: DREQ
  Definition: Percentage of requirements correctly implemented & tested
  Responsibility: Test Lead for the program
  Frequency: End of HVT test
  Source: HVT Test Report
  Metrics: Requirements Tested and Verified / Planned
  Green Target: 98%   Red Target: 90%
  Green Range: >= 98%   Yellow Range: <98% to 90%   Red Range: < 90%
  Actual Value: 96%   Measurements: Total Reqs = 25, Reqs Implemented = 24
  Actual Index: 0.96   Green Index: 0.98   Red Index: 0.90
  Sub-Vector Weight: 1.00   Vector Weight: 1

Figure 2. Example of a single vector (row) in the data collection template.

Indexes for the values are direct (index = value) when it is desirable to have a higher number (greater than a certain percentage) for a metric. The index is the complement (index = 1 - value) when a lower number (less than a certain percentage) is desirable.
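A minimal sketch of this indexing rule, using the Fig. 2 numbers (24 of 25 requirements verified, green target 98%, red target 90%); the function names here are illustrative, not part of the template:

# Index a metric directly when a higher value is desirable, or as its complement
# (1 - value) when a lower value is desirable, then compare it to the thresholds.
def metric_index(value, higher_is_better=True):
    return value if higher_is_better else 1.0 - value

def status(index, green, red):
    if index >= green:
        return "green (go)"
    if index < red:
        return "red (no go)"
    return "yellow (go with conditions)"

actual = 24 / 25                      # DREQ: requirements tested and verified / planned
idx = metric_index(actual, higher_is_better=True)
print(idx, status(idx, green=0.98, red=0.90))   # 0.96 yellow (go with conditions)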
VI. SOFTWARE READINESS

In a like manner, software is broken down into vectors and sub-vectors. The SW readiness index is implemented using five dimensions (vectors). Listed below are the vectors and sub-vectors of a suggested basic version.

A. Software Functionality
1) Tag Implementation Accuracy

B. Operational Quality
1) Development Defect Density

C. Known Remaining Defects
1) Deferred Defects during SQA
2) Non-Reproducible Crash Defects

D. Testing Scope and Stability
1) Defect Closure during SQA

E. Reliability
1) Residual Defects

Figure 3 shows the Software Readiness Index fishbone interrelationships with the enhanced version of the metrics. A similar index table structure can be used for the software metrics.

Figure 3. Software Readiness Index fishbone (enhanced version of the metrics).

VII. WEIGHTS AND WEIGHING

Each sub-vector can be weighed against the other sub-vectors within a vector. In addition, each vector can be weighed against the other vectors. For the software vectors above, Figure 4 shows the weighing portion of the spreadsheet with an explanation.

Figure 4. Weighing portion of the spreadsheet for the software vectors.

If a vector or sub-vector is not available or applicable, then a weighting value of 0 can be entered and the model will ignore the vector or sub-vector. Of course, there must be enough significant vectors to make an index valuable. For initial weighing factors, a typical risk chart is shown in Figure 5.

Figure 5. Typical risk chart used for initial weighing factors.

VIII. COMBINING VECTORS FOR A TOTAL INDEX

There are a number of ways to combine the vectors to calculate an overall score; a minimal sketch of the models appears at the end of this section. The following can be considered:

1) Additive model – each vector contributes a fraction of the overall total. All vectors are added after weighting:
   a) (weight1 * Vector1) + (weight2 * Vector2) + (weight3 * Vector3) + (weight4 * Vector4) + (weight5 * Vector5)

2) Multiplicative model – each vector is independent and affects the entire range of the index directly:
   a) (Vector1 * Vector2 * Vector3 * Vector4 * Vector5), or
   b) (weight1*Vector1 * weight2*Vector2 * weight3*Vector3 * weight4*Vector4 * weight5*Vector5) / 5

3) Hybrid model – vectors are combined into normalized additive groups that are then multiplied to form the index; the significant variables affect the index more than the variables that are deemed less important, and all variables contribute to the index:
   a) (w1 * Vector1 + w2 * Vector2 + w3 * Vector3) x (w4 * Vector4 + w5 * Vector5)

In like manner, the HW readiness and SW readiness indexes can be combined. In most cases (depending on the system), equal weight can be given to HW and SW.
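A minimal sketch of these combining models, assuming five vector indexes on a 0-1 scale and weights that sum to 1 within each additive group; the values are placeholders, not data from the paper:

# Illustrative implementations of the additive, multiplicative and hybrid models.
def additive(vectors, weights):
    return sum(w * v for v, w in zip(vectors, weights))

def multiplicative(vectors):
    result = 1.0
    for v in vectors:
        result *= v
    return result

def hybrid(group1, weights1, group2, weights2):
    # Normalized additive groups that are then multiplied to form the index.
    return additive(group1, weights1) * additive(group2, weights2)

# Hypothetical SW vector indexes: functionality, quality, known defects, testing, reliability
v = [0.95, 0.90, 0.85, 0.92, 0.88]
print(round(additive(v, [0.2] * 5), 2))                              # ~0.90
print(round(multiplicative(v), 2))                                   # ~0.59
print(round(hybrid(v[:3], [0.4, 0.3, 0.3], v[3:], [0.5, 0.5]), 2))   # ~0.81

The HW and SW totals can then be merged the same way; as noted above, equal weights for HW and SW are a reasonable default for most systems.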
IX. VISUALIZATION

An important characteristic of the index is the ability to visualize the results in an easy manner. At the bottom of the spreadsheet, the results of the vector and sub-vector data are shown in a summary chart which takes into consideration the weighting that was entered. See Figure 6 for an actual example of a SW readiness index; an illustrative plotting sketch of this kind of chart follows the figure.

• The chart shows the red, yellow and green ranges for each vector and the actual value of the index for that vector.
• The total index bar shows the target ranges for the composite index and the actual value of the composite index. Red = NO GO; Green = GO; Yellow = GO w/ CONDITIONS.
• The Hybrid Model was used for the calculated total in this example; after using the tool with real data, it showed the most realistic approach.

Figure 6. Summary chart of an actual SW readiness index.
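A minimal matplotlib sketch of this kind of summary chart, drawing the red/yellow/green ranges as background bars with the actual index marked for each vector; the vector names, thresholds and values are hypothetical placeholders (the paper's actual charts are produced in the Excel spreadsheet):

# Illustrative red/yellow/green range chart with the actual index marked per vector.
import matplotlib.pyplot as plt

vectors = ["Functionality", "Oper. Quality", "Known Defects", "Testing", "Reliability", "Total Index"]
red, green = 0.90, 0.98                          # hypothetical common thresholds
actual = [0.99, 0.96, 0.93, 0.99, 0.88, 0.95]    # hypothetical index values

fig, ax = plt.subplots()
y = list(range(len(vectors)))
ax.barh(y, [red] * len(vectors), color="lightcoral", label="red (no go)")
ax.barh(y, [green - red] * len(vectors), left=red, color="gold", label="yellow (go w/ conditions)")
ax.barh(y, [1.0 - green] * len(vectors), left=green, color="lightgreen", label="green (go)")
ax.scatter(actual, y, color="black", zorder=3, label="actual index")
ax.set_yticks(y)
ax.set_yticklabels(vectors)
ax.set_xlim(0.80, 1.0)
ax.set_xlabel("Index")
ax.legend(loc="lower left", fontsize="small")
plt.tight_layout()
plt.show()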
The results are then graphed in a bar chart, shown in Figure 7. This chart shows the five major vectors of the software readiness metrics that correspond to the summary chart. The depiction not only shows the values of the five vectors, but also where the weaknesses are and how close the values are to the thresholds of the various ranges.

Figure 7. Bar chart of the five software readiness vectors for an actual product release.

X. RESULTS

The software readiness portion of the index described above has been used on several telecom software releases during 2006-2008 [6]. Figure 7 is based on an actual product release. The index highlighted weak areas (such as fault insertion) and additional testing was performed. Currently, some DoD customers are considering starting a basic version of the index using the metrics that are already captured in their programs. The hardware readiness index has just been developed and still needs to be implemented.

Both models would benefit greatly from field data to calibrate them. This requires a commitment in time and resources, since there can be many months or years between the collection of prediction or development test data (i.e., calculation of an index) and the accumulation of statistically significant data. If there is an existing data collection system, it can be modified to collect the appropriate metrics.

XI. SUMMARY AND CONCLUSION

This paper has shown the construction of a System Readiness Index that includes both HW and SW, with the following characteristics:

1) Provides a systematic approach to quantifying HW/SW "goodness".
2) Sets measurable targets for each metric along a vector and tracks the status of each metric and the overall index in a dashboard.
3) During the implementation phase, the dashboard provides a summary of the state of the HW/SW, the risk areas, and the size of the gap in each area to aid the prioritization process.
4) Drives the Go/No Go decision.
5) Reduces subjectivity in the decision process by quantifying the individual areas in a single composite index.
6) Organizes and streamlines existing quality and reliability data and processes into a simple tool and a methodology: Agree – Track – Verify.
7) The index and tool simplify and systematize the process of probing, monitoring and managing HW/SW development testing. Each metric is defined, its source is identified, and the method for computing the metric is clearly spelled out.
8) Complements and supplements other tools/techniques (e.g., DfS, DfT, DfM) since it is focused primarily on HW/SW prior to Controlled Introduction and General Availability.
9) Creates a framework that is applicable across products and across releases; the calibration improves with each release within this framework.
XII. FUTURE ENHANCEMENTS AND TOPICS

Some of the future topics and enhancements that could be explored with the model include customizing the index template for:

1) Limited introduction vs. full rate production (CI vs. GA)
2) Major releases (including New Product Introduction)
3) Minor releases
4) Trending analysis
5) Integration with CMMI Level 4/Level 5
6) Hardware/Software security vectors and sub-vectors

Anyone interested in a sample template of the model may contact the author via email. The model is provided for use with proper attribution. Any examples of use of the model incorporating feedback from field data would be appreciated.

REFERENCES
[1] MIL-HDBK-189C, Reliability Growth Management, 14 June 2011.

[2] CMMI for Development, Version 1.3 (CMMI-DEV, V1.3), November 2010.

[3] S. Keene, "Development Process SW Reliability Model: An Early Prediction Method," 29 April 2011; see also S. Keene, "Progressive Software Reliability Modeling," ISSRE, 1999.

[4] IEEE Recommended Practice on Software Reliability, Standards Committee of the IEEE Reliability Society, 27 March 2008.

[5] Air Force Instruction 10-602, 18 March 2005, Attachment 8 (A8.2, Software Maturity).

[6] A. Asthana and J. Olivieri, "Quantifying Software Reliability and Readiness," ASQ Section 509.

FURTHER READING

S. Thirumurugan and D. R. Prince Williams, "Analysis of Testing and Operational Software Reliability in SRGM Based on NHPP," International Journal of Computer and Information Science and Engineering, vol. 1, no. 1, Winter 2007.
