
SOFTWARE ENGINEERING TECHNIQUES

LESSON 16:

Topics Covered
Reliability concept, Reliability and failure intensity, Uses of reliability studies, Reliability models, Macro models, Criteria for a good model

Objectives
Upon completion of this Lesson, you should be able to:
• Know what the reliability concept is
• Know what the uses of reliability studies are
• Know what a reliability model is – the macro model and the criteria for a good model.

Reliability Concept
The definition that we will present here for software reliability is one that is widely accepted throughout the field: “It is the probability of failure-free operation of a computer program for a specified time in a specified environment”. For example, a time-sharing system may have a reliability of 0.95 for 10 hr when employed by the average user. This system, when executed for 10 hr, would operate without failure for 95 of these periods out of 100. As a result of the general way in which we defined failure, note that the concept of software reliability incorporates the notion of performance being satisfactory. For example, excessive response time at a given load level may be considered unsatisfactory, so that a routine must be recoded in more efficient form.

Reliability and Failure Intensity
Failure intensity is an alternative way of expressing reliability. We just gave the example of the reliability of a particular system being 0.95 for 10 hr of time. An equivalent statement is that the failure intensity is 0.05 failure/hr. Each specification has its advantages. The failure intensity statement is more economical, as you only have to give one number. However, the reliability statement is better suited to the combination of reliabilities of components to get system reliability. If the risk of failures at any point in time is of paramount concern, failure intensity may be the more appropriate measure. Such would be the case for a nuclear power plant. When proper operation of a system to accomplish some function over a given time duration is required, a reliability specification is often best. An example would be a space flight to the moon. Fig. 6.5 shows how failure intensity and reliability typically vary during a test period, as failures are removed. Note that we define failure intensity, just as we do reliability, with respect to a specified environment.

[Fig. 6.5: failure intensity falls and reliability rises toward 1.0 as test time (hr) increases]

Uses of Reliability Studies
Pressures have been increasing for achieving a more finely tuned balance among product and process characteristics, including reliability. Trade-offs among product components with respect to reliability are also becoming increasingly important. Thus an important use of software reliability measurement is in system engineering. However, there are at least four other ways in which software reliability measures can be of great value to the software engineer, manager, or user.
First, you can use software reliability measures to evaluate software engineering technology quantitatively. New techniques are continually being proposed for improving the process of developing software, but unfortunately they have been exposed to little quantitative evaluation. Because of the inability to distinguish between good and bad, new technology has often led to a general resistance to change that is counterproductive. Software reliability measures offer the promise of establishing at least one criterion for evaluating new technology. For example, you might run experiments to determine the decrease in failure intensity (failures per unit time) at the start of system test resulting from design reviews. A quantitative evaluation such as this makes the benefits of good software engineering technology highly visible.
Second, software reliability measures offer you the possibility of evaluating development status during the test phases of a project. Methods such as the intuition of designers or the test team, the percentage of tests completed, and the successful execution of critical functional tests have been used to evaluate testing progress. None of these has been really satisfactory, and some have been quite unsatisfactory. An objective reliability measure (such as failure intensity) established from test data provides a sound means of determining status. Reliability generally increases with the amount of testing, so reliability can be closely linked with project schedules. Furthermore, the cost of testing is highly correlated with failure intensity improvement. Since two of the key process attributes that a manager must control are schedule and cost, reliability can be intimately tied in with project management.
Third, one can use software reliability measures to monitor the
operational performance of software and to control new

© Copy Right: Rai University


features added and design changes made to the software. The reliability of software usually decreases as a result of such changes. A reliability objective can be used to determine when, and perhaps how large, a change will be allowed. The objective would be based on user and other requirements. For example, a freeze on all changes not related to debugging can be imposed when the failure intensity rises above the performance objective.
Finally, a quantitative understanding of software quality, and of the various factors influencing it and affected by it, enriches insight into the software product and the software development process. One is then much more capable of making informed decisions.

Reliability Models
To model software reliability one must first consider the principal factors that affect it: fault introduction, fault removal, and the environment. Fault introduction depends primarily on the characteristics of the developed code (code created or modified for the application) and on development process characteristics, which include the software engineering technologies and tools used and the level of experience of personnel. Note that code can be developed to add features or to remove faults. Fault removal depends upon time, the operational profile, and the quality of repair activity. The environment directly depends on the operational profile. Since some of the foregoing factors are probabilistic in nature and operate over time, software reliability models are generally formulated in terms of random processes. The models are distinguished from each other in general terms by the nature of the variation of the random process with time.
A software reliability model specifies the general form of the dependence of the failure process on the factors mentioned. We have assumed that it is, by definition, time based (this is not to say that non-time-based models may not provide useful insights). The possibilities for different mathematical forms to describe the failure process are almost limitless.
As with any emerging discipline, software reliability has produced its share of models. Software reliability models are propounded to assess the reliability of software either from specified parameters, which are assumed to be known, or from software-error generated data.
The past few years have seen the introduction of a number of different software reliability models. These models have been developed in response to the urgent need of software engineers, system engineers, and managers to quantify the concept of software reliability.
After some thought about how to construct a software reliability model, two viewpoints emerge – the macro approach and the micro approach. In the macro approach we ignore the differences between types of statements, details of the control structure, etc. We step back and examine the number of instructions, the number of errors removed, and the overall details of the control structure, and base our model on these features. Constants for the model can be evaluated from data on past systems as well as from analysis of test data on the software being developed. In the case of the micro approach, a detailed analysis of the statements and the control structure is performed, leading to a detailed model structure. Most of the models developed to date have been macro models, and the next sub-section is devoted to a discussion of such models.

Macro-Models
A large number of macro models have been proposed. From a data-analytic point of view, the objective of the research into modeling is to find a model that explains failure data well enough to forecast the future. For credibility, the model should have natural, meaningful parameters. Each model mathematically summarizes a set of assumptions the researcher has made about the phenomenon of software failure and debugging. The models may give fairly accurate results even though all of the assumptions are not satisfied. The selection of a model needs to be justified by the realism of the model’s assumptions and the model’s “predictive validity”.
The practitioner has several choices:
1. One model can be chosen a priori, and then that model is the only one used.
2. One model can be chosen a priori, but a recalibration technique is later applied to adapt the model to the project.
3. Several models can be employed, and the results combined into a weighted average.
4. Several models can be employed, and the one with the best goodness-of-fit is used.
It is important not to use too many models in approaches 3 and 4, because two models might fit the failure data well but disagree on how to extrapolate to the future. There is a Yiddish proverb: “The man who owns one watch always knows what time it is; the man who owns two watches is never quite sure”.
It is recommended that software reliability models be compared by the criteria discussed below. It is expected that such comparisons will cause some models to be rejected because they meet few of the criteria discussed here. On the other hand, there may or may not be a clear choice between the more acceptable models. The relative weight to be placed on the different criteria may depend on the context in which the model is being applied. When comparing two models, we should consider all criteria simultaneously. We should not eliminate models by one criterion before considering other criteria, except if predictive validity is grossly unsatisfactory. It is not expected that a model must satisfy all criteria to be useful.
The proposed criteria include predictive validity, capability, quality of assumptions, applicability, and simplicity. We will discuss each of the criteria in more detail in the following sections.

Predictive Validity
Predictive validity is the capability of the model to predict future failure behaviour from present and past failure behaviour (that is, data). This capability is significant only when failure behaviour is changing. Hence, it is usually considered for a test phase, but it can also be applied to the operational phase when repairs are being regularly made.
There are at least two general ways of viewing predictive validity. These are based on the two equivalent approaches to characterizing the failure random process, namely:
1. The number of failures approach and
2. The failure time approach
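To make the number-of-failures view concrete, the sketch below fits a mean value function to failure data observed up to a truncation time te and checks how well it predicts the cumulative failure count at the end of test, tq. The failure times, the exponential form h(t) = a(1 − e^(−bt)), and the grid-search fit are all illustrative assumptions of this sketch; the lesson does not prescribe a particular h(t) or fitting procedure.

```python
import math

# Hypothetical cumulative failure times in hours (illustrative data only).
failure_times = [2, 5, 9, 14, 20, 27, 35, 44, 54, 66, 80, 97]
t_q = 100.0             # end of the observation period
q = len(failure_times)  # failures actually observed by t_q

def fit_mean_value(times, t_e):
    """Fit h(t) = a * (1 - exp(-b*t)) to the failures observed up to t_e,
    using a simple least-squares grid search over b; given b, the scale a
    has a closed form when minimizing squared error at the failure times."""
    observed = [(t, i + 1) for i, t in enumerate(times) if t <= t_e]
    best = None
    for k in range(1, 400):
        b = k / 1000.0  # candidate decay rates 0.001 .. 0.399 per hr
        g = [1 - math.exp(-b * t) for t, _ in observed]
        a = (sum(gi * n for gi, (_, n) in zip(g, observed))
             / sum(gi * gi for gi in g))
        err = sum((a * gi - n) ** 2 for gi, (_, n) in zip(g, observed))
        if best is None or err < best[0]:
            best = (err, a, b)
    return best[1], best[2]

# Repeat the check for several truncation points t_e < t_q, as the
# procedure in this lesson suggests.
for t_e in (40, 60, 80):
    a, b = fit_mean_value(failure_times, t_e)
    predicted = a * (1 - math.exp(-b * t_q))  # estimated failures by t_q
    rel_error = (predicted - q) / q           # positive => overestimate
    print(f"t_e={t_e:5.1f}  predicted={predicted:5.1f}  "
          f"relative error={rel_error:+.3f}")
```

Plotting these relative errors against te/tq gives the visual predictive-validity check described in this lesson: points near 0 indicate a model whose early estimates already anticipate the final failure count.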

We may apply various detailed methods, some representing approximations, to assess predictive validity. It has not been determined at the present time whether one is superior.
The number of failures approach may yield a method that is more practical to use than the failure time approach. In the former approach, we describe the failure random process [M(t), t > 0], representing failures experienced by time t. Such a counting process is characterized by specifying the distribution of M(t), including the mean value function h(t).
Assume that we have observed q failures by the end of time tq. We use the failure data up to time te (te < tq) to estimate the parameters of h(t). Substituting the estimates of the parameters in the mean value function yields the estimate of the number of failures by time tq. The estimate is compared with the actually observed number q. This procedure is repeated for various values of te.
We can visually check the predictive validity by plotting the relative error against the normalized test time. The error will approach 0 as te approaches tq. If the points are positive (negative), the model tends to overestimate (underestimate). Numbers closer to 0 imply more accurate prediction and hence a better model.

Capability
Capability refers to the ability of the model to estimate with satisfactory accuracy quantities needed by software managers, engineers, and users in planning and managing software development projects or running operational software systems. We must gauge the degree of capability by looking at the relative importance of the quantities as well as their number. The quantities, in approximate order of importance, are:
1. Present reliability, mean time to failure (MTTF), or failure intensity,
2. Expected date of reaching a specified reliability, MTTF, or failure intensity objective, and
3. Human and computer resource and cost requirements related to the achievement of the objective.
Any capability of a model for prediction of software reliability in the system design and early development phases is extremely valuable because of the resultant value for system engineering and planning purposes. We must make these predictions through measurable characteristics of the software (size, complexity, structure, etc.), the software development environment, and the operational environment.

Quality of Assumptions
The following considerations of quality should be applied to each assumption in turn. If it is possible to test an assumption, the degree to which it is supported by data is an important consideration. This is especially true of assumptions that may be common to an entire group of models. If it is not possible to test an assumption, we should evaluate its plausibility from the viewpoint of logical consistency and software engineering experience. For example, does it relate rationally to other information about software and software development? Finally, we should judge the clarity and explicitness of an assumption. These characteristics are often necessary to determine whether a model can be applied to particular software system or project circumstances.

Applicability
Another important characteristic of a model is its applicability. We should judge a model on its degree of applicability across software products that vary in size, structure, and function. It is also desirable that it be usable in different development environments, different operational environments, and different life cycle phases. However, if a particular model gives outstanding results for just a narrow range of products or development environments, we should not necessarily eliminate it.
There are at least four special situations that are encountered commonly in practice. A model should either be capable of dealing with them directly or should be compatible with procedures that can deal with them. These are:
1. Program evolution,
2. Classification of the severity of failures into different categories,
3. Ability to handle incomplete failure data or data with measurement uncertainties (although not without loss of predictive validity), and
4. Operation of the same program on computers of different performance.
Finally, it is desirable that a model be robust with respect to departures from its assumptions, errors in the data or parameters it employs, and unusual conditions.

Simplicity
A model should be simple in three aspects. The most important consideration is that it must be simple and inexpensive to collect the data required to particularize the model. If this is not the case, we will not use the model. Second, the model should be simple in concept. Software engineers without extensive mathematical background should be able to understand the model and its assumptions. They can then determine when it is applicable and the extent to which the model may diverge from reality in an application. Parameters should have readily understood interpretations. This property makes it more feasible for software engineers to estimate the values of the parameters when data are not available. The number of parameters in the model is also an important consideration for simplicity. It should be pointed out that we need to compare the number of parameters on a common basis (for example, don’t include calendar time component parameters for one model and not another).
Finally, a model must be readily implementable as a program that is a practical management and engineering tool. This means that the program must run rapidly and inexpensively, with no manual intervention required (this does not rule out the possibility of intervention) other than the initial input.
On the basis of the above characteristics of a good software reliability model we select two models for discussion. Two models were chosen because each has certain advantages not possessed by the other. However, the effort required to learn the application of a model makes presenting more than two a

question of sharply diminishing returns. The models are the basic execution time model and the logarithmic Poisson execution time model. Both models have two components, named the execution time component and the calendar time component. Each component will be described with respect to both models.
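As a preview, the execution time components of the two models can be sketched numerically using the closed forms commonly associated with them: in the basic model the failure intensity declines linearly in the expected number of failures experienced (so total expected failures ν0 is finite), while in the logarithmic Poisson model it declines exponentially. The parameter values λ0, ν0, and θ below are illustrative assumptions, not values from this lesson.

```python
import math

# Illustrative parameters (assumed for this sketch):
lam0 = 10.0   # initial failure intensity, failures per CPU hr
nu0 = 100.0   # total expected failures (basic model)
theta = 0.02  # failure intensity decay parameter (logarithmic Poisson)

def mu_basic(tau):
    """Basic execution time model: expected failures by execution time tau."""
    return nu0 * (1 - math.exp(-lam0 * tau / nu0))

def lam_basic(tau):
    """Basic model failure intensity at execution time tau."""
    return lam0 * math.exp(-lam0 * tau / nu0)

def mu_log_poisson(tau):
    """Logarithmic Poisson model: expected failures by execution time tau."""
    return (1 / theta) * math.log(lam0 * theta * tau + 1)

def lam_log_poisson(tau):
    """Logarithmic Poisson failure intensity at execution time tau."""
    return lam0 / (lam0 * theta * tau + 1)

for tau in (0, 10, 50, 100):
    print(f"tau={tau:4d} hr  basic: mu={mu_basic(tau):6.1f} "
          f"lam={lam_basic(tau):5.2f}   log-Poisson: "
          f"mu={mu_log_poisson(tau):6.1f} lam={lam_log_poisson(tau):5.2f}")
```

Running this shows the characteristic difference: the basic model's expected failures level off at ν0, while the logarithmic Poisson model's keep growing (slowly) without bound as its intensity decays more and more gradually.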
We have restricted ourselves to considering well-developed
models that have been applied fairly broadly with real data and
have given reasonable results. The specific forms can be
determined from the general form by establishing the values of
the parameters of the model through either:
1. Estimation – statistical inference procedures are applied to
failure data taken for the program, or
2. Prediction – determination from properties of the software
product and the development process (this can be done
before any execution of the program).
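Whichever route is used to particularize a model, its outputs are typically reported as a reliability or a failure intensity, and the lesson's opening equivalence between the two can be made concrete. The sketch below assumes a constant failure intensity (the exponential reliability model); that constancy is this sketch's assumption, not something the lesson derives.

```python
import math

def reliability(lam, tau):
    """Probability of failure-free operation for tau hours, assuming a
    constant failure intensity of lam failures/hr (exponential model)."""
    return math.exp(-lam * tau)

def failure_intensity(r, tau):
    """Constant failure intensity implied by reliability r over tau hours."""
    return -math.log(r) / tau

# Intensity implied by the lesson's example of R = 0.95 over 10 hr,
# under the constant-intensity assumption:
lam = failure_intensity(0.95, 10)
print(f"implied failure intensity: {lam:.4f} failures/hr")
print(f"round-trip reliability:    {reliability(lam, 10):.2f}")  # prints 0.95
```

The two functions are exact inverses, which is why the round trip recovers 0.95; this is the sense in which reliability and failure intensity carry the same information in different forms.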
Notes

