Software Reliability Models
1. Introduction
The cost of testing a software system to achieve a required reliability level can
be as high as 60% of the overall development cost. As the testing and verification
process discovers more faults, the additional cost of exposing the remaining faults
generally rises very quickly. Even after a lengthy testing period, additional testing
can always potentially detect more faults. Thus there is a limit beyond which
continued testing is justified only if the further improvement it brings is cost
effective. Making software cost effective therefore requires careful planning of the
testing phase and accurate decision making, and this in turn requires the use of
software reliability models.
A software reliability model usually has the form of a random process that describes
the behavior of failures with respect to time.
A software reliability model specifies the general form of the dependence of the failure
process on the principal factors that affect it: fault introduction, fault removal, and the
operational environment. At a particular time it is possible to observe a history of the
failure rate (failures per unit time) of the software. Fault identification and removal
generally force the failure rate of a software system to decrease with time, as shown in
Fig. 1.
Software reliability modeling is done to estimate the form of the failure rate function by
statistically estimating the parameters associated with a selected mathematical model.
The purpose of the modeling is twofold: (1) to forecast the remaining time required to
achieve a specified objective, and (2) to forecast the expected reliability of the software
when the product is released. Project management can use these forecasts as inputs for
cost estimation, resource planning, and schedule validation.
The main factors affecting software reliability are fault introduction and fault removal.
1. Fault Introduction
This depends mainly on the characteristics of the developed code and of the development
process. Code size is the most important code characteristic. Important process
characteristics include the tools used during development and the experience of the
personnel.
2. Fault removal
The fault removal process depends on time, operational profile, and the quality of the
repair process.
Fig. 1 Expected software failure rate curve (failures per hour versus time, showing the
current failure rate, the objective failure rate, and the remaining test time).
2. Classification
A number of software reliability models have been proposed to handle the problem of
software reliability measurement. A popular approach, classifying models in terms of
five different attributes, is given here. This classification scheme defines the
relationships among the models.
1. Time domain: calendar time or execution (CPU or processor) time
2. Category: whether the number of failures that can be experienced in infinite time
is finite or infinite
3. Type: the distribution of the number of failures experienced by the time specified
4. Class (finite failures category only): functional form of the failure intensity in
terms of time
5. Family (infinite failures category only): Functional form of the failure intensity
in terms of the expected number of failures experienced
3. Characteristics of Good Software Reliability Model
In spite of much research effort, there is no universally applicable software reliability
growth model that can be trusted to give accurate predictions of reliability in all
circumstances. It is therefore necessary to select the model that gives better prediction
accuracy than the rest. This section states the characteristics of a good software
reliability model. A good model should:
• Give good predictions of future failure behavior
A model is reasonably accurate if the number of defects discovered after release is
within the 90% confidence limits of the model.
• Compute useful quantities
A model must be able to provide information that is useful to the decision making
process or it serves no purpose.
• Be simple enough for many to use
Not everyone is well versed in the statistical considerations of the models. The
models must allow people from a range of backgrounds to obtain useful, easy to
understand information.
• Be widely applicable
The value of the modeling effort is enhanced if the same tool can be used for
multiple releases or across many platforms. This can reduce confusion resulting
from the use of many different models simultaneously.
• Be based on sound assumptions
Each model makes assumptions about testing and defect repair. In choosing which model
to use, it is critical that the underlying assumptions be understood; they may or may
not be appropriate for every organization.
• Become and remain stable
If the model's predictions vary greatly from week to week, no one will believe the
results. Ideally, a model should be validated through calibration using historical data.
4. Recommended Models
4.1. Jelinski & Moranda (JM) Model
This is one of the earliest proposed models. It assumes that the elapsed time between
failures follows an exponential distribution.
Category: Finite failure; Class: Exponential; Type: Binomial
Nature of failure process: Time to failure
Assumptions
1. The failure rate remains constant over the intervals between fault occurrences.
2. Each failure is independent of others
3. Each fault has the same probability to cause a failure.
4. A detected fault is removed with certainty in negligible time, and no new faults are
introduced during the debugging process.
5. The fault detection rate is proportional to the number of residual faults.
6. During test, the software is operated in a manner similar to the expected
operational usage.
The hazard function during t_i, the time between the (i−1)st and ith failures, is given by

Z(t_i) = φ[N − (i−1)]

where φ is a proportionality constant and N is the number of defects in the software at
the start of testing.
Model Form
If (i−1) faults have been discovered by time t, there are [N − (i−1)] faults remaining in
the system. If we represent the time between the (i−1)st and the ith failures by the
random variable X_i, from assumption (2) we can see that X_i has an exponential
distribution, with density

f(X_i) = φ[N − (i−1)] exp(−φ[N − (i−1)] X_i)
Using assumption 2, the joint density of all the X_i is

L(X_1, X_2, …, X_n) = ∏_{i=1}^{n} f(X_i) = ∏_{i=1}^{n} φ[N − (i−1)] exp(−φ[N − (i−1)] X_i)

This is the likelihood function, which we can use to compute estimates of the parameters
φ and N. To make the computation easier, we take the natural logarithm of the likelihood
function to produce the log-likelihood function. We then take the partial derivative of
the log-likelihood with respect to each of the two parameters, giving two equations in
two unknowns. Setting these equations to zero and solving them gives the estimates N̂
and φ̂ of the model parameters N and φ:

φ̂ = n / (N̂ ∑_{i=1}^{n} X_i − ∑_{i=1}^{n} (i−1) X_i)

The value of N̂ is found numerically from the following equation, and is then
substituted into the previous equation to obtain φ̂:

∑_{i=1}^{n} 1/(N̂ − (i−1)) = n / (N̂ − [∑_{i=1}^{n} (i−1) X_i] / [∑_{i=1}^{n} X_i])

Fig. 2 A typical plot of the hazard function of the JM model, for N = 100 and φ = 0.01
(the hazard drops by φ = 0.01 at each failure, e.g. Z(t_3) = 0.01[100 − 2]).
The MTTF using the JM model is estimated as

MTTF = 1 / (φ̂ (N̂ − n))
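As a concrete illustration of how the pair of estimating equations might be solved in practice, the sketch below finds N̂ by bisection and then substitutes the root into the φ̂ equation. The inter-failure times are invented for the example; a finite root only exists when the data actually show reliability growth.

```python
import math

def jm_mle(x):
    """Jelinski-Moranda MLEs from inter-failure times x_1..x_n.

    Solves the N-hat equation by bisection, then substitutes into the
    phi-hat equation. Assumes reliability-growth data (later gaps longer),
    otherwise a finite root need not exist."""
    n = len(x)
    s0 = sum(x)                                          # sum of X_i
    s1 = sum((i - 1) * xi for i, xi in enumerate(x, 1))  # sum of (i-1)*X_i
    c = s1 / s0
    def g(N):
        # g(N) = sum 1/(N-(i-1)) - n/(N - c); its root is N_hat
        return sum(1.0 / (N - (i - 1)) for i in range(1, n + 1)) - n / (N - c)
    lo = max(n - 1.0, c) + 1e-9      # g -> +inf just above this bound
    hi = lo + 1.0
    while g(hi) > 0:                 # expand until the sign changes
        hi *= 2.0
    for _ in range(200):             # plain bisection
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    N_hat = 0.5 * (lo + hi)
    phi_hat = n / (N_hat * s0 - s1)
    return N_hat, phi_hat

# Invented inter-failure times showing reliability growth
x = [7, 11, 8, 10, 15, 22, 20, 25, 28, 35]
N_hat, phi_hat = jm_mle(x)
mttf_next = 1.0 / (phi_hat * (N_hat - len(x)))   # MTTF estimate from above
```

With these made-up data the estimated residual fault count N̂ − n is small, so the predicted MTTF for the next failure is already much longer than the observed gaps.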
4.2 Shooman Model
This model is essentially similar to the Jelinski-Moranda model. Its hazard rate
function can be expressed in the form

Z(t) = k [N/I − n_c(τ)]

where
t is the operating time of the system measured from its initial activation,
I is the total number of instructions in the program,
τ is the debugging time since the start of integration,
n_c(τ) is the total number of faults corrected during τ, normalized with respect to I,
k is a proportionality constant.
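Since the hazard expression is a simple linear function of the normalized corrected-fault count, it can be evaluated directly. All the numbers below (k, N, I, and the corrected-fault count) are hypothetical, chosen only to show how the hazard falls as faults are corrected.

```python
def shooman_hazard(k, N, I, n_c_tau):
    """Shooman hazard rate Z(t) = k * (N/I - n_c(tau)).

    N/I is the initial fault density and n_c_tau the number of faults
    corrected by debugging time tau, normalized by instruction count I."""
    return k * (N / I - n_c_tau)

# Hypothetical project: 50 faults in 10,000 instructions; k is arbitrary
z_start = shooman_hazard(k=200.0, N=50, I=10000, n_c_tau=0.0)
z_later = shooman_hazard(k=200.0, N=50, I=10000, n_c_tau=30 / 10000)  # 30 fixed
```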
4.3 Musa’s Basic Execution Time Model
This model was one of the first to use the actual execution time of the software
component on a computer for the modeling process. The times between failures are
expressed in terms of computational processing (CPU) time rather than elapsed
wall-clock time.
Category: Finite failure; Class: Exponential; Type: Poisson
Nature of failure process: Time to failure
Assumptions
1. The execution times between failures are piecewise exponentially distributed.
2. The cumulative number of failures follows a Poisson process.
3. The mean value function u(t) = β₀[1 − exp(−β₁t)] (where β₀, β₁ > 0) is such that
the expected number of failure occurrences for any time period is proportional to the
expected number of undetected faults at that time.
4. The quantities of the resources that are available are constant over a segment for
which the software is observed.
5. Resource expenditure for the kth resource is ∆χ_k ≈ θ_k ∆t + μ_k ∆m, where ∆t is
the increment of execution time, ∆m is the increment of failures experienced, and θ_k
and μ_k are resource expenditure coefficients (per unit execution time and per failure,
respectively).
6. Fault-identification personnel can be fully utilized and computer utilization is
constant.
7. Fault-correction personnel utilization is established by the limitation of fault
queue length for any fault-correction person. The fault queue is determined by assuming
that fault correction is a Poisson process and that servers are randomly assigned in
time.
Assumptions 4 through 7 are needed only if the second component of the basic execution
model, linking execution time and calendar time, is desired.
Model form
The mean value function is

u(t) = β₀(1 − exp(−β₁t))

and the failure intensity function is

λ(t) = u′(t) = β₀β₁ exp(−β₁t)
Suppose n failures of the software system are observed at times t₁, t₂, …, t_n, and an
additional time x (x ≥ 0) has elapsed since the last failure time t_n without failure.
Using the model assumptions, the likelihood function for this class is obtained as

L(β₀, β₁) = β₀ⁿ β₁ⁿ exp(−β₁ ∑_{i=1}^{n} t_i) exp(−β₀[1 − exp(−β₁(t_n + x))])
The MLEs of β₀ and β₁ are obtained as the solutions of the following pair of equations:

n/β̂₀ = 1 − exp(−β̂₁(t_n + x))

n/β̂₁ = ∑_{i=1}^{n} t_i + n(t_n + x) / [exp(β̂₁(t_n + x)) − 1]

Once the estimates of β₀ and β₁ are obtained, we can use the invariance property of the
MLEs to estimate other reliability measures.
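The second equation involves only β₁, so it can be solved numerically and substituted into the first. The sketch below uses bisection on invented CPU-time failure data; a positive root exists only when n(t_n + x)/2 exceeds the sum of the failure times, i.e. when the data show growth.

```python
import math

def musa_mle(t, x):
    """Musa basic execution time model MLEs.

    t -- cumulative failure times t_1..t_n (CPU time); x -- additional
    failure-free time after t_n. Solves the beta1 equation by bisection,
    then substitutes into the beta0 equation. A root exists when
    n*(t_n + x)/2 > sum(t_i), i.e. when the data show growth."""
    n = len(t)
    T = t[-1] + x
    s = sum(t)
    def h(b1):
        z = b1 * T
        tail = n * T / math.expm1(z) if z < 700.0 else 0.0  # avoid overflow
        return n / b1 - s - tail
    lo, hi = 1e-12, 1.0
    while h(hi) > 0:
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if h(mid) > 0:
            lo = mid
        else:
            hi = mid
    b1 = 0.5 * (lo + hi)
    b0 = n / (1.0 - math.exp(-b1 * T))  # from n/beta0 = 1 - exp(-beta1*(t_n+x))
    return b0, b1

# Invented CPU-time failure data, 30 failure-free units after the last failure
b0_hat, b1_hat = musa_mle([10, 30, 65, 110, 170, 250, 370], 30)
```

Note that β̂₀ always exceeds n, consistent with its interpretation as the expected number of failures in infinite time.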
4.4. Goel-Okumoto Model
The Goel-Okumoto model takes the number of faults per unit of time as independent
Poisson random variables. The model has formed the basis for models using the observed
number of faults per unit time.
Category: Finite failure; Class: Exponential; Type: Poisson
Nature of failure process: Fault counts
Assumptions
1. The number of faults detected in each of the respective intervals is independent of
each other.
2. The cumulative number of failures follows a Poisson process with mean value
function u(t).
3. The mean value function is such that the expected number of fault occurrences for
any interval (t, t+∆t) is proportional to the expected number of undetected faults at
time t. It is also assumed to be a bounded, nondecreasing function of time with
lim_{t→∞} u(t) = N < ∞.
Model form
The mean value function is

u(t) = N(1 − e^{−bt})

for some constants b > 0 and N > 0, where N is the expected total number of faults to be
eventually detected. Since the failure intensity function is the derivative of u(t), we
have

λ(t) = N b e^{−bt}
Model estimates
The maximum likelihood estimates (MLEs) of N and b can be obtained as the solutions of
the following pair of equations:

N̂ = ∑_{i=1}^{n} f_i / (1 − e^{−b̂ t_n})

∑_{i=1}^{n} f_i (t_i e^{−b̂ t_i} − t_{i−1} e^{−b̂ t_{i−1}}) / (e^{−b̂ t_{i−1}} − e^{−b̂ t_i})
  = (t_n e^{−b̂ t_n} / (1 − e^{−b̂ t_n})) ∑_{i=1}^{n} f_i

where f_i is the number of faults detected in the ith interval (t_{i−1}, t_i], with
t₀ = 0. The second equation is solved for b̂ by numerical methods, and the solution is
then substituted into the first equation to find N̂.
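A minimal numerical sketch of that procedure follows, using invented weekly fault counts. Because the b-equation is not guaranteed to be monotone, the code brackets a sign change on a log-spaced grid before bisecting.

```python
import math

def go_mle(t, f):
    """Goel-Okumoto grouped-data MLEs.

    t -- interval end times t_1..t_n (with t_0 = 0); f -- fault counts per
    interval. Brackets a sign change of the b-equation on a log grid, then
    bisects; N_hat follows from the first equation."""
    F = sum(f)
    tn = t[-1]
    def g(b):
        prev, lhs = 0.0, 0.0
        for ti, fi in zip(t, f):
            num = ti * math.exp(-b * ti) - prev * math.exp(-b * prev)
            den = math.exp(-b * prev) - math.exp(-b * ti)
            lhs += fi * num / den
            prev = ti
        return lhs - F * tn * math.exp(-b * tn) / (1.0 - math.exp(-b * tn))
    grid = [10 ** (k / 10.0) for k in range(-50, 11)]   # 1e-5 .. 10
    lo = hi = None
    for a, c in zip(grid, grid[1:]):
        if g(a) * g(c) < 0:
            lo, hi = a, c
            break
    if lo is None:
        raise ValueError("no sign change: data may not fit the model")
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    b_hat = 0.5 * (lo + hi)
    N_hat = F / (1.0 - math.exp(-b_hat * tn))
    return N_hat, b_hat

# Invented weekly fault counts that decay over eight weeks
N_hat, b_hat = go_mle([1, 2, 3, 4, 5, 6, 7, 8], [12, 9, 7, 5, 4, 3, 2, 1])
```

By construction N̂ always exceeds the total observed count, since 1 − e^{−b̂ t_n} < 1.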
4.5. Schneidewind’s Model
This model assumes that the current fault rate may be a better predictor of future
behavior than rates observed in the distant past: the failure rate process may be
changing over time, so recent data may better model present reliability. The model is
presented in three forms, depending on which data points are used.
Category: Finite failure; Class: Exponential; Type: Poisson
Nature of failure process: Fault counts
Assumptions
1. Only "new" failures are counted (i.e., failures that are repeated as a consequence
of not correcting a fault are not counted).
2. The fault correction rate is proportional to the number of faults to be corrected.
3. The number of failures detected in one interval is independent of the failure count
in another.
4. The mean number of detected failures decreases from one interval to the next.
5. The intervals are all the same length, where the length can be chosen at the
convenience of the user (in practice, the length can be varied).
6. The rate of failure detection is proportional to the number of faults in the program
at the time of test.
7. The failure detection process is a nonhomogeneous Poisson process with an
exponentially decreasing failure detection rate.
Model form
From the assumptions, the cumulative mean number of faults by the ith time period is

D_i = u(t_i) = (α/β)[1 − exp(−βi)]

Thus the expected number of faults in the ith period is

m_i = D_i − D_{i−1} = u(t_i) − u(t_{i−1}) = (α/β)[exp(−β(i−1)) − exp(−βi)]
Using again the assumption that the f_i are independent nonhomogeneous Poisson random
variables, and incorporating the concept of the different model types, the joint
density is

[exp(−M_{s−1}) M_{s−1}^{F_{s−1}} / F_{s−1}!] ∏_{i=s}^{n} [exp(−m_i) m_i^{f_i} / f_i!]

where s is some integer value chosen in the range 1 to n, M_{s−1} is the cumulative mean
number of faults in the intervals up to s−1, and F_{s−1} is the cumulative number of
faults detected up through interval s−1.
Model 1 estimates
Model 1 uses all the fault counts from the n periods; all data points are of equal
importance. The MLE estimates are the solutions of

α̂ = β̂ F_n / (1 − exp(−β̂ n))

1/(exp(β̂) − 1) − n/(exp(β̂ n) − 1) = (1/F_n) ∑_{k=0}^{n−1} k f_{k+1}

where F_n = ∑_{i=1}^{n} f_i and the f_i are the fault counts in intervals 1 to n.
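The β̂ equation has a single unknown and can be solved by bisection before computing α̂, as in the sketch below. The interval fault counts are invented; a positive root requires the counts to decrease over the intervals, as the model assumes.

```python
import math

def schneidewind_m1(f):
    """Schneidewind Model 1 MLEs from fault counts f_1..f_n.

    Solves 1/(e^b - 1) - n/(e^(n*b) - 1) = (1/F_n) * sum(k * f_(k+1))
    for beta by bisection; assumes decreasing fault counts so that a
    positive root exists."""
    n = len(f)
    Fn = sum(f)
    w = sum(k * fk for k, fk in enumerate(f)) / Fn
    def u(b):
        return 1.0 / math.expm1(b) - n / math.expm1(n * b) - w
    lo, hi = 1e-9, 1.0
    while u(hi) > 0:
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if u(mid) > 0:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    alpha = beta * Fn / (1.0 - math.exp(-beta * n))
    return alpha, beta

# Invented fault counts for eight equal-length test intervals
alpha_hat, beta_hat = schneidewind_m1([13, 9, 8, 6, 4, 3, 2, 1])
total_faults = alpha_hat / beta_hat   # expected faults in infinite time
```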
Model 2 estimates
Model 2 ignores the fault counts from the first s−1 time periods completely, i.e. only
the data from period s through n are used. This reflects the belief that the early time
periods contribute little, if anything, to predicting future behavior. The MLE estimates
are the solutions of

α̂ = β̂ F_{s,n} / (1 − exp(−β̂ (n − s + 1)))

1/(exp(β̂) − 1) − (n − s + 1)/(exp(β̂ (n − s + 1)) − 1) = (1/F_{s,n}) ∑_{k=0}^{n−s} k f_{s+k}

where F_{s,n} = ∑_{k=s}^{n} f_k. Notice that if we let s = 1, the Model 2 estimates
become equivalent to those of Model 1.
Model 3 estimates
Model 3 uses the cumulative fault count from intervals 1 to s−1 as the first data point
and the individual fault counts for periods s through n as the additional data points.
This is an approach intermediate between the other two, reflecting the belief that the
combined count from the first s−1 periods is indicative of the failure rate process
during the later stages. The MLE estimates are the solutions of

α̂ = β̂ F_n / (1 − exp(−β̂ n))

(s−1) F_{s−1} / (exp(β̂ (s−1)) − 1) + F_{s,n} / (exp(β̂) − 1) − n F_n / (exp(β̂ n) − 1)
  = ∑_{k=0}^{n−s} k f_{s+k}

where F_{s−1} = ∑_{k=1}^{s−1} f_k. If s = 1 is substituted into the above equations, we
obtain the Model 1 estimates.
4.6. Hyperexponential Model
The basic idea is that different sections (or classes) of the software experience an
exponential failure rate, but the rates vary across these sections to reflect their
different natures. This could be due to different programming groups working on
different parts, old versus new code, sections written in different languages, and so
on. The sum of these different exponential growth curves is then represented not by
another exponential but by a hyperexponential growth curve. If, in observing a software
system, you notice that different clusters of the software appear to behave differently
in their failure rates, the hyperexponential model may be more appropriate than the
classical exponential model, which assumes a single common failure rate.
Category: Finite failure; Class: Extension to Exponential; Type: Poisson
Nature of failure process: Failure Counts
Assumptions
The basic assumptions are as follows. Suppose there are K sections (classes) of the
software such that, within each class:
1. The rate of fault detection is proportional to the current fault content within that
section of the software.
2. The fault detection rate remains constant over the intervals between fault
occurrences.
3. A fault is corrected instantaneously without introducing new faults into the
software.
And for the software system as a whole:
4. The cumulative number of failures by time t, M(t), follows a Poisson process with
mean value function u(t) = N ∑_{i=1}^{K} p_i [1 − exp(−β_i t)], where ∑_{i=1}^{K} p_i = 1.
Model form
The failure intensity function is the derivative of u(t); we therefore have

λ(t) = ∑_{i=1}^{K} N p_i β_i exp(−β_i t)

The failure intensity function is strictly decreasing for t > 0. By letting
N_i* = N p_i, that is, N_i* is the number of faults in the ith class, one can obtain the
MLE estimates for each class.
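The intensity function is just a weighted sum of exponentials, which is easy to evaluate directly. The two classes below (an old, stable section and a newer, fault-prone one) and all their parameters are hypothetical.

```python
import math

def hyperexp_intensity(t, N, classes):
    """Hyperexponential failure intensity:
    lambda(t) = sum over classes of N * p_i * beta_i * exp(-beta_i * t).
    classes -- list of (p_i, beta_i) pairs with the p_i summing to 1."""
    return sum(N * p * b * math.exp(-b * t) for p, b in classes)

# Two hypothetical classes: stable old code and fault-prone new code
classes = [(0.7, 0.02), (0.3, 0.10)]
lam0 = hyperexp_intensity(0.0, 100, classes)    # N * sum(p_i * beta_i)
lam50 = hyperexp_intensity(50.0, 100, classes)  # smaller: intensity decreases
```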
4.7. Schick-Wolverton Model
One of the most widely used models for hardware reliability modeling is the Weibull
distribution. It can accommodate increasing, decreasing, or constant failure rates
because of the great flexibility expressed through its parameters. The Schick-Wolverton
model is an important example of this type. In this section, the model is first
developed for the standard Weibull distribution and then specialized to
Schick-Wolverton.
Category: Finite failure; Class: Weibull; Type: Binomial
Nature of failure process: Fault counts
Assumptions
In addition to the standard assumptions of the Jelinski-Moranda model, the basic
assumptions are:
1. At the start of software testing, the software contains a fixed number of faults, N.
2. The time to failure of fault a, denoted T_a, follows a Weibull distribution with
parameters α and β.
3. The numbers of faults detected in each of the respective intervals are independent
for any finite collection of times.
Model form
The failure intensity and mean value functions are given by

λ(t) = N f_a(t) = N α β t^{α−1} exp(−β t^α)

u(t) = N F_a(t) = N (1 − exp(−β t^α))
The total number of faults in the system at the start is lim_{t→∞} u(t) = N. From the
assumptions, if α = 1 the distribution f_a becomes the exponential, and if α = 2 we
have the Rayleigh distribution, another important failure model in hardware reliability
theory; with α = 2 the Weibull model becomes the Schick-Wolverton model. If 0 < α < 1,
the per-fault hazard rate decreases with time; if α = 1 (exponential) it is constant;
and if α > 1, it increases. The conditional hazard rate can be shown to be
z(t′ | t_{i−1}) = [N − (i−1)] α β (t′ + t_{i−1})^{α−1},  for t_{i−1} ≤ t′ + t_{i−1} < t_i
− − 1 1
The reliability function is obtained from the cumulative distribution function as

R(t) = 1 − F(t) = exp(−β t^α)

MTTF = ∫₀^∞ R(t) dt = Γ(1 + 1/α) / β^{1/α}

where Γ(·) is the gamma function.
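The MTTF formula can be checked numerically with the standard library's gamma function; the β value below is invented. For α = 1 the formula collapses to the exponential result MTTF = 1/β, which makes a convenient sanity check.

```python
import math

def weibull_mttf(alpha, beta):
    """MTTF = Gamma(1 + 1/alpha) / beta**(1/alpha) for R(t) = exp(-beta * t**alpha)."""
    return math.gamma(1.0 + 1.0 / alpha) / beta ** (1.0 / alpha)

# alpha = 1 reduces to the exponential case, where MTTF = 1/beta
mttf_exponential = weibull_mttf(alpha=1.0, beta=0.05)   # 20.0
# alpha = 2 is the Rayleigh case underlying Schick-Wolverton
mttf_rayleigh = weibull_mttf(alpha=2.0, beta=0.05)
```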
4.8. S-shaped Reliability Growth Model
The S-shaped reliability model assumes that the curve formed by the mean value function
u(t) is S-shaped, rather than following the exponential growth of the Goel-Okumoto
model. The model allows for reliability decay before reliability growth. This reflects
the learning curve at the beginning, as test team members become familiar with the
software, followed by growth and then a leveling off as the residual faults become more
difficult to uncover.
Category: Finite failure; Class: Gamma; Type: Poisson
Nature of failure process: Fault counts
Assumptions
1. The failure occurrences are independent and random.
2. The initial fault content is a random variable.
3. The cumulative number of failures by time t, M(t), follows a Poisson process with
mean value function u(t). The mean value function is of the form
u(t) = α[1 − (1 + βt)e^{−βt}] for α, β > 0.
4. The time between the (i−1)st and the ith failures depends on the time to failure of
the (i−1)st.
5. When a failure occurs, the fault which caused it is immediately removed, and no
other faults are introduced.
6. The software is operated in a similar operational profile as the anticipated usage.
Model form
The testing period is divided into intervals that are independent of each other.
Suppose f_i is the number of faults occurring in the interval of length
l_i = t_i* − t_{i−1}*. From the assumptions, each f_i is an independent Poisson random
variable with mean

u(t_i*) − u(t_{i−1}*) = α[(1 + β t_{i−1}*) e^{−β t_{i−1}*} − (1 + β t_i*) e^{−β t_i*}]

Also, from the mean value function u(t) = α[1 − (1 + βt)e^{−βt}], the failure intensity
function is

λ(t) = u′(t) = α β² t e^{−βt}

The model gets its S-shaped form from this mean value function.
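The rise-then-fall of λ(t), which produces the S shape in u(t), can be observed directly by evaluating the two functions; the intensity peaks at t = 1/β. The parameter values below are invented for illustration.

```python
import math

def s_mean(t, a, b):
    """S-shaped mean value function u(t) = a * (1 - (1 + b*t) * exp(-b*t))."""
    return a * (1.0 - (1.0 + b * t) * math.exp(-b * t))

def s_intensity(t, a, b):
    """Failure intensity lambda(t) = a * b**2 * t * exp(-b*t); it peaks at t = 1/b."""
    return a * b * b * t * math.exp(-b * t)

# Invented parameters: a = 120 eventual faults, b = 0.25 per week
a, b = 120.0, 0.25
rates = [s_intensity(t, a, b) for t in (1, 4, 16)]  # rises, peaks near t = 1/b = 4, falls
```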
The joint density of the fault counts over the given partition is

∏_{i=1}^{n} exp(−[u(t_i*) − u(t_{i−1}*)]) [u(t_i*) − u(t_{i−1}*)]^{f_i} / f_i!

using the assumptions from the previous section. The MLEs of α and β are the solutions
of the following pair of equations:

α̂ = ∑_{i=1}^{n} f_i / [1 − (1 + β̂ t_n*) e^{−β̂ t_n*}]

α̂ (t_n*)² e^{−β̂ t_n*}
  = ∑_{i=1}^{n} f_i [(t_i*)² e^{−β̂ t_i*} − (t_{i−1}*)² e^{−β̂ t_{i−1}*}]
    / [(1 + β̂ t_{i−1}*) e^{−β̂ t_{i−1}*} − (1 + β̂ t_i*) e^{−β̂ t_i*}]
4.9. Duane’s Model
In this model, the times of failures are considered. The number of occurrences per unit
of time is assumed to follow a nonhomogeneous Poisson process. This model is sometimes
referred to as the power model, since the mean value function for the cumulative number
of failures by time t is taken as a power of t, that is, u(t) = αt^β for some β > 0 and
α > 0. (For the case β = 1, we have the homogeneous Poisson process model.) This is an
infinite failures model, since lim_{t→∞} u(t) = ∞.
Category: Infinite failure; Family: Power; Type: Poisson
Nature of failure process: Fault Counts
Assumption
1. The software is operated in a similar operational profile as anticipated usage
2. The failure occurrences are independent.
3. The cumulative number of failures by time t, M(t), follows a Poisson process with
mean value function u(t) = αt^β for some β > 0 and α > 0.
Model form
This model represents a Poisson process with mean value function

u(t) = αt^β
If T is the total time the software is observed, then

u(T)/T = αT^β/T = (expected number of failures by time T) / (total testing time)

If u(T)/T is plotted on log-log paper, a straight line of the form Y = a + bX is
obtained, with a = ln(α), b = β − 1, and X = ln(T).
The failure intensity function is obtained by taking the derivative of the mean value
function:

λ(t) = du(t)/dt = αβt^{β−1}

For β > 1, the failure intensity function is strictly increasing; for β = 1, the failure
intensity remains constant (homogeneous Poisson process); for 0 < β < 1, the failure
intensity is strictly decreasing. Thus for β ≥ 1 there can be no reliability growth.
The maximum likelihood estimates for the Duane model are

β̂ = n / ∑_{i=1}^{n−1} ln(t_n / t_i)    and    α̂ = n / t_n^{β̂}

where the t_i are the observed failure times, in either CPU time or wall-clock time,
and n is the number of failures observed to date.
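Unlike most of the preceding models, these estimates are in closed form and need no iteration. The failure times below are invented; their widening gaps yield β̂ < 1, i.e. reliability growth.

```python
import math

def duane_mle(t):
    """Closed-form Duane (power model) MLEs from failure times t_1..t_n:
    beta_hat = n / sum(ln(t_n / t_i), i = 1..n-1); alpha_hat = n / t_n**beta_hat."""
    n = len(t)
    tn = t[-1]
    beta = n / sum(math.log(tn / ti) for ti in t[:-1])
    alpha = n / tn ** beta
    return alpha, beta

# Invented failure times (hours); widening gaps give beta_hat < 1 (growth)
times = [12, 35, 81, 154, 260, 415, 632, 900]
alpha_hat, beta_hat = duane_mle(times)
```

A quick consistency check: the fitted mean value function satisfies u(t_n) = α̂ t_n^β̂ = n exactly, by construction of α̂.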
4.10. Geometric Model
The time between failures is taken to follow an exponential distribution whose mean
decreases in a geometric fashion. The discovery of earlier faults is taken to have a
larger impact on reducing the hazard rate than that of later ones: as failures occur,
the hazard rate decreases in a geometric progression.
Category: Infinite failure; Family: Geometric
Assumptions
In addition to the standard assumptions of the Jelinski-Moranda model, the basic
assumptions are:
1. There are an infinite number of total faults in the system.
2. The fault detection rate forms a geometric progression and is constant between
fault detections.
3. The time between fault detection follows an exponential distribution.
Model form
The density of the time between the (i−1)st and ith failures is exponential, of the form

f(X_i) = Dφ^{i−1} exp(−Dφ^{i−1} X_i) = z(t_{i−1}) exp(−z(t_{i−1}) X_i)

Thus the expected time between failures is

E(X_i) = 1/z(t_{i−1}) = 1/(Dφ^{i−1}),  for i = 1, …, n

The mean value and failure intensity functions are

u(t) = (1/β) ln([Dβ exp(β)] t + 1)

λ(t) = D exp(β) / ([Dβ exp(β)] t + 1)

where β = −ln(φ) for 0 < φ < 1.
From the assumptions, the joint density function for the X_i is

∏_{i=1}^{n} f(X_i) = Dⁿ (∏_{i=1}^{n} φ^{i−1}) exp(−D ∑_{i=1}^{n} φ^{i−1} X_i)

Taking the natural log of this function and taking the partials with respect to φ and
D, the maximum likelihood estimates are the solutions of the following pair of
equations:

D̂ = n / ∑_{i=1}^{n} φ̂^{i−1} X_i

∑_{i=1}^{n} i φ̂^{i−1} X_i / ∑_{i=1}^{n} φ̂^{i−1} X_i = (n + 1)/2
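The φ̂ equation says that the φ^{i−1}X_i-weighted mean of the failure indices must equal (n+1)/2; since that weighted mean increases with φ, bisection on (0, 1) finds the root. The inter-failure times below are invented growth data.

```python
def geometric_mle(x):
    """Geometric model MLEs: bisection on phi in (0, 1).

    x -- times between failures; for growth data (increasing x_i) the
    weighted-mean-index equation has a unique root in (0, 1)."""
    n = len(x)
    def r(phi):
        w = [phi ** (i - 1) * xi for i, xi in enumerate(x, 1)]
        return sum(i * wi for i, wi in enumerate(w, 1)) / sum(w) - (n + 1) / 2.0
    lo, hi = 1e-6, 1.0 - 1e-6    # r(lo) < 0 < r(hi) for growth data
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if r(mid) < 0:
            lo = mid
        else:
            hi = mid
    phi = 0.5 * (lo + hi)
    D = n / sum(phi ** (i - 1) * xi for i, xi in enumerate(x, 1))
    return D, phi

# Invented inter-failure times that lengthen as faults are removed
D_hat, phi_hat = geometric_mle([5, 8, 12, 20, 30, 45, 70, 100])
```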
4.11. Musa-Okumoto Logarithmic Poisson Model
The exponential rate of decrease of the failure intensity reflects the view that
earlier discovered failures have a greater impact on reducing the failure intensity
function than those encountered later. The model is called logarithmic because the
expected number of failures over time is a logarithmic function.
Category: Infinite failure; Family: Geometric; Type: Poisson
Nature of failure process: Time to failure
Assumptions
In addition to the standard assumptions of the Jelinski-Moranda model, the basic
assumptions are:
1. The failure intensity decreases exponentially with the expected number of failures
experienced, that is, λ(t) = λ₀ exp(−θu(t)), where u(t) is the mean value function,
θ > 0 is the failure rate decay parameter, and λ₀ > 0 is the initial failure rate.
2. The cumulative number of failures by time t, M(t), follows a Poisson process.
From these assumptions, the failure intensity can be written as

λ(t) = λ₀ / (λ₀θt + 1)
A second expression of the logarithmic Poisson model, which aids in obtaining the
maximum likelihood estimates, comes from a reparameterization of the model. Let
β₀ = θ^{−1} and β₁ = λ₀θ. The intensity and mean value functions then become

λ(t) = β₀β₁ / (β₁t + 1)    and    u(t) = β₀ ln(β₁t + 1)
Using the reparameterized model, the maximum likelihood estimates of β₀ and β₁ are the
solutions of

β̂₀ = n / ln(1 + β̂₁ t_n)

(1/β̂₁) ∑_{i=1}^{n} 1/(1 + β̂₁ t_i) = n t_n / [(1 + β̂₁ t_n) ln(1 + β̂₁ t_n)]
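As with the earlier models, the β₁ equation can be solved by bisection and β₀ then follows in closed form. The failure times below are invented, with roughly geometrically growing gaps of the kind the logarithmic model expects.

```python
import math

def musa_okumoto_mle(t):
    """Musa-Okumoto MLEs for (beta0, beta1) from failure times t_1..t_n.

    Bisects the beta1 equation
    (1/b1) * sum(1/(1 + b1*t_i)) = n*t_n / ((1 + b1*t_n) * ln(1 + b1*t_n));
    a positive root exists when n*t_n/2 > sum(t_i) (reliability growth)."""
    n = len(t)
    tn = t[-1]
    def g(b1):
        lhs = sum(1.0 / (1.0 + b1 * ti) for ti in t)
        rhs = n * b1 * tn / ((1.0 + b1 * tn) * math.log1p(b1 * tn))
        return lhs - rhs              # > 0 below the root, < 0 above it
    lo, hi = 1e-9, 1.0
    while g(hi) > 0:
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    b1 = 0.5 * (lo + hi)
    b0 = n / math.log1p(b1 * tn)      # from beta0_hat = n / ln(1 + beta1*t_n)
    return b0, b1

# Invented failure times whose gaps grow roughly geometrically
b0_hat, b1_hat = musa_okumoto_mle([5, 20, 50, 120, 300, 700, 1500])
```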
4.12. Littlewood-Verrall Reliability Growth Model
The Littlewood-Verrall model is the best-known example of the Bayesian type of model.
It attempts to account for faults introduced by the fault correction process itself;
because of this, the software program can become less reliable than before. Each fault
correction generates a new version of the program, obtained from its predecessor by
attempting to fix a fault. Because of the uncertainty involved, the new version may be
better or worse than its predecessor, so another source of variation is introduced.
This is reflected in the parameters that define the failure time distributions, which
are taken to be random. The distribution of failure times is, as in the earlier models,
assumed to be exponential with a certain failure rate, but that rate is assumed to be
random rather than constant. The distribution of this rate, as reflected by the prior,
is assumed to be a gamma distribution.
Category: Infinite failure; Bayesian Model
Nature of failure process: Time to failure
Assumptions
1. The times between successive failures, X_i, are assumed to be independent
exponential random variables with parameters ξ_i, i = 1, …, n.
2. The ξ_i form a sequence of independent random variables, each with a gamma
distribution with parameters α and ψ(i). The function ψ(i) is taken to be an increasing
function of i that describes the quality of the programmer and the difficulty of the
task; a good programmer has a more rapidly increasing function than a poorer one.
3. The software is operated in a manner similar to the anticipated operational usage.
Model form
The prior distribution for the ξ_i is of the form

g(ξ_i | ψ(i), α) = [ψ(i)]^α ξ_i^{α−1} exp(−ψ(i)ξ_i) / Γ(α),  ξ_i > 0
The marginal distribution of the x_i can be shown to be

f(x_i | α, ψ(i)) = α[ψ(i)]^α / [x_i + ψ(i)]^{α+1},  for x_i > 0
The joint density is

f(x₁, x₂, …, x_n) = ∏_{i=1}^{n} α[ψ(i)]^α / [x_i + ψ(i)]^{α+1},  for x_i > 0, i = 1, …, n
The posterior distribution for the ξ_i is therefore obtained as

h(ξ₁, ξ₂, …, ξ_n) = exp(−∑_{i=1}^{n} ξ_i (x_i + ψ(i))) ∏_{i=1}^{n} ξ_i^α [x_i + ψ(i)]^{α+1} / Γ(α+1),
for ξ_i > 0, i = 1, …, n
The failure intensity functions for the linear and quadratic forms of ψ(i) can be shown
to be

λ_linear(t) = (α − 1) / [β₀² + 2β₁(α − 1)t]^{1/2}

and

λ_quadratic(t) = 1 / {ν₂ [(t/2 + ((t/2)² + v₂)^{1/2})^{1/3} − (((t/2)² + v₂)^{1/2} − t/2)^{1/3}]}

where v₁ = [(α − 1)/(18β₁)]^{1/3} and v₂ = 4β₀³ / (9β₁²(α − 1)).
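For the linear form the intensity is straightforward to evaluate; the sketch below assumes the linear-ψ intensity given above, with invented parameter values, and simply confirms that the rate decays from its initial value (α − 1)/β₀.

```python
import math

def lv_linear_intensity(t, alpha, beta0, beta1):
    """Littlewood-Verrall intensity for the linear form psi(i) = beta0 + beta1*i:
    lambda(t) = (alpha - 1) / sqrt(beta0**2 + 2*beta1*(alpha - 1)*t)."""
    return (alpha - 1.0) / math.sqrt(beta0 ** 2 + 2.0 * beta1 * (alpha - 1.0) * t)

# Invented parameters: alpha = 3, beta0 = 10, beta1 = 2
lam_start = lv_linear_intensity(0.0, 3.0, 10.0, 2.0)    # (alpha-1)/beta0 = 0.2
lam_later = lv_linear_intensity(100.0, 3.0, 10.0, 2.0)  # smaller: rate decays
```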
Using the marginal distribution function for the x_i, the maximum likelihood estimates
of α, β₀, and β₁ can be found as the solutions of the following system of equations:

n/α̂ + ∑_{i=1}^{n} ln(ψ̂(i)) − ∑_{i=1}^{n} ln(x_i + ψ̂(i)) = 0

α̂ ∑_{i=1}^{n} 1/ψ̂(i) − (α̂ + 1) ∑_{i=1}^{n} 1/(x_i + ψ̂(i)) = 0

α̂ ∑_{i=1}^{n} i′/ψ̂(i) − (α̂ + 1) ∑_{i=1}^{n} i′/(x_i + ψ̂(i)) = 0

where ψ(i) = β₀ + β₁i′, and i′ is either i (linear form) or i² (quadratic form). A
uniform prior is used for α.