SOFTWARE TESTING, VERIFICATION AND RELIABILITY
Softw. Test. Verif. Reliab. 2009; 19:175–198
Published online 15 July 2008 in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/stvr.395

Integrating testing with reliability

Norman Schneidewind*,†,‡,§
Professor Emeritus of Information Sciences, Naval Postgraduate School, U.S. Senate, U.S.A.
SUMMARY

The activities of software testing and reliability are integrated for the purpose of demonstrating how the two activities interact in achieving testing efficiency and the reliability resulting from these tests. Integrating means modeling the execution of a variety of tests on a directed graph representation of an example program. A complexity metric is used to construct the nodes, edges, and paths of the example program. Models are developed to represent the efficiency and achieved reliability of black box and white box tests. Evaluations are made of path, independent path, node, program construct, and random tests to ascertain which, if any, is superior with respect to efficiency and reliability. Overall, path testing has the edge in test efficiency. The results depend on the nature of the directed graph in relation to the type of test. Although there is no dominant method, in most cases the tests that provide detailed coverage are better. For example, path testing discovers more faults than independent path testing. Predictions are made of the reliability and fault correction that results from implementing various test strategies. It is believed that these methods can be used by researchers and practitioners to evaluate the efficiency and reliability of other programs. Copyright © 2008 John Wiley & Sons, Ltd.

Received 22 August 2007; Revised 20 March 2008; Accepted 2 April 2008
KEY WORDS: test efficiency; software reliability; modeling efficiency and reliability
1. INTRODUCTION

Software is a complex intellectual product. Inevitably, some errors are made during requirements formulation as well as during designing, coding, and testing the product. State-of-the-practice software development processes to achieve high-quality software include measures that are intended to discover and correct faults resulting from these errors, including reviews, audits, screening by language-dependent tools, and several levels of tests. Managing these errors involves describing, classifying, and modeling the effects of the remaining faults in the delivered product and thereby helping to reduce their number and criticality [1].

*Correspondence to: Norman Schneidewind, Professor Emeritus of Information Sciences, Naval Postgraduate School, U.S. Senate, U.S.A.
†E-mail: ieeelife@yahoo.com
‡Fellow of the IEEE.
§IEEE Congressional Fellow, 2005.

One approach to achieving high-quality software is to investigate the relationship between testing and reliability. Thus, the problem that this research addresses is the comprehensive integration of testing and reliability methodologies. Although other researchers have addressed bits and pieces of the relationship between testing and reliability, it is believed that this is the first research to integrate testing efficiency, the reliability resulting from tests, modeling the execution of tests with directed graphs, using complexity metrics to represent the graphs, and evaluations of path, independent path, node, random node, white box, and black box tests.

One of the reasons for advocating the integration of testing with reliability is that, as recommended by Hamlet [2], the risk of using software can be assessed based on reliability information. He states that the primary goal of testing should be to measure the reliability of tested software. Therefore, it is undesirable to consider testing and reliability prediction as disjoint activities.

When integrating testing and reliability, it is important to know when there has been enough testing to achieve reliability goals. Thus, determining when to stop a test is an important management decision. Several stopping criteria have been proposed, including the probability that the software has a desired reliability and the expected cost of remaining faults [3]. The probabilities associated with path and node testing in a directed graph are used to estimate the closeness to the desired reliability of 1.0 that can be achieved. To address the cost issue, the cost of remaining faults is estimated explicitly in monetary units and implicitly by the number of remaining faults compared with the total number of faults in the directed graph of a program.
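As a rough illustration of these two stopping criteria, the following Python sketch computes the implicit reliability estimate and the explicit cost of remaining faults. The fault counts and the per-fault cost are hypothetical values chosen for illustration, not values from the paper.

    def reliability_estimate(remaining_faults, total_faults):
        # Implicit criterion: closeness to the desired reliability of 1.0,
        # approximated by the fraction of faults already discovered and removed.
        return 1.0 - remaining_faults / total_faults

    def remaining_fault_cost(remaining_faults, cost_per_fault):
        # Explicit criterion: estimated cost of the faults still in the
        # program, in monetary units.
        return remaining_faults * cost_per_fault

    total_faults = 20       # hypothetical total faults in the directed graph
    remaining_faults = 3    # hypothetical faults not yet encountered by testing
    print(reliability_estimate(remaining_faults, total_faults))  # 0.85
    print(remaining_fault_cost(remaining_faults, 450.0))         # 1350.0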

Given that it cannot be shown that there are no more errors in the program, use heuristic arguments based on the thoroughness and sophistication of the testing effort and trends in the resulting discovery of faults to argue the plausibility of the lower risk of remaining faults [4]. The progress in fault discovery and removal is used as a heuristic metric for judging when testing is ‘complete’. At each stage of testing, reliability is estimated to note the efficiency of various testing methods: path, independent path, random path, node, and random node.
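A minimal sketch of this stopping heuristic, assuming hypothetical per-stage fault counts and a cut-off chosen purely for illustration:

    faults_per_stage = [9, 6, 4, 2, 1, 0]   # hypothetical faults found at each stage

    def testing_complete(counts, threshold=1, window=2):
        # Heuristically declare testing 'complete' when each of the last
        # `window` stages discovers no more than `threshold` faults,
        # i.e. the fault-discovery trend has flattened out.
        recent = counts[-window:]
        return len(recent) == window and all(c <= threshold for c in recent)

    print(testing_complete(faults_per_stage))  # True: discovery has tailed off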

1.1. Challenges to efficient testing

A pessimistic but realistic view of testing is offered by Beizer [5]. An interesting analogy parallels the difficulty in software testing with pesticides, known as the Pesticide Paradox: every method that is used to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual. This problem is compounded because the Complexity Barrier principle states [5] that software complexity and the presence of bugs grow to the limit of the ability to manage complexity and bug presence. By eliminating the previously easily detected bugs, another escalation of features and complexity has arisen. But this time there are subtler bugs to face, just to retain the previous reliability. Society seems to be unwilling to limit complexity because many users want extra features. Thus, users usually push the software to the complexity barrier. How close one can approach that barrier is largely determined by the strength of the techniques that can be wielded against ever more complex and subtle bugs. Even in developing the relatively simple example program this paradox was found to be true: as early detected bugs (i.e. faults) were easily removed and complexity and features were increased, a residue of subtle bugs remained and was compounded by major bugs attributed to increased complexity. Perhaps as the fields of testing and reliability continue to mature, the fields will learn how to model these effects.


A further complication involves the dynamic nature of programs. If a failure occurs during preliminary testing and the code is changed, the software may now work for a test case that did not work previously. But the code's behavior on preliminary testing can no longer be guaranteed. To account for this possibility, testing should be restarted. The expense of doing this is often prohibitive [6]. It would be possible to model this effect, but at the cost of unmanageable model complexity engendered by restarting the testing; it appears that this effect would best be modeled by simulation.

The analysis starts with the notations that are used in the integrated testing and reliability approach to achieving high-quality software. Refer to these notations when reading the equations and analyses.

1.2. Notations and definitions
edge: arc emanating from a node;
node: connection point of edges;
i: identification of an edge;
n: identification of a node;
c: identification of a program construct;
k: test number;
empirical: reliability metrics based on historical fault data.
1.2.1. Independent variables (i.e. not computed; generated by random process)
f(n): fault count in node n;
n_f: number of faults in a program;
e_n: number of edges at node n;
n_e: number of edges in a program (generated by random process in random path testing);
n(c, k): number of faults encountered and removed by testing construct c on test k.
1.2.2. Dependent variables (i.e. computed or obtained by inspection)
1.2.2.1. Number of program elements.

nn_j: number of nodes in path j;
n_n: number of nodes in a program;
n_j: number of paths in a program.

1.2.2.2. Probabilities.
p(j): probability of traversing path j;
p(n): probability of traversing node n.
1.2.2.3. Expected values.
E(n): expected number of faults at node n during testing;
E(j): expected number of faults on path j during testing;
E_p: expected number of faults encountered in a program based on path testing.
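
To make the notation concrete, the following Python sketch instantiates it on a small directed graph. The graph, the fault counts f(n), and the assumption that each outgoing edge at a node is equally likely to be traversed are illustrative choices, not the paper's example program or its probability model.

    edges = {1: [2, 3], 2: [4], 3: [4], 4: []}   # node -> successor nodes (hypothetical graph)
    f = {1: 0, 2: 2, 3: 1, 4: 1}                 # f(n): fault count in node n (hypothetical)

    def enumerate_paths(node, prob=1.0, path=()):
        # Walk the graph from `node`, splitting probability equally across
        # outgoing edges (an assumption); yield each complete path j with p(j).
        path = path + (node,)
        if not edges[node]:            # exit node reached: one complete path
            yield path, prob
            return
        for succ in edges[node]:
            yield from enumerate_paths(succ, prob / len(edges[node]), path)

    E_p = 0.0
    for j, (path, p_j) in enumerate(enumerate_paths(1), start=1):
        E_j = p_j * sum(f[n] for n in path)   # E(j): expected faults on path j
        E_p += E_j                            # E_p: expected faults under path testing
        print(f"path {j}: nodes={path}, nn_j={len(path)}, p(j)={p_j}, E(j)={E_j}")
    print("E_p =", E_p)   # 2.5 for this hypothetical graph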
