The activities of software testing and reliability are integrated for the purpose of demonstrating how the two activities interact in achieving testing efficiency and the reliability resulting from these tests. Integration means modeling the execution of a variety of tests on a directed graph representation of an example program. A complexity metric is used to construct the nodes, edges, and paths of the example program. Models are developed to represent the efficiency and achieved reliability of black box and white box tests. Evaluations are made of path, independent path, node, program construct, and random tests to ascertain which, if any, is superior with respect to efficiency and reliability. Overall, path testing has the edge in test efficiency. The results depend on the nature of the directed graph in relation to the type of test. Although there is no dominant method, in most cases the tests that provide detailed coverage are better. For example, path testing discovers more faults than independent path testing. Predictions are made of the reliability and fault correction that result from implementing various test strategies. It is believed that these methods can be used by researchers and practitioners to evaluate the efficiency and reliability of other programs. Copyright © 2008 John Wiley & Sons, Ltd.
Software is a complex intellectual product. Inevitably, some errors are made during requirements formulation as well as during designing, coding, and testing the product. State-of-the-practice software development processes to achieve high-quality software include measures that are intended to discover and correct faults resulting from these errors, including reviews, audits, screening by language-dependent tools, and several levels of tests. Managing these errors involves describing,
One approach to achieving high-quality software is to investigate the relationship between testing and reliability. Thus, the problem that this research addresses is the comprehensive integration of testing and reliability methodologies. Although other researchers have addressed bits and pieces of the relationship between testing and reliability, it is believed that this is the first research to integrate testing efficiency, the reliability resulting from tests, modeling the execution of tests with directed graphs, using complexity metrics to represent the graphs, and evaluations of path, independent path, node, random node, white box, and black box tests.
One of the reasons for advocating the integration of testing with reliability is that, as recommended by Hamlet, the risk of using software can be assessed based on reliability information. He states that the primary goal of testing should be to measure the reliability of tested software. Therefore, it is undesirable to consider testing and reliability prediction as disjoint activities.
When integrating testing and reliability, it is important to know when there has been enough testing to achieve reliability goals. Thus, determining when to stop a test is an important management decision. Several stopping criteria have been proposed, including the probability that the software has a desired reliability and the expected cost of remaining faults. The probabilities associated with path and node testing in a directed graph are used to estimate how closely the desired reliability of 1.0 can be approached. To address the cost issue, the cost of remaining faults is estimated explicitly in monetary units and implicitly by the number of remaining faults compared with the total number of faults in the directed graph of a program.
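The stopping-criterion idea above can be sketched in code. This is a minimal illustration rather than the paper's model: the function names, the 0.95 reliability goal, and the simple fraction-of-faults-removed reliability estimate are all assumptions introduced here.

```python
# Sketch of a reliability-based stopping criterion (illustrative only;
# the reliability model below is an assumption, not the paper's formulation).

def estimated_reliability(faults_found: int, total_faults: int) -> float:
    """Estimate reliability as the fraction of known faults removed."""
    if total_faults == 0:
        return 1.0
    return faults_found / total_faults

def stop_testing(faults_found: int, total_faults: int,
                 reliability_goal: float = 0.95) -> bool:
    """Stop when the estimated reliability reaches the stated goal."""
    return estimated_reliability(faults_found, total_faults) >= reliability_goal

print(stop_testing(18, 20))  # 18/20 = 0.90 < 0.95 -> False
print(stop_testing(19, 20))  # 19/20 = 0.95      -> True
```

In practice the reliability estimate would come from the probabilities associated with path and node coverage, but the same threshold test applies.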
In addition, fault discovery and removal is used as a heuristic metric of when testing is 'complete'. At each stage of testing, reliability is estimated to note the efficiency of various testing methods: path, independent path, random path, node, and random node.
A pessimistic but realistic view of testing is offered by Beizer. An interesting analogy parallels the difficulty in software testing with pesticides, known as the Pesticide Paradox: every method that is used to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual. This problem is compounded because the Complexity Barrier principle states that software complexity and the presence of bugs grow to the limit of the ability to manage complexity and bug presence. By eliminating the previously easily detected bugs, another escalation of features and complexity has arisen. But this time there are subtler bugs to face, just to retain the previous reliability. Society seems to be unwilling to limit complexity because many users want extra features. Thus, users usually push the software to the complexity barrier. How close to approach that barrier is largely determined by the strength of the techniques that can be wielded against ever more complex and subtle bugs. Even in developing the relatively simple example program this paradox was found to be true: as early detected bugs (i.e. faults) were easily removed and complexity and features were increased, a residue of subtle bugs remained and was compounded by major bugs attributed to the increased complexity. Perhaps as the fields of testing and reliability continue to mature, they will learn how to model these effects.
A further complication involves the dynamic nature of programs. If a failure occurs during preliminary testing and the code is changed, the software may now work for a test case that did not work previously. But the code's behavior on preliminary testing can no longer be guaranteed. To account for this possibility, testing should be restarted. The expense of doing this is often prohibitive. It would be possible to model this effect, but at the cost of unmanageable model complexity engendered by restarting the testing. It appears that this effect would best be modeled by simulation.
The analysis starts with the notations that are used in the integrated testing and reliability approach to achieving high-quality software. Refer to these notations when reading the equations and analyses.
nf: number of faults in a program;
en: number of edges at node n;
ne: number of edges in a program (generated by random process in random path testing);
nnj: number of nodes in path j;
nn: number of nodes in a program;
nj: number of paths in a program.
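The graph quantities defined above can be illustrated with a minimal sketch. The four-node graph and its adjacency list below are hypothetical, not the paper's example program; they serve only to show how nn, ne, nj, and nnj are counted on a directed graph.

```python
# Illustrative directed graph of a small program (hypothetical structure).
graph = {          # adjacency list: node -> successor nodes (edges)
    1: [2, 3],     # node 1 branches to nodes 2 and 3
    2: [4],
    3: [4],
    4: [],         # terminal node
}

nn = len(graph)                           # nn: number of nodes in the program
ne = sum(len(s) for s in graph.values())  # ne: number of edges in the program

def paths(node, path=()):
    """Enumerate all paths from `node` to terminal nodes."""
    path = path + (node,)
    if not graph[node]:
        return [path]
    return [p for nxt in graph[node] for p in paths(nxt, path)]

all_paths = paths(1)
nj = len(all_paths)                # nj: number of paths in the program
nnj = [len(p) for p in all_paths]  # nnj: number of nodes in path j

print(nn, ne, nj, nnj)  # 4 4 2 [3, 3]
```

For this graph there are two paths, 1-2-4 and 1-3-4, each containing three nodes; path, independent path, and node tests can all be defined in terms of these counts.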