Software testing is the process of executing a program or system with the intent of finding errors [Myers79], or, more broadly, any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results [Hetzel88]. Software is not unlike other physical processes where inputs are received and outputs are produced. Where software differs is in the manner in which it fails. Most physical systems fail in a fixed (and reasonably small) set of ways. By contrast, software can fail in many bizarre ways. Detecting all of the different failure modes of software is generally infeasible. [Rstcorp]

Unlike most physical systems, most of the defects in software are design errors, not manufacturing defects. Software does not suffer from corrosion or wear-and-tear; generally it will not change until upgrades or obsolescence. So once the software is shipped, the design defects -- or bugs -- will be buried in and remain latent until activation. Software bugs will almost always exist in any software module of moderate size: not because programmers are careless or irresponsible, but because the complexity of software is generally intractable, and humans have only a limited ability to manage complexity. It is also true that for any complex system, design defects can never be completely ruled out. Discovering the design defects in software is equally difficult, for the same reason of complexity.

Because software and digital systems in general are not continuous, testing boundary values alone is not sufficient to guarantee correctness. All the possible values would need to be tested and verified, but complete testing is infeasible. Exhaustively testing a simple program that adds just two 32-bit integer inputs (yielding 2^64 distinct test cases) would take hundreds of millions of years, even if tests were performed at a rate of thousands per second. Obviously, for a realistic software module, the complexity can be far beyond this example. If inputs from the real world are involved, the problem gets worse, because timing, unpredictable environmental effects, and human interactions are all possible input parameters under consideration.

A further complication has to do with the dynamic nature of programs. If a failure occurs during preliminary testing and the code is changed, the software may now work for a test case that it didn't work for previously. But its behavior on pre-error test cases that it passed before can no longer be guaranteed. To account for this possibility, testing should be restarted. The expense of doing this is often prohibitive. [Rstcorp]

An interesting analogy parallels the difficulty in software testing with pesticides, known as the Pesticide Paradox [Beizer90]: every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual. But this alone will not guarantee to make the software better, because the Complexity Barrier [Beizer90] principle states: software complexity (and therefore that of bugs) grows to the limits of our ability to manage that complexity. By eliminating the (previous) easy bugs you allow another escalation of features and complexity, but this time you have subtler bugs to face, just to retain the reliability you had before. Society seems unwilling to limit complexity because we all want that extra bell, whistle, and feature interaction. Thus, our users always push us to the complexity barrier.
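The arithmetic behind the exhaustive-testing estimate above is easy to check. A minimal back-of-the-envelope sketch in Python, with the test rate (one thousand tests per second) taken as an assumption from the text:

```python
# Rough check of the exhaustive-testing estimate for a 2-input 32-bit adder.
cases = 2 ** 64                         # every pair of 32-bit operands
rate = 1_000                            # assumed tests per second
seconds = cases / rate
years = seconds / (60 * 60 * 24 * 365)
print(f"{cases:.2e} cases at {rate}/s -> about {years:.1e} years")
# prints roughly 5.8e8 years, i.e. hundreds of millions of years
```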

How close we can approach that barrier is largely determined by the strength of the techniques we can wield against ever more complex and subtle bugs. [Beizer90]

Regardless of these limitations, testing is an integral part of software development. It is broadly deployed in every phase of the software development cycle. Typically, more than 50 percent of development time is spent on testing. Testing is usually performed for the following purposes:

To improve quality.

As computers and software are used in critical applications, the outcome of a bug can be severe. Bugs can cause huge losses. Bugs in critical systems have caused airplane crashes, allowed space shuttle missions to go awry, halted trading on the stock market, and worse. Bugs can kill. Bugs can cause disasters. The so-called year 2000 (Y2K) bug has given birth to a cottage industry of consultants and programming tools dedicated to making sure the modern world doesn't come to a screeching halt on the first day of the next century. [Bugs] In a computerized embedded world, the quality and reliability of software is a matter of life and death.

Quality means conformance to the specified design requirements. Being correct, the minimum requirement of quality, means performing as required under specified circumstances. Debugging, a narrow view of software testing, is performed heavily by the programmer to find design defects. The imperfection of human nature makes it almost impossible to make a moderately complex program correct the first time. Finding the problems and getting them fixed [Kaner93] is the purpose of debugging in the programming phase.

For Verification & Validation (V&V)

As the topic Verification and Validation indicates, another important purpose of testing is verification and validation (V&V). Testing can serve as a metric and is heavily used as a tool in the V&V process. Testers can make claims based on interpretations of the testing results: either the product works under certain situations, or it does not. We can also compare the quality of different products built to the same specification, based on results from the same test.

We cannot test quality directly, but we can test related factors to make quality visible. Quality has three sets of factors: functionality, engineering, and adaptability. These three sets of factors can be thought of as dimensions in the software quality space. Each dimension may be broken down into its component factors and considerations at successively lower levels of detail. Table 1 illustrates some of the most frequently cited quality considerations.

Functionality (exterior quality):  Correctness, Reliability, Usability, Integrity
Engineering (interior quality):    Efficiency, Testability, Documentation, Structure
Adaptability (future quality):     Flexibility, Reusability, Maintainability

Table 1. Typical software quality factors [Hetzel88]

Good testing provides measures for all relevant factors. The importance of any particular factor varies from application to application. Any system where human lives are at stake must place extreme emphasis on reliability and integrity. In the typical business system, usability and maintainability are the key factors, while for a one-time scientific program neither may be significant. Our testing, to be fully effective, must be geared to measuring each relevant factor, thus forcing quality to become tangible and visible. [Hetzel88]

Tests with the purpose of validating that the product works are named clean tests, or positive tests. The drawback is that they can only validate that the software works for the specified test cases; a finite number of tests cannot validate that the software works for all situations. On the contrary, a single failed test is sufficient to show that the software does not work. Dirty tests, or negative tests, refer to tests aiming at breaking the software, or showing that it does not work. A piece of software must have sufficient exception-handling capabilities to survive a significant level of dirty tests.

A testable design is a design that can be easily validated, falsified and maintained. Because testing is a rigorous effort and requires significant time and cost, design for testability is also an important design rule for software development.
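To illustrate the clean/dirty distinction, here is a minimal sketch using Python's unittest module; the parse_age function and its range rule are hypothetical stand-ins for a real unit under test:

```python
import unittest

def parse_age(text):
    """Hypothetical unit under test: parse a human age from text."""
    value = int(text)                  # raises ValueError on garbage input
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

class AgeTests(unittest.TestCase):
    def test_clean_valid_age(self):
        # Clean (positive) test: the specified behaviour works.
        self.assertEqual(parse_age("42"), 42)

    def test_dirty_garbage_input(self):
        # Dirty (negative) test: the unit must reject nonsense cleanly.
        with self.assertRaises(ValueError):
            parse_age("forty-two")

    def test_dirty_out_of_range(self):
        with self.assertRaises(ValueError):
            parse_age("-5")

if __name__ == "__main__":
    unittest.main()
```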

For reliability estimation [Kaner93] [Lyu95]

Software reliability has important relations with many aspects of software, including the structure and the amount of testing the software has been subjected to. Based on an operational profile (an estimate of the relative frequency of use of various inputs to the program [Lyu95]), testing can serve as a statistical sampling method to gain failure data for reliability estimation.

Software testing is not mature. It still remains an art, because we still cannot make it a science. We are still using the same testing techniques invented 20-30 years ago, some of which are crafted methods or heuristics rather than good engineering methods. Software testing can be costly, but not testing software is even more expensive, especially in places where human lives are at stake. Solving the software-testing problem is no easier than solving the Turing halting problem. We can never be sure that a piece of software is correct. We can never be sure that the specifications are correct. No verification system can verify every correct program. We can never be certain that a verification system is correct either.
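As a rough illustration of the operational-profile sampling described above, the sketch below draws test inputs according to an assumed usage profile and tallies failures. The input classes, their frequencies, and the failing stub are all invented for the example:

```python
import random

# Assumed operational profile: relative frequency of each input class.
profile = {"query": 0.70, "update": 0.25, "admin": 0.05}

def run_once(input_class):
    """Stub for executing the software on one sampled input.
    Returns True if the run fails (failure odds are invented here)."""
    odds = {"query": 0.0005, "update": 0.002, "admin": 0.01}
    return random.random() < odds[input_class]

classes, weights = zip(*profile.items())
n = 200_000
failures = sum(run_once(random.choices(classes, weights=weights)[0]) for _ in range(n))
print(f"observed failure rate under the profile: {failures / n:.5f} per run")
```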

Key Concepts

There is a plethora of testing methods and testing techniques, serving multiple purposes in different life-cycle phases. Classified by purpose, software testing can be divided into correctness testing, performance testing, reliability testing and security testing. Classified by life-cycle phase, software testing can be classified into requirements phase testing, design phase testing, program phase testing, evaluating test results, installation phase testing, acceptance testing and maintenance testing. By scope, software testing can be categorized as follows: unit testing, component testing, integration testing, and system testing.

Correctness testing

Correctness is the minimum requirement of software, the essential purpose of testing. Correctness testing needs some type of oracle to tell the right behavior from the wrong one. The tester may or may not know the inside details of the software module under test, e.g. control flow, data flow, etc. Therefore, either a white-box or a black-box point of view can be taken in testing software. We must note that the black-box and white-box ideas are not limited to correctness testing only.

Black-box testing

The black-box approach is a testing method in which test data are derived from the specified functional requirements without regard to the final program structure. [Perry90] It is also termed data-driven, input/output driven [Myers79], or requirements-based [Hetzel88] testing. Because only the functionality of the software module is of concern, black-box testing also mainly refers to functional testing -- a testing method emphasizing the execution of functions and the examination of their input and output data. [Howden87] The tester treats the software under test as a black box: only the inputs, outputs and specification are visible, and the functionality is determined by observing the outputs for corresponding inputs. In testing, various inputs are exercised and the outputs are compared against the specification to validate the correctness. All test cases are derived from the specification; no implementation details of the code are considered.

It is obvious that the more of the input space we cover, the more problems we will find, and therefore the more confident we will be about the quality of the software. Ideally, we would be tempted to exhaustively test the input space. But as stated above, exhaustively testing the combinations of valid inputs is impossible for most programs, let alone considering invalid inputs, timing, sequence, and resource variables. Combinatorial explosion is the major roadblock in functional testing. To make things worse, we can never be sure whether the specification is correct or complete. Due to limitations of the language used in the specifications (usually natural language), ambiguity is often inevitable. Even if we use some type of formal or restricted language, we may still fail to write down all the possible cases in the specification. Sometimes the specification itself becomes an intractable problem: it is not possible to specify precisely every situation that can be encountered using limited words. And people can seldom specify clearly what they want -- they usually can tell whether a prototype is, or is not, what they want only after it has been finished. Specification problems contribute approximately 30 percent of all bugs in software. [Beizer95]

The research in black-box testing mainly focuses on how to maximize the effectiveness of testing with minimum cost, usually measured by the number of test cases. It is not possible to exhaust the input space, but it is possible to exhaustively test a subset of it.
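For instance, a tiny function over 8-bit operands has only 65,536 input pairs, and that subset can be tested exhaustively against its specification. The saturating-add function below is a made-up example:

```python
def sat_add8(a, b):
    """Hypothetical unit under test: 8-bit saturating addition."""
    return min(a + b, 255)

# Exhaustive black-box test over the whole (small) input space.
for a in range(256):
    for b in range(256):
        out = sat_add8(a, b)
        assert 0 <= out <= 255, (a, b, out)           # spec: result stays in range
        assert out == min(a + b, 255), (a, b, out)    # spec: saturates at 255
print("all 65,536 input pairs verified")
```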

Partitioning is one of the common techniques: if we partition the input space and assume all the input values in a partition are equivalent, then we only need to test one representative value in each partition to sufficiently cover the whole input space. Domain testing [Beizer95] partitions the input domain into regions and considers the input values in each domain an equivalence class. Domains can be exhaustively tested and covered by selecting one or more representative values in each. Boundary values are of special interest: experience shows that test cases that explore boundary conditions have a higher payoff than test cases that do not. Boundary value analysis [Myers79] requires one or more boundary values to be selected as representative test cases. The difficulty with domain testing is that incorrect domain definitions in the specification cannot be efficiently discovered. Good partitioning requires knowledge of the software structure, so a good testing plan will contain not only black-box testing but also white-box approaches, and combinations of the two.

White-box testing

Contrary to black-box testing, in white-box testing the software is viewed as a white box, or glass box, as the structure and flow of the software under test are visible to the tester. Testing plans are made according to the details of the software implementation, such as programming language, logic, and style, and test cases are derived from the program structure. White-box testing is also called glass-box testing, logic-driven testing [Myers79] or design-based testing [Hetzel88].

There are many techniques available in white-box testing, because the problem of intractability is eased by specific knowledge of and attention to the structure of the software under test. The intention of exhausting some aspect of the software is still strong in white-box testing, and some degree of exhaustion can be achieved, such as executing each line of code at least once (statement coverage), traversing every branch statement (branch coverage), or covering all the possible combinations of true and false condition predicates (multiple condition coverage). [Parrington89]

Control-flow testing, loop testing, and data-flow testing all map the corresponding flow structure of the software into a directed graph. Test cases are carefully selected based on the criterion that all the nodes or paths are covered or traversed at least once. By doing so we may discover unnecessary "dead" code -- code that is of no use, or never gets executed at all -- which cannot be discovered by functional testing.

In mutation testing, the original program code is perturbed and many mutated programs are created, each containing one fault. Each faulty version of the program is called a mutant. Test data are selected based on their effectiveness in failing the mutants: the more mutants a test case can kill, the better the test case is considered. The problem with mutation testing is that it is too computationally expensive to use.

The boundary between the black-box approach and the white-box approach is not clear-cut. Many of the testing strategies mentioned above may not be safely classified as black-box testing or white-box testing; the same is true for transaction-flow testing, syntax testing, finite-state testing, and many other testing strategies not discussed in this text. One reason is that all the above techniques need some knowledge of the specification of the software under test. Another reason is that the idea of a specification is itself broad -- it may contain any requirement, including the structure, programming language, and programming style, as part of the specification content.
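Returning to mutation testing, here is a toy sketch of the idea with invented mutants of a multiply function. Real mutation tools generate and compile mutants automatically, which is where the computational expense comes from:

```python
import operator

def suite_passes(mul):
    """A small test suite for a multiplication function."""
    cases = [((2, 3), 6), ((0, 9), 0), ((1, 7), 7)]
    return all(mul(a, b) == expected for (a, b), expected in cases)

assert suite_passes(operator.mul)      # the original program passes its suite

# Each mutant perturbs the original with a single seeded fault.
mutants = {"plus": operator.add, "minus": operator.sub, "maximum": max}
killed = [name for name, mutant in mutants.items() if not suite_passes(mutant)]
print(f"suite killed {len(killed)} of {len(mutants)} mutants: {killed}")
```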

We may be reluctant to consider random testing a testing technique, since the test case selection is simple and straightforward: test cases are randomly chosen. But a study in [Duran84] indicates that random testing is more cost-effective for many programs. Some very subtle errors can be discovered at low cost, and random testing is also not inferior in coverage to other carefully designed testing techniques. One can also obtain a reliability estimate using random testing results based on operational profiles. Effectively combining random testing with other testing techniques may yield more powerful and cost-effective testing strategies.

Reliability testing

Software reliability refers to the probability of failure-free operation of a system. It is related to many aspects of software, including the testing process. Directly estimating software reliability by quantifying its related factors can be difficult. Testing is an effective sampling method for measuring the reliability of the software under test. Guided by the operational profile, software testing (usually black-box testing) can be used to obtain failure data, and an estimation model can then be used to analyze the data, estimate the present reliability and predict future reliability. Based on the estimation, the developers can decide whether to release the software, and the users can decide whether to adopt and use it. The risk of using the software can also be assessed from reliability information. [Hamlet94] advocates that the primary goal of testing should be to measure the dependability of tested software. There is agreement on the intuitive meaning of dependable software: it does not fail in unexpected or catastrophic ways.

Performance testing

Not all software systems have explicit specifications on performance, but every system has implicit performance requirements: the software should not take infinite time or infinite resources to execute. "Performance bugs" sometimes refers to design problems in software that cause the system performance to degrade. Performance has always been a great concern and a driving force of computer evolution. Performance evaluation of a software system usually includes resource usage, throughput, stimulus-response time, and queue lengths detailing the average or maximum number of tasks waiting to be serviced by selected resources. Typical resources that need to be considered include network bandwidth requirements, CPU cycles, disk space, disk access operations, and memory usage [Smith90]. The goals of performance testing include performance bottleneck identification, performance comparison and evaluation, etc. The typical method of doing performance testing is using a benchmark -- a program, workload or trace designed to be representative of the typical system usage. [Vokolos98]
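A minimal benchmark-style sketch of this kind of measurement: drive a representative workload for a fixed interval and report throughput. The workload function here is a placeholder for whatever operation the benchmark is meant to represent:

```python
import time

def workload():
    """Placeholder for one representative operation (e.g. one transaction)."""
    return sum(i * i for i in range(1_000))

interval = 2.0                                   # seconds to measure
deadline = time.perf_counter() + interval
calls = 0
while time.perf_counter() < deadline:
    workload()
    calls += 1
print(f"throughput: {calls / interval:.0f} operations/second")
```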

Robustness testing and stress testing are variances of reliability testing based on this simple criterion: the software should not fail in unexpected or catastrophic ways. The robustness of a software component is the degree to which it can function correctly in the presence of exceptional inputs or stressful environmental conditions. [IEEE90] Robustness testing differs from correctness testing in the sense that the functional correctness of the software is not of concern; it only watches for robustness problems such as machine crashes, process hangs or abnormal termination. The oracle is relatively simple, therefore robustness testing can be made more portable and scalable than correctness testing. This research has drawn more and more interest recently, most of it using commercial operating systems as targets, such as the work in [Koopman97] [Kropp98] [Ghosh98] [Devale99] [Koopman99].

Stress testing, or load testing, is often used to test the whole system rather than the software alone. In such tests the software or system is exercised with or beyond the specified limits. Typical stress includes resource exhaustion, bursts of activities, and sustained high loads.

Security testing

Software quality, reliability and security are tightly coupled. Flaws in software can be exploited by intruders to open security holes. With the development of the Internet, software security problems are becoming even more severe. Many critical software applications and services have integrated security measures against malicious attacks. The purpose of security testing of these systems includes identifying and removing software flaws that may potentially lead to security violations, and validating the effectiveness of security measures. Simulated security attacks can be performed to find vulnerabilities.

Testing automation

Software testing can be very costly, and automation is a good way to cut down time and cost. Software testing tools and techniques usually suffer from a lack of generic applicability and scalability. The reason is straightforward: in order to automate the process, we have to have some way to generate oracles from the specification, and to generate test cases to test the target software against the oracles to decide their correctness. Today we still don't have a full-scale system that has achieved this goal. In general, significant human intervention is still needed in testing, and the degree of automation remains at the automated test script level. The problem is lessened in reliability testing and performance testing: in robustness testing, the simple specification and oracle -- doesn't crash, doesn't hang -- suffices, and similar simple metrics can also be used in stress testing.
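To make the crash/hang oracle concrete, here is a minimal robustness-testing sketch in the spirit of the harnesses cited above (not their actual interface): exceptional inputs are thrown at a component, clean rejections are tolerated, and anything else is recorded as a robustness failure.

```python
# Hypothetical component under test; a real harness would target an OS or library API.
def component(value):
    return 1 / float(value)

exceptional_inputs = [None, "", "0", "x" * 10**6, -1, 2**128, float("nan"), []]

failures = []
for value in exceptional_inputs:
    try:
        component(value)                   # returning normally is fine
    except (ValueError, TypeError):
        pass                               # a clean, documented rejection is fine
    except Exception as exc:               # anything else fails the simple oracle
        failures.append((repr(value), exc))
print(f"{len(failures)} robustness failure(s): {failures}")
# "0" slips past input conversion and triggers an unhandled ZeroDivisionError.
```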

When to stop testing?

Testing is potentially endless. We cannot test until all the defects are unearthed and removed -- that is simply impossible. At some point, we have to stop testing and ship the software. The question is when. Realistically, testing is a trade-off between budget, time and quality. It is driven by profit models. The pessimistic, and unfortunately most often used, approach is to stop testing whenever some or any of the allocated resources -- time, budget, or test cases -- are exhausted. The optimistic stopping rule is to stop testing when either reliability meets the requirement, or the benefit from continued testing cannot justify the testing cost. [Yang95] This usually requires the use of reliability models to evaluate and predict the reliability of the software under test. Each evaluation requires repeated running of the following cycle: failure data gathering -- modeling -- prediction. This method does not fit well for ultra-dependable systems, however, because the real field failure data will take too long to accumulate.

Alternatives to testing

Software testing is more and more considered a problematic method toward better quality. Using testing to locate and correct software defects can be an endless process, and bugs cannot be completely ruled out. Just as the complexity barrier indicates, chances are that testing and fixing problems may not necessarily improve the quality and reliability of the software. Sometimes fixing a problem may introduce much more severe problems into the system. For example, the telephone outage in California and the eastern seaboard in 1991 happened after bug fixes; the disaster was triggered by changing three lines of code in the signaling system.

In a narrower view, many testing techniques may have flaws. Is code coverage, say branch coverage, really related to software quality? There is no definite proof. As early as [Myers79], the so-called "human testing" -- including inspections, walkthroughs, and reviews -- was suggested as a possible alternative to traditional testing methods. [Hamlet94] advocates inspection as a cost-effective alternative to unit testing. The experimental results in [Basili85] suggest that code reading by stepwise abstraction is at least as effective as on-line functional and structural testing in terms of the number and cost of faults observed.

Using formal methods to "prove" the correctness of software is also an attractive research direction. But this method cannot surmount the complexity barrier either. For relatively simple software, it works well; it does not scale well to complex, full-fledged large software systems.

In a broader view, we may start to question the ultimate purpose of testing. Why do we need more effective testing methods anyway, since finding defects and removing them does not necessarily lead to better quality? An analogy is the car manufacturing process. In the craftsmanship epoch, we made cars by hacking away at the problems and defects. But such methods were washed away by the tide of pipelined manufacturing and good quality engineering processes, which make the car defect-free in the manufacturing phase. This indicates that engineering the design process (such as clean-room software engineering) to make the product have fewer defects may be more effective than engineering the testing process, with testing used solely for quality monitoring and management. This is the leap for software from craftsmanship to engineering.

Available tools, techniques, and metrics

An abundance of software testing tools exist. The correctness testing tools are often specialized to certain systems and have limited ability and generality. Robustness and stress testing tools are more likely to be made generic.

Mothora [DeMillo91] is an automated mutation testing tool-set developed at Purdue University. Using Mothora, the tester can create and execute test cases, measure test case adequacy, determine input-output correctness, locate and remove faults or bugs, and control and document the test.

NuMega's BoundsChecker [NuMega99] and Rational's Purify [Rational99] are run-time checking and debugging aids. They can both check and protect against memory leaks and pointer problems.

The Ballista COTS Software Robustness Testing Harness [Ballista99] is a full-scale automated robustness testing tool. The first version supports testing up to 233 POSIX function calls in UNIX operating systems. The second version also supports testing of user functions, provided that the data types are recognized by the testing server. The Ballista testing harness gives quantitative measures of robustness comparisons across operating systems. The goal is to automatically test and harden Commercial Off-The-Shelf (COTS) software against robustness failures.

Manual testing is the process of manually testing software for defects. It requires a tester to play the role of an end user and use most of the features of the application to ensure correct behavior. To ensure completeness of testing, the tester often follows a written test plan that leads them through a set of important test cases.[1]

A key step in the process of software engineering is testing the software for correct behavior prior to release to end users. For small-scale engineering efforts (including prototypes), exploratory testing may be sufficient. With this informal approach, the tester does not follow any rigorous testing procedure, but rather explores the user interface of the application using as many of its features as possible, using information gained in prior tests to intuitively derive additional tests. The success of exploratory manual testing relies heavily on the domain expertise of the tester, because a lack of knowledge will lead to incompleteness in testing. One of the key advantages of an informal approach is the intuitive insight it gives into how it feels to use the application. Large-scale engineering projects that rely on manual software testing follow a more rigorous methodology in order to maximize the number of defects that can be found. A systematic approach focuses on predetermined test cases and generally involves the following steps:

1. Choose a high-level test plan where a general methodology is chosen, and resources such as people, computers, and software licenses are identified and acquired.
2. Write detailed test cases, identifying clear and concise steps to be taken by the tester, with expected outcomes.
3. Assign the test cases to testers, who manually follow the steps and record the results.
4. Author a test report, detailing the findings of the testers. The report is used by managers to determine whether the software can be released, and if not, it is used by engineers to identify and correct the problems.

A rigorous test-case-based approach is often traditional for large software engineering projects that follow a Waterfall model.[3] However, at least one recent study did not show a dramatic difference in defect detection efficiency between exploratory testing and test-case-based testing.[2]

Test automation is the technique of testing software using software rather than people. Test automation can be used to automate the sometimes menial and time-consuming task of following the steps of a use case and reporting the results. A computer can follow a rote sequence of steps more quickly than a person, and it can run the tests overnight to present the results in the morning. Test automation may be able to reduce or eliminate the cost of actual testing; unfortunately, the labor that is saved in actual testing must be spent instead authoring the test program. Depending on the type of application to be tested and the automation tools that are chosen, this may require more labor than a manual approach. In addition, some testing tools present a very large amount of data, potentially creating a time-consuming task of interpreting the results. From a cost-benefit perspective, test automation becomes more cost-effective when the same tests can be reused many times over, such as for regression testing and test-driven development, and when the results can be interpreted quickly. If future reuse of the test software is unlikely, then a manual approach is preferred.[5]

Things such as device drivers and software libraries must be tested using test programs. In addition, testing of large numbers of users (performance testing and load testing) is typically simulated in software rather than performed in practice. Software that does not have a graphical user interface tends to be tested by automatic methods: a test program is written that exercises the software and identifies its defects. These test programs may be written from scratch, or they may be written utilizing a generic test automation framework that can be purchased from a third-party vendor. Conversely, graphical user interfaces whose layout changes frequently are very difficult to test automatically. There are test frameworks that can be used for regression testing of user interfaces, such as those by Original Software. They rely on recording sequences of keystrokes and mouse gestures, then playing them back and observing that the user interface responds in the same way every time. Unfortunately, these recordings may not work properly when a button is moved or relabeled in a subsequent release, and an automatic regression test may also be fooled if the program output varies significantly (e.g. the display includes the current system time).[4] In cases such as these, manual testing may be more effective.

Software Testing

It is the process used to help identify the correctness, completeness, security, and quality of developed computer software. Testing is a process of technical investigation, performed on behalf of stakeholders, that is intended to reveal quality-related information about the product with respect to the context in which it is intended to operate. This includes, but is not limited to, the process of executing a program or application with the intent of finding errors. Quality is not an absolute; it is value to some person. With that in mind, testing can never completely establish the correctness of arbitrary computer software; testing furnishes a criticism or comparison that compares the state and behaviour of the product against a specification. An important point is that software testing should be distinguished from the separate discipline of Software Quality Assurance (SQA), which encompasses all business process areas, not just testing.

Software testing is the process of executing software in a controlled manner, in order to answer the question "Does this software behave as specified?" Software testing is used in association with Verification and Validation. Verification is the checking of or testing of items, including software, for conformance and consistency with an associated specification. Software testing is just one kind of verification, which also uses techniques such as reviews, inspections, and walk-throughs. Validation is the process of checking that what has been specified is what the user actually wanted.

White box and black box testing are terms used to describe the point of view a test engineer takes when designing test cases: black box being an external view of the test object and white box being an internal view. Software testing is partly intuitive, but largely systematic. Good testing involves much more than just running the program a few times to see whether it works. Thorough analysis of the program under test, backed by a broad knowledge of testing techniques and tools, are prerequisites to systematic testing.

WHITE BOX TESTING

UNIT TESTING

The developer carries out unit testing in order to check if the particular module or unit of code is working fine. Unit testing comes at the very basic level, as it is carried out as and when the unit of the code is developed or a particular functionality is built. The exact scope of a unit is left to interpretation. Unit testing deals with testing a unit as a whole. This would test the interaction of many functions but confine the test within one unit. This type of testing is driven by the architecture and implementation teams. This focus is also called black-box testing because only the details of the interface are visible to the test. Limits that are global to a unit are tested here.

Supporting test code, sometimes called scaffolding, may be necessary to support an individual test. In the construction industry, scaffolding is a temporary, easy to assemble and disassemble, frame placed around a building to facilitate the construction of the building. The construction workers first build the scaffolding and then the building; later the scaffolding is removed, exposing the completed building. Similarly, in software testing, one particular test may need some supporting software. This software establishes an environment around the test, and only when this environment is established can a correct evaluation of the test take place. The scaffolding software may establish state and values for data structures as well as providing dummy external functions for the test. Different scaffolding software may be needed from one test to another test, and scaffolding software is rarely considered part of the system. Sometimes the scaffolding software becomes larger than the system software being tested; usually it is not of the same quality as the system software and frequently is quite fragile. A small change in the test may lead to much larger changes in the scaffolding.
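A minimal sketch of scaffolding in this sense: a dummy external function stands in for the real environment, and a small driver establishes the environment, runs the unit, and evaluates the result (all names here are invented for illustration):

```python
def read_sensor_stub():
    """Dummy external function: stands in for real hardware."""
    return 21.5

def average_temperature(read_sensor, samples=3):
    """Unit under test: averages repeated sensor readings."""
    return sum(read_sensor() for _ in range(samples)) / samples

# Driver: set up the environment, execute the unit, evaluate the outcome.
result = average_temperature(read_sensor_stub)
assert abs(result - 21.5) < 1e-9
print("unit verified inside its scaffolding")
```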

STATIC & DYNAMIC ANALYSIS

Static analysis involves going through the code in order to find out any possible defect in the code. Dynamic analysis involves executing the code and analyzing the output.

STATEMENT COVERAGE

In this type of testing the code is executed in such a manner that every statement of the application is executed at least once. It helps in assuring that all the statements execute without any side effect. Internal and unit testing can be automated with the help of coverage tools. A coverage tool analyzes the source code and generates a test that will execute every alternative thread of execution. It is still up to the programmer to combine these tests into meaningful cases to validate the result of each thread of execution. Typically, the coverage tool is used in a slightly different way: first the coverage tool is used to augment the source by placing informational prints after each line of code. Then the testing suite is executed, generating an audit trail. This audit trail is analyzed and reports the percent of the total system code executed during the test suite. If the coverage is high and the untested source lines are of low impact to the system's overall quality, then no more additional tests are required.

BRANCH COVERAGE

No software application can be written in a continuous mode of coding; at some point we need to branch out the code in order to perform a particular functionality. Branch coverage testing helps in validating all the branches in the code and making sure that no branching leads to abnormal behavior of the application.
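A small illustration of branch coverage: the function below has three outcomes, and one test case per branch is the minimum needed to traverse them all. The classifier is a toy example:

```python
def classify(n):
    if n < 0:
        return "negative"    # branch 1
    elif n == 0:
        return "zero"        # branch 2
    else:
        return "positive"    # branch 3

# One test per branch gives full branch coverage of classify().
assert classify(-3) == "negative"
assert classify(0) == "zero"
assert classify(7) == "positive"
print("every branch exercised at least once")
```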

SECURITY TESTING

Security testing is carried out in order to find out how well the system can protect itself from unauthorized access, hacking/cracking, any code damage, etc. This type of testing needs sophisticated testing techniques.

BLACK BOX TESTING

FUNCTIONAL TESTING

In this type of testing, the software is tested for the functional requirements. The tests are written in order to check if the application behaves as expected. Functional testing covers how well the system executes the functions it is supposed to execute -- including user commands, data manipulation, searches and business processes, user screens, and integrations. Functional testing covers the obvious surface type of functions, as well as the back-end operations (such as security and how upgrades affect the system). Although functional testing is often done toward the end of the development cycle, it can -- and should -- be started much earlier. Individual components and processes can be tested early on, even before it's possible to do functional testing on the entire system.

REGRESSION TESTING

A kind of testing in which the application is tested for the code that was modified after fixing a particular bug/defect. It also helps in finding out which code and which strategy of coding can help in developing the functionality effectively.

Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test.[1] Software testing also provides an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include, but are not limited to, the process of executing a program or application with the intent of finding software bugs.

Software testing can also be stated as the process of validating and verifying that a software program/application/product:

1. meets the business and technical requirements that guided its design and development;
2. works as expected; and
3. can be implemented with the same characteristics.

Software testing, depending on the testing method employed, can be implemented at any time in the development process. However, most of the test effort occurs after the requirements have been defined and the coding process has been completed.


Overview

Testing can never completely identify all the defects within software. Instead, it furnishes a criticism or comparison that compares the state and behavior of the product against oracles -- principles or mechanisms by which someone might recognize a problem. These oracles may include (but are not limited to) specifications, contracts,[2] comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, applicable laws, or other criteria.

Every software product has a target audience. For example, the audience for video game software is completely different from banking software. Therefore, when an organization develops or otherwise invests in a software product, it can assess whether the software product will be acceptable to its end users, its target audience, its purchasers, and other stakeholders. Software testing is the process of attempting to make this assessment.

A study conducted by NIST in 2002 reports that software bugs cost the U.S. economy $59.5 billion annually. More than a third of this cost could be avoided if better software testing was performed.[3]

History

The separation of debugging from testing was initially introduced by Glenford J. Myers in 1979.[4] Although his attention was on breakage testing ("a successful test is one that finds a bug"[4][5]), it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification. Dave Gelperin and William C. Hetzel classified in 1988 the phases and goals in software testing in the following stages:[6]

Until 1956 - Debugging oriented[7]
1957-1978 - Demonstration oriented[8]
1979-1982 - Destruction oriented[9]
1983-1987 - Evaluation oriented[10]
1988-2000 - Prevention oriented[11]

Software testing topics

Scope

A primary purpose for testing is to detect software failures so that defects may be discovered and corrected. This is a non-trivial pursuit. Testing cannot establish that a product functions properly under all conditions, but can only establish that it does not function properly under specific conditions.[12] The scope of software testing often includes examination of code as well as execution of that code in various environments and conditions, as well as examining the aspects of code: does it do what it is supposed to do and do what it needs to do. In the current culture of software development, a testing organization may be separate from the development team. Information derived from software testing may be used to correct the process by which software is developed.[13]

Functional vs non-functional testing

Functional testing refers to tests that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question of "can the user do this" or "does this particular feature work". Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or security. Non-functional testing tends to answer such questions as "how many people can log in at once".

Defects and failures

Not all software defects are caused by coding errors. One common source of expensive defects is requirement gaps, e.g., unrecognized requirements that result in errors of omission by the program designer.[14] A common source of requirements gaps is non-functional requirements such as testability, scalability, maintainability, usability, performance, and security.

Software faults occur through the following process: a programmer makes an error (mistake), which results in a defect (fault, bug) in the software source code. If this defect is executed, in certain situations the system will produce wrong results, causing a failure.[15] Not all defects will necessarily result in failures; for example, defects in dead code will never result in failures. A defect can turn into a failure when the environment is changed. Examples of these changes in environment include the software being run on a new hardware platform, alterations in source data, or interacting with different software.[15] A single defect may result in a wide range of failure symptoms.

Finding faults early

It is commonly believed that the earlier a defect is found, the cheaper it is to fix it.[16] The following table shows the cost of fixing the defect depending on the stage at which it was found.[17]

Cost to fix a defect, by when it was introduced (rows) and when it was detected (columns):

Time introduced    Requirements   Architecture   Construction   System test   Post-release
Requirements           1×             3×            5-10×           10×         10-100×
Architecture           -              1×             10×            15×         25-100×
Construction           -              -               1×            10×         10-25×

For example, if a problem in the requirements is found only post-release, then it would cost 10-100 times more to fix than if it had already been found by the requirements review.

Compatibility

A common cause of software failure (real or perceived) is a lack of compatibility with other application software, operating systems (or operating system versions, old or new), or target environments that differ greatly from the original (such as a terminal or GUI application intended to be run on the desktop now being required to become a web application, which must render in a web browser). For example, in the case of a lack of backward compatibility, this can occur because the programmers develop and test software only on the latest version of the target environment, which not all users may be running. This results in the unintended consequence that the latest work may not function on earlier versions of the target environment, or on older hardware that earlier versions of the target environment were capable of using. Sometimes such issues can be fixed by proactively abstracting operating system functionality into a separate program module or library.

Input combinations and preconditions

A very fundamental problem with software testing is that testing under all combinations of inputs and preconditions (initial state) is not feasible, even with a simple product.[12][18] This means that the number of defects in a software product can be very large, and defects that occur infrequently are difficult to find in testing. More significantly, non-functional dimensions of quality (how it is supposed to be versus what it is supposed to do) -- usability, scalability, performance, compatibility, reliability -- can be highly subjective; something that constitutes sufficient value to one person may be intolerable to another.

Static vs. dynamic testing

There are many approaches to software testing. Reviews, walkthroughs, or inspections are considered as static testing, whereas actually executing programmed code with a given set of test cases is referred to as dynamic testing. Static testing can be (and unfortunately in practice often is) omitted. Dynamic testing takes place when the program itself is used for the first time (which is generally considered the beginning of the testing stage).

Dynamic testing may begin before the program is 100% complete, in order to test particular sections of code (modules or discrete functions). Typical techniques for this are either using stubs/drivers or execution from a debugger environment. For example, spreadsheet programs are, by their very nature, tested to a large extent interactively ("on the fly"), with results displayed immediately after each calculation or text manipulation.

Software verification and validation

Software testing is used in association with verification and validation:[19]

Verification: Have we built the software right? (i.e., does it match the specification?)
Validation: Have we built the right software? (i.e., is this what the customer wants?)

The terms verification and validation are commonly used interchangeably in the industry; it is also common to see these two terms incorrectly defined. According to the IEEE Standard Glossary of Software Engineering Terminology: Verification is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. Validation is the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements.

The software testing team

Software testing can be done by software testers. Until the 1980s the term "software tester" was used generally, but later it was also seen as a separate profession. There are various roles for testing team members: regarding the periods and the different goals in software testing,[20] different roles have been established, such as manager, test lead, test designer, tester, automation developer, and test administrator.

Software quality assurance (SQA)

Though controversial, software testing may be viewed as an important part of the software quality assurance (SQA) process.[12] In SQA, software process specialists and auditors take a broader view of software and its development. They examine and change the software engineering process itself to reduce the number of faults that end up in the delivered software: the so-called defect rate. What constitutes an "acceptable defect rate" depends on the nature of the software; a flight simulator video game would have much higher defect tolerance than software for an actual airplane. Although there are close links with SQA, testing departments often exist independently, and there may be no SQA function in some companies.

Software testing is a task intended to detect defects in software by contrasting a computer program's expected results with its actual results for a given set of inputs. By contrast, QA (quality assurance) is the implementation of policies and procedures intended to prevent defects from occurring in the first place.

Testing methods

The box approach

Software testing methods are traditionally divided into white- and black-box testing. These two approaches are used to describe the point of view that a test engineer takes when designing test cases.

White box testing

White box testing is when the tester has access to the internal data structures and algorithms, including the code that implements these.

Types of white box testing: the following types of white box testing exist:

- API testing (application programming interface) - testing of the application using public and private APIs
- Code coverage - creating tests to satisfy some criteria of code coverage (e.g., the test designer can create tests to cause all statements in the program to be executed at least once)
- Fault injection methods - improving the coverage of a test by introducing faults to test code paths
- Mutation testing methods
- Static testing - white box testing includes all static testing

Test coverage

White box testing methods can also be used to evaluate the completeness of a test suite that was created with black box testing methods. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested.[21] Two common forms of code coverage are:

- Function coverage, which reports on functions executed
- Statement coverage, which reports on the number of lines executed to complete the test

They both return a code coverage metric, measured as a percentage.
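In practice a dedicated tool such as coverage.py gathers these numbers; the sketch below shows the underlying idea for statement coverage by recording executed line numbers with Python's tracing hook:

```python
import sys

executed = set()

def tracer(frame, event, arg):
    if event == "line" and frame.f_code.co_name == "f":
        executed.add(frame.f_lineno)       # record each executed statement of f()
    return tracer

def f(x):
    if x > 0:
        return "positive"
    return "non-positive"

sys.settrace(tracer)
f(1)                                       # exercises only the x > 0 path
sys.settrace(None)
print(f"statements of f() executed: {sorted(executed)}")
# Running f(-1) as well would raise the statement coverage to 100%.
```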

Black box testing

Black box testing treats the software as a "black box", without any knowledge of internal implementation. Black box testing methods include: equivalence partitioning, boundary value analysis, all-pairs testing, fuzz testing, model-based testing, traceability matrix, exploratory testing and specification-based testing.

Specification-based testing aims to test the functionality of software according to the applicable requirements.[22] Thus, the tester inputs data into, and only sees the output from, the test object. This level of testing usually requires thorough test cases to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior) either "is" or "is not" the same as the expected value specified in the test case. Specification-based testing is necessary, but it is insufficient to guard against certain risks.[23]

Advantages and disadvantages: the black box tester has no "bonds" with the code, and a tester's perception is very simple: the code must have bugs. Using the principle "ask and you shall receive", black box testers find bugs where programmers do not. On the other hand, black box testing has been said to be "like a walk in a dark labyrinth without a flashlight", because the tester doesn't know how the software being tested was actually constructed. As a result, there are situations when (1) a tester writes many test cases to check something that could have been tested by only one test case, and/or (2) some parts of the back-end are not tested at all. Therefore, black box testing has the advantage of "an unaffiliated opinion" on the one hand, and the disadvantage of "blind exploring" on the other.[24]

Grey box testing

Grey box testing (American spelling: gray box testing) involves having knowledge of internal data structures and algorithms for purposes of designing the test cases, but testing at the user, or black-box, level. Manipulating input data and formatting output do not qualify as grey box, because the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for test. However, modifying a data repository does qualify as grey box, as the user would not normally be able to change the data outside of the system under test. Grey box testing may also include reverse engineering to determine, for instance, boundary values or error messages.
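A compact sketch of two of the black-box methods listed above, equivalence partitioning and boundary value analysis, applied to an invented fare rule (under 18 is a child, 18-64 an adult, 65 and over a senior):

```python
def fare_class(age):
    """Hypothetical unit under test."""
    if age < 18:
        return "child"
    if age < 65:
        return "adult"
    return "senior"

# One representative per equivalence partition, plus the boundary values.
expected = {9: "child", 40: "adult", 80: "senior",              # partitions
            17: "child", 18: "adult", 64: "adult", 65: "senior"}  # boundaries
for age, cls in expected.items():
    assert fare_class(age) == cls, age
print("partition representatives and boundary values all pass")
```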

Testing levels

Tests are frequently grouped by where they are added in the software development process, or by the level of specificity of the test.

Unit testing

Unit testing refers to tests that verify the functionality of a specific section of code, usually at the function level. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors.[25] These types of tests are usually written by developers as they work on code (white-box style), to ensure that the specific function is working as expected. One function might have multiple tests, to catch corner cases or other branches in the code. Unit testing alone cannot verify the functionality of a piece of software, but rather is used to assure that the building blocks the software uses work independently of each other. Unit testing is also called component testing.

Integration testing

Integration testing is any type of software testing that seeks to verify the interfaces between components against a software design. Software components may be integrated in an iterative way or all together ("big bang"). Normally the former is considered a better practice, since it allows interface issues to be localised more quickly and fixed. Integration testing works to expose defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a system.[26]

System testing

System testing tests a completely integrated system to verify that it meets its requirements.[27]

System integration testing

System integration testing verifies that a system is integrated to any external or third-party systems defined in the system requirements.
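A minimal class-level unit test in the sense described above, sketched with Python's unittest; the Stack class is a stand-in for real production code:

```python
import unittest

class Stack:
    """Toy production class under test."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

class StackTest(unittest.TestCase):
    def test_constructor_yields_empty_stack(self):
        with self.assertRaises(IndexError):     # corner case: pop from empty
            Stack().pop()

    def test_push_then_pop_round_trips(self):
        s = Stack()
        s.push(42)
        self.assertEqual(s.pop(), 42)

if __name__ == "__main__":
    unittest.main()
```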

Regression testing

Main article: Regression testing

Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncover software regressions, or old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly stops working as intended. Typically, regressions occur as an unintended consequence of program changes, when the newly developed part of the software collides with the previously existing code. Common methods of regression testing include re-running previously run tests and checking whether previously fixed faults have re-emerged. The depth of testing depends on the phase in the release process and the risk of the added features: regression tests can either be complete, for changes added late in the release or deemed to be risky, or very shallow, consisting of positive tests on each feature, if the changes are early in the release or deemed to be of low risk.

Acceptance testing

Main article: Acceptance testing

Acceptance testing can mean one of two things:

1. A smoke test is used as an acceptance test prior to introducing a new build to the main testing process, i.e. before integration or regression.
2. Acceptance testing performed by the customer, often in their lab environment on their own hardware, is known as user acceptance testing (UAT). Acceptance testing may be performed as part of the hand-off process between any two phases of development.[citation needed]

Alpha testing

Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.[citation needed]

Beta testing

Beta testing comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.

Non-functional testing

Special methods exist to test non-functional aspects of software. In contrast to functional testing, which establishes the correct operation of the software (correct in that it matches the expected behavior defined in the design requirements), non-functional testing verifies that the software functions properly even when it receives invalid or unexpected inputs. Software fault injection, in the form of fuzzing, is an example of non-functional testing (a minimal fuzzing sketch follows at the end of this section). Non-functional testing, especially for software, is designed to establish whether the device under test can tolerate invalid or unexpected inputs, thereby establishing the robustness of input validation routines as well as error-handling routines. Various commercial non-functional testing tools are linked from the software fault injection page; there are also numerous open-source and free software tools available that perform non-functional testing.

Software performance testing and load testing

Performance testing is executed to determine how fast a system or sub-system performs under a particular workload. It can also serve to validate and verify other quality attributes of the system, such as scalability, reliability and resource usage. Load testing is primarily concerned with testing that the system can continue to operate under a specific load, whether that be large quantities of data or a large number of users; this is generally referred to as software scalability. The related load testing activity, when performed as a non-functional activity, is often referred to as endurance testing. Volume testing is a way to test functionality; stress testing is a way to test reliability; load testing is a way to test performance. There is little agreement on what the specific goals of load testing are, and the terms load testing, performance testing, reliability testing, and volume testing are often used interchangeably.

Stability testing

Stability testing checks to see if the software can continuously function well in or above an acceptable period. This activity of non-functional software testing is often referred to as load (or endurance) testing.

Usability testing

Usability testing is needed to check if the user interface is easy to use and understand.

Security testing

Security testing is essential for software that processes confidential data, to prevent system intrusion by hackers.

Internationalization and localization

Internationalization and localization testing is needed to check these aspects of software, for which a pseudolocalization method can be used. It will verify that the application still works, even after it has been translated into a new language or adapted for a new culture (such as different currencies or time zones).[30]
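As referenced above, fuzzing feeds invalid or unexpected inputs to a program to probe the robustness of its input-validation and error-handling routines. The sketch below is a crude illustration under stated assumptions: the validate method is a stand-in for whatever routine is under test, and the iteration count and fixed seed are arbitrary choices.

import java.util.Arrays;
import java.util.Random;

public class FuzzSketch {
    // Stand-in for the routine under test; a real target might parse network input or file formats.
    static boolean validate(String input) {
        return input.matches("[A-Za-z0-9]{1,8}");
    }

    public static void main(String[] args) {
        Random rng = new Random(42); // fixed seed so any failure is reproducible
        for (int i = 0; i < 100000; i++) {
            byte[] noise = new byte[rng.nextInt(32)];
            rng.nextBytes(noise);
            String input = new String(noise); // deliberately malformed most of the time
            try {
                validate(input); // either answer is acceptable; an uncaught exception is a robustness defect
            } catch (RuntimeException e) {
                System.err.println("Robustness defect on input: " + Arrays.toString(noise));
                throw e;
            }
        }
        System.out.println("Survived 100000 random inputs.");
    }
}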

Destructive testing

Main article: Destructive testing

Destructive testing attempts to cause the software or a sub-system to fail, in order to test its robustness.

The testing process

Traditional CMMI or waterfall development model

A common practice of software testing is that testing is performed by an independent group of testers after the functionality is developed, before it is shipped to the customer.[28] This practice often results in the testing phase being used as a project buffer to compensate for project delays, thereby compromising the time devoted to testing.[29] Another practice is to start software testing at the same moment the project starts, and to continue it as a continuous process until the project finishes.

Agile or Extreme development model

In counterpoint, some emerging software disciplines, such as extreme programming and the agile software development movement, adhere to a "test-driven software development" model. In this process, unit tests are written first, by the software engineers (often with pair programming in the extreme programming methodology). Of course these tests fail initially, as they are expected to. Then, as code is written, it passes incrementally larger portions of the test suites. The test suites are continuously updated as new failure conditions and corner cases are discovered, and they are integrated with any regression tests that are developed. Unit tests are maintained along with the rest of the software source code and generally integrated into the build process (with inherently interactive tests being relegated to a partially manual build acceptance process). The ultimate goal of this test process is to achieve continuous deployment, where software updates can be published to the public frequently.[31][32]

A sample testing cycle

Although variations exist between organizations, there is a typical cycle for testing.[33] The sample below is common among organizations employing the Waterfall development model.

- Requirements analysis: Testing should begin in the requirements phase of the software development life cycle. During the design phase, testers work with developers in determining what aspects of a design are testable and with what parameters those tests work.

- Test planning: Test strategy, test plan, testbed creation. Since many activities will be carried out during testing, a plan is needed.
- Test development: Test procedures, test scenarios, test cases, test datasets, and test scripts to use in testing software.
- Test execution: Testers execute the software based on the plans and test documents, then report any errors found to the development team.
- Test reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.
- Test result analysis: Or defect analysis, is done by the development team, usually along with the client, in order to decide what defects should be treated, fixed, rejected (i.e. found software working properly) or deferred to be dealt with later.
- Defect retesting: Once a defect has been dealt with by the development team, it is retested by the testing team (also known as resolution testing).
- Regression testing: It is common to have a small test program built of a subset of tests, for each integration of new, modified, or fixed software, in order to ensure that the latest delivery has not ruined anything, and that the software product as a whole is still working correctly.
- Test closure: Once the test meets the exit criteria, activities such as capturing the key outputs, lessons learned, results, logs, and documents related to the project are archived and used as a reference for future projects.

Automated testing

Main article: Test automation

Many programming groups are relying more and more on automated testing, especially groups that use test-driven development. There are many frameworks to write tests in, and continuous integration software will run tests automatically every time code is checked into a version control system. While automation cannot reproduce everything that a human can do (and all the strange ways they think of doing it), it can be very useful for regression testing. However, it does require a well-developed test suite of testing scripts in order to be truly useful.
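As a sketch of how such automation might be wired up, the following uses the JUnit 4 runner API (a real library call, though DivisionTest is this document's earlier hypothetical test class); a continuous integration server can invoke a class like this on every check-in and fail the build on a non-zero exit status.

import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;

public class RunAllTests {
    public static void main(String[] args) {
        // Run a suite programmatically, as a build script or CI server would.
        Result result = JUnitCore.runClasses(DivisionTest.class);
        for (Failure failure : result.getFailures()) {
            System.err.println(failure.toString()); // report each error back to the developers
        }
        System.exit(result.wasSuccessful() ? 0 : 1); // non-zero exit status breaks the build
    }
}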

Testing tools

Program testing and fault detection can be aided significantly by testing tools and debuggers. Testing/debug tools include features such as:

- Program monitors, permitting full or partial monitoring of program code, including:
  - Instruction set simulators, permitting complete instruction-level monitoring and trace facilities
  - Program animation, permitting step-by-step execution and conditional breakpoints at source level or in machine code
  - Code coverage reports
- Formatted dump or symbolic debugging, tools allowing inspection of program variables on error or at chosen points
- Automated functional GUI testing tools, used to repeat system-level tests through the GUI
- Benchmarks, allowing run-time performance comparisons to be made
- Performance analysis (or profiling) tools, which can help to highlight hot spots and resource usage

Some of these features may be incorporated into an Integrated Development Environment (IDE).

Measurement in software testing

Usually, quality is constrained to such topics as correctness, completeness, and security,[citation needed] but can also include more technical requirements as described under the ISO standard ISO/IEC 9126, such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability. There are a number of frequently-used software measures, often called metrics, which are used to assist in determining the state of the software or the adequacy of the testing.

Testing artifacts

The software testing process can produce several artifacts.

Test plan: A test specification is called a test plan. The developers are well aware what test plans will be executed, and this information is made available to management and the developers. The idea is to make them more cautious when developing their code or making additional changes. Some companies have a higher-level document called a test strategy.

Traceability matrix: A traceability matrix is a table that correlates requirements or design documents to test documents. It is used to change tests when the source documents are changed, or to verify that the test results are correct.

Test case: A test case normally consists of a unique identifier, requirement references from a design specification, preconditions, events, a series of steps (also known as actions) to follow, input, output, expected result, and actual result. Clinically defined, a test case is an input and an expected result.[34] This can be as pragmatic as 'for condition x your derived result is y', whereas other test cases describe in more detail the input scenario and what results might be expected. It can occasionally be a series of steps (but often steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) but with one expected result or expected outcome. The optional fields are a test case ID, test step or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions of the following tests. A test case should also contain a place for the actual result.
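The field list above can be pictured as a simple record. The following Java class is purely illustrative: its names follow the description above and do not come from any particular tool or standard.

// An illustrative, non-standardized shape for a test case record.
public class TestCaseRecord {
    String id;              // unique identifier
    String requirementRef;  // reference back into the design specification
    String preconditions;
    String[] steps;         // the series of actions to follow
    String input;
    String expectedResult;  // what should happen, per the specification
    String actualResult;    // filled in when the test is executed
    boolean automatable;    // check box: could this test be automated?
    boolean automated;      // check box: has it been automated?
}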

These steps can be stored in a word processor document, spreadsheet, database, or other common repository. In a database system, you may also be able to see past test results, who generated the results, and what system configuration was used to generate those results; these past results would usually be stored in a separate table.

Test script: The test script is the combination of a test case, test procedure, and test data. Initially the term was derived from the product of work created by automated regression test tools. Today, test scripts can be manual, automated, or a combination of both.

Test suite: The most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases. It definitely contains a section where the tester identifies the system configuration used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests.

Test harness: The software, tools, samples of data input and output, and configurations are all referred to collectively as a test harness.

Test data: In most cases, multiple sets of values or data are used to test the same functionality of a particular feature. All the test values and changeable environmental components are collected in separate files and stored as test data. It is also useful to provide this data to the client along with the product or project.
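A common way to drive one test with several sets of test data is a parameterized test. The sketch below uses JUnit 4's Parameterized runner (a real API); the add method and the inline data rows are invented for illustration, whereas in practice the rows would be loaded from the separate test-data files described above.

import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;
import static org.junit.Assert.assertEquals;

@RunWith(Parameterized.class)
public class AdditionDataDrivenTest {
    // Trivial stand-in for the feature under test.
    static int add(int x, int y) {
        return x + y;
    }

    private final int a, b, expected;

    public AdditionDataDrivenTest(int a, int b, int expected) {
        this.a = a;
        this.b = b;
        this.expected = expected;
    }

    @Parameters
    public static Collection<Object[]> testData() {
        // Each row is one data set exercising the same functionality.
        return Arrays.asList(new Object[][] {
            {1, 1, 2}, {2, 2, 4}, {-1, 1, 0}, {1234, 988, 2222}
        });
    }

    @Test
    public void addsCorrectly() {
        assertEquals(expected, add(a, b));
    }
}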

Certifications

Several certification programs exist to support the professional aspirations of software testers and quality assurance specialists. No certification currently offered actually requires the applicant to demonstrate the ability to test software, and no certification is based on a widely accepted body of knowledge. This has led some to declare that the testing field is not ready for certification.[35] Certification itself cannot measure an individual's productivity, skill, or practical knowledge, and cannot guarantee their competence or professionalism as a tester.[36]

Software testing certification types:

- Exam-based: formalized exams, which need to be passed; can also be learned by self-study [e.g., for ISTQB or QAI][37]
- Education-based: instructor-led sessions, where each course has to be passed [e.g., International Institute for Software Testing (IIST)]

Testing certifications:

- Certified Associate in Software Testing (CAST) offered by the Quality Assurance Institute (QAI)[38]
- CATe offered by the International Institute for Software Testing[39]
- Certified Manager in Software Testing (CMST) offered by the Quality Assurance Institute (QAI)[38]
- Certified Software Tester (CSTE) offered by the Quality Assurance Institute (QAI)[38]
- Certified Software Test Professional (CSTP) offered by the International Institute for Software Testing[39]
- CSTP (TM) (Australian Version) offered by K. J. Ross & Associates[40]
- ISEB offered by the Information Systems Examinations Board
- ISTQB Certified Tester, Foundation Level (CTFL) offered by the International Software Testing Qualification Board[41][42]
- ISTQB Certified Tester, Advanced Level (CTAL) offered by the International Software Testing Qualification Board[41][42]
- TMPF TMap[dubious - discuss] Next Foundation offered by the Examination Institute for Information Science[43]

Quality assurance certifications:

- CMSQ offered by the Quality Assurance Institute (QAI)[38]
- CSQA offered by the Quality Assurance Institute (QAI)[38]
- CSQE offered by the American Society for Quality (ASQ)[44]
- CQIA offered by the American Society for Quality (ASQ)[44]

Controversy

Some of the major software testing controversies include:

What constitutes responsible software testing? Members of the "context-driven" school of testing[45] believe that there are no "best practices" of testing, but rather that testing is a set of skills that allow the tester to select or invent testing practices to suit each unique situation.[46]

Agile vs. traditional: Should testers learn to work under conditions of uncertainty and constant change, or should they aim at process "maturity"? The agile testing movement has received growing popularity since 2006, mainly in commercial circles,[47][48] whereas government and military[49] software providers are slow to embrace this methodology[neutrality is disputed] in favour of traditional test-last models (e.g. in the Waterfall model).

Exploratory test vs. scripted:[50] Should tests be designed at the same time as they are executed, or should they be designed beforehand?

Manual testing vs. automated: Some writers believe that test automation is so expensive relative to its value that it should be used sparingly.[51] More in particular, test-driven development states that developers should write unit-tests of the xUnit type before coding the functionality. The tests then can be considered as a way to capture and implement the requirements.

Software design vs. software implementation:[52] Should testing be carried out only at the end, or throughout the whole process?

Who watches the watchmen? The idea is that any form of observation is also an interaction, so the act of testing can also affect that which is being tested.[53]

SDLC models

Introduction

There are various software development approaches defined and designed which are used/employed during the development process of software; these approaches are also referred to as "Software Development Process Models". Each process model follows a particular life cycle in order to ensure success in the process of software development.

Waterfall Model

The waterfall approach was the first process model to be introduced and followed widely in software engineering to ensure success of the project. In "The Waterfall" approach, the whole process of software development is divided into separate process phases. The phases in the Waterfall model are: Requirement Specifications, Software Design, Implementation, and Testing & Maintenance. All these phases are cascaded to each other, so that the second phase is started as and when a defined set of goals is achieved for the first phase and it is

signed off, hence the name "Waterfall Model". All the methods and processes undertaken in the Waterfall Model are more visible.

The stages of "The Waterfall Model" are:

Requirement Analysis & Definition: All possible requirements of the system to be developed are captured in this phase. Requirements are the set of functionalities and constraints that the end-user (who will be using the system) expects from the system. The requirements are gathered from the end-user by consultation; these requirements are then analyzed for their validity, and the possibility of incorporating them in the system to be developed is also studied. Finally, a Requirement Specification document is created, which serves as a guideline for the next phase of the model.

System & Software Design: Before starting the actual coding, it is highly important to understand what we are going to create and what it should look like. The requirement specifications from the first phase are studied in this phase and the system design is prepared. System design helps in specifying hardware and system requirements, and also helps in defining the overall system architecture. The system design specifications serve as input for the next phase of the model.

Implementation & Unit Testing: On receiving the system design documents, the work is divided into modules/units and actual coding is started. The system is first developed in small programs called units, which are integrated in the next phase. Each unit is developed and tested for its functionality; this is referred to as unit testing. Unit testing mainly verifies whether the modules/units meet their specifications.

Integration & System Testing: As specified above, the system is first divided into units which are developed and tested for their functionalities. These units are integrated into a complete system during the Integration phase, and tested to check whether all modules/units coordinate with each other and the system as a whole behaves as per the specifications. After successfully testing the software, it is delivered to the customer.

Operations & Maintenance: This phase of "The Waterfall Model" is a virtually never-ending phase (very long). Generally, problems with the system developed (which are not found during the development life cycle) come up after its practical use starts, so the issues related to the system are solved after deployment. Not all the problems come to light directly; they arise from time to time and need to be solved, hence this process is referred to as Maintenance.

Advantages and Disadvantages

Advantages: The advantage of waterfall development is that it allows for departmentalization and managerial control. A schedule can be set with deadlines for each stage of development, and a product can proceed through the development process like a car in a carwash and, theoretically, be delivered on time. Development moves from concept, through design, implementation, testing, installation, and troubleshooting, and ends up at operation and maintenance. Each phase of development proceeds in strict order, without any overlapping or iterative steps.

Disadvantages: The disadvantage of waterfall development is that it does not allow for much reflection or revision. Once an application is in the testing stage, it is very difficult to go back and change something that was not well thought out in the concept stage. Alternatives to the waterfall model include joint application development (JAD), rapid application development (RAD), synch and stabilize, build and fix, and the spiral model.

Iterative Model

An iterative lifecycle model does not attempt to start with a full specification of requirements. Instead, development begins by specifying and implementing just part of the software, which can then be reviewed in order to identify further requirements. This process is then repeated, producing a new version of the software for each cycle of the model. Consider an iterative lifecycle model which consists of repeating the following four phases in sequence:

- A Requirements phase, in which the requirements for the software are gathered and analyzed. Iteration should eventually result in a requirements phase that produces a complete and final specification of requirements.
- A Design phase, in which a software solution to meet the requirements is designed. This may be a new design, or an extension of an earlier design.
- An Implementation and Test phase, when the software is coded, integrated and tested.
- A Review phase, in which the software is evaluated, the current requirements are reviewed, and changes and additions to requirements are proposed.

For each cycle of the model, a decision has to be made as to whether the software produced by the cycle will be discarded, or kept as a starting point for the next cycle (sometimes referred to as incremental prototyping). Eventually a point will be reached where the requirements are complete and the software can be delivered, or it becomes impossible to enhance the software as required, and a fresh start has to be made.

The iterative lifecycle model can be likened to producing software by successive approximation. Drawing an analogy with mathematical methods that use successive approximation to arrive at a final solution, the benefit of such methods depends on how rapidly they converge on a solution. The key to successful use of an iterative software development lifecycle is rigorous validation of requirements, and verification (including testing) of each version of the software against those requirements within each cycle of the model. The first three phases of the example iterative model are in fact an abbreviated form of a sequential V or

waterfall lifecycle model. Each cycle of the model produces software that requires testing at the unit level, for software integration, for system integration and for acceptance. As the software evolves through successive cycles, tests have to be repeated and extended to verify each version of the software.

V-Model

The V-model is a software development model which can be presumed to be an extension of the waterfall model. Instead of moving down in a linear way, the process steps are bent upwards after the coding phase, to form the typical V shape. The V-Model demonstrates the relationships between each phase of the development life cycle and its associated phase of testing.

Verification Phases

1. Requirements analysis: In this phase, the requirements of the proposed system are collected by analyzing the needs of the user(s). This phase is concerned with establishing what the ideal system has to perform; it does not, however, determine how the software will be designed or built. Usually, the users are interviewed and a document called the user requirements document is generated. The user requirements document will typically describe the system's functional, physical, interface, performance, data, and security requirements, etc., as expected by the user. It is the document the business analysts use to communicate their understanding of the system back to the users. The users carefully review this document, as it will serve as the guideline for the system designers in the system design phase. The user acceptance tests are designed in this phase. If any of the requirements are not feasible, the user is informed of the issue; a resolution is found, and the user requirements document is edited accordingly.

2. System Design: System engineers analyze and understand the business of the proposed system by studying the user requirements document. They figure out possibilities and techniques by which the user requirements can be implemented. The software specification document, which serves as a blueprint for the development phase, is generated. This document contains the general system organization, menu structures, data structures, etc. It may also hold example business scenarios, sample windows, and reports for better understanding. Other technical documentation, like entity diagrams and the data dictionary, will also be produced in this phase. The documents for system testing are prepared in this phase.

3. Architecture Design: This phase can also be called high-level design. The baseline in selecting the architecture is that it should realize all of what the high-level design typically consists of: the list of modules, brief functionality of each module, their interface relationships, dependencies, database tables, architecture diagrams, technology details, etc. The integration testing design is carried out in this phase.

4. Module Design: This phase can also be called low-level design. The designed system is broken up into smaller units or modules, and each of them is explained so that the programmer can start coding directly. The low-level design document, or program specifications, will contain a detailed functional logic of the module, in pseudocode, with:

- database tables, with all elements, including their type and size
- all interface details, with complete API references
- all dependency issues
- error message listings
- complete inputs and outputs for the module

The unit test design is developed in this stage.

The Spiral Model

History: The spiral model was defined by Barry Boehm in his 1988 article "A Spiral Model of Software Development and Enhancement". This model was not the first to discuss iterative development, but it was the first model to explain why the iteration matters. As originally envisioned, the iterations were typically 6 months to 2 years long. Each phase starts with a design goal and ends with the client (who may be internal) reviewing the progress thus far. Analysis and engineering efforts are applied at each phase of the project, with an eye toward the end goal of the project.

The spiral model, also known as the spiral lifecycle model, is a systems development method (SDM) used in information technology (IT). This model of development combines the features of the prototyping model and the waterfall model. The spiral model is intended for large, expensive, and complicated projects.

The steps in the spiral model can be generalized as follows:

1. The new system requirements are defined in as much detail as possible. This usually involves interviewing a number of users representing all the external or internal users and other aspects of the existing system.
2. A preliminary design is created for the new system.
3. A first prototype of the new system is constructed from the preliminary design. This is usually a scaled-down system, and represents an approximation of the characteristics of the final product.

4. A second prototype is evolved by a fourfold procedure: (1) evaluating the first prototype in terms of its strengths, weaknesses, and risks; (2) defining the requirements of the second prototype; (3) planning and designing the second prototype; (4) constructing and testing the second prototype.
5. At the customer's option, the entire project can be aborted if the risk is deemed too great. Risk factors might involve development cost overruns, operating-cost miscalculation, or any other factor that could, in the customer's judgment, result in a less-than-satisfactory final product.
6. The existing prototype is evaluated in the same manner as was the previous prototype, and, if necessary, another prototype is developed from it according to the fourfold procedure outlined above.
7. The preceding steps are iterated until the customer is satisfied that the refined prototype represents the final product desired.
8. The final system is constructed, based on the refined prototype.
9. The final system is thoroughly evaluated and tested. Routine maintenance is carried out on a continuing basis to prevent large-scale failures and to minimize downtime.

Applications

The spiral model is used most often in large projects; for smaller projects, the concept of agile software development is becoming a viable alternative. The US military has adopted the spiral model for its Future Combat Systems program. For a typical shrink-wrap application, the spiral model might mean that you have a rough-cut of user elements (without the polished / pretty graphics) as an operable application, add features in phases, and, at some point, add the final graphics.

Advantages and Disadvantages

Advantages:

1. Estimates (i.e. budget, schedule, etc.) become more realistic as work progresses, because important issues are discovered earlier.
2. It is more able to cope with the (nearly inevitable) changes that software development generally entails.
3. Software engineers (who can get restless with protracted design processes) can get their hands in and start working on a project earlier.

Disadvantages:

1. Highly customized, limiting re-usability
2. Applied differently for each application
3. Risk of not meeting budget or schedule

Prototype Model

Introduction

A prototype is a working model that is functionally equivalent to a component of the product. In many instances the client only has a general view of what is expected from the software product. In such a scenario, where there is an absence of detailed information regarding the input to the system, the processing needs, and the output requirements, the prototyping model may be employed. This model reflects an attempt to increase the flexibility of the development process by allowing the client to interact and experiment with a working representation of the product. The development process only continues once the client is satisfied with the functioning of the prototype. At that stage the developer determines the specifications of the client's real needs.

The process of prototyping involves the following steps:

1. Identify basic requirements: Determine the basic requirements, including the input and output information desired. Details, such as security, can typically be ignored.
2. Develop initial prototype: The initial prototype is developed, including only user interfaces.
3. Review: The customers, including end-users, examine the prototype and provide feedback on additions or

changes.

4. Revise and enhance the prototype: Using the feedback, both the specifications and the prototype can be improved. If changes are introduced, then a repeat of steps 3 and 4 may be needed. Negotiation about what is within the scope of the contract/product may be necessary.

Versions

There are two main versions of the prototyping model:

1. Version I: Prototyping is used as a requirements technique.
2. Version II: Prototype is used as the specifications, or a major part thereof.

Version One: This approach uses the prototype as a means of quickly determining the needs of the client; it is discarded once the specifications have been agreed on. The emphasis of the prototype is on representing those aspects of the software that will be visible to the client/user (e.g. input approaches and output formats). Thus it does not matter if the prototype hardly works. Note that if the first version of the prototype does not meet the client's needs, then it must be rapidly converted into a second version.


Version Two: In this approach, the prototype is actually used as the specifications for the design phase. The advantage of this approach is speed and accuracy, as no time is spent on drawing up written specifications. The inherent difficulties associated with that phase (i.e. incompleteness, contradictions and ambiguities) are then avoided.

Advantages of Prototyping

There are many advantages to using prototyping in software development, some tangible, some abstract.

Reduced time and costs: Prototyping can improve the quality of the requirements and specifications provided to developers. Because changes cost exponentially more to implement as they are detected later in development, the early determination of what the user really wants can result in faster and less expensive software.

Improved and increased user involvement: Prototyping requires user involvement and allows users to see and interact with a prototype, allowing them to provide better and more complete feedback and specifications. Since users know the problem domain better than anyone on the development team does, increased interaction can result in a final product that has greater tangible and intangible quality. The presence of the prototype being examined by the user prevents many misunderstandings and miscommunications that occur when each side believes the other understands what they said. The final product is more likely to satisfy the user's desire for look, feel and performance.

Disadvantages of Prototyping

Using, or perhaps misusing, prototyping can also have disadvantages.

Insufficient analysis: The focus on a limited prototype can distract developers from properly analyzing the complete project. This can lead to overlooking better solutions, the preparation of incomplete specifications, or the conversion of limited prototypes into poorly engineered final projects that are hard to maintain. Further, since a prototype is limited in functionality, it may not scale well if it is used as the basis of a final deliverable, which may not be noticed if developers are too focused on building a prototype as a model.

User confusion of prototype and finished system: Users can begin to think that a prototype, intended to be thrown away, is actually a final system that merely needs to be finished or polished. (They are, for example, often unaware of the effort needed to add error-checking and security features which a prototype may not have.) This can lead them to expect the prototype to accurately model the performance of the final system when this is not the intent of the developers. Users can also become attached to features that were included in a prototype for consideration and then removed from the specification for a final system. If users are able to require all proposed features be included in the final system, this can lead to feature creep.

Developer attachment to prototype: Developers can also become attached to prototypes they have spent a great deal of effort producing; this can lead to problems such as attempting to convert a limited prototype into a final system when it does not have an appropriate underlying architecture. (This may suggest that throwaway prototyping, rather than evolutionary prototyping, should be used.)

Excessive development time of the prototype: A key property of prototyping is the fact that it is supposed to be done quickly. If the developers lose sight of this fact, they very well may try to develop a prototype that is too complex. When the prototype is thrown away, the precisely developed requirements that it provides may not yield a sufficient increase in productivity to make up for the time spent developing the prototype. Users can become stuck in debates over details of the prototype, holding up the development team and delaying the final product.

Expense of implementing prototyping: The start-up costs for building a development team focused on prototyping may be high. Many companies have development methodologies in place, and changing them can mean retraining, retooling, or both. Many companies tend to just jump into prototyping without bothering to retrain their workers as much as they should.

A common problem with adopting prototyping technology is high expectations for productivity with insufficient effort behind the learning curve. In addition to training for the use of a prototyping technique, there is an often-overlooked need to develop the corporate and project-specific underlying structure needed to support the technology. When this underlying structure is omitted, lower productivity can often result.

Verification and Validation (software)

In software project management, software testing, and software engineering, Verification and Validation (V&V) is the process of checking that a software system meets specifications and that it fulfils its intended purpose. It is normally part of the software testing process of a project.

It is also known as software quality control. Validation checks that the product design satisfies or fits the intended usage (high-level checking), i.e., that you built the right product. This is done through dynamic testing and other forms of review. According to the Capability Maturity Model (CMMI-SW v1.1):


- Verification: The process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. [IEEE-STD-610]
- Validation: The process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements. [IEEE-STD-610]

In other words, validation ensures that the product actually meets the user's needs, and that the specifications were correct in the first place, while

verification is ensuring that the product has been built according to the requirements and design specifications. Validation ensures that 'you built the right thing'; verification ensures that 'you built it right'. Validation confirms that the product, as provided, will fulfil its intended use. From a testing perspective:

- Fault: a wrong or missing function in the code.
- Failure: the manifestation of a fault during execution.
- Malfunction: according to its specification, the system does not meet its specified functionality.

Within the modeling and simulation community, the definitions of validation, verification and accreditation are similar:


- Validation is the process of determining the degree to which a model, simulation, or federation of models and simulations, and their associated data, are accurate representations of the real world from the perspective of the intended use(s).[1]
- Verification is the process of determining that a computer model, simulation, or federation of models and simulations implementations, and their associated data, accurately represent the developer's conceptual description and specifications.[1]
- Accreditation is the formal certification that a model or simulation is acceptable to be used for a specific purpose.[1]

Related concepts
Both verification and validation are related to the concepts of quality and of software quality assurance. By themselves, verification and validation do not guarantee software quality; planning, traceability, configuration management and other aspects of software engineering are required.

Classification of methods
In mission-critical systems where flawless performance is absolutely necessary, formal methods can be used to ensure the correct operation of a system. However, often for non-mission-critical systems, formal methods prove to be very costly[citation needed] and an alternative method of V&V must be sought out. In this case, syntactic methods are often used.[citation needed]

Test cases
Main article: Test case

A test case is a tool used in the process. Test cases are prepared for verification: to determine whether the process that was followed to develop the final product is right. Test cases are executed for validation: to determine whether the product is built according to the requirements of the user. Other methods, such as reviews, when used early in the Software Development Life Cycle, provide for validation.

Independent Verification and Validation
Verification and validation is often carried out by a separate group from the development team; in this case, the process is called "Independent Verification and Validation", or IV&V.

Regulatory environment
In industries regulated by law, verification and validation must meet compliance requirements, which are often set by government agencies[2][3] or industrial administrative authorities; e.g., the FDA requires software versions and patches to be validated.[4]

Static testing
Static testing is a form of software testing where the software isn't actually used. This is in contrast to dynamic testing. It is generally not detailed testing, but checks mainly for the sanity of the code, algorithm, or document: primarily syntax checking of the code and/or manually reviewing the code or document to find errors. This type of testing can be used by the developer who wrote the code, in isolation. Code reviews, inspections and walkthroughs are also used. The people involved in static testing are application developers, testers, and business analysts. From the black box testing point of view, static testing involves reviewing requirements and specifications, with an eye toward completeness or appropriateness for the task at hand; this is the verification portion of Verification and Validation. Even static testing can be automated: a static testing test suite can consist of programs to be analyzed by an interpreter or a compiler that asserts each program's syntactic validity. Bugs discovered at this stage of development are less expensive to fix than those found later in the development cycle.[citation needed]
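As a small illustration of automating the compiler-based form of static testing mentioned above, the sketch below uses the standard javax.tools API to ask the system Java compiler to check a source file without ever running it; the file path is only an example.

import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class StaticSyntaxCheck {
    public static void main(String[] args) {
        // Returns null when running on a JRE without a bundled compiler.
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        // run(...) returns 0 when the file compiles cleanly; diagnostics go to stderr.
        int status = compiler.run(null, null, null, "src/AdderImpl.java");
        System.out.println(status == 0 ? "No syntax or type errors found."
                                       : "Static check failed; see compiler output.");
    }
}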

Dynamic testing

Dynamic testing (or dynamic analysis) is a term used in software engineering to describe the testing of the dynamic behavior of code. That is, dynamic analysis refers to the examination of the physical response from the system to variables that are not constant and change with time. In dynamic testing the software must actually be compiled and run: dynamic testing means testing based on specific test cases, by execution of the test object or running programs. Dynamic testing is used to test software through executing it; it involves working with the software, giving input values and checking whether the output is as expected. These are the validation activities. Unit tests, integration tests, system tests and acceptance tests are a few of the dynamic testing methodologies. This is in contrast to static testing.

White box testing

White box testing (a.k.a. clear box testing, glass box testing, transparent box testing, or structural testing) is a method of testing software that tests the internal structures or workings of an application, as opposed to its functionality (black box testing). An internal perspective of the system, as well as programming skills, are required and used to design test cases. The tester chooses inputs to exercise paths through the code and determine the appropriate outputs. It is analogous to testing nodes in a circuit, e.g. in-circuit testing (ICT). While white box testing can be applied at the unit, integration and system levels of the software testing process, it is usually done at the unit level. It can test paths within a unit, paths between units during integration, and between subsystems during a system-level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements. White box testing also supports code coverage measures such as function coverage and statement coverage; they both return a code coverage metric, measured as a percentage.

White box test design techniques include:

- Control flow testing
- Data flow testing
- Branch testing
- Path testing
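To illustrate one of the techniques above, here is a minimal branch-testing sketch: the hypothetical classify method has two decisions, and three inputs are chosen so that every branch outcome is exercised at least once (run with assertions enabled).

public class BranchTestingSketch {
    // Hypothetical unit under test, with two decision points.
    static String classify(int n) {
        if (n < 0) {              // decision 1
            return "negative";
        } else if (n % 2 == 0) {  // decision 2
            return "even";
        }
        return "odd";
    }

    public static void main(String[] args) {
        assert classify(-3).equals("negative"); // decision 1 true
        assert classify(4).equals("even");      // decision 1 false, decision 2 true
        assert classify(7).equals("odd");       // decision 1 false, decision 2 false
        System.out.println("Every branch outcome exercised.");
    }
}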

Unit testing

In computer programming, unit testing is a method by which individual units of source code are tested to determine if they are fit for use. A unit is the smallest testable part of an application; in procedural programming a unit may be an individual function or procedure. Unit tests are created by programmers or occasionally by white box testers, and are typically written and run by software developers to ensure that code meets its design and behaves as intended. The implementation can vary from being very manual (pencil and paper) to being formalized as part of build automation.

Benefits

The goal of unit testing is to isolate each part of the program and show that the individual parts are correct.[2] A unit test provides a strict, written contract that the piece of code must satisfy. As a result, it affords several benefits. Ideally, each test case is independent from the others: substitutes like method stubs, mock objects,[1] fakes and test harnesses can be used to assist in testing a module in isolation.

Unit tests find problems early in the development cycle.

Facilitates change

Unit testing allows the programmer to refactor code at a later date, and make sure the module still works correctly (i.e., regression testing). The procedure is to write test cases for all functions and methods, so that whenever a change causes a fault, it can be quickly identified and fixed. Readily-available unit tests make it easy for the programmer to check whether a piece of code is still working properly. In continuous unit testing environments, through the inherent practice of sustained maintenance, unit tests will continue to accurately reflect the intended use of the executable and code in the face of any change. Depending upon established development practices and unit test coverage, up-to-the-second accuracy can be maintained.

Simplifies integration

Unit testing may reduce uncertainty in the units themselves and can be used in a bottom-up testing style approach. By testing the parts of a program first and then testing the sum of its parts, integration testing becomes much easier. An elaborate hierarchy of unit tests does not, however, equal integration testing; integration testing cannot be fully automated and still relies heavily on human testers.[citation needed]

Documentation

Unit testing provides a sort of living documentation of the system. Developers looking to learn what functionality is provided by a unit, and how to use it, can look at the unit tests to gain a basic understanding of the unit API. Unit test cases embody characteristics that are critical to the success of the unit. These characteristics can indicate appropriate/inappropriate use of a unit as well as negative behaviors that are to be trapped by the unit. A unit test case, in and of itself, documents these critical characteristics, although many software development environments do not rely solely upon code to document the product in development. On the other hand, ordinary narrative documentation is more susceptible to drifting from the implementation of the program and will thus become outdated (e.g., design changes, feature creep, relaxed practices in keeping documents up-to-date).

Design

When software is developed using a test-driven approach, the unit test may take the place of formal design. Each unit test can be seen as a design element specifying classes, methods, and observable behaviour. The following Java example will help illustrate this point. Here is a test class that specifies a number of elements of the implementation: first, that there must be an interface called Adder, and an implementing class with a zero-argument constructor called AdderImpl. It goes on to assert that the Adder interface should have a method called add, with two integer parameters, which returns another integer. It also specifies the behaviour of this method for a small range of values.

public class TestAdder {
    public void testSum() {
        Adder adder = new AdderImpl();
        assert(adder.add(1, 1) == 2);
        assert(adder.add(1, 2) == 3);
        assert(adder.add(2, 2) == 4);
        assert(adder.add(0, 0) == 0);
        assert(adder.add(-1, -2) == -3);
        assert(adder.add(-1, 1) == 0);
        assert(adder.add(1234, 988) == 2222);
    }
}

In this case the unit test, having been written first, acts as a design document specifying the form and behaviour of a desired solution, but not the implementation details, which are left for the programmer. Following the "do the simplest thing that could possibly work" practice, the easiest solution that will make the test pass is shown below.

interface Adder {
    int add(int a, int b);
}

class AdderImpl implements Adder {
    public int add(int a, int b) { // must be public to implement the interface method
        return a + b;
    }
}

Unlike other diagram-based design methods, using a unit test as a design has one significant advantage: the design document (the unit test itself) can be used to verify that the implementation adheres to the design. With the unit-test design method, the tests will never pass if the developer does not implement the solution according to the design.

It is true that unit testing lacks some of the accessibility of a diagram, but UML diagrams are now easily generated for most modern languages by free tools (usually available as extensions to IDEs). Free tools, like those based on the xUnit framework, outsource the graphical rendering of a view for human consumption to another system.

Separation of interface from implementation

Because some classes may have references to other classes, testing a class can frequently spill over into testing another class. A common example of this is classes that depend on a database: in order to test the class, the tester often writes code that interacts with the database. This is a mistake, because a unit test should usually not go outside of its own class boundary, and especially should not cross such process/network boundaries, because this can introduce unacceptable performance problems to the unit test-suite. Crossing such unit boundaries turns unit tests into integration tests, and when test cases fail, it becomes less clear which component is causing the failure. Instead, the software developer should create an abstract interface around the database queries, and then implement that interface with their own mock object (a sketch follows below). By abstracting this necessary attachment from the code (temporarily reducing the net effective coupling), the independent unit can be more thoroughly tested than may have been previously achieved. This results in a higher quality unit that is also more maintainable. See also Fakes, mocks and integration tests.
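The sketch below shows the shape of this technique under invented names: CustomerStore is the abstract interface around the database queries, InMemoryCustomerStore is the hand-written stand-in (a simple fake rather than a full mocking framework), and the test never touches a real database.

// Abstract interface around the database queries.
interface CustomerStore {
    String findName(int customerId);
}

// The unit under test depends only on the interface, not on a database.
class GreetingService {
    private final CustomerStore store;

    GreetingService(CustomerStore store) {
        this.store = store;
    }

    String greet(int customerId) {
        return "Hello, " + store.findName(customerId) + "!";
    }
}

// Hand-written stand-in with canned data; no process or network boundary is crossed.
class InMemoryCustomerStore implements CustomerStore {
    public String findName(int customerId) {
        return customerId == 7 ? "Ada" : "unknown";
    }
}

public class GreetingServiceTest {
    public static void main(String[] args) {
        GreetingService service = new GreetingService(new InMemoryCustomerStore());
        assert service.greet(7).equals("Hello, Ada!"); // run with assertions enabled
        System.out.println("GreetingService tested in isolation.");
    }
}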

Unit testing limitations

Testing cannot be expected to catch every error in the program: it is impossible to evaluate every execution path in all but the most trivial programs. The same is true for unit testing. Additionally, unit testing by definition only tests the functionality of the units themselves; therefore, it will not catch integration errors or broader system-level errors (such as functions performed across multiple units, or non-functional test areas such as performance). Unit testing should be done in conjunction with other software testing activities. Like all forms of software testing, unit tests can only show the presence of errors; they cannot show the absence of errors.

Software testing is a combinatorial problem. For example, every boolean decision statement requires at least two tests: one with an outcome of "true" and one with an outcome of "false". As a result, for every line of code written, programmers often need 3 to 5 lines of test code.[3] This obviously takes time, and its investment may not be worth the effort. There are also many problems that cannot easily be tested at all, for example those that are nondeterministic or involve multiple threads. In addition, code written for a unit test is likely to be at least as buggy as the code it is testing. Fred Brooks in The Mythical Man-Month quotes: never take two chronometers to sea; always take one or three. Meaning, if two chronometers contradict, how do you know which one is correct?

To obtain the intended benefits from unit testing, rigorous discipline is needed throughout the software development process. It is essential to keep careful records, not only of the tests that have been performed, but also of all changes that have been made to the source code of this or any other unit in the software. Use of a version control system is essential: if a later version of the unit fails a particular test that it had previously passed, the version-control software can provide a list of the source code changes (if any) that have been applied to the unit since that time. It is also essential to implement a sustainable process for ensuring that test case failures are reviewed daily and addressed immediately.[4] If such a process is not implemented and ingrained into the team's workflow, the application will evolve out of sync with the unit test suite, increasing false positives and reducing the effectiveness of the test suite.

Applications

Extreme Programming

Unit testing is the cornerstone of Extreme Programming, which relies on an automated unit testing framework. This automated unit testing framework can be either third party, e.g., xUnit, or created within the development group. Extreme Programming uses the creation of unit tests for test-driven development. The developer writes a unit test that exposes either a software requirement or a defect. This test will fail because either the requirement isn't implemented yet, or because it intentionally exposes a defect in the existing code. Then, the developer writes the simplest code to make the test, along with other tests, pass.

Most code in a system is unit tested, but not necessarily all paths through the code. Extreme Programming mandates a "test everything that can possibly break" strategy, over the traditional "test every execution path" method. This leads developers to develop fewer tests than classical methods, but this isn't really a problem, more a restatement of fact, as classical methods have rarely ever been followed methodically enough for all execution paths to have been thoroughly tested.[citation needed] Extreme Programming simply recognizes that testing is rarely exhaustive (because it is often too expensive and time-consuming to be economically viable) and provides guidance on how to effectively focus limited resources.

Techniques

Unit testing is commonly automated, but may still be performed manually. The IEEE does not favor one over the other.[5] A manual approach to unit testing may employ a step-by-step instructional document. Nevertheless, the objective in unit testing is to isolate a unit and validate its correctness. Automation is efficient for achieving this, and enables the many benefits listed in this article. Conversely, if not planned carefully, a careless manual unit test case may execute as an integration test case that involves many software components, and thus preclude the achievement of most, if not all, of the goals established for unit testing.

Under the automated approach, to fully realize the effect of isolation, the unit or code body subjected to the unit test is executed within a framework outside of its natural environment, that is, outside of the product or calling context for which it was originally created. Testing in an isolated manner has the benefit of revealing unnecessary dependencies between the code being tested and other units or data spaces in the product; these dependencies can then be eliminated.

Using an automation framework, the developer codes criteria into the test to verify the correctness of the unit. During execution of the test cases, the framework logs those that fail any criterion. Many frameworks will also automatically flag these failed test cases and report them in a summary. Depending upon the severity of a failure, the framework may halt subsequent testing.

As a consequence, unit testing is traditionally a motivator for programmers to create decoupled and cohesive code bodies. This practice promotes healthy habits in software development. Design patterns, unit testing, and refactoring often work together so that the best solution may emerge.

abbreviated "I&T"[citation needed]) is the phase in software testing in which individual software modules are combined and tested as a group.Unit testing frameworks are most often third-party products that are not distributed as part of the compiler suite. adding unit tests becomes relatively easy. [edit] Language-level unit testing support Some programming languages support unit testing directly. and delivers as its output the integrated system ready for system testing. Their grammar allows the direct declaration of unit tests without importing a library (whether third party or standard). Integration testing takes as its input modules that have been unit tested. such as what is used for if and while statements.[6] In some frameworks many advanced unit test features are missing or must be hand-coded. having scant unit tests is hardly better than having none at all. Contents . They help simplify the process of unit testing. Additionally. exception handling. Languages that directly support unit testing include: y Cobra . whereas once a framework is in place. applies tests defined in an integration test plan to those aggregates. It occurs after unit testing and before system testing. the free encyclopedia Jump to: navigation.D Integration testing From Wikipedia. the boolean conditions of the unit tests can be expressed in the same syntax as boolean expressions used in non-unit test code. groups them in larger aggregates. It is generally possible to perform unit testing without the support of a specific framework by writing client code that exercises the units under test and uses assertions. search Integration testing (sometimes called Integration and Testing. or other control flow mechanisms to signal failure. Unit testing without a framework is valuable in that there is a barrier to entry for the adoption of unit testing. having been developed for a wide variety of languages.

Language-level unit testing support

Some programming languages support unit testing directly: their grammar allows the direct declaration of unit tests without importing a library (whether third party or standard). Additionally, the boolean conditions of the unit tests can be expressed in the same syntax as boolean expressions used in non-unit-test code, such as what is used for if and while statements. Languages that directly support unit testing include:

- Cobra
- D

Integration testing

Integration testing (sometimes called integration and testing, abbreviated "I&T"[citation needed]) is the phase in software testing in which individual software modules are combined and tested as a group. It occurs after unit testing and before system testing. Integration testing takes as its input modules that have been unit tested, groups them in larger aggregates, applies tests defined in an integration test plan to those aggregates, and delivers as its output the integrated system ready for system testing.

Purpose

The purpose of integration testing is to verify functional, performance, and reliability requirements placed on major design items. These "design items", i.e. assemblages (or groups of units), are exercised through their interfaces using black box testing, with success and error cases being simulated via appropriate parameter and data inputs. Simulated usage of shared data areas and inter-process communication is tested, and individual subsystems are exercised through their input interface. Test cases are constructed to test that all components within assemblages interact correctly, for example across procedure calls or process activations; this is done after testing individual modules, i.e. unit testing. The overall idea is a "building block" approach, in which verified assemblages are added to a verified base which is then used to support the integration testing of further assemblages. Some different types of integration testing are big bang, top-down, and bottom-up.

Big Bang

Main article: Big Bang (project management)

In this approach, all or most of the developed modules are coupled together to form a complete software system or major part of the system, and then used for integration testing. The Big Bang method is very effective for saving time in the integration testing process.[citation needed] However, if the test cases and their results are not recorded properly, the entire integration process will be more complicated, and may prevent the testing team from achieving the goal of integration testing.

A type of Big Bang integration testing is called Usage Model testing, which can be used in both software and hardware integration testing. The basis behind this type of integration testing is to run user-like workloads in integrated user-like environments. In doing the testing in this manner, the environment is proofed, while the individual components are proofed indirectly through their use.

Usage Model testing takes an optimistic approach to testing, because it expects to have few problems with the individual components. The strategy relies heavily on the component developers to do the isolated unit testing for their product. The goal of the strategy is to avoid redoing the testing done by the developers, and instead to flesh out problems caused by the interaction of the components in the environment. For integration testing, Usage Model testing can be more efficient, and provides better test coverage, than traditional focused functional integration testing. To be more efficient and accurate, care must be used in defining the user-like workloads for creating realistic scenarios in exercising the environment. This gives confidence that the integrated environment will work as expected for the target customers.

Top-down and Bottom-up

Main article: Top-down and bottom-up design

Bottom Up Testing is an approach to integrated testing where the lowest-level components are tested first, then used to facilitate the testing of higher-level components. The process is repeated until the component at the top of the hierarchy is tested. All the bottom or low-level modules, procedures or functions are integrated and then tested. After the integration testing of lower-level integrated modules, the next level of modules will be formed and can be used for integration testing. This approach is helpful only when all or most of the modules of the same development level are ready. This method also helps to determine the levels of software developed, and makes it easier to report testing progress in the form of a percentage. The main advantage of the Bottom-Up approach is that bugs are more easily found.[citation needed]

Top Down Testing is an approach to integrated testing where the top integrated modules are tested, and the branch of the module is tested step by step until the end of the related module. With Top-Down, it is easier to find a missing branch link.[citation needed]

Sandwich Testing is an approach that combines top-down testing with bottom-up testing. A sketch of the top-down idea follows below.
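A small, hedged sketch of top-down integration in Python: the component at the top of the hierarchy is exercised first, with a stub standing in for a lower-level module that has not yet been integrated. All names (checkout_total, price_of) are invented for the illustration.

    import unittest
    from unittest import mock


    # Top-level component under test (hypothetical); it depends on a
    # lower-level pricing component that is not yet integrated.
    def checkout_total(items, pricing):
        """Sum the prices reported by the lower-level pricing component."""
        return sum(pricing.price_of(item) for item in items)


    class TopDownCheckoutTest(unittest.TestCase):
        def test_total_with_stubbed_lower_level(self):
            # The stub answers for the missing module, so the top of the
            # hierarchy can be tested before the branch below it exists.
            pricing_stub = mock.Mock()
            pricing_stub.price_of.side_effect = lambda item: {"a": 2, "b": 3}[item]
            self.assertEqual(checkout_total(["a", "b", "a"], pricing_stub), 7)


    if __name__ == "__main__":
        unittest.main()

In a later step, the stub would be replaced by the real pricing module and the same test rerun, extending the tested branch one level down.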

Limitations

Any conditions not stated in specified integration tests, outside of the confirmation of the execution of design items, will generally not be tested.

System testing

System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic.[1]

As a rule, system testing takes, as its input, all of the "integrated" software components that have successfully passed integration testing, as well as the software system itself integrated with any applicable hardware system(s). The purpose of integration testing is to detect any inconsistencies between the software units that are integrated together (called assemblages), or between any of the assemblages and the hardware. System testing is a more limited type of testing; it seeks to detect defects both within the "inter-assemblages" and also within the system as a whole.

Testing the whole system

System testing is performed on the entire system in the context of a Functional Requirement Specification(s) (FRS) and/or a System Requirement Specification (SRS). System testing is an investigatory testing phase, where the focus is to have almost a destructive attitude;[citation needed] it tests not only the design, but also the behaviour, and even the believed expectations of the customer. It is also intended to test up to and beyond the bounds defined in the software/hardware requirements specification(s).

Types of tests to include in system testing

The following examples are different types of testing that should be considered during system testing:

- GUI software testing
- Usability testing
- Performance testing
- Compatibility testing
- Error handling testing
- Load testing
- Volume testing
- Stress testing
- Security testing
- Scalability testing
- Sanity testing
- Smoke testing
- Exploratory testing
- Ad hoc testing
- Regression testing
- Reliability testing
- Installation testing
- Maintenance testing
- Recovery testing and failover testing
- Accessibility testing, including compliance with:
  - Americans with Disabilities Act of 1990
  - Section 508 Amendment to the Rehabilitation Act of 1973
  - Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C)

Although different testing organizations may prescribe different tests as part of system testing, this list serves as a general framework or foundation to begin with.

Performance testing

Performance testing covers a broad range of engineering or functional evaluations where a material, product, system, or person is not specified by detailed material or component specifications; rather, emphasis is on the final measurable performance characteristics. Testing can be a qualitative or quantitative procedure. Performance testing can refer to the assessment of the performance of a human examinee. For example, a behind-the-wheel driving test is a performance test of whether a person is able to perform the functions of a competent driver of an automobile. In the computer industry, software performance testing is used to determine the speed or effectiveness of a computer, network, software program or device.

This process can involve quantitative tests done in a lab, such as measuring the response time or the number of MIPS (millions of instructions per second) at which a system functions. Qualitative attributes such as reliability, scalability and interoperability may also be evaluated. Performance testing is often done in conjunction with stress testing.

Examples

- Building and construction performance testing
- Fire protection (ASTM D176)
- Packaging performance (hazardous materials, dangerous goods; ASTM D4169)
- Performance Index for Tires (ASTM F538)
- Personal protective equipment performance
- Performance test (bar exam) for lawyers
- Proficiency testing
- Performance test (assessment)
- Several Defense Standards
- Software performance testing, part of software non-functional tests
- Web testing
- Wear of textiles (ASTM D123)

and many others.

Compatibility testing

Compatibility testing, part of software non-functional tests, is testing conducted on the application to evaluate the application's compatibility with the computing environment. The computing environment may contain some or all of the following elements:

- Computing capacity of the hardware platform (IBM 360, HP 9000, etc.)
- Bandwidth handling capacity of networking hardware
- Compatibility of peripherals (printer, DVD drive, etc.)
- Operating systems (MVS, UNIX, Windows, etc.)
- Databases (Oracle, Sybase, DB2, etc.)
- Other system software (web server, networking/messaging tool, etc.)
- Browser compatibility (Firefox, Netscape, Internet Explorer, Safari, etc.)
- Carrier compatibility (Verizon, Sprint, AirTel, O2, Orange, etc.)
- Backwards compatibility
- Hardware (different phones)
- Different compilers (compile the code correctly)
- Runs on multiple host/guest emulators

Browser compatibility testing can be more appropriately referred to as user experience testing. This requires that the web application be tested on different web browsers, to ensure the following:

- Users have the same visual experience irrespective of the browser through which they view the web application.
- In terms of functionality, the application must behave and respond the same way across different browsers.

Certification testing falls within the scope of compatibility testing. Product vendors run the complete suite of tests on the newer computing environment to get their application certified for a specific operating system or database.

GUI software testing

In computer science, GUI software testing is the process of testing a product that uses a graphical user interface, to ensure it meets its written specifications. This is normally done through the use of a variety of test cases.

Usability testing

Usability testing is a technique used to evaluate a product by testing it on users. This can be seen as an irreplaceable usability practice, since it gives direct input on how real users use the system.[1] This is in contrast with usability inspection methods, where experts use different methods to evaluate a user interface without involving users. Usability testing focuses on measuring a human-made product's capacity to meet its intended purpose. Examples of products that commonly benefit from usability testing are foods, consumer products, web sites or web applications, computer interfaces, documents, and devices. Usability testing measures the usability, or ease of use, of a specific object or set of objects, whereas general human-computer interaction studies attempt to formulate universal principles.

Security testing

Security testing is a process to determine that an information system protects data and maintains functionality as intended. The six basic security concepts that need to be covered by security testing are: confidentiality, integrity, authentication, authorization, availability and non-repudiation.

Confidentiality

A security measure which protects against the disclosure of information to parties other than the intended recipient. This is by no means the only way of ensuring the security of the information.

Integrity

A measure intended to allow the receiver to determine that the information which it is providing is correct. Integrity schemes often use some of the same underlying technologies as confidentiality schemes, but they usually involve adding additional information to a communication, to form the basis of an algorithmic check, rather than encoding all of the communication (a sketch follows below).
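As a hedged sketch of that idea, the snippet below appends an HMAC tag to a message using Python's standard hmac and hashlib modules; the receiver recomputes the tag as an algorithmic check, with no encryption of the message itself. The shared key and the messages are illustrative only.

    import hashlib
    import hmac

    SHARED_KEY = b"illustrative-shared-secret"  # assumed to be held by both parties


    def send(message):
        # The sender adds information (the tag) to the communication;
        # this is the basis of the receiver's integrity check.
        tag = hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()
        return message, tag


    def verify(message, tag):
        expected = hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)


    msg, tag = send(b"transfer 100")
    assert verify(msg, tag)                  # unmodified message passes the check
    assert not verify(b"transfer 999", tag)  # tampered message fails the check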

Authentication

The process of establishing the identity of the user. Authentication can take many forms, including but not limited to: passwords, biometrics, radio frequency identification, etc.

Authorization

The process of determining that a requester is allowed to receive a service or perform an operation. Access control is an example of authorization.

Availability

Assuring that information and communications services will be ready for use when expected. Information must be kept available to authorized persons when they need it.

Non-repudiation

A measure intended to prevent the later denial that an action happened, or a communication took place, etc. In communication terms this often involves the interchange of authentication information combined with some form of provable time stamp.

Installation testing

Installation testing (implementation testing) is a kind of quality assurance work in the software industry that focuses on what customers will need to do to install and set up the new software successfully. The testing process may involve full, partial or upgrade install/uninstall processes, and is typically done by the software testing engineer in conjunction with the configuration manager. Implementation testing is usually defined as testing which places a compiled version of code into the testing or pre-production environment, from which it may or may not progress into production. This generally takes place outside of the software development environment, to limit code corruption from other future releases which may reside on the development network.

The simplest installation approach is to run an install program, sometimes called package software. This package software typically uses a setup program which acts as a multi-configuration wrapper, and which may allow the software to be installed on a variety of machine and/or operating environments. Every possible configuration should receive an appropriate level of testing, so that it can be released to customers with confidence.

In distributed systems, particularly where software is to be released into an already live target environment (such as an operational website), installation (or software deployment, as it is sometimes called) can involve database schema changes as well as the installation of new software. Deployment plans in such circumstances may include back-out procedures, whose use is intended to roll the target environment back if the deployment is unsuccessful.

A factor that can increase the organizational requirements of such an exercise is the need to synchronize the data in the test deployment environment with that in the live environment, with minimum disruption to live operation. Ideally, the deployment plan itself should be tested in an environment that is a replica of the live environment. This type of implementation may include testing of the processes which take place during the installation or upgrade of a multi-tier application. This type of testing is commonly compared to a dress rehearsal, and may even be called a "dry run".

Regression testing

Regression testing is any type of software testing that seeks to uncover software errors by partially retesting a modified program. The intent of regression testing is to provide a general assurance that no additional errors were introduced in the process of fixing other problems. Common methods of regression testing include rerunning previously run tests and checking whether previously fixed faults have re-emerged. Regression testing is commonly used to test the system efficiently, by systematically selecting the appropriate minimum suite of tests needed to adequately cover the affected change. "One of the main reasons for regression testing is that it's often extremely difficult for a programmer to figure out how a change in one part of the software will echo in other parts of the software."[1]
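A minimal sketch of the "previously fixed faults" method in Python's unittest: one test pins a hypothetical, already-fixed defect so that rerunning the suite after any modification detects a re-emergence. The parse_amount function and the defect it encodes are invented for the example.

    import unittest


    def parse_amount(text):
        """Parse '1,234.56'-style amounts; the comma handling below is
        the (hypothetical) fix for a previously reported defect."""
        return float(text.replace(",", ""))


    class RegressionTests(unittest.TestCase):
        def test_comma_grouped_amounts(self):
            # Pins the old failure: before the fix, '1,234.5' raised
            # ValueError. Rerunning this test with every change checks
            # that the previously fixed fault has not re-emerged.
            self.assertEqual(parse_amount("1,234.5"), 1234.5)

        def test_plain_amounts_still_work(self):
            self.assertEqual(parse_amount("42"), 42.0)


    if __name__ == "__main__":
        unittest.main()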

Black-box testing

Black-box testing is a method of testing software that tests the functionality of an application, as opposed to its internal structures or workings (see white-box testing). Specific knowledge of the application's code/internal structure, and programming knowledge in general, is not required. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. The test designer selects valid and invalid inputs and determines the correct output; there is no knowledge of the test object's internal structure. Black-box testing uses external descriptions of the software, including specifications, requirements, and design, to derive test cases. These tests can be functional or non-functional, though usually functional. This method of test can be applied to all levels of software testing: unit, integration, system and acceptance. It typically comprises most if not all testing at higher levels, but can also dominate unit testing as well.

Acceptance testing

In engineering and its various subdisciplines, acceptance testing is black-box testing performed on a system (e.g. software, lots of manufactured mechanical parts, or batches of chemical products) prior to its delivery.[citation needed] It is also known as functional testing, black-box testing, release acceptance, QA testing, application testing, confidence testing, final testing, validation testing, or factory acceptance testing. In software development, acceptance testing by the system provider is often distinguished from acceptance testing by the customer (the user or client) prior to accepting transfer of ownership. In such environments, acceptance testing performed by the customer is known as user acceptance testing (UAT). This is also known as end-user testing, site (acceptance) testing, or field (acceptance) testing. A smoke test is used as an acceptance test prior to introducing a build to the main testing process.

Overview

Acceptance testing generally involves running a suite of tests on the completed system. Each individual test, known as a case, exercises a particular operating condition of the user's environment or feature of the system, and will result in a pass or fail (boolean) outcome. There is generally no degree of success or failure. The test environment is usually designed to be identical, or as close as possible, to the anticipated user's environment, including extremes of such. These test cases must each be accompanied by test case input data, or a formal description of the operational activities to be performed (or both), intended to thoroughly exercise the specific case, and a formal description of the expected results.

Acceptance tests/criteria (in agile software development) are usually created by business customers and expressed in a business domain language. These are high-level tests to test the completeness of a user story, or stories, 'played' during any sprint/iteration. The tests are ideally created through collaboration between business customers, business analysts, testers and developers; however, the business customers (product owners) are the primary owners of these tests. As the user stories pass their acceptance criteria, the business owners can be sure that the developers are progressing in the right direction regarding how the application was envisaged to work, so it is essential that these tests include both business logic tests and UI validation elements (if need be).

Acceptance test cards are ideally created during sprint planning or iteration planning meetings, before development begins, so that the developers have a clear idea of what to develop. Sometimes (due to bad planning!) acceptance tests may span multiple stories (that are not implemented in the same sprint), and there are different ways to test them out during actual sprints. One popular technique is to mock external interfaces or data to mimic other stories which might not be played out during an iteration (as those stories may have been relatively lower business priority). A user story is not considered complete until the acceptance tests have passed.
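The sketch below illustrates, with invented names and an invented business rule, the case structure described above: each case pairs input data with a formal description of the expected result, and each case's outcome is strictly pass or fail.

    # Hypothetical system under test: a shipping-cost business rule.
    def shipping_cost(weight_kg):
        return 5.0 if weight_kg <= 1 else 5.0 + 2.0 * (weight_kg - 1)


    # Each acceptance case: operational input plus the expected result.
    CASES = [
        {"input": 0.5, "expected": 5.0},  # light parcel, flat rate
        {"input": 3.0, "expected": 9.0},  # heavier parcel, per-kg surcharge
    ]


    def run_suite():
        # A case either matches its expected result or it does not;
        # the suite passes only if every case matches.
        failures = [c for c in CASES if shipping_cost(c["input"]) != c["expected"]]
        return len(failures) == 0, failures


    ok, failed = run_suite()
    print("suite passed" if ok else f"failed cases: {failed}")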

Process

The acceptance test suite is run against the supplied input data, or using an acceptance test script to direct the testers. Then the results obtained are compared with the expected results. If there is a correct match for every case, the test suite is said to pass; if not, the system may either be rejected, or accepted on conditions previously agreed between the sponsor and the manufacturer. The objective is to provide confidence that the delivered system meets the business requirements of both sponsors and users. The acceptance phase may also act as the final quality gateway, where any quality defects not previously detected may be uncovered. A principal purpose of acceptance testing is that, once completed successfully, and provided certain additional (contractually agreed) acceptance criteria are met, the sponsors will then sign off on the system as satisfying the contract (previously agreed between sponsor and manufacturer), and deliver final payment.

User acceptance testing

User Acceptance Testing (UAT) is a process to obtain confirmation by a Subject Matter Expert (SME), preferably the owner or client of the object under test, through trial or review, that a system meets mutually agreed-upon requirements. In software development, UAT is one of the final stages of a project, and often occurs before a client or customer accepts the new system.

Users of the system perform these tests, which developers derive from the client's contract or the user requirements specification. Test designers draw up formal tests and devise a range of severity levels. It is preferable that the designer of the user acceptance tests not be the creator of the formal integration and system test cases for the same system. The UAT acts as a final verification of the required business function and proper functioning of the system, emulating real-world usage conditions on behalf of the paying client or a specific large customer. If the software works as intended and without issues during normal use, one can reasonably infer the same level of stability in production.

These tests, which are usually performed by clients or end-users, are not usually focused on identifying simple problems such as spelling errors and cosmetic problems, nor on show-stopper defects such as software crashes; testers and developers previously identify and fix these issues during the earlier unit testing, integration testing, and system testing phases.

The results of these tests give confidence to the clients as to how the system will perform in production. There may also be legal or contractual requirements for acceptance of the system.

Q-UAT: Quantified User Acceptance Testing

Quantified User Acceptance Testing (Q-UAT or, more simply, the Quantified Approach) is a revised business acceptance testing process which aims to provide a smarter and faster alternative to the traditional UAT phase. Depth-testing is carried out against business requirements only at specific planned points in the application or service under test. A reliance on better-quality code delivery from the development/build phase is assumed, and a complete understanding of the appropriate business process is a pre-requisite. The approach is based on a 'gated' 3-dimensional model, the key concepts of which are:

- Linear Testing (LT, the 1st dimension)
- Recursive Testing (RT, the 2nd dimension)
- Adaptive Testing (AT, the 3rd dimension)

The four 'gates' which conjoin and support the 3-dimensional model act as quality safeguards, and include contemporary testing concepts such as:

- Internal Consistency Checks (ICS)
- Major Systems/Services Checks (MSC)
- Realtime/Reactive Regression (RTR)

This methodology, if carried out correctly, results in a quick turnaround against plan, a decreased number of test scenarios which are more complex and wider in breadth than traditional UAT, and ultimately the equivalent confidence level attained via a shorter delivery window, allowing products/changes to be brought to market quicker. The Quantified Approach was shaped by the former "guerilla" method of acceptance testing, which was itself a response to testing phases that proved too costly to be sustainable for many small/medium-scale projects.[citation needed]

Acceptance testing in Extreme Programming

Acceptance testing is a term used in agile software development methodologies, particularly Extreme Programming, referring to the functional testing of a user story by the software development team during the implementation phase.

The customer specifies scenarios to test when a user story has been correctly implemented. A story can have one or many acceptance tests, whatever it takes to ensure the functionality works. Acceptance tests are black box system tests. Each acceptance test represents some expected result from the system. Customers are responsible for verifying the correctness of the acceptance tests, and for reviewing test scores to decide which failed tests are of highest priority. Acceptance tests are also used as regression tests prior to a production release. A user story is not considered complete until it has passed its acceptance tests. This means that new acceptance tests must be created for each iteration, or the development team will report zero progress.[2]

Types of acceptance testing

Typical types of acceptance testing include the following:

User acceptance testing: This may include factory acceptance testing, i.e. the testing done by factory users before the factory is moved to its own site, after which site acceptance testing may be performed by the users at the site.

Operational acceptance testing: Also known as operational readiness testing, this refers to the checking done to a system to ensure that processes and procedures are in place to allow the system to be used and maintained. This may include checks done to back-up facilities, procedures for disaster recovery, training for end users, maintenance procedures, and security procedures.

Contract and regulation acceptance testing: In contract acceptance testing, a system is tested against acceptance criteria as documented in a contract, before the system is accepted. In regulation acceptance testing, a system is tested to ensure it meets governmental, legal and safety standards.

Alpha and beta testing: Alpha testing takes place at developers' sites, and involves testing of the operational system by internal staff, before it is released to external customers. Beta testing takes place at customers' sites, and involves testing by a group of customers who use the system at their own locations and provide feedback, before the system is released to other customers. The latter is often called field testing.

List of development to production (testing) environments

- DEV: Development Environment [1]
- DTE: Development Testing Environment
- QA: Quality Assurance (Testing Environment) [2]
- DIT: Development Integration Testing
- DST: Development System Testing
- SIT: System Integration Testing
- UAT: User Acceptance Testing [3]
- OAT: Operations Acceptance Testing
- PROD: Production Environment [4]

[1-4] Usual development environment stages in medium-sized development projects.

List of acceptance testing frameworks

- Framework for Integrated Test (Fit)
- FitNesse, a fork of Fit
- ItsNat, a Java AJAX web framework with built-in, server-based, functional web testing capabilities
- Selenium (software)
- iMacros
- Ranorex
- Watir
- Test Automation FX

System integration testing

System Integration Testing (SIT), in the context of software systems and software engineering, is a testing process that exercises a software system's coexistence with others. Systems integration testing is a testing phase that may occur after unit testing and prior to user acceptance testing (UAT). System integration testing takes multiple integrated systems that have passed system testing as input, and tests their required interactions. Following this process, the deliverable systems are passed on to acceptance testing. Many organizations do not have a SIT phase, and the first test of UAT may include the first integrated test of all software components.

Spiral model

Spiral model (Boehm, 1988)

The spiral model is a software development process combining elements of both design and prototyping-in-stages, in an effort to combine advantages of top-down and bottom-up concepts. Also known as the spiral lifecycle model (or spiral development), it is a systems development method (SDM) used in information technology (IT). This model of development combines the features of the prototyping model and the waterfall model. The spiral model is intended for large, expensive and complicated projects.

History

The spiral model was defined by Barry Boehm in his 1986 article "A Spiral Model of Software Development and Enhancement".[1] This model was not the first to discuss iterative development, but it was the first to explain why the iteration matters.[citation needed] As originally envisioned, the iterations were typically 6 months to 2 years long. Each phase starts with a design goal and ends with the client (who may be internal) reviewing the progress thus far. Analysis and engineering efforts are applied at each phase of the project, with an eye toward the end goal of the project.

Steps

The steps in the spiral model iteration can be generalized as follows:

1. The system requirements are defined in as much detail as possible. This usually involves interviewing a number of users representing all the external or internal users and other aspects of the existing system.

2. A preliminary design is created for the new system. This phase is the most important part of the spiral model, and has been added specially in order to identify and resolve all the possible risks in the project development. In this phase, all possible (and available) alternatives which can help in developing a cost-effective project are analyzed, and strategies to use them are decided. If risks indicate any kind of uncertainty in requirements, prototyping may be used to proceed with the available data and find out a possible solution, in order to deal with the potential changes in the requirements.

3. A first prototype of the new system is constructed from the preliminary design. This is usually a scaled-down system, and represents an approximation of the characteristics of the final product.

4. A second prototype is evolved by a fourfold procedure: (1) evaluating the first prototype in terms of its strengths, weaknesses, and risks; (2) defining the requirements of the second prototype; (3) planning and designing the second prototype; (4) constructing and testing the second prototype.

Applications

Game development is a main area where the spiral model is used and needed, because of the size and the constantly shifting goals of those large projects. The US military has adopted the spiral model for its Future Combat Systems (FCS) program. The FCS project was canceled in May 2009, after six years (2003-2009); it had a 2-year iteration (spiral), and should have resulted in 3 consecutive prototypes (one prototype per spiral, every 2 years). The spiral model thus may suit small (up to $3M) software applications, and not complicated ($3B) distributed, interoperable, systems of systems.

The spiral model is mostly used in large projects; for smaller projects, the concept of agile software development is becoming a viable alternative. It is also reasonable to use the spiral model in projects where business goals are unstable, but the architecture must be realized well enough to provide high loading and stress ability. For example, Spiral Architecture Driven Development is a spiral-based SDLC which shows a possible way to reduce the risk of an ineffective architecture, with the help of the spiral model used in conjunction with the best practices from other models.[2]

Agile software development


Agile software development is a group of software development methodologies based on iterative and incremental development, where requirements and solutions evolve through collaboration between self-organizing, cross-functional teams. The Agile Manifesto[1] introduced the term in 2001. Proponents of agile methods believe that they promote a disciplined project management process for software development that encourages frequent inspection and adaptation; teamwork, self-organization and accountability; a set of engineering best practices for rapid delivery of high-quality software; and a business approach that aligns development with customer needs and company goals.

History

Predecessors

Jeff Sutherland, one of the developers of the Scrum agile software development process

Incremental software development methods have been traced back to 1957.[2] In 1974, a paper by E. A. Edmonds introduced an adaptive software development process.[3]

So-called "lightweight" software development methods evolved in the mid-1990s as a reaction against so-called "heavyweight" methods, which were characterized by their critics as a heavily regulated, regimented, micromanaged waterfall model of development. Proponents of lightweight methods, and now agile methods, contend that they are a return to development practices from early in the history of software development.[2] Early implementations of lightweight methods include Scrum (1995), Crystal Clear, Extreme Programming (1996), Adaptive Software Development, Feature Driven Development, and Dynamic Systems Development Method (DSDM) (1995). These are now typically referred to as agile methodologies, after the Agile Manifesto published in 2001.[4]

Agile Manifesto

In February 2001, 17 software developers[5] met at a ski resort in Snowbird, Utah, to discuss lightweight development methods. They published the Manifesto for Agile Software Development[1] to define the approach now known as agile software development. Some of the manifesto's authors formed the Agile Alliance, a non-profit organization that promotes software development according to the manifesto's principles.

The Agile Manifesto reads, in its entirety, as follows:[1]

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

Twelve principles underlie the Agile Manifesto, including:[6]

- Customer satisfaction by rapid, continuous delivery of useful software
- Working software is delivered frequently (weeks rather than months)
- Working software is the principal measure of progress
- Even late changes in requirements are welcomed
- Close, daily cooperation between business people and developers
- Face-to-face conversation is the best form of communication (co-location)
- Projects are built around motivated individuals, who should be trusted
- Continuous attention to technical excellence and good design
- Simplicity
- Self-organizing teams

- Regular adaptation to changing circumstances

In 2005, a group headed by Alistair Cockburn and Jim Highsmith wrote an addendum of project management principles, the Declaration of Interdependence,[7] to guide software project management according to agile development methods.

Common characteristics

Pair programming, an agile development technique

There are many specific agile development methods. Most promote development, teamwork, collaboration, and process adaptability throughout the life-cycle of the project.

Agile methods break tasks into small increments with minimal planning, and do not directly involve long-term planning. Iterations are short time frames (timeboxes) that typically last from one to four weeks. Each iteration involves a team working through a full software development cycle, including planning, requirements analysis, design, coding, unit testing, and acceptance testing when a working product is demonstrated to stakeholders. This helps minimize overall risk, and lets the project adapt to changes quickly. Stakeholders produce documentation as required. An iteration may not add enough functionality to warrant a market release, but the goal is to have an available release (with minimal bugs) at the end of each iteration.[8] Multiple iterations may be required to release a product or new features.

Team composition in an agile project is usually cross-functional and self-organizing, without consideration for any existing corporate hierarchy or the corporate roles of team members. Team members normally take responsibility for tasks that deliver the functionality an iteration requires, and decide individually how to meet an iteration's requirements.

Agile methods emphasize face-to-face communication over written documents when the team is all in the same location. When a team works in different locations, they maintain daily contact through videoconferencing, voice, e-mail, etc.

No matter what development disciplines are required, each agile team will contain a customer representative. This person is appointed by stakeholders to act on their behalf, and makes a personal commitment to being available for developers to answer mid-iteration problem-domain questions. At the end of each iteration, stakeholders and the customer representative review progress and re-evaluate priorities, with a view to optimizing the return on investment and ensuring alignment with customer needs and company goals.

Most agile implementations use a routine and formal daily face-to-face communication among team members. In a brief session, team members report to each other what they did the previous day, what they intend to do today, and what their roadblocks are. This standing face-to-face communication prevents problems from being hidden. These sessions specifically include the customer representative and any interested stakeholders as observers.

Agile methods emphasize working software as the primary measure of progress. This, combined with the preference for face-to-face communication, produces less written documentation than other methods. The agile method encourages stakeholders to prioritize wants with other iteration outcomes, based exclusively on business value perceived at the beginning of the iteration.

Specific tools and techniques, such as continuous integration, automated or xUnit tests, pair programming, test-driven development, design patterns, domain-driven design, code refactoring and other techniques, are often used to improve quality and enhance project agility.

Most agile teams work in a single open office (called a bullpen), which facilitates such communication. Team size is typically small (5-9 people), to help make team communication and team collaboration easier. Larger development efforts may be delivered by multiple teams working toward a common goal or on different parts of an effort. This may also require a coordination of priorities across teams.

Comparison with other methods

Agile methods are sometimes characterized as being at the opposite end of the spectrum from "plan-driven" or "disciplined" methods. This distinction is misleading, as it implies that agile methods are "unplanned" or "undisciplined".

A more accurate distinction is that methods exist on a continuum from "adaptive" to "predictive".[9] Agile methods lie on the "adaptive" side of this continuum.

Adaptive methods focus on adapting quickly to changing realities. When the needs of a project change, an adaptive team changes as well. An adaptive team will have difficulty describing exactly what will happen in the future: the further away a date is, the more vague an adaptive method will be about what will happen on that date. An adaptive team can report exactly what tasks are being done next week, but only which features are planned for next month. When asked about a release six months from now, an adaptive team may only be able to report the mission statement for the release, or a statement of expected value vs. cost.

Predictive methods, in contrast, focus on planning the future in detail. A predictive team can report exactly what features and tasks are planned for the entire length of the development process. Predictive teams have difficulty changing direction: the plan is typically optimized for the original destination, and changing direction can cause completed work to be thrown away and done over differently. Predictive teams will often institute a change control board, to ensure that only the most valuable changes are considered.

Formal methods, in contrast to adaptive and predictive methods, focus on computer science theory, with a wide array of types of provers. A formal method attempts to prove the absence of errors with some level of determinism. Some formal methods are based on model checking, and provide counter-examples for code that cannot be proven. Generally, mathematical models (often supported through special languages; see SPIN model checker) map to assertions about requirements. Formal methods are heavily dependent on a tool-driven approach, and may be combined with other development approaches. Some provers do not easily scale. Like agile methods, manifestos relevant to high-integrity software have been proposed in Crosstalk.

Agile methods have much in common with the "Rapid Application Development" techniques from the 1980s/90s, as espoused by James Martin and others. Further, agile teams may employ very highly disciplined formal methods.

Other iterative development methods

Most agile methods share other iterative and incremental development methods' emphasis on building releasable software in short time periods. Agile development differs from other development models in that time periods are measured in weeks rather than months, and work is performed in a highly collaborative manner. Most agile methods also differ by treating their time period as a timebox.

Cowboy coding

Cowboy coding is a derogatory term for software development without a defined or structured method, where team members do whatever they feel is right. The Agile approach is sometimes confused with cowboy coding due to its frequent re-evaluation of plans, emphasis on face-to-face communication, and relatively sparse use of documentation. However, agile teams follow clearly defined, even rigid, processes and controls (e.g. deadlines for completion of coding/testing); it is likely the flexibility and adaptability of the overall methodology that causes the confusion. Agile controls offer stronger levels of accountability; the degradation of such controls or procedures can lead to activities that are often categorized as cowboy coding.

Method tailoring

In the literature, different terms refer to the notion of method adaptation, including 'method tailoring', 'method fragment adaptation' and 'situational method engineering'. Method tailoring is defined as: a process or capability in which human agents, through responsive changes in, and dynamic interplays between, contexts, intentions, and method fragments, determine a system development approach for a specific project situation.[10]

Potentially, almost all agile methods are suitable for method tailoring. Even the DSDM method is being used for this purpose, and has been successfully tailored in a CMM context.[11] Situation-appropriateness can be considered as a distinguishing characteristic between agile methods and traditional software development methods, with the latter being relatively much more rigid and prescriptive. The practical implication is that agile methods allow project teams to adapt working practices according to the needs of individual projects. Practices are concrete activities and products that are part of a method framework. At a more extreme level, the philosophy behind the method, consisting of a number of principles, could be adapted (Aydin, 2004).[12]

Extreme Programming (XP) makes the need for method adaptation explicit. One of the fundamental ideas of XP is that no one process fits every project; rather, practices should be tailored to the needs of individual projects. Partial adoption of XP practices, as suggested by Beck, has been reported on several occasions.[13] A tailoring practice proposed by Mehdi Mirakhorli provides a roadmap and guideline for adapting all the practices. RDP Practice is designed for customizing XP. This practice was first proposed as a long research paper in the APSO workshop at the ICSE 2008 conference, and is as yet the only proposed and applicable method for customizing XP. Although it is specifically a solution for XP, this practice has the capability of extending to other methodologies. At first glance, this practice seems to belong in the category of static method adaptation, but experience with RDP Practice says that it can be treated like dynamic method adaptation.

The distinction between static method adaptation and dynamic method adaptation is subtle.[13] The key assumption behind static method adaptation is that the project context is given at the start of a project and remains fixed during project execution. The result is a static definition of the project context. Given such a definition, route maps can be used in order to determine which structured method fragments should be used for that particular project, based on predefined sets of criteria. Dynamic method adaptation, in contrast, assumes that projects are situated in an emergent context. An emergent context implies that a project has to deal with emergent factors that affect relevant conditions but are not predictable. This also means that a project context is not fixed, but changes during project execution (Aydin et al., 2005). In such a case, prescriptive route maps are not appropriate. The practical implication of dynamic method adaptation is that project managers often have to modify structured fragments, or even innovate new fragments, during the execution of a project.

Methods

Well-known agile software development methods include:

- Agile Modeling
- Agile Unified Process (AUP)
- Dynamic Systems Development Method (DSDM)
- Essential Unified Process (EssUP)
- Extreme Programming (XP)
- Feature Driven Development (FDD)
- Open Unified Process (OpenUP)
- Scrum

Measuring agility

While agility can be seen as a means to an end, a number of approaches have been proposed to quantify agility. Agility Index Measurements (AIM)[14] score projects against a number of agility factors to achieve a total. The similarly named Agility Measurement Index[15] scores developments against five dimensions of a software project (duration, risk, novelty, effort, and interaction). Other techniques are based on measurable goals.[16] Another study, using fuzzy mathematics,[17] has suggested that project velocity can be used as a metric of agility. There are agile self-assessments to determine whether a team is using agile practices (Nokia test,[18] Karlskrona test,[19] 42 points test[20]). While such approaches have been proposed to measure agility, the practical application of such metrics has yet to be seen.

Experience and reception

One of the early studies reporting gains in quality, productivity, and business satisfaction by using agile methods was a survey conducted by Shine Technologies from November 2002 to January 2003.[21] A similar survey conducted in 2006 by Scott Ambler, the Practice Leader for Agile Development with IBM Rational's Methods Group, reported similar benefits.[22] In a survey conducted by VersionOne in 2008, 55% of respondents answered that agile methods had been successful in 90-100% of cases.[23] Others claim that agile development methods are still too young to require extensive academic proof of their success.[24]

Suitability

Large-scale agile software development remains an active research area.[25][26] Agile development has been widely documented (see Experience reports below, as well as Beck[27] pg. 157, and Boehm and Turner[28]) as working well for small (<10 developers) co-located teams.

Some things that can negatively impact the success of an agile project are:

- Large-scale development efforts (>20 developers), though scaling strategies[29][dead link] and evidence to the contrary[30] have been described.
- Distributed development efforts (non-co-located teams); strategies have been described in Bridging the Distance[31][dead link] and Using an Agile Software Process with Offshore Development.[32]
- Forcing an agile process on a development team.[33]
- Mission-critical systems where failure is not an option at any cost (software for surgical procedures).[citation needed]

Barry Boehm and Richard Turner suggest that risk analysis be used to choose between adaptive ("agile") and predictive ("plan-driven") methods.[28] The authors suggest that each side of the continuum has its own home ground, as follows:

Agile home ground:[28]
- Low criticality
- Senior developers
- Requirements change often
- Small number of developers
- Culture that thrives on chaos

Plan-driven home ground:[28]
- High criticality
- Junior developers
- Requirements do not change often
- Large number of developers
- Culture that demands order

Formal methods:
- Extreme criticality
- Senior developers
- Limited requirements, limited features (see Wirth's law)
- Requirements that can be modeled
- Extreme quality

Several successful large-scale agile projects have been documented. BT has had several hundred developers situated in the UK, Ireland and India, working collaboratively on projects and using agile methods.[where?]

Experience reports

Agile development has been the subject of several conferences. Some of these conferences have had academic backing and included peer-reviewed papers, including a peer-reviewed experience report track. The experience reports share industry experiences with agile software development. As of 2006, experience reports have been or will be presented at the following conferences:

- XP (2000,[34] 2001, 2002, 2003, 2004, 2005, 2006,[35] 2010[36])

- XP (2000,[34] 2001, 2002, 2003, 2004, 2005, 2006,[35] 2010[36])
- XP Universe (2001[37])
- XP/Agile Universe (2002,[38] 2003,[39] 2004[40])
- Agile Development Conference,[41] 2003 to present (peer-reviewed; proceedings published by IEEE)

Software development process

From Wikipedia, the free encyclopedia (Redirected from Software development life cycle)

A software development process is a structure imposed on the development of a software product. Similar terms include software life cycle and software process. There are several models for such processes, each describing approaches to a variety of tasks or activities that take place during the process. Some people consider a lifecycle model a more general term and a software development process a more specific term. For example, there are many specific software development processes that 'fit' the spiral lifecycle model.

Overview

The large and growing body of software development organizations implement process methodologies. Many of them are in the defense industry, which in the U.S. requires a rating based on 'process models' to obtain contracts. The international standard for describing the method of selecting, implementing and monitoring the life cycle for software is ISO 12207.

A decades-long goal has been to find repeatable, predictable processes that improve productivity and quality. Some try to systematize or formalize the seemingly unruly task of writing software. Others apply project management techniques to writing software. Without project management, software projects can easily be delivered late or over budget. With large numbers of software projects not meeting their expectations in terms of functionality, cost, or delivery schedule, effective project management appears to be lacking.

Organizations may create a Software Engineering Process Group (SEPG), which is the focal point for process improvement. Composed of line practitioners who have varied skills, the group is at the center of the collaborative effort of everyone in the organization who is involved with software engineering process improvement.

Software development activities

The activities of the software development process represented in the waterfall model. There are several other models to represent this process.

Planning

The important task in creating a software product is extracting the requirements, or requirements analysis. Customers typically have an abstract idea of what they want as an end result, but not what software should do. Incomplete, ambiguous, or even contradictory requirements are recognized by skilled and experienced software engineers at this point. Frequently demonstrating live code may help reduce the risk that the requirements are incorrect.

Once the general requirements are gathered from the client, an analysis of the scope of the development should be determined and clearly stated. This is often called a scope document. Certain functionality may be out of scope of the project as a function of cost or as a result of unclear requirements at the start of development. If the development is done externally, this document can be considered a legal document so that if there are ever disputes, any ambiguity of what was promised to the client can be clarified.

Implementation, testing and documenting

Implementation is the part of the process where software engineers actually program the code for the project. Software testing is an integral and important part of the software development process; this part of the process ensures that defects are recognized as early as possible. Documenting the internal design of software for the purpose of future maintenance and enhancement is done throughout development. This may also include the writing of an API, be it external or internal. It is very important to document everything in the project.

Deployment and maintenance

Deployment starts after the code is appropriately tested, approved for release, and sold or otherwise distributed into a production environment.

Software training and support is important, and a lot of developers fail to realize that. It would not matter how much time and planning a development team puts into creating software if nobody in the organization ends up using it. People are often resistant to change and avoid venturing into an unfamiliar area, so as a part of the deployment phase it is very important to have training classes for new clients of your software.

Maintaining and enhancing software to cope with newly discovered problems or new requirements (software maintenance) can take far more time than the initial development of the software.

It may be necessary to add code that does not fit the original design to correct an unforeseen problem, or it may be that a customer is requesting more functionality and code can be added to accommodate their requests. If the labor cost of the maintenance phase exceeds 25% of the prior phases' labor cost, then it is likely that the overall quality of at least one prior phase is poor.[citation needed] In that case, management should consider the option of rebuilding the system (or portions of it) before maintenance costs get out of control. Bug Tracking System tools are often deployed at this stage of the process to allow development teams to interface with customer/field teams testing the software and to identify any real or perceived issues. These software tools, both open source and commercially licensed, provide a customizable process to acquire, review, acknowledge, and respond to reported issues.

Load testing

From Wikipedia, the free encyclopedia

Load testing is the process of putting demand on a system or device and measuring its response. When the load placed on the system is raised beyond normal usage patterns in order to test the system's response at unusually high or peak loads, it is known as stress testing. The load is usually so great that error conditions are the expected result, although no clear boundary exists when an activity ceases to be a load test and becomes a stress test. There is little agreement on what the specific goals of load testing are. The term is often used synonymously with software performance testing, reliability testing, and volume testing.

Software load testing

Main article: Software performance testing

The term load testing is used in different ways in the professional software testing community. Load testing generally refers to the practice of modelling the expected usage of a software program by simulating multiple users accessing the program concurrently. As such, this testing is most relevant for multi-user systems, often ones built using a client/server model, such as web servers. However, other types of software systems can also be load tested. For example, a word processor or graphics editor can be forced to read an extremely large document, or a financial package can be forced to generate a report based on several years' worth of data. The most accurate load testing occurs with actual, rather than theoretical, results.

Load and performance testing tests software intended for a multi-user audience for the desired performance by subjecting it to an equal number of virtual users and then monitoring the performance under the specified load, usually in a test environment identical to the production environment, before going live. This is sometimes also referred to as non-functional testing. For example, if a web site with a shopping cart is intended for 100 concurrent users who are doing the following functions:

- 25 Virtual Users (VUsers) are browsing through the items and logging off
- 25 VUsers are adding items to the shopping cart, checking out, and logging off
- 25 VUsers are returning items previously purchased and logging off
- 25 VUsers are just logged in without any activity

Using the various tools available to generate these VUsers, the application is subjected to a 100-VUser load as shown above and its performance is monitored. In the list above, for instance, each of the 25 virtual users in item 1 could be browsing through unique items that the other virtual users will not browse through. The criteria for passing or failing a test (pass/fail criteria) are different for each individual organization, and there are no standards, across the board, for what an acceptable criterion should be. A rough sketch of this virtual-user approach follows.
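
The following Python sketch (not from the original article) illustrates the virtual-user idea under stated assumptions: the target URL is a hypothetical stand-in for a real shopping-cart site, a thread pool plays the 100 virtual users, and a single HTTP GET stands in for each scripted journey, where a real tool would replay the full protocol-level conversation.

# Minimal sketch of a protocol-level "virtual user" load test.
# Assumptions (not from the article): the target URL and the four
# user journeys are hypothetical stand-ins for a real application.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

BASE_URL = "http://example.test/shop"   # hypothetical target

def virtual_user(journey: str) -> float:
    """One VUser performs a journey and returns its response time."""
    start = time.monotonic()
    try:
        # A real tool would send the login/browse/checkout hypertext here;
        # one GET stands in for the whole journey in this sketch.
        urlopen(f"{BASE_URL}?journey={journey}", timeout=10).read()
    except OSError:
        pass  # under heavy load, errors are an expected outcome
    return time.monotonic() - start

def run_load_test() -> None:
    # 100 concurrent VUsers split across the four journeys above.
    journeys = (["browse"] * 25 + ["add_to_cart"] * 25 +
                ["return_items"] * 25 + ["idle"] * 25)
    with ThreadPoolExecutor(max_workers=100) as pool:
        timings = list(pool.map(virtual_user, journeys))
    print(f"mean response time: {sum(timings) / len(timings):.3f}s")

if __name__ == "__main__":
    run_load_test()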

It is a common misconception that load testing tools are record-and-playback tools like regression testing tools. In fact, load testing tools work at the protocol level, whereas most regression testing tools work at the GUI object level. For example, a regression testing tool will simulate a mouse click on an OK button on the browser, but a load testing tool will send out the hypertext that the browser sends after the user clicks the OK button, and again it will send out the hypertext for multiple users, each having a unique login ID and password.

Various tools are also available to find out the causes of slow performance, which could be in the following areas:

- Application
- Database
- Network
- Client-side processing
- Load balancer

With the popularity of the web, especially the acceptance of business-to-business (B2B) applications, more and more applications have moved to Web Services, where information is exchanged without a browser interface. Often there are Service Level Agreements (SLAs) involved, and there are large penalties if the SLAs are not met. An example of a typical service level would be a travel agency inquiring of an airline's Web Service about the availability of tickets for a particular flight from Chicago to Dallas; the airline would be bound by the SLA to respond within 5 seconds.

Popular load testing tools

- OpenSTA ('Open System Testing Architecture'): open source web load/stress testing application, licensed under the GNU GPL. Utilizes a distributed software architecture based on CORBA. OpenSTA binaries are available for Windows.
- IBM Rational Performance Tester (IBM): Eclipse-based large-scale performance testing tool, primarily used for executing large-volume performance tests to measure system response time for server-based applications. Licensed.
- JMeter (an Apache Jakarta open source project): Java desktop application for load testing and performance measurement.
- LoadRunner (HP): performance testing tool primarily used for executing large numbers of tests (or a large number of virtual users) concurrently. Can be used for unit and integration testing as well. Licensed.
- SilkPerformer (Micro Focus): performance testing in an open and sharable model which allows realistic load tests for thousands of users running business scenarios across a broad range of enterprise application environments. Licensed.
- Visual Studio Load Test (Microsoft): Visual Studio includes a load test tool which enables a developer to execute a variety of tests (web, unit, etc.) with a combination of configurations to simulate real user load.[1]

Volume testing

Volume testing belongs to the group of non-functional tests, which are often misunderstood and/or used interchangeably. Volume testing refers to testing a software application with a certain amount of data. This amount can, in generic terms, be the database size, or it could be the size of an interface file that is the subject of volume testing. For example, if you want to volume test your application with a specific database size, you will expand your database to that size and then test the application's performance on it. Another example could be when there is a requirement for your application to interact with an interface file (this could be any file, such as .dat or .xml); this interaction could be reading from and/or writing to the file. You would create a sample file of the size you want and then test the application's functionality with that file in order to test the performance. A sketch of such a file-based volume test follows.
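
As a rough illustration (not from the original article), the following Python sketch generates an interface file of a chosen size and times a stand-in processing routine against it; the file name, record format, and process_file logic are hypothetical assumptions for the example.

# Sketch of a volume test: generate an interface file of a target size,
# then time the code under test while it processes that file.
# The file name and process_file() are hypothetical stand-ins.
import os
import time

def make_sample_file(path: str, size_mb: int) -> None:
    """Write roughly size_mb megabytes of records to simulate a data file."""
    record = b"4711;EUR;19.99;2010-01-01\n"
    with open(path, "wb") as f:
        for _ in range(size_mb * (1024 * 1024 // len(record))):
            f.write(record)

def process_file(path: str) -> int:
    """Stand-in for the application logic being volume tested."""
    with open(path, "rb") as f:
        return sum(1 for _ in f)   # e.g. parse every record

if __name__ == "__main__":
    path = "interface_sample.dat"      # hypothetical .dat interface file
    make_sample_file(path, size_mb=100)
    start = time.monotonic()
    records = process_file(path)
    elapsed = time.monotonic() - start
    print(f"processed {records} records in {elapsed:.1f}s")
    os.remove(path)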

Stress testing

From Wikipedia, the free encyclopedia

Stress testing is a form of testing that is used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. Stress testing may have a more specific meaning in certain industries, such as fatigue testing for materials.

Computer software

Main article: stress testing (software)

In software testing, a system stress test refers to tests that put a greater emphasis on robustness, availability, and error handling under a heavy load, rather than on what would be considered correct behavior under normal circumstances. In particular, the goals of such tests may be to ensure the software does not crash in conditions of insufficient computational resources (such as memory or disk space), unusually high concurrency, or denial of service attacks.[1]

Examples:

- A web server may be stress tested using scripts, bots, and various denial of service tools to observe the performance of a web site during peak loads.

Stress testing may be contrasted with load testing:

- Load testing examines the entire environment and database, while measuring the response time, whereas stress testing focuses on identified transactions, pushing to a level so as to break transactions or systems.
- During stress testing, if transactions are selectively stressed, the database may not experience much load, but the transactions are heavily stressed. On the other hand, during load testing the database experiences a heavy load, while some transactions may not be stressed.
- System stress testing, also known as stress testing, is loading the concurrent users over and beyond the level that the system can handle, so it breaks at the weakest link within the entire system (a sketch of this idea appears after this list).
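
A minimal Python sketch of that ramp-beyond-capacity idea, assuming a hypothetical test endpoint (a stand-in, not a real service): concurrency is increased step by step until the error rate shows the system has passed its breaking point.

# Sketch: increase concurrent "users" step by step until requests start
# failing, approximating the point where the system breaks.
# The target URL is a hypothetical stand-in for a real test environment.
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET = "http://example.test/api/health"   # hypothetical endpoint

def hit(_: int) -> bool:
    """Return True if one request succeeds within its timeout."""
    try:
        urlopen(TARGET, timeout=5).read()
        return True
    except OSError:
        return False

if __name__ == "__main__":
    for users in (10, 50, 100, 200, 400, 800):
        with ThreadPoolExecutor(max_workers=users) as pool:
            ok = sum(pool.map(hit, range(users)))
        error_rate = 1 - ok / users
        print(f"{users:4d} users -> {error_rate:.0%} errors")
        if error_rate > 0.5:   # past this point, errors are the expected result
            print("breaking point reached")
            break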

Hardware

Reliability engineers often test items under expected stress or even under accelerated stress. The goal is to determine the operating life of the item or to determine modes of failure. Stress testing, in general, should put the hardware under exaggerated levels of stress in order to ensure stability when used in a normal environment.

Computer processors

When modifying the operating parameters of a CPU, such as in overclocking, underclocking, overvolting, and undervolting, it may be necessary to verify if the new parameters (usually CPU core voltage and frequency) are suitable for heavy CPU loads. This is done by running a CPU-intensive program (usually Prime95) for extended periods of time, to test whether the computer hangs or crashes. CPU stress testing is also referred to as torture testing. Software that is suitable for torture testing should typically run instructions that utilise the entire chip rather than only a few of its units. Stress testing a CPU over the course of 24 hours at 100% load is, in most cases, sufficient to determine that the CPU will function correctly in normal usage scenarios, where CPU usage fluctuates at low levels (50% and under), such as on a desktop computer.
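
As a rough sketch of such a torture test (an illustration, not Prime95), the following Python program keeps every core busy for a fixed period; unlike real torture tools, it only generates load and does not verify numerical results to catch silent computation errors.

# Sketch of a simple CPU "torture test": keep every core at 100% for a
# fixed period and report whether the machine completed without error.
import multiprocessing as mp
import time

def burn(seconds: float) -> None:
    """Busy-loop doing integer math until the deadline passes."""
    deadline = time.monotonic() + seconds
    x = 1
    while time.monotonic() < deadline:
        x = (x * 48271) % 2147483647   # cheap CPU-bound work

if __name__ == "__main__":
    duration = 60.0                     # a real test would run for hours
    workers = [mp.Process(target=burn, args=(duration,))
               for _ in range(mp.cpu_count())]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print("all cores completed the torture run")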

Security testing
From Wikipedia, the free encyclopedia

Security testing is a process to determine that an information system protects data and maintains functionality as intended. The six basic security concepts that need to be covered by security testing are: confidentiality, integrity, authentication, authorization, availability and non-repudiation.


Confidentiality

- A security measure which protects against the disclosure of information to parties other than the intended recipient.
- Encoding the communication is a common way to achieve this, but it is by no means the only way of ensuring security.

Integrity

- A measure intended to allow the receiver to determine that the information it is provided is correct.
- Integrity schemes often use some of the same underlying technologies as confidentiality schemes, but they usually involve adding additional information to a communication, to form the basis of an algorithmic check, rather than encoding all of the communication; a sketch of such a check follows.
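
A minimal Python sketch of such an algorithmic check, assuming a hypothetical pre-shared key: an HMAC tag is appended to the message rather than the whole communication being encoded.

# Sketch of an integrity check: append a keyed digest (HMAC) to a message
# so the receiver can verify the content was not altered in transit.
# Note the message itself is not encoded; only a check value is added.
import hashlib
import hmac

SHARED_KEY = b"demo-key"   # hypothetical pre-shared key

def protect(message: bytes) -> bytes:
    tag = hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()
    return message + b"|" + tag.encode()

def verify(packet: bytes) -> bool:
    message, _, tag = packet.rpartition(b"|")
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected.encode())

if __name__ == "__main__":
    packet = protect(b"transfer 100 EUR to account 42")
    assert verify(packet)                              # intact message passes
    assert not verify(packet.replace(b"100", b"900"))  # tampering is detected
    print("integrity checks behaved as expected")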

Authentication

- The process of establishing the identity of the user.
- Authentication can take many forms, including but not limited to: passwords, biometrics, radio frequency identification, etc.; a sketch of a password check follows.
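
A minimal Python sketch of password-based authentication, with illustrative names only: credentials are stored as salted hashes, never as plain passwords, and compared at login time.

# Sketch of password authentication: store a salted hash and compare
# hashes at login. Function names are hypothetical illustrations.
import hashlib
import hmac
import os

def make_credential(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def authenticate(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

if __name__ == "__main__":
    salt, digest = make_credential("correct horse")
    print(authenticate("correct horse", salt, digest))  # True
    print(authenticate("wrong guess", salt, digest))    # False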

Authorization

- The process of determining that a requester is allowed to receive a service or perform an operation.
- Access control is an example of authorization; a sketch follows.
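
A minimal Python sketch of authorization as an access-control check: an already-authenticated user may only perform the operations granted to their role. The roles and permissions here are hypothetical.

# Sketch of role-based access control as an authorization check.
PERMISSIONS = {
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def is_authorized(role: str, operation: str) -> bool:
    """Return True only if the role has been granted the operation."""
    return operation in PERMISSIONS.get(role, set())

if __name__ == "__main__":
    print(is_authorized("editor", "write"))   # True
    print(is_authorized("viewer", "delete"))  # False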

Availability

- Assuring that information and communications services will be ready for use when expected.
- Information must be kept available to authorized persons when they need it.

Non-repudiation

- A measure intended to prevent the later denial that an action happened or that a communication took place, etc.
- In communication terms, this often involves the interchange of authentication information combined with some form of provable time stamp (see the sketch below).
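
A minimal Python sketch of this idea, assuming the third-party "cryptography" package and using the local clock as a stand-in for a trusted timestamping service: a digital signature over the message plus a timestamp means only the holder of the private key could have produced it, so the sender cannot later deny it.

# Sketch of non-repudiation: sign message + timestamp with a private key.
# Requires the third-party "cryptography" package; a trusted timestamp
# service would replace the local clock in practice.
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"order #42 approved"
stamped = message + b"|" + str(int(time.time())).encode()
signature = private_key.sign(stamped)

# Any party holding the public key can verify; verify() raises
# InvalidSignature if the message, timestamp, or signature was altered.
public_key.verify(signature, stamped)
print("signature verified:", stamped.decode())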

Sanity testing
From Wikipedia, the free encyclopedia (Redirected from Sanity test)

A sanity test or sanity check is a basic test to quickly evaluate the validity of a claim or calculation. In arithmetic, for example, when multiplying by 9, using the divisibility rule for 9 to verify that the sum of the digits of the result is divisible by 9 is a sanity test. In computer science, a sanity test is a very brief run-through of the functionality of a computer program, system, calculation, or other analysis, to assure that the system or methodology works as expected, often prior to a more exhaustive round of testing.

Software development

In software development, the sanity test (a form of software testing which offers "quick, broad, and shallow testing"[1]) determines whether it is reasonable to proceed with further testing. Software sanity tests are commonly conflated with smoke tests.[2] A smoke test determines whether it is possible to continue testing, as opposed to whether it is reasonable.[citation needed] A software smoke test determines whether the program launches and whether its interfaces are accessible and responsive (for example, the responsiveness of a web page or an input button). If the smoke test fails, it is impossible to conduct a sanity test. In contrast, the ideal sanity test exercises the smallest subset of application functions needed to determine whether the application logic is generally functional and correct (for example, an interest rate calculation for a financial application). If the sanity test fails, it is not reasonable to attempt more rigorous testing. Both sanity tests and smoke tests are ways to avoid wasting time and effort by quickly determining whether an application is too flawed to merit any rigorous testing. Many companies run sanity tests and unit tests on an automated build as part of their development process.[3]

The Hello world program is often used as a sanity test for a development environment. If Hello World fails to compile or execute, the supporting environment likely has a configuration problem; if it works, the problem being diagnosed likely lies in the application itself.

Another, possibly more common, usage of 'sanity test' is to denote checks which are performed within program code, usually on arguments to functions or returns therefrom, to see if the answers can be assumed to be correct. The more complicated the routine, the more important that its response be checked. The trivial case is checking that a file was opened, written to, or closed without failing, a sanity check often ignored by programmers. But more complex items can also be sanity-checked for various reasons. Examples of this include bank account management systems which check that withdrawals are sane in not requesting more than the account contains, and that deposits or purchases are sane in fitting in with patterns established by historical data: large deposits may be more closely scrutinized for accuracy, and large purchase transactions may be double-checked with the card holder for validity against fraud. These are "runtime" sanity checks, as opposed to the "development" sanity checks mentioned above. A sketch of such runtime checks follows.
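
A minimal Python sketch of such runtime sanity checks, combining the bank-withdrawal example above with the multiply-by-9 digit-sum rule from the arithmetic example; the function names are illustrative only.

# Sketch of "runtime" sanity checks: reject a withdrawal that requests
# more than the account contains, and verify the multiply-by-9
# digit-sum rule as a cheap arithmetic self-check.
def withdraw(balance: float, amount: float) -> float:
    # Sanity check: a sane withdrawal never exceeds the balance.
    if amount <= 0 or amount > balance:
        raise ValueError(f"insane withdrawal: {amount} from {balance}")
    return balance - amount

def digit_sum(n: int) -> int:
    return sum(int(d) for d in str(abs(n)))

def sane_times_nine(n: int) -> bool:
    # The digit sum of any multiple of 9 is itself divisible by 9.
    return digit_sum(n * 9) % 9 == 0

if __name__ == "__main__":
    print(withdraw(100.0, 30.0))                          # 70.0
    print(all(sane_times_nine(n) for n in range(1000)))   # True
    try:
        withdraw(50.0, 80.0)
    except ValueError as err:
        print(err)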

Smoke testing
Smoke testing is a term used in plumbing, woodwind repair, electronics, computer software development, infectious disease control, and the entertainment industry. It refers to the first test made after repairs or first assembly to provide some assurance that the system under test will not catastrophically fail. After a smoke test proves that "the pipes will not leak, the keys seal properly, the circuit will not burn, or the software will not crash outright," the assembly is ready for more stressful testing.

In computer programming and software testing, smoke testing is a preliminary to further testing, which should reveal simple failures severe enough to reject a prospective software release. This is sometimes referred to as "rattle" testing: the smoke is in "if I shake it, does it rattle?". In software engineering, a smoke test generally consists of a collection of tests that can be applied to a newly created or repaired computer program. In this sense, a smoke test is the process of validating code changes before the changes are checked into the larger product's official source code collection or the main branch of source code. Sometimes the tests are performed by the automated system that builds the final software. A daily build and smoke test is among industry best practices. Microsoft claims[1] that, after code reviews, smoke testing is the most cost-effective method for identifying and fixing defects in software.

Smoke testing in software development covers a subset of all defined/planned test cases that cover the main functionality of a component or system, for ascertaining that the most crucial functions of a program work, but not bothering with finer details. Smoke testing is done by testers before accepting a build for further testing. In software testing, a smoke test is a collection of written tests that are performed on a system prior to it being accepted for further testing; this is also known as a build verification test. These written tests can either be performed manually or using an automated tool; when automated tools are used, the tests are often initiated by the same process that generates the build itself. This is a "shallow and wide" approach to the application: the tester "touches" all areas of the application without getting too deep, looking for answers to basic questions like "Can I launch the test item at all?", "Does it open to a window?", "Do the buttons on the window do things?". The purpose is to determine whether or not the application is so badly broken that testing functionality in a more detailed way is unnecessary.

Smoke tests can be broadly categorized as functional tests or unit tests. Functional tests exercise the complete program with various inputs; they may be a scripted series of program inputs, possibly even with an automated mechanism for controlling mouse movements. Unit tests exercise individual functions, subroutines, or object methods; they may be separate functions within the code itself, or a driver layer that links to the code without altering the code being tested. Both functional testing tools and unit testing tools tend to be third-party products that are not part of the compiler suite. A sketch of an automated smoke test follows.
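
A minimal Python sketch of an automated smoke (build verification) test, assuming a hypothetical freshly built web application listening locally; each check answers one of the basic questions above, and a failure rejects the build before detailed testing begins.

# Sketch of a smoke / build verification test: a few shallow checks that
# decide whether a build is worth deeper testing. The local URL is a
# hypothetical stand-in for the application under test.
import unittest
from urllib.request import urlopen

class SmokeTest(unittest.TestCase):
    BASE = "http://localhost:8080"   # hypothetical freshly built app

    def test_application_launches(self):
        # "Can I launch the test item at all?"
        with urlopen(self.BASE + "/", timeout=5) as resp:
            self.assertEqual(resp.status, 200)

    def test_main_page_has_content(self):
        # "Does it open to a window?" (here: does the main page render?)
        body = urlopen(self.BASE + "/", timeout=5).read()
        self.assertTrue(body)

if __name__ == "__main__":
    unittest.main()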

. unless a defect is discovered. In this view. ad hoc testing has been criticized because it isn't structured. being the least formal of test methods. It contrasts to regression testing that looks for a specific issue with detailed reproduction steps. It is performed with improvisation. the tester seeks to find bugs with any means that seem appropriate. Ad hoc testing is a part of exploratory testing. but this can also be a strength: important things can be found quickly. and a clear expected result.Software Testing portal The tests are intended to be run only once. Ad hoc testing is most often used as a complement to other types of testing.
