Full Testing Concepts

Preface
The purpose of a testing method is to provide a structured approach and discipline to testing. It includes test planning, managing and controlling the testing process, and testing techniques. The full lifecycle testing methodology introduced in this concepts manual outlines the generic testing processes and techniques needed to test applications. This methodology is targeted at development project teams (business and technical staff). When testing responsibility falls outside the scope of the development team, reference is made to external testing processes for completeness only. As preparations are made to pass the system to a group outside the development team for testing, it is expected that the project development team will consult the external group or their documentation for guidance.

The full lifecycle testing methodology has three parts:
• Testing Concepts
• Test Process Model
• Testing Standards

Testing Concepts describes the "why" and "what" of the testing discipline. It presents the concepts relating to test planning, test preparation, test execution and test reporting. It covers, at a generic level, every facet of testing: types, techniques, levels, and integration approaches.

Test Process Model describes the "how," "who" and "when" of testing. It explains the processes, procedures and tools needed to support a structured approach to testing.

Testing Standards describes the criteria to be met in order for specific levels of testing (such as Operability testing) to be considered acceptable by the organization. In most organizations the standards are owned and managed by the Quality Assurance group.

Testing Courses, available separately, teach the concepts of testing a software application using the full lifecycle testing approach and reinforce them with a case study that applies these processes, tools, and techniques.

Since this is a generic methodology, we describe only tools and techniques that are generic to testing. The descriptions are at a level that gives the reader the concept of what the tool or technique is and its relevance to the testing process. Inspections are one example of such a technique. The reader may pursue additional education at a greater level of detail outside of this methodology documentation.

It is suggested that anyone who is new to testing read this concepts document completely from front to back and/or attend the course. Experienced test or development personnel can use this document as a reference. It is expected that the Test Process Model content will be tailored to meet the unique needs of your application, project life cycle, and testing requirements. Testing Standards, where available as part of organizational standards, will be used alongside the Test Process Model to ensure that application development is conducted according to the needs of the organization.

26/09/2006 16:03:56

Page 1 of 71

Table of Contents
1. Introduction to Testing
2. Testing Fundamentals
   2.1 Definition
   2.2 Ensuring Testability
   2.3 Testing Principles
   2.4 "No-Fault" Testing
   2.5 Entrance and Exit Criteria
   2.6 Testing Approaches
      2.6.1 Static
      2.6.2 Dynamic Testing
   2.7 Testing Techniques
      2.7.1 Black Box Testing
      2.7.2 White Box Testing
      2.7.3 Error Guessing
   2.8 Full Lifecycle Testing
   2.9 What Is Tested
   2.10 The Testing Team
3. The Testing Process
   3.1 Test Planning
      3.1.1 The Administrative Plan
      3.1.2 Risk Assessment
      3.1.3 Test Focus
      3.1.4 Test Objectives
      3.1.5 Test Strategy
      3.1.6 The Build Strategy
      3.1.7 Problem Management & Change Control
   3.2 Test Case Design
4. Levels of Testing
   4.1 Testing Model
   4.2 Requirements Testing
   4.3 Design Testing
   4.4 Unit Testing
   4.5 Integration Testing
   4.6 System Testing
   4.7 Systems Integration Testing
   4.8 User Acceptance Testing
   4.9 Operability Testing
5. Types of Tests
   5.1 Functional Testing
      5.1.1 Audit and Controls Testing
      5.1.2 Conversion Testing
      5.1.3 User Documentation and Procedures Testing
      5.1.4 Error-Handling Testing
      5.1.5 Function Testing
      5.1.6 Installation Testing
      5.1.7 Interface / Inter-system Testing
      5.1.8 Parallel Testing
      5.1.9 Regression Testing
      5.1.10 Transaction Flow Testing
      5.1.11 Usability Testing
   5.2 Structural Testing
      5.2.1 Backup and Recovery Testing
      5.2.2 Contingency Testing
      5.2.3 Job Stream Testing
      5.2.4 Operational Testing
      5.2.5 Performance Testing
      5.2.6 Security Testing
      5.2.7 Stress / Volume Testing
   5.3 Relationship Between Levels and Types of Testing
6. Test Data Set Up
7. Test Procedure Set Up
8. Managing and Controlling the Test Effort
   8.1 Introduction
   8.2 Project Plans and Schedules
   8.3 Test Team Composition & User Involvement
   8.4 Measurement and Accountability
   8.5 Reporting
   8.6 Problem Management
   8.7 Change Management
   8.8 Configuration Management (Version control)
   8.9 Reuse
Appendices
   Appendix A. Full Lifecycle Testing Models
      Test Work Products Dependency Diagram
   Appendix B. Integration Approaches
   Appendix C. Black Box/White Box Testing Techniques
   Appendix D. Test Case Design
   Glossary of Testing Terms and Definitions
Bibliography
Revision History


1. Introduction to Testing
This technique paper takes you through the concepts necessary to successfully test a software product under development. It outlines a full lifecycle testing (FLT) methodology. A critical element in the successful development of any software application is effective testing. Testing moves the evolution of a product from a state of hypothetical usefulness to proven usefulness. It includes the testing of requirements, design, system code, documentation, and operational procedures, and is an inseparable part of developing a product that the purchaser and end-user view as beneficial and meeting their needs. Testing is one of the ways in which a product achieves high quality. FLT helps improve the quality of the product as well as the test process itself.

2. Testing Fundamentals
Testing is conducted to ensure that you develop a product that will prove to be useful to the end user. The primary objectives of testing are to assure that:
• The system meets the users' needs ... has "the right system been built"?
• The user requirements are built as specified ... has "the system been built right"?

Other, secondary objectives of testing are to:
• Instill confidence in the system, through user involvement
• Ensure the system will work from both a functional and performance viewpoint
• Ensure that the interfaces between systems work
• Establish exactly what the system does (and does not do) so that the user does not receive any "surprises" at implementation time
• Identify problem areas where the system deliverables do not meet the agreed-to specifications
• Improve the development processes that cause errors

Achieving these objectives will ensure that:
• The associated risks of failing to successfully deliver an acceptable product are minimized,
• A high-quality product (as the application purchaser and user view it) is delivered, and
• The ability to deliver high-quality applications is improved.

The purpose of a testing method is to provide a framework and a set of disciplines and approaches for testing of a software application, so that the process is consistent and repeatable.

2.1 Definition
Testing is the systematic search for defects in all project deliverables. It is the process of examining an output of a process under consideration, comparing the results against a set of pre-determined expectations, and dealing with the variances. Testing will take on many forms. As the form and content of the output change, the approaches and techniques used to test them must be adapted. Throughout the discussion of testing concepts, we will use several terms that are fundamental:


• Validation: the act of ensuring compliance against an original requirement. An example is the comparison of the actual system response of an on-line transaction to what was originally expected, requested, and finally approved in the External Design.
• Verification: the act of checking the current work product to ensure it performs as specified by its predecessor. The comparison of a module's code against its technical design specifications document is one example.
• Process: a series of actions, performed to achieve a desired result, that transforms a set of inputs (usually information) into useful outputs.
• Expectations: a set of requirements or specifications to which an output of a process must conform in order to be acceptable. One example is the performance specification that an on-line application must return a response in less than two seconds.
• Variances: deviations of the output of a process from the expected outcome. These variances are often referred to as defects.

Testing is then a process of verifying and/or validating an output against a set of expectations and observing the variances.
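The cycle just described, examining an output, comparing it against a pre-determined expectation, and recording the variance, can be sketched as a small routine. This is a minimal illustration; the function names and the sample tax calculation are invented, not part of the methodology:

```python
def run_test(process, test_input, expected):
    """Execute a process under test and report any variance
    between its actual output and the pre-determined expectation."""
    actual = process(test_input)
    if actual == expected:
        return None                      # no variance: the test passes
    return {"input": test_input,         # variance: record it for problem management
            "expected": expected,
            "actual": actual}

# A trivial process under test: apply a 5% sales tax (hypothetical).
def add_tax(amount):
    return round(amount * 1.05, 2)

print(run_test(add_tax, 100.00, 105.00))  # no variance, prints None
print(run_test(add_tax, 40.00, 42.50))    # variance record: actual is 42.0
```

The variance record, rather than a simple pass/fail flag, reflects the definition above: a variance is an observation to be dealt with, whether the defect lies in the product or in the expectation itself.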

2.2 Ensuring Testability
Another important but difficult term to clarify is what is meant by a testable condition. To illustrate the definition, let’s discuss the above term, expectations, further. In order to be able to assess if an output meets or exceeds a given expectation, an expectation itself must be stated in testable terms. This means that when the characteristics of an output under test are compared against the expected characteristics, they can be matched in a clear and unambiguous way. You would not request that the answer to a calculation of (1 + 4) is to be an "appropriate" amount. To be testable, you would specify the answer as a single value like 5, or as a range between 0 and 10 if the exact answer were not known. If the result of the test were anything other than 5 or a number within the specified range, the test would unequivocally fail and you would record the variance. The example, of course, is trivial but it serves to illustrate the point. While you may strive to have requirements stated in testable terms, it may not always be possible. The required level of detail when you document and test the requirements may not yet be available. The process of requirements specification and external design should evolve the functional request from a collection of imprecise statements of user needs to testable user specifications. Even though the root of the word specification is specific, achieving specific requirements is an extremely challenging exercise. At every point in the specification process, you can check the "testability" of the requirement or specification by ensuring it is S.M.A.R.T. This acronym stands for:
• Specific
• Measurable
• Attainable or Achievable
• Realistic
• Timely

These specifications will form the basis for the criteria upon which the system is tested and ultimately accepted by the system purchaser and end-user.
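The (1 + 4) example can be restated as executable checks. A testable expectation names an exact value or an explicit range, so pass/fail is unambiguous. A minimal Python sketch; the helper names are invented for illustration:

```python
def check_exact(actual, expected):
    """Pass only if the output matches the expectation exactly."""
    return actual == expected

def check_range(actual, low, high):
    """Pass if the output falls inside an explicitly stated range."""
    return low <= actual <= high

result = 1 + 4

assert check_exact(result, 5)       # exact expectation: 5, nothing else
assert check_range(result, 0, 10)   # range expectation when the exact answer is unknown
# An expectation of an "appropriate" amount cannot be coded at all;
# it is not testable, which is exactly the point of the S.M.A.R.T. check.
```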

2.3 Testing Principles


Here are some powerful basic principles of testing. Although they are expressed simply and most of them appear to be intuitive, they are often overlooked or compromised.
• An author must not be the final tester of his/her own work product.
• While exhaustive testing is desirable, it is not always practical.
• Expected results should be documented for each test.
• Both valid and invalid conditions should be tested.
• Both expected and unexpected results should be validated.
• Test cases should be reused, i.e., no throwaway test cases unless the work product itself is throwaway.
• Testing is a skilled discipline (on par with such skills as technical coding and analysis).
• Testing is a "no-fault" process of detecting variances or defects.

2.4 "No-Fault" Testing
It is important to realize that the introduction of variances is a normal and expected consequence of the human activity of developing any product. How many times have you heard ... "To err is human ... "? This implies that testing should be a no-fault process of variance removal. Testing is a cooperative exercise between the tester and developer to detect and repair defects that the development process has injected into the product. It should be apparent to anyone who has tested that this process of removing defects is complex and demanding, accomplished only by a skilled team of personnel. They must use good techniques and tools in a planned and systematic way. At the end of a development project, the team hopes that testing has been so successful that they don't have to invoke the last half of that famous saying ".... to forgive is divine!"

2.5 Entrance and Exit Criteria
The concept of establishing prerequisites (entrance criteria) and post-conditions (exit criteria) for an activity to be undertaken is extremely useful for managing any process, and testing is no exception. Entrance criteria are those factors that must be present, at a minimum, to be able to start an activity. In Integration Testing, for example, before a module can be integrated into a program, it must be compiled cleanly and have successfully completed unit testing. If the entrance criteria of the next phase have been met, the next phase may be started even though the current phase is still under way. This is how overlapping schedules are allowed.

Exit criteria are those factors that must be present to declare an activity completed. To proclaim System Testing completed, two criteria might be that all test cases must have been executed with a defined level of success (if other than 100%) and that there must be no more than a mutually agreed upon number of outstanding problems left unresolved. Exit criteria must be specifically expressed, using terms such as "X will be accepted if Y and Z are completed." You may view the User Acceptance Test as the exit criteria for the development project.
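The System Testing exit criteria just described can be expressed as a single pass/fail check. The thresholds and names below are hypothetical; real criteria come from the approved test plan:

```python
def system_test_complete(executed, total, pass_rate_required,
                         passed, open_problems, max_open_problems):
    """Evaluate illustrative exit criteria for System Testing:
    every test case executed, the agreed pass rate achieved,
    and no more than the agreed number of problems left open."""
    all_executed = executed == total
    pass_rate_met = executed > 0 and (passed / executed) >= pass_rate_required
    problems_ok = open_problems <= max_open_problems
    return all_executed and pass_rate_met and problems_ok

# 200 of 200 cases run, 98% passed, 3 problems still open, 5 allowed:
print(system_test_complete(200, 200, 0.95, 196, 3, 5))  # prints True
```

Making the criteria executable keeps them S.M.A.R.T.: a phrase like "testing looks good" cannot be coded, but "all cases run, 95% passed, at most 5 open problems" can.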

2.6 Testing Approaches
Testing approaches fall into two broad classes: static and dynamic testing.


Both are effective if applied properly and can be used throughout the application lifecycle. Testing tools and techniques will share characteristics from both these classes. The development and test teams are responsible for selection and use of the tools and techniques best suited for their project.

2.6.1 Static
Static testing is a detailed examination of a work product's characteristics against an expected set of attributes, experiences, and standards. Since the product under scrutiny is static and not exercised (as in a module being executed), its behavior under changing inputs and environments cannot be assessed. Discovering variances or defects early in a project will result in less expensive and less painful changes. In the development cycle, the only early testing usually available during pre-construction phases is static testing. Some representative examples of static testing are:
• Plan reviews
• Requirements walkthroughs or sign-off reviews
• Design or code inspections
• Test plan inspections
• Test case reviews

Here is an example of how a static test would be applied in a walkthrough of a single statement in a functional specification: "If the withdrawal request is greater than $400, reject the request; otherwise allow it, subject to ATM cash dispensing rules and account balance." Static test questions:
• Are all ATM dispensing rules the same (different types of machines)?
• Do the ATM dispensing rules allow for withdrawals up to $400?
• Is the account balance always available? If not, what then?
• What multiples of amounts are allowed?
• What types of accounts are allowed?
• If the host is not available, do we use the card balance?
• If the host is not available but a deposit preceded the withdrawal request, should this be taken into account?
• If the request is denied, what is the message to the customer?

The point is that seemingly complete and precise requirements can, with practice, generate many questions. If these questions are answered as soon as the specification is written, testing costs will be much less than if the questions are not asked until after coding has commenced. For a more thorough explanation of Inspections and Walkthroughs see the Static Testing Guidelines Technique Paper.

2.6.2 Dynamic Testing
Dynamic testing is a process of verification or validation by exercising (or operating) a work product under scrutiny and observing its behavior to changing inputs or environments. Where a module is


statically tested by looking at its code and documentation, it is executed dynamically to test the behavior of its logic and its response to inputs. Dynamic testing used to be the mainstay of system testing and was traditionally known as "testing the application." Some representative examples of dynamic testing are:
• Application prototyping
• Executing test cases in a working system
• Simulating usage scenarios with real end-users to test usability
• Parallel testing in a production environment

2.7 Testing Techniques
Three common testing techniques are Black Box Testing, White Box Testing, and Error Guessing. Testing techniques that examine the internal workings and details of a work product are said to use a white box approach. A technique that looks at how a unit under test behaves by examining only its inputs and outputs is said to use a black box approach. These are explained more thoroughly in Black Box/White Box Testing Techniques in the Appendix. The Error Guessing technique applies experience and intuition to look for unexpected but prevalent errors.

2.7.1 Black Box Testing
In the Black Box approach, the testers have an "outside" view of the system. They are concerned with "what is done," NOT "how it is done." Black Box testing is requirements- and/or specifications-oriented and is used at all test levels. The system or work product is defined and viewed functionally. To test the system, all possible input combinations are entered and the outputs are examined. Both valid and invalid inputs are used to test the system. Examples of black box testing are:
• Entering an Automated Teller Machine withdrawal transaction and observing that the expected cash is dispensed
• Exercising a program function with external data and observing the results
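The ATM example can be sketched as a black box test: the tester supplies inputs and checks outputs against documented expectations, never reading the implementation. The `withdraw` function below is a hypothetical stand-in written for this illustration, not code from the manual:

```python
# Hypothetical unit under test; the black box tester sees only its
# inputs (balance, amount) and outputs (new balance, message).
def withdraw(balance, amount):
    if amount > 400:
        return balance, "REJECTED: over $400 limit"
    if amount > balance:
        return balance, "REJECTED: insufficient funds"
    return balance - amount, "DISPENSED"

# Both valid and invalid inputs, each with a documented expected result:
cases = [
    ((500, 100), (400, "DISPENSED")),
    ((500, 401), (500, "REJECTED: over $400 limit")),
    ((50, 100),  (50,  "REJECTED: insufficient funds")),
]
for (bal, amt), expected in cases:
    assert withdraw(bal, amt) == expected
```

Note that every case pairs an input with a pre-determined expected result, in line with the testing principles stated earlier.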

2.7.2 White Box Testing
In the White Box approach, the testers have an inside view of the system. They are concerned with "how it is done," NOT "what is done." White Box testing is logic-oriented. The testers are concerned with the execution of all possible paths of control flow through the program. The White Box approach is essentially a unit test method (sometimes also used in Integration testing or Operability testing) and is almost always performed by technical staff. Examples of white box testing are:
• Testing of branches and decisions in code

26/09/2006 16:03:56

Page 9 of 71

• Tracking of a logic path in a program
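White box test design can be sketched by reading the code and choosing one input per path of control flow, including the boundary values the logic exposes. The `withdraw` routine below is a hypothetical stand-in invented for this illustration:

```python
# Hypothetical unit under test; the white box tester reads this code
# and designs inputs to force each branch.
def withdraw(balance, amount):
    if amount > 400:                      # branch 1: limit exceeded
        return balance, "REJECTED"
    if amount > balance:                  # branch 2: insufficient funds
        return balance, "INSUFFICIENT"
    return balance - amount, "DISPENSED"  # branch 3: normal path

# One input per control-flow path, derived from the code itself:
assert withdraw(500, 401) == (500, "REJECTED")      # forces branch 1
assert withdraw(100, 200) == (100, "INSUFFICIENT")  # forces branch 2
assert withdraw(500, 400) == (100, "DISPENSED")     # boundary: exactly $400, branch 3
```

The boundary case (exactly $400) is a typical white box choice; a black box tester working only from the specification might miss that the code uses ">" rather than ">=".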

2.7.3 Error Guessing
Based on past experience, test data can be created to anticipate those errors that will most often occur. Using experience and knowledge of the application, invalid data representing common mistakes a user might be expected to make can be entered to verify that the system will handle these types of errors. An example of error guessing is: • Hitting the 'CTRL' key instead of the 'ENTER' key for a PC application and verifying that the application responds properly
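Error guessing can be sketched as a set of deliberately flawed inputs driven against an input handler. The `parse_amount` function below is invented for this illustration:

```python
# Hypothetical input handler for a currency field.
def parse_amount(text):
    """Parse a user-typed amount, tolerating common formatting mistakes."""
    text = text.strip().replace(",", "").lstrip("$")
    try:
        return round(float(text), 2)
    except ValueError:
        return None   # signal invalid input rather than crash

# Inputs an experienced tester would guess users will actually type:
assert parse_amount(" 100 ") == 100.0         # stray whitespace
assert parse_amount("$1,000.50") == 1000.50   # currency formatting
assert parse_amount("") is None               # empty entry
assert parse_amount("ten") is None            # words instead of digits
```

Each guessed input comes from experience with real user behavior rather than from the specification, which is what distinguishes this technique from black box testing.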

2.8 Full Lifecycle Testing
The software development process is a systematic method of evolving from a vague statement of a business problem to a detailed, functional, and concrete representation of a solution to that problem. The development process itself is subdivided into a discrete number of phases that align to a life cycle representing the major stages of system evolution. At the termination of each phase, a checkpoint is taken to ensure that the current phase is completed properly and that the project team is prepared to start the next phase. The process we use to define and build a software product is the application development lifecycle. The testing process aligns to these same phases, and the development and test activities are coordinated so that they complement each other.

• Project Definition
• Requirements
• External Design
• Internal Design
• Construction
• Implementation

As each interim work product is created by the developer, it is in turn tested to ensure that it meets the requirements and specifications for that phase, so that the resulting work products are constructed on a sound foundation. This strategy of testing-as-you-go minimizes the risk of delivering a solution that is error-prone or does not meet the needs of the end-user. We refer to testing at every phase in the development process as full lifecycle testing. It tests each and every interim work product delivered as part of the development process.

Testing begins with requirements and continues throughout the life of the application. During application development, all aspects of the system are exercised to ensure that the system is thoroughly tested before implementation. Static and dynamic, black box and white box, and error guessing approaches and techniques are all used.

Even after an application has gone into production, testing continues to play an important role in the systems maintenance process. While an application is being maintained, all changes to it must be tested to ensure that it continues to provide the business function it was originally designed for, without impacting the rest of the application or the other systems with which it interfaces. When the hardware or software environment changes, the application must be retested in all areas to ensure that it continues to function as expected. This is often referred to as regression testing.
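Regression testing depends on retaining every test case (no throwaway test cases, per the principles above) and rerunning the whole suite after each change. A minimal sketch, with an invented `apply_discount` function standing in for the maintained application:

```python
# Hypothetical function under maintenance.
def apply_discount(price, rate):
    return round(price * (1 - rate), 2)

# The retained regression suite: (inputs, expected result) pairs
# accumulated over the life of the application.
regression_suite = [
    ((100.0, 0.10), 90.0),
    ((59.99, 0.25), 44.99),
    ((10.0,  0.0),  10.0),
]

def run_regression():
    """Rerun every retained case; return the list of variances."""
    return [(args, expected, apply_discount(*args))
            for args, expected in regression_suite
            if apply_discount(*args) != expected]

# An empty list means the latest change introduced no regressions:
assert run_regression() == []
```

Because the suite is rerun unchanged, any variance it reports points to behavior the change has altered, which is exactly what regression testing is meant to catch.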

2.9 What Is Tested

Those on a system development or maintenance team usually test the following:
• The function of the application system's manual and automated procedures in their operating environment, to ensure it operates as specified
• Exception (error) conditions that could occur, to ensure that they will be handled appropriately
• The interim deliverables of the development phases, to demonstrate that they are satisfactory and in compliance with the original requirements

2.10 The Testing Team
Developing a system requires the involvement of a team of individuals that bring varied resources, skills, experiences, and viewpoints to the process. Each plays a necessary role, each may play one or more roles, and the roles may change over the complete life cycle of development. Added to the roles of sponsor, user, and developer defined in the application development process is the tester, whose role is to validate and verify the interim and final work products of the system, using both user product knowledge and operational and technical expertise where necessary.

In many cases, the sponsor and the user are a single representative. In other cases, a special group of user representatives acts as surrogate end-users. While they may represent the end-users, they themselves will usually never directly operate or use the application. An organization where both the development team and user and/or sponsor personnel participate in the testing process is strongly recommended. It encourages user "buy-in" and acceptance through early involvement and participation throughout the development life cycle. Some benefits are:
• Involvement of developers and users in walkthroughs and inspections of the deliverables during the earlier phases of the development life cycle serves as a training ground for the developers to gain business knowledge
• Testing will be more thorough and complete because users with greater product knowledge will be involved to test the business functions
• The User Acceptance Testing will provide a formal validation of a more completely tested system rather than serve as a process where basic product defects are being uncovered for the first time
• User personnel will become familiar with the forms and procedures to be used in the new system prior to implementation
• Users who gain experience in using the system during the test period will be able to train other user personnel and sell the new system

It is important to note the following:
• The extent of user participation will depend on the specific nature of your project or application.
• The user should only get involved in "hands-on" dynamic testing at the point when the system is stable enough that meaningful testing can be carried out without repeated interruptions.
• Each user representative must be responsible for representing his or her specific organization and for providing the product knowledge and experience to effectively exercise the test conditions necessary to sufficiently test the application.

26/09/2006 16:03:56

Page 11 of 71

The test team is responsible for making certain that the system produced by the development team adequately performs according to the system specifications. It is the responsibility of the test group to accept or reject the system based on the criteria that were documented and approved in the test plan.

Traditionally, users are viewed as being from the business community. They should also be considered as being from the technical (computer) operations area. Systems should incorporate the requirements of all the groups that will use and/or operate the services provided by the system being developed. Finally, while the technical staff may build the technical components of the application, they will also assume the role of tester when they verify the compliance to design specifications of their own work or the work of others. This is an example of changing roles that was mentioned earlier.


3.0 The Testing Process
The basic testing process consists of four steps:
• Plan for the tests
• Prepare for the tests
• Execute the tests
• Report the results

Testing is an iterative process. These steps are followed at the overall project level and repeated for each level of testing required in the development lifecycle. This process is shown in the Test Process Overview diagram below. The development of the application will be undertaken in stages. Each stage represents a known level of physical integration and quality. These stages of integration are known as Testing Levels and will be discussed later.

[Test Process Overview diagram: a Master Test Plan feeds a Plan, Prepare, Execute, Report cycle for each Test Level (UT, IT, ST, SIT, UAT, OT), each governed by its own Detailed Test Plan]

The Levels of Testing used in the application development lifecycle are:
• Unit testing
• Integration testing
• System testing
• Systems integration testing
• User acceptance testing
• Operability testing

The first layer of plans and tests addresses the overall project at a high level and is called the Master Test Plan. Successive Levels of Testing will each be handled as additional refinements to the Master Test Plan to meet the specific needs of that Level of Testing. The FLT methodology refers to these as Detailed Test Plans. They are explained in more detail in the following sections.


For a more detailed treatment of the process, refer to the FLT models in the Appendix. The overview of the process is depicted as a level 0 data flow model. Each of the major processes on the level 0 data flow diagram has its own data flow diagram shown separately on the following pages.

3.1 Test Planning
Planning itself is a process. Performing each step of the planning process will ensure that the plan is built systematically and completely. Documenting these steps will ensure that the plan is itself testable by others who have to approve it. Most of the topics under planning should be familiar to all but the novice developer or tester. The Test Process Model will lend some guidance in this area. A few topics deserve special attention since they are specific to testing or helpful in understanding testing in general.

3.1.1 The Administrative Plan
This portion of the plan deals with test team organization, test schedules, and resource requirements. These plans should be fully integrated into the overall project schedules. We will more fully cover the specifics of test team planning later in the section on Managing and Controlling the Test Effort.

3.1.2 Risk Assessment
Why is Risk Assessment a part of planning? The reason for risk analysis is to gain an understanding of the potential sources and causes of failure and their associated costs. Measuring the risk prior to testing can help the process in two ways:
• High-risk applications can be identified and more extensive testing can be performed.
• Risk analysis can help draw attention to the critical components and/or focus areas for testing that are most important from the system or user standpoint.

In general, the greater the potential cost of an error in a system, the greater the effort and the resources assigned to prevent these errors from occurring. Risk is the product of the probability of occurrence of an error and the cost and consequence of failure. Three key areas of risk that have significant influence on a project are:
• Project size
• Experience with technology
• Project structure

The potential cost/risk factor, independent of any particular error, must be taken into consideration when developing an overall testing strategy and a test plan for a particular system. One approach to assessing risk is:
• List what can go wrong during the operation of the application once implemented, and the likelihood of occurrence.
• Determine the cost to the organization of each failure if it occurred: for example, loss of business, loss of image, loss of confidence in the system, security exposure, or financial loss.


• Determine what level of confidence is required and what the critical success factors are from an application development and testing point of view.

A strategy and subsequent plans to manage these risks will form the basis of the testing strategy. They determine:
• How much effort and expense should be put into testing
• Whether testing is cost justified (when the potential loss exceeds the cost of testing, it is)
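The risk calculation described above, risk as the product of likelihood and cost of failure, can be sketched as a simple scoring exercise. The component names, likelihoods, and costs below are hypothetical illustrations, not part of the methodology:

```python
# Risk = likelihood of failure x cost of failure.
# All components and scores below are hypothetical examples.
components = [
    # (component, likelihood 0.0-1.0, cost of failure in dollars)
    ("funds transfer",   0.10, 500_000),
    ("statement report", 0.30,  20_000),
    ("address change",   0.20,   5_000),
]

# Rank components by risk so that testing effort is directed
# at the highest-risk areas first.
ranked = sorted(components, key=lambda c: c[1] * c[2], reverse=True)

for name, likelihood, cost in ranked:
    print(f"{name:20s} risk = {likelihood * cost:10,.0f}")
```

Here the low-probability but very costly funds transfer component outranks the more error-prone but cheap address change, which is exactly the trade-off the risk assessment is meant to surface.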

3.1.3 Test Focus
The overall objective of testing is to reduce the risks inherent in computer systems. The methodology must address and minimize those risks. Focus areas are identified to draw attention to the specific risks or issues where testing effort is concentrated and which must be handled by the test strategy. We have so far discussed the importance of the quality of systems being developed and implemented. Exhaustive testing is virtually impossible and economically infeasible. We have to avoid both under-testing and over-testing, while optimizing the testing resources and reducing cycle time. All of these lead to the question of managing conflicts and trade-offs. Since projects do not have unlimited resources, we have to make the best use of what we have. This comes down to one question: what are the factors or risks that are most important to:
• The user or the client
• The developer, from a system perspective
• The service provider (usually Computer Services)

The answer lies in considering the focus areas and selecting the ones that will address or minimize risks. Test Focus can be defined as those attributes of a system that must be tested in order to assure that the business and technical requirements can be met. Some of the commonly used test focus areas are:
• Auditability
• Continuity of Processing
• Correctness
• Maintainability
• Operability
• Performance
• Portability
• Reliability
• Security
• Usability

You may consider and include other focus areas as necessary. For example:
• Technology may be a Test Focus in a cooperative processing environment
• Compatibility of software may be a test focus item for vendor packages


Each of the items on the list of common test focus areas is a potential factor that may impact the proper functioning of the system. For example, the risk of the system not being usable may result in certain functions not being used. The challenge to effective testing is to identify and prioritize those risks that are important to minimize and to focus testing in those areas. It is therefore critical that a joint decision be taken by the users and developers as to what is important and what is not; it should not be decided from a single perspective. It is important to note that correctness is likely to be a high-priority test focus for most applications. Further considerations are:
• What degree of correctness is required?
• What are the risks if the functions are not tested exhaustively?
• What price are the users willing to pay?

This means that the:
• Process of determining test focus areas has to be selective, and
• Selected focus areas have to be ordered in terms of priority.

The concept of risk and risk evaluation makes the decision of how much testing to do, or what types of testing to perform, an economic one. The economics of testing and risk management determine whether defects are acceptable in the system and, if so, what the tolerable level of residual defects is. This means that decisions about what gets tested more or less thoroughly shift from the judgment of the developers and users to ones based more objectively on economics.

3.1.4 Test Objectives
From a testing standpoint, the test objectives are a statement of the goals that must be achieved to assure that the system objectives are met and that the system will be demonstrated to be acceptable to the user. They should answer the question: what must be achieved
• to assure that the system is at the level of quality that the acceptor of the system will expect, and
• to give the acceptor confidence that the developer has achieved these expectations?

They should draw attention to the major testing tasks that must be accomplished to successfully complete full testing of the system. The purpose of developing test objectives is to define evaluation criteria to know when testing is completed. Test objectives also assist in gauging the effectiveness of the testing process. Test objectives are based on:
• Business functions
• Technical requirements
• Risks

Test objectives are defined with varying levels of detail at various levels of testing; there will be test objectives at the global cross-functional level as well as the major functional level.


For example, if correctness of bank statement processing is a focus area, then one testing objective will state how correctness in bank statement processing will be thoroughly demonstrated by the functional testing results.

3.1.5 Test Strategy
Test strategy is a high-level, system-wide expression of the major activities that collectively achieve the overall desired result as expressed by the testing objectives. For example, if performance (transaction response time) is a focus area, then a strategy might be to perform Stress/Volume Testing at the Integration Test Level before the entire system is subjected to System Testing. Similarly, if auditability is a focus area, then a strategy might be to perform rigorous audit tests at all levels of testing. As part of forming the strategy, the risks, constraints, and exposures present must be identified and accommodated in the strategy. This is where the planner can introduce significant added value to the plan. As an example of a strategy that handles constraints, suppose you have to develop the application to run on two technical platforms but only one is available in the development environment. The strategy might be to test exhaustively on one platform and test the other platform in the production operating environment as a parallel run. This strategy recognizes the constraint on the availability of test platforms and manages it with acceptable risk. Another practical area of constraint may be where the organization's Testing Standards significantly reduce the freedom you can adopt in your strategy. All strategy statements are expressed in high-level terms of physical components and activities, resources (people and machines), types and levels of testing, and schedules. The strategic plan will be specific to the system being developed and will be capable of being further refined into tactical approaches and operating plans in the Detailed Test Plans for each level of testing. Strategic plans will drive future needs in resources or schedules in testing (or development) and will often force trade-offs. Fortunately, since these needs are usually identified well in advance, the project will have enough time to respond. Detailed Test Plans should repeat the exercise of reviewing and setting objectives and strategy specific to each level of testing. These objectives and associated strategies (often called approaches at the lower levels) should dovetail with the higher-level Master Test Plan (MTP) set of objectives and strategies.

3.1.6 The Build Strategy
A set of testing plans will include the high-level lists of items for which testing needs to be performed. These are the lists of Business Functions and Structural Functions that will be covered by testing activities. You may think of these as testing blueprints and use them in the way that architects use blueprints to guide their design and construction processes. These "architect's plans" are not like resource and schedule plans; they serve very different purposes. They are more like a functional road map for building test cases. Both types of plans will exist on a development project, and both will share dependencies. The Build Strategy is the approach to how the physical system components will be built, assembled, and tested. It is driven by overall project requirements such as user priorities, development schedules, earlier delivery of specific functions, resource constraints, or any factor where there is a need to consider building the system in small pieces. Each build can be assembled and tested as a stand-alone unit, although there may be dependencies on previous builds.


Since it is desirable to test a component or assembly of components as soon as possible after they are built, there must be close co-ordination between the development team (builders) and the test team (testers). There must be close collaboration on which components are built first, what testing facilities must be present, how the components will be assembled, and how they will be tested and released to the next level of integration.

[Build diagram: programs A through F, with a driver program "X" feeding program C, illustrating builds 1 through 4]

The figure describes four different "build" examples:

Build 1 is an example of a well-defined functional build where each entity forms part of a single, clearly defined functional process. For example, an on-line selection of a single menu item will lead to several sequentially processed screens to complete the function. Development of all screen modules for this function would be grouped into one build.

Build 2 shows where a special "test build" is required solely to support testing. For example, in the illustration, entity "X" might be a driver program that needs to be developed to provide test data to entity "C". Usually, but not always, these are throwaway. Test builds are especially critical to the overall project's schedule and must be identified as soon as possible. To illustrate this, if the need for the special program "X" is not identified until after coding of program "C" is completed, this will impact the testing schedule and, probably, the completion of program "F".

Build 3 is a vertically sliced build where each entity is clearly sequenced in terms of its dependency. Usually a vertical build is confined to discrete and relatively self-contained processes as opposed to functions. A good example of this would be a batch process where an edit program precedes a reporting program. Each entity has its own functional characteristic but must clearly precede or succeed the other in terms of the development schedule.

Build 4 is a stand-alone build usually confined to a single, complex or large function. A horizontally sliced build (not illustrated) is where similar but independent functions are grouped together. For example, programs "B" and "C" might be grouped together into one build, especially if the logic processes in each program are similar and can be tested together.

The diagram also serves to illustrate the suitability of the different build types. Assume each box represents a program, that "B" and "C" are updates, and that "E" and "F" produce simple batch reports. Also assume that no formal build design process was used and the update programs were logically assigned to one group and the report programs to another. These are, in effect, horizontally sliced builds. Probably the group working on the report programs would be finished first and, typically, would be left to their own devices to create test data. When all programs are completed they would be subjected to a system test that would discover errors in the reporting programs caused not by bad coding but by inadequate testing. On the other hand, if a build strategy had been employed, the program assignment would probably use vertical builds where each update program and its supporting report


program were built and tested in the same exercise, eliminating the need to create special test data for the report programs and reducing the likelihood of inadequate testing. This is a simple example. On a complex project involving hundreds of programs and many development staff, a build strategy is critical: it will help to reduce testing time and may avoid the creation of unnecessary test data.

The Business and Structural Function lists will be broken down into Test Conditions. Test conditions are grouped into Test Sets for purposes of test execution only. Test conditions may generate multiple test cases, and test conditions can also be grouped into Test Cases. Individual Test Cases can be documented in preparation for execution. Test Data is designed to support the Test Cases. The Test Case represents the lowest level of detail to which testing design is carried.

Although the builds are for the use of the current project, some of the test sets from each of the builds may be designated as part of the regression test set. When subsequent builds are tested, the regression test sets may first be run before the actual test sets for the latest build. The collection of these test sets then becomes the regression test package. Similarly, the System Test Plan becomes part of the system documentation and the subsequent subset of test sets becomes the system test regression test package.

The test team will build test packages to test the system at the Integration, System, and User Acceptance levels of test. These test packages will consist of testing resource plans and test blueprint plans, cases, and data. The best way to create these packages is to start by planning and constructing the packages that will be used for the UAT and work backwards through the increasing levels of detail towards the Unit Test Level. You are actually working from the most stable and well-known requirements (those already stated and with expectations firmly in place) towards the functional elements and level of detail that the developers may currently be designing and creating. Therefore, the UAT test package would be created first, followed by the System test package, and finally the Integration test package. The level of detail included in the plans depends, of course, on the Level of Testing. An understanding of "what will be tested" will drive out other plans such as test facilities requirements. These plans will also be influenced by the testing strategies.
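The grouping described above, test conditions gathered into test sets with some sets flagged for reuse, can be sketched as plain data structures. All set and condition names below are illustrative, not prescribed by the methodology:

```python
# Test sets group conditions for execution; some sets are flagged
# for reuse in the regression package. All names are illustrative.
test_sets = {
    "build1_edit":   {"conditions": ["valid input", "missing field"], "regression": True},
    "build1_report": {"conditions": ["totals match input"],           "regression": False},
    "build2_update": {"conditions": ["record updated", "audit row"],  "regression": True},
}

# The regression package is the collection of flagged sets, run
# before the new test sets of each subsequent build.
regression_package = [name for name, s in test_sets.items() if s["regression"]]
print(regression_package)
```

When a later build is tested, the sets named in `regression_package` would be executed first, then the new build's own sets appended and candidates from them flagged for the package in turn.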

3.1.7 Problem Management & Change Control
Problem Management is the process by which problems (variance reports) are controlled and tracked within a development project. This is a key tool for the testers. It is best to use the same Problem Management System used by the overall project. If one does not exist, it should be set up early enough to be in place for the discovery of the first variances. It is recommended that an automated system be used to capture, store, and report on the status of variances; the amount of detail becomes enormous even on small projects.

Change Control or Change Management is similar to Problem Management in that it captures, stores, and reports on the status of valid and recognized change requirements. Testing teams will both request change and be impacted by change. Again, an automated system is recommended if the amount of change will be large. Once again, the project's Change Management System should be the one used by all. Plans should include provision for, and use of, both Problem Management and Change Management Systems. If these systems do not exist, plans should include provisions to create and maintain them.
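To illustrate the capture, store, and report idea, a minimal variance record might carry the fields below. The field names and status values are hypothetical, not prescribed by the methodology:

```python
# A minimal variance log, illustrating capture, store, and status
# reporting. Field names and status values are hypothetical.
variances = []

def log_variance(identifier, description, severity):
    """Capture a variance; every record starts in 'open' status."""
    variances.append({
        "id": identifier,
        "description": description,
        "severity": severity,
        "status": "open",   # e.g. open -> fixed -> retested -> closed
    })

log_variance("V-001", "statement total off by one cent", "high")
log_variance("V-002", "misspelled field label", "low")
variances[1]["status"] = "closed"

# Report: how many variances remain open.
open_count = sum(1 for v in variances if v["status"] == "open")
print(open_count)
```

Even this toy log shows why automation pays off: the status history of hundreds of variances quickly outgrows anything maintainable by hand.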

3.2 Test Case Design
Test Case Design must focus on the testing techniques, tools, the build and integration strategies, and the basic characteristics of the system to be tested.


The first approaches to consider are Black Box and White Box. Black Box techniques help to validate that the business requirements are met, while White Box techniques facilitate testing the technical requirements. The former tests for requirements coverage whereas the latter provides logic coverage.

The tools that provide the maximum payback are keystroke capture and playback tools and test coverage tools. The underlying principles of both are very simple. Keystroke capture tools eliminate the tedious and repetitive effort involved in keying in data. Test coverage tools assist in ensuring that the myriad paths that the logic branches and decisions can take are tested. Both are very useful but need a lot of advance planning and organization of the test cases. As well as anticipating all expected results, testing tool procedures must provide for handling unexpected situations. Therefore, they need a comprehensive test case design plan. Care must be exercised in using these automated tools to get the most effective usage. If the use of these tools is not carefully planned, a testing team can become frustrated and discouraged with the tool itself, and the testers will abandon the tools or fail to maintain the test cases already in their test packages.

The test design must also consider whether the tests involve on-line or batch systems, and whether they are input or output driven. Test cases would be built to accommodate these system characteristics. Another consideration is the grouping of valid and invalid test conditions. The integration approach (top down, bottom up, or a combination of both) can sometimes influence the test case design. For a more complete discussion of integration approaches, see Integration Approaches in Appendix B. It is recommended that the development of regression test packages be considered part of the test design. Test cases should always be built for reuse.
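The contrast between the two approaches can be shown with a small example. The function and its tests are hypothetical: the black-box tests check only the stated requirement (inputs against expected outputs), while the white-box tests are chosen from the code's structure so that every branch executes.

```python
def overdraft_fee(balance: float) -> float:
    """Charge a flat fee when the balance is negative (hypothetical rule)."""
    if balance < 0:
        return 25.0
    return 0.0

# Black box: derived from the requirement, with no knowledge of the code.
assert overdraft_fee(-10.0) == 25.0   # overdrawn account is charged
assert overdraft_fee(100.0) == 0.0    # funded account is not

# White box: derived from the code, chosen so every branch executes.
assert overdraft_fee(-0.01) == 25.0   # takes the "balance < 0" branch
assert overdraft_fee(0.0) == 0.0      # boundary: zero takes the other branch
```

The black-box cases prove requirements coverage; the white-box cases prove logic coverage. The two sets overlap but are derived from different sources, which is why both techniques belong in the design.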
Test case design strategy entails the following steps:
• Examine the development / testing approach. This was explained in the Build Strategy section.
• Consider the type of processing, whether on-line, batch, conversion program, etc.
• Determine the techniques to be used: white box, black box, or error guessing.
• Develop the test conditions.
• Develop the test cases.
• Create the test script.
• Define the expected results.
• Develop the procedures and data (prerequisites, steps, expectations, and post-test steps).

In developing a list of conditions that should be tested, it is useful to have a list of standard conditions to test for some common application types. A comprehensive list of considerations for ensuring a complete list of conditions to test is included in the Test Case Design section of Appendix D. They can be used as an aid to develop test cases.
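At their most detailed, the design steps produce individual test cases with prerequisites, steps, expected results, and post-test steps. A minimal record for such a case might look like the sketch below; the field names and sample values are assumptions, not part of the methodology:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One executable test case; field names are illustrative."""
    condition: str                                  # the test condition exercised
    prerequisites: list = field(default_factory=list)
    steps: list = field(default_factory=list)
    expected: str = ""                              # expected result, defined in advance
    post_steps: list = field(default_factory=list)

case = TestCase(
    condition="invalid account number is rejected",
    prerequisites=["test account file loaded"],
    steps=["enter account '000000'", "submit transaction"],
    expected="error message 'account not found' displayed",
    post_steps=["reset transaction log"],
)
```

Defining the expected result as part of the record, before execution, is what makes the case objectively checkable rather than judged after the fact.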


4.0 Levels of Testing
Testing proceeds through various physical levels as described in the application development lifecycle. Each completed level represents a milestone on the project plan and each stage represents a known level of physical integration and quality. These stages of integration are known as Testing Levels. The Levels of Testing used in the application development lifecycle are:
• Unit testing
• Integration testing
• System testing
• Systems integration testing
• User acceptance testing
• Operability testing

In FLT Methodology, requirements testing and design testing are used as additional levels of testing.

4.1 Testing Model
The descriptions of the testing levels follow. For each of these levels, these attributes will be covered:
• Objectives
• When to perform the tests
• Inputs / outputs
• Who performs the tests
• Methods
• Tools
• Education / training pre-requisites

The tables on the following pages describe each level of testing:


4.2 Requirements Testing
Requirements testing involves the verification and validation of requirements through static and dynamic tests. The validation testing of requirements will be covered under User Acceptance Testing. This section only covers the verification testing of requirements.

Objectives
• To verify that the stated requirements meet the business needs of the end-user before the external design is started
• To evaluate the requirements for testability

When
• After requirements have been stated

Input
• Detailed Requirements

Output
• Verified Requirements

Who
• Users & Developers

Methods
• JAD
• Static Testing Techniques
• Checklists
• Mapping
• Document Reviews

Tools
• CASE

Education
• Application training


4.3 Design Testing
Design testing involves the verification and validation of the system design through static and dynamic tests. The validation testing of external design is done during User Acceptance Testing, and the validation testing of internal design is covered during Unit, Integration and System Testing. This section only covers the verification testing of external and internal design.

Objectives
• To verify that the system design meets the agreed-to business and technical requirements before system construction begins
• To identify missed requirements

When
• After External Design is completed
• After Internal Design is completed

Input
• External Application Design
• Internal Application Design

Output
• Verified External Design
• Verified Internal Design

Who
• Users & Developers

Methods
• JAD
• Static Testing Techniques
• Checklists
• Prototyping
• Mapping
• Document Reviews

Tools
• CASE
• Prototyping tools

Education
• Application training
• Technical training


4.4 Unit Testing
Unit level testing is the initial testing of new and changed code in a module. It verifies the program specifications against the internal logic of the program or module and validates the logic.

Objectives
• To test the function of a program or unit of code such as a program or module
• To test internal logic
• To verify internal design
• To test path & conditions coverage
• To test exception conditions & error handling

When
• After modules are coded

Input
• Internal Application Design
• Master Test Plan
• Unit Test Plan

Output
• Unit Test Report

Who
• Developer

Methods
• White Box testing techniques
• Test Coverage techniques

Tools
• Debug
• Re-structure
• Code Analyzers
• Path/statement coverage tools

Education
• Testing Methodology
• Effective use of tools

Programs and modules should be desk-checked or walked through before they are integrated and tested. This review must be done first by the programmer and then in a more formal way through a structured walkthrough or code inspection. The steps to prepare for unit testing are:
• Determine the development integration and testing approach: top-down, bottom-up, or a combination of both.
• Determine the testing techniques to be used (white box), and the particular sub-techniques that apply best to the level of testing, such as statement coverage, decision coverage, path coverage, equivalence partitioning, boundary value analysis, and so on. These techniques will be described later in this document.
• Develop the test sets of conditions (one or more levels depending on detail).


Some of the common things to check for are that:
• All variables are explicitly declared and initialized
• The initialization is correct after every processing level and work areas are cleared or reset properly
• Array subscripts are integers and within the bounds of the array
• Reference variables are correctly allocated
• Unexpected error conditions are handled correctly
• File attributes are correct
• End-of-file conditions are handled correctly

During program development, design the test cases at the same time as the code is designed. The advantages are that test conditions / cases are:
• Designed more objectively
• Not influenced by coding style
• Not overlooked
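Designing the cases alongside the code supports white-box sub-techniques such as equivalence partitioning and boundary value analysis, mentioned in the preparation steps above. A minimal sketch, using a hypothetical pricing function:

```python
def discount_rate(quantity: int) -> float:
    """Hypothetical pricing rule: 10% off from 100 units, 20% off from 500."""
    if quantity >= 500:
        return 0.20
    if quantity >= 100:
        return 0.10
    return 0.0

# Equivalence partitioning: one representative value per input range.
assert discount_rate(50) == 0.0
assert discount_rate(300) == 0.10
assert discount_rate(1000) == 0.20

# Boundary value analysis: values on and just below each boundary,
# where off-by-one errors in the decisions would surface.
assert discount_rate(99) == 0.0
assert discount_rate(100) == 0.10
assert discount_rate(499) == 0.10
assert discount_rate(500) == 0.20
```

Together the two sets achieve full decision coverage of the function with only seven values, which is the economy these techniques are meant to deliver.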

4.5 Integration Testing
Integration level tests verify proper execution of application components and do not require that the application under test interface with other applications. Communication between modules within the subsystem is tested in a controlled and isolated environment within the project.

Objectives
• To technically verify proper interfacing between modules and within sub-systems

When
• After modules are unit tested

Input
• Internal & External Application Design
• Master Test Plan
• Integration Test Plan

Output
• Integration Test Report

Who
• Developers

Methods
• White and Black Box techniques
• Problem / Configuration Management

Tools
• Debug
• Re-structure
• Code Analyzers

Education
• Testing Methodology
• Effective use of tools


4.6 System Testing
System level tests verify proper execution of the entire application's components, including interfaces to other applications. Both functional and structural types of tests are performed to verify that the system is functionally and operationally sound.

Objectives
• To verify that the system components perform control functions
• To perform inter-system tests
• To demonstrate that the system performs both functionally and operationally as specified
• To perform appropriate types of tests relating to Transaction Flow, Installation, Reliability, Regression, etc.

When
• After Integration Testing

Input
• Detailed Requirements & External Application Design
• Master Test Plan
• System Test Plan

Output
• System Test Report

Who
• Development Team and Users

Methods
• Problem / Configuration Management

Tools
• Recommended set of tools

Education
• Testing Methodology
• Effective use of tools


4.7 Systems Integration Testing
Systems Integration testing is a test level which verifies the integration of all applications, including interfaces internal and external to the organization, with their hardware, software and infrastructure components in a production-like environment.

Objectives
• To test the co-existence of products and applications that are required to perform together in the production-like operational environment (hardware, software, network)
• To ensure that the system functions together with all the components of its environment as a total system
• To ensure that the system releases can be deployed in the current environment

When
• After system testing
• Often performed outside of the project life-cycle

Input
• Test Strategy
• Master Test Plan
• Systems Integration Test Plan

Output
• Systems Integration Test Report

Who
• System Testers

Methods
• White and Black Box techniques
• Problem / Configuration Management

Tools
• Recommended set of tools

Education
• Testing Methodology
• Effective use of tools


4.8 User Acceptance Testing
User acceptance tests (UAT) verify that the system meets user requirements as specified. The UAT simulates the user environment and emphasizes security, documentation and regression tests and will demonstrate that the system performs as expected to the sponsor and end-user so that they may accept the system.

Objectives
• To verify that the system meets the user requirements
When
• After System Testing
Input
• Business Needs & Detailed Requirements
• Master Test Plan
• User Acceptance Test Plan
Output
• User Acceptance Test report
Who
• Users / End Users
Methods
• Black Box techniques
• Problem / Configuration Management
Tools
• Compare, keystroke capture & playback, regression testing
Education
• Testing Methodology
• Effective use of tools
• Product knowledge
• Business Release Strategy


4.9 Operability Testing
Operability tests verify that the application can operate in the production environment. Operability tests are performed after, or concurrent with, User Acceptance Tests.

Objectives
• To ensure the product can operate in the production environment
• To ensure the product meets the acceptable level of service as per the Service Level Agreement
• To ensure the product operates as stated in the Operations Standards
• To ensure the system can be recovered / restarted as per standards
• To ensure that JCL is as per standard
When
• Concurrent with or after User Acceptance Testing is completed
Input
• User Acceptance Test Plan
• User Sign-off from UAT (if available)
• Operability Test Plan
• Operations Standards (as appropriate)
Output
• Operability Test report
Who
• Operations staff
Methods
• Problem / change management
Tools
• Performance monitoring tools
Education
• None

Note: JCL stands for Job Control Language. In a non-IBM environment, an equivalent system for controlling jobs/files/etc., would apply.


5. Types of Tests
Testing types are logical tests, which may be conducted in isolation or as combined exercises. They will be performed during the physical levels of testing as previously described. Success of the testing process depends on:
• Selecting appropriate types of testing necessary to meet the test objectives
• Determining the stages or levels of testing when these types of testing would be most effectively used (a test type can appear in more than one test level)
• Developing test conditions to meet the test evaluation criteria
• Creating test scripts / test data required to test the conditions above
• Managing fixes and re-testing

Types of testing are broadly classified as:
• Functional Testing
• Structural Testing

5.1 Functional Testing
The purpose of functional testing is to ensure that the user functional requirements and specifications are met. Test conditions are generated to evaluate the correctness of the application. The following are some of the categories, organized alphabetically:
• Audit and Controls testing
• Conversion testing
• Documentation & Procedures testing
• Error Handling testing
• Functions / Requirements testing
• Interface / Inter-system testing
• Installation testing
• Parallel testing
• Regression testing
• Transaction Flow (Path) testing
• Usability testing

They are each discussed in the following sections.

5.1.1 Audit and Controls Testing
Description
Audit and Controls testing verifies the adequacy and effectiveness of controls and ensures the capability to prove the completeness of data processing results. The validity of the controls themselves would have been verified during design.


Audit & Controls testing would normally be carried out as part of System Testing once the primary application functions have been stabilized.

Objectives
The objectives for Audit and Controls testing are to ensure or verify that:
• Audit trail data is accurate and complete
• The transactions are authorized
• The audit trail information is produced and maintained as needed

Examples
• Using hash totals to ensure that the accumulation of detailed records reconciles to the total records
• Inspecting manual control availability and operation to ensure audit effectiveness
• Reviewing the audit trail from the full parallel run to ensure correctness
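The hash-total example above can be sketched in a few lines. This is an illustrative sketch only, not part of the methodology; the record layout, field names, and the control-record format are assumptions made for the example.

```python
# Illustrative audit-and-controls check: reconcile a control record
# against the accumulation of detail records.

def control_total(records):
    """Accumulate a control (hash) total over the amount field."""
    return sum(r["amount"] for r in records)

# Detail records as processed by the batch run (assumed layout).
detail_records = [
    {"id": 1, "amount": 1250},
    {"id": 2, "amount": -300},
    {"id": 3, "amount": 475},
]

# Control record written by the upstream process (assumed format).
control_record = {"record_count": 3, "amount_total": 1425}

def audit_check(records, control):
    """Verify completeness: counts and totals must reconcile exactly."""
    return (len(records) == control["record_count"]
            and control_total(records) == control["amount_total"])

assert audit_check(detail_records, control_record)
```

A failing reconciliation would indicate that detail records were lost, duplicated, or altered between the two processing points.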

5.1.2 Conversion Testing
Description
Conversion testing verifies the compatibility of the converted program, data, and procedures with those from existing systems that are being converted or replaced. Most programs that are developed for conversion purposes are not totally new. They are often enhancements or replacements for old, deficient, or manual systems. The conversion may involve files, databases, screens, report formats, etc. Portions of conversion testing could start prior to Unit Testing and, in fact, some of the conversion programs may even be used as drivers for unit testing or to create unit test data.

Objectives
The objectives for Conversion testing are to ensure or verify that:
• New programs are compatible with old programs
• The conversion procedures for documentation, operation, user interfaces, etc., work
• Converted data files and formats are compatible with the new system
• The new programs are compatible with the new databases
• The new functions meet requirements
• Unchanged functions continue to perform as before
• Structural functions perform as specified
• Back-out/recovery and/or parallel operation procedures work

Examples
• Conversion from one operating system to another
• Conversion from host to distributed systems
• Conversion from IMS databases to DB2 tables
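A data-conversion check of the kind described above can be sketched as follows. The record layout, the conversion rule, and the chosen verification fields are all assumptions for illustration, not a prescribed procedure.

```python
# Illustrative conversion test: verify that converted records are
# compatible with the new system and that nothing was lost.

old_records = [("0001", "19991231", "100"), ("0002", "20000115", "250")]

def convert(record):
    """Assumed conversion: id kept, date reformatted, amount made numeric."""
    rec_id, yyyymmdd, amount = record
    iso_date = f"{yyyymmdd[0:4]}-{yyyymmdd[4:6]}-{yyyymmdd[6:8]}"
    return {"id": rec_id, "date": iso_date, "amount": int(amount)}

converted = [convert(r) for r in old_records]

# Conversion checks: record counts match, key fields survive, and
# the financial total reconciles across old and new formats.
assert len(converted) == len(old_records)
assert converted[0]["date"] == "1999-12-31"
assert sum(r["amount"] for r in converted) == 100 + 250
```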


5.1.3 User Documentation and Procedures Testing
Description
User documentation and procedures testing ensures that the interface between the system and the people works and is usable. Documentation testing is often done as part of procedure testing to verify that the instruction guides are helpful and accurate. Both areas of testing are normally carried out late in the cycle as part of System Testing or in the UAT. It is normally a mistake to invest a lot of effort to develop and test the user documentation and procedures before the externals of the system have stabilized. Ideally, the persons who will use the documentation and procedures are the ones who should conduct these tests.

Objectives
The objectives are to ensure or verify that:
• User / operational procedures are documented, complete and correct, and are easy to use
• People responsibilities are properly assigned, understood and coordinated
• User / operations staff are adequately trained

Examples
• Providing Operations staff with manuals and having them run the test application to observe their actions in a real operations simulation
• Providing data entry personnel with the kind of information they normally receive from customers and verifying that the information is entered correctly as per manuals / procedures
• Simulating real end-user scenarios (like telemarketing) using the system with the documentation and procedures developed

5.1.4 Error-Handling Testing
Description
Error-handling is the system function for detecting and responding to exception conditions (such as erroneous input). The completeness of the error-handling capability of an application system is often key to the usability of the system. It ensures that incorrect transactions will be properly processed and that the system will terminate in a controlled and predictable way in case of a disastrous failure.

Note: Error-handling tests should be included in all levels of testing.

Objectives
The objectives are to ensure or verify that:
• All reasonably expected errors can be detected by the system
• The system can adequately handle the error conditions and ensure continuity of processing
• Proper controls are in place during the correction process

Error handling logic must deal with all possible conditions related to the arrival of erroneous input, such as:
• Initial detection and reporting of the error


• Storage of the erroneous input pending resolution
• Periodic flagging of the outstanding error until it is resolved
• Processing of the correction to the error

Examples
• Seeding some common transactions with known errors into the system
• Simulating an environmental failure to see how the application recovers from a major error
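The error-seeding example can be sketched as below. The validator, the transaction fields, and the error codes are hypothetical; the point is that each seeded defect must be detected and reported, while clean input passes untouched.

```python
# Illustrative error-handling test: seed transactions with known errors
# and verify that each one is detected by the (hypothetical) validator.

def validate(txn):
    """Return a list of error codes; an empty list means the input is clean."""
    errors = []
    if txn.get("amount", 0) <= 0:
        errors.append("E01_NONPOSITIVE_AMOUNT")
    if not txn.get("account"):
        errors.append("E02_MISSING_ACCOUNT")
    return errors

good = {"account": "12345", "amount": 50}
seeded = [
    ({"account": "12345", "amount": -5}, "E01_NONPOSITIVE_AMOUNT"),
    ({"account": "", "amount": 50}, "E02_MISSING_ACCOUNT"),
]

# Clean input passes; every seeded error must be detected and reported.
assert validate(good) == []
for txn, expected in seeded:
    assert expected in validate(txn)
```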

5.1.5 Function Testing
Description
Function Testing verifies, at each stage of development, that each business function operates as stated in the Requirements and as specified in the External and Internal Design documents. Function testing is usually completed in System Testing so that by the time the system is handed over to the user for UAT, the test group has already verified, to the best of its ability, that the system meets requirements.

Objectives
The objectives are to ensure or verify that:
• The system meets the user requirements
• The system performs its functions consistently and accurately
• The application processes information in accordance with the organization's standards, policies, and procedures

Examples
• Manually mapping each external design element back to a requirement to ensure they are all included in the design (static test)
• Simulating a variety of usage scenarios with test cases in an operational test system
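The static traceability check in the first example can be automated along these lines. The requirement and test-case identifiers are invented for illustration; a real project would draw them from its traceability matrix.

```python
# Illustrative static function test: every requirement must be covered
# by at least one design element / test case.

requirements = {"REQ-001", "REQ-002", "REQ-003"}

# Each test case lists the requirements it exercises (assumed mapping).
test_cases = {
    "TC-01": {"REQ-001"},
    "TC-02": {"REQ-002", "REQ-003"},
}

covered = set().union(*test_cases.values())
uncovered = requirements - covered

# The traceability check passes only when no requirement is left uncovered.
assert uncovered == set()
```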

5.1.6 Installation Testing
Description
Any application that will be installed and run in an environment remote from the development location requires installation testing. This is especially true of network systems that may be run in many locations. This is also the case with packages where changes were developed at the vendor's site. Installation testing is necessary if the installation is complex, critical, must be completed in a short window, or is of high volume, such as in microcomputer installations. This type of testing should always be performed by those who will perform the installation process. Installation testing is done after exiting from System Test, or in parallel with the User Acceptance Test.

Objectives
The objectives are to ensure or verify that:
• All required components are in the installation package
• The installation procedure is user-friendly and easy to use


• The installation documentation is complete and accurate
• The machine-readable data is in the correct and usable format

Examples
• Verifying the contents of the installation package against its checklist of enclosures
• Having a person outside of the development or test team install the application using just the installation documentation as guidance
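The checklist verification in the first example reduces to a set comparison. The file names below are placeholders; a real test would list the actual package contents (for example, from the distribution medium) against the enclosure checklist.

```python
# Illustrative installation check: compare the package contents against
# its checklist of enclosures. File names are hypothetical.

checklist = {"app.bin", "install.sh", "README", "config.sample"}
package_contents = {"app.bin", "install.sh", "README", "config.sample"}

missing = checklist - package_contents      # required but absent
unexpected = package_contents - checklist   # present but not on the checklist

# The package is complete only when both discrepancy sets are empty.
assert not missing and not unexpected
```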

5.1.7 Interface / Inter-system Testing
Description
Application systems often interface with other application systems. Most often, there are multiple applications involved in a single project implementation. Interface or inter-system testing ensures that the interconnections between applications function correctly. Interface testing is even more complex if the applications operate on different platforms, in different locations, or use different languages. Interface testing is typically carried out during System Testing when all the components are available and working. It is also acceptable for a certain amount of interface testing to be performed during UAT to verify that system-tested interfaces function properly in a production-like environment. Interface tests in UAT should always be among the first tests performed, in order to resolve incompatibility issues before the users commence their tests.

Objectives
The objectives are to ensure or verify that:
• Proper parameters and data are correctly passed between the applications
• The applications agree on the format and sequence of data being passed
• Proper timing and coordination of functions exists between the application systems, and the processing schedules reflect these
• Interface documentation for the various systems is complete and accurate
• Missing data files are properly handled
• It is not possible for the same file to be processed twice or to be processed out of sequence
• The implications are clearly identified if the interfacing applications are delayed, not available, or have been cancelled

Examples
• Taking a set of test transactions from one application and passing them into the interfacing application
• Simulating the loss of an upstream interface file in a full System Test
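The "agree on format and sequence" objective can be illustrated with a tiny feed check. The header/detail/trailer layout and the pipe-delimited format are assumptions; real interfaces would use the layout agreed in the interface documentation.

```python
# Illustrative interface test: the downstream consumer verifies the
# format, sequence, and record count of an upstream feed.

def produce_feed():
    """Upstream application output: header, detail records, trailer with count."""
    return ["H|20060926", "D|0001|100", "D|0002|200", "T|2"]

def consume_feed(lines):
    """Downstream parser: checks sequence and reconciles the trailer count."""
    assert lines[0].startswith("H|"), "missing header"
    assert lines[-1].startswith("T|"), "missing trailer"
    details = [line for line in lines if line.startswith("D|")]
    assert int(lines[-1].split("|")[1]) == len(details), "count mismatch"
    return details

details = consume_feed(produce_feed())
assert len(details) == 2
```

A deliberately broken feed (missing trailer, wrong count) should make the consumer fail loudly, which is exactly what the "missing data files are properly handled" objective asks for.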


5.1.8 Parallel Testing
Description
Parallel testing compares the results of processing the same data in both the old and new systems. Parallel testing is useful when a new application replaces an existing system, when the same transaction input is used in both, and when the output from both is reconcilable. It is also useful when switching from a manual system to an automated system. Parallel testing is performed by using the same data to run both the new and old systems. The outputs are compared, and when all the variances are explained and acceptable, the old system is discontinued.

Objectives
The objectives are to ensure or verify that:
• The new system gives results consistent with the old system (in those situations where the old system was satisfactory)
• Expected differences in results occur (in those circumstances where the old system was unsatisfactory)

Examples
• First running the old system, restoring the environment to its state at the beginning of the first run, and then running the new version in a production environment
• Executing two manual procedures side-by-side and observing the parallel results
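The essence of a parallel run, stripped to a sketch: the same input goes through both implementations and the outputs are reconciled. The two functions below are stand-ins for the real old and new systems; in practice the comparison would cover files or reports, not a single total.

```python
# Illustrative parallel test: process identical input through the old
# and new systems and reconcile the variance.

def old_system(txns):
    return sum(txns)           # legacy batch total

def new_system(txns):
    return sum(sorted(txns))   # replacement; ordering must not change results

input_data = [100, -25, 60, 15]
variance = new_system(input_data) - old_system(input_data)

# The parallel run passes only when every variance is explained; here
# no difference is expected, so the variance must be zero.
assert variance == 0
```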

5.1.9 Regression Testing
Description
Regression testing verifies that no unwanted changes were introduced to one part of the system as a result of making changes to another part of the system. To perform a regression test, the application must be run through the same test scenario at least twice. The first test is run when your application, or a specific part of it, is responding correctly. Your application's responses to the first test serve as a baseline against which you can compare later responses. The second test is run after you make changes to the application. The application's responses are compared for both executions of the same test cases, and the results of the comparison can be used to document and analyze the differences. By analyzing the differences between two executions of the same test cases, you can determine whether your application's responses have changed unexpectedly. Regression testing should always be used by the developer during Unit Testing and, in conjunction with a Change Management discipline, will help prevent code changes being lost or overwritten by subsequent changes. A final regression test should be done as the last act of System Testing, once all the functions are stabilized and further changes are not expected.

Objectives
The objectives are to ensure or verify that:


• The test unit or system continues to function correctly after changes have been made
• Later changes do not affect previous system functions

Examples
• Final-testing a group of changes by executing a regression test run as the last System Test
• Reusing accumulated test cases to re-unit-test a module each time an incremental change is coded
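The baseline-and-compare cycle described above can be sketched as follows. The application function and test cases are stand-ins; in practice the baseline would be captured responses (screens, files, reports) stored under configuration management.

```python
# Illustrative regression test: responses from a known-good baseline run
# are kept and compared against a later run of the same test cases.

def application(case):
    """Stand-in for the application under test."""
    return case * 2

test_cases = [1, 2, 3, 4]

baseline = {c: application(c) for c in test_cases}   # first (known-good) run
current = {c: application(c) for c in test_cases}    # run after changes

differences = {c for c in test_cases if baseline[c] != current[c]}

# No unexpected changes: the two executions must match case for case.
assert differences == set()
```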

5.1.10 Transaction Flow Testing
Description
Transaction Flow Testing can be defined as the testing of the path of a transaction from the time it enters the system until it is completely processed and exits a suite of applications. For example, for an online branch transaction issuing a draft on another bank, path testing starts when the branch teller enters the transaction at the terminal and continues through account debiting, draft issuance, funds transfer to the paying bank, draft settlement, account reconciliation, and reporting of the transaction on the customer's statement. This implies that transaction flow testing is not necessarily limited to one application, especially if the end-to-end process is handled by more than one system application. Whenever any one component in a flow of processes changes, it is the responsibility of the person or group making that change to ensure that all other processes continue to function properly. Transaction Flow testing may begin once System Testing has progressed to the point that the application is demonstrably stable and capable of processing transactions from start to finish.

Objectives
The objectives are to ensure or verify that:
• The transaction is correctly processed from the time of its entry into the system until the time it is expected to exit
• All the output from the various systems or sub-systems, which is input to the other systems or sub-systems, is processed correctly and passed through
• Interfaces can handle unexpected situations without causing any uncontrolled ABENDs or any break in services
• The business system functions "seamlessly" across a set of application systems

Examples
• Entering an on-line transaction and tracing its progress completely through the applications
• Creating unexpected problem situations in interface systems and verifying that the system is equipped to handle them
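Tracing a transaction end-to-end can be pictured as pushing one record through a chain of processing stages, loosely echoing the draft-issue example. The stages, field names, and amounts here are invented for illustration.

```python
# Illustrative transaction-flow test: one transaction is traced through
# a chain of stages and its final state is verified on exit.

def debit_account(txn):
    txn["account_balance"] -= txn["amount"]
    return txn

def issue_draft(txn):
    txn["draft_issued"] = True
    return txn

def report(txn):
    txn["on_statement"] = True
    return txn

pipeline = [debit_account, issue_draft, report]

txn = {"amount": 200, "account_balance": 1000}
for stage in pipeline:
    txn = stage(txn)

# On exit the transaction must be fully processed by every stage.
assert txn["account_balance"] == 800
assert txn["draft_issued"] and txn["on_statement"]
```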

5.1.11 Usability Testing
Description
The purpose of usability testing is to ensure that the final product is usable in a practical, day-to-day fashion. Whereas functional testing looks for accuracy of the product, this type of test looks for simplicity and user-friendliness of the product. Usability testing would normally be performed as part of functional testing during System and User Acceptance Testing.


Objectives
The objectives of this type of test are to ensure that:
• The system is easy to operate from an end-user and a service provider (Computer Services) standpoint
• Screens and output are clear, concise and easy to use, and help screens or clerical instructions are readable, accurately describe the process, and are expressed in simple, jargon-free language
• Input processes, whether via terminal or paper, follow natural and intuitive steps

Examples
• Testing a data input screen to ensure that data is requested in the order that the client would normally use it
• Asking someone unconnected with the project to complete a balancing process on a printed report by following the instructions

5.2 Structural Testing
The purpose of structural testing is to ensure that the technical and "housekeeping" functions of the system work. It is designed to verify that the system is structurally sound and can perform the intended tasks. Its objective is also to ensure that the technology has been used properly and that when the component parts are integrated they perform as a cohesive unit. The tests are not intended to verify the functional correctness of the system, but rather that the system is technically sound. The categories covered in the next sub-sections are:
• Backup and Recovery testing
• Contingency testing
• Job Stream testing
• Operational testing
• Performance testing
• Security testing
• Stress / Volume testing

5.2.1 Backup and Recovery Testing
Description
Recovery is the ability of an application to be restarted after failure. The process usually involves backing up to a point in the processing cycle where the integrity of the system is assured and then reprocessing the transactions past the original point of failure. The nature of the application, the volume of transactions, the internal design of the application to handle a restart process, the skill level of the people involved in the recovery procedures, and the documentation and tools provided all impact the recovery process. Backup and recovery testing should be performed as part of System Testing and verified during Operability Testing whenever continuity of processing is a critical requirement for the application. The risk of failure and the potential loss due to the inability of an application to recover will dictate the extent of testing required.


Objectives
The objectives are to ensure or verify that:
• The system can continue to operate after a failure
• All necessary data to restore / re-start the system is saved
• Backed-up data is accessible and can be restored
• Backup and recovery procedures are well documented and available
• People responsible for conducting the recovery are adequately trained

Examples
• Simulating a full production environment with production data volumes
• Simulating a system failure and verifying that procedures are adequate to handle the recovery process
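A minimal sketch of the backup-restore-verify cycle, with the "system state" reduced to a single file in a temporary directory purely for illustration:

```python
# Illustrative backup-and-recovery test: back up at a known-good point,
# simulate a failure, restore, and verify the state is intact.
import pathlib
import shutil
import tempfile

workdir = pathlib.Path(tempfile.mkdtemp())
data_file = workdir / "data.txt"
backup_file = workdir / "data.bak"

data_file.write_text("state-before-failure")
shutil.copy(data_file, backup_file)       # back up at a known-good point

data_file.write_text("corrupted")         # simulate a failure

shutil.copy(backup_file, data_file)       # restore from the backup
restored = data_file.read_text()

assert restored == "state-before-failure"
shutil.rmtree(workdir)                    # clean up the test facility
```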

5.2.2 Contingency Testing
Description
Operational situations may occur which result in major outages or "disasters." Some applications are so crucial that special precautions need to be taken to minimize the effects of these situations and speed the recovery process; this is called contingency planning. Usually each application is rated by its users in terms of its importance to the company, and contingency plans are drawn up accordingly. In some cases, where an application is of no major significance, a contingency plan may be simply to wait for the disaster to pass. In other cases more extreme measures have to be taken. For example, a back-up processor at a different site may be an essential element required by the contingency plan. Contingency testing will have to verify that an application and its databases, networks, and operating processes can all be migrated smoothly to the other site. Contingency testing is a specialized exercise normally conducted by operations staff. There is no mandated phase in which this type of test is to be performed although, in the case of highly important applications, it will occur after System Testing and probably concurrent with the UAT and Operability Testing.

Objectives
The objectives are to ensure or verify that:
• The system can be restored according to prescribed requirements
• All data is recoverable and restartable under the prescribed contingency conditions
• All processes, instructions and procedures function correctly in the contingency conditions
• When normal conditions return, the system and all its processes and data can be restored


Examples
• Simulating a collapse of the application with no controlled back-ups and testing its recovery and the impact in terms of data and service loss
• Testing the operational instructions to ensure that they are not site-specific

5.2.3 Job Stream Testing
Description
Job Stream testing is usually done as a part of operational testing (the test type, not the test level, although it is still performed during Operability Testing). Job Stream testing starts early and continues throughout all levels of testing. Conformance to standards is checked in User Acceptance and Operability testing.

Objectives
The objectives are to ensure or verify that:
• The JCL is defect-free, compliant to standards, and will execute correctly
• Each program can handle expected parameter input
• The programs generate the required return codes
• Jobs are sequenced and released properly
• File formats are consistent across programs so that the programs can talk to each other
• The system is compatible with the operational environment

Examples
• Having the JCL inspected by an Operations staff member or checked by a tool like JCL Check
• Running a full operating test with a final version of the operations JCL
• Attempting to run the application with an invalid set of parameters
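The sequencing and return-code objectives can be sketched as below. The three jobs are stand-ins (in practice they would be subprocess or scheduler invocations), and the convention that 0 means success is an assumption borrowed from common batch environments.

```python
# Illustrative job-stream test: run jobs in sequence, record return
# codes, and stop the stream at the first failure.

def job_extract():
    return 0          # stand-in job; 0 = success (assumed convention)

def job_transform():
    return 0

def job_load():
    return 0

job_stream = [("EXTRACT", job_extract),
              ("TRANSFORM", job_transform),
              ("LOAD", job_load)]

results = {}
for name, job in job_stream:
    rc = job()
    results[name] = rc
    if rc != 0:
        break         # subsequent jobs must not run past a failed step

# Every job returned the required code, in the proper sequence.
assert all(rc == 0 for rc in results.values())
assert list(results) == ["EXTRACT", "TRANSFORM", "LOAD"]
```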

5.2.4 Operational Testing
Description
All products delivered into production must obviously perform according to user requirements. However, a product's performance is not limited solely to its functional characteristics. Its operational characteristics are just as important, since users expect and demand a guaranteed level of service from Computer Services. Therefore, even though Operability Testing is the final point where a system's operational behavior is tested, it is still the responsibility of the developers to consider and test operational factors during the construction phase. Operational Testing should be performed as part of Integration and System Testing and verified during Operability Testing.

Objectives
The objectives are to ensure that:
• All modules and programs are available and operational


• All on-line commands and procedures function properly
• All JCL conforms to standards and operates the application successfully
• All scheduling information is correct
• All operational documentation correctly represents the application and is clear, concise and complete
• All batch functions can be completed within an acceptable window of operation

Examples
• Simulating a full batch cycle and operating the process exactly as prescribed in the run documentation
• Subjecting all JCL to a standards check

5.2.5 Performance Testing
Description
Performance Testing is designed to test whether the system meets the desired level of performance in a production environment. Performance considerations may relate to response times, turnaround times (throughput), technical design issues, and so on. Performance testing can be conducted using a production system, a simulated environment, or a prototype. Attention to performance issues (e.g., response time or availability) begins during the Design Phase, when the performance criteria should be established. Performance models may be constructed at that time if warranted by the nature of the project. Actual performance measurement should begin as soon as working programs (not necessarily defect-free programs) are ready.

Objectives
The objectives are to ensure or verify that:
• The system performs as requested (e.g., transaction response, availability, etc.)

Examples
• Using performance monitoring tools to verify system performance specifications
• Logging transaction response times in a System Test
• Using performance monitoring software to identify dead code in programs, verify efficient processing of transactions, and so on; normal and peak periods should be identified and testing carried out to cover both
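Logging transaction response times against a criterion can be sketched as follows. The transaction is a trivial stand-in and the 0.5-second threshold is an invented criterion; real thresholds come from the performance criteria established in the Design Phase.

```python
# Illustrative performance test: time repeated transactions and check
# the worst response against an (assumed) performance criterion.
import time

def process_transaction():
    """Stand-in for a transaction; real tests would exercise the system."""
    return sum(range(1000))

response_times = []
for _ in range(5):
    start = time.perf_counter()
    process_transaction()
    response_times.append(time.perf_counter() - start)

worst = max(response_times)
assert worst < 0.5   # criterion established at design time (assumed value)
```

In practice one would log the full distribution (not just the worst case) and repeat the measurement under both normal and peak load.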

5.2.6 Security Testing
Description
Security testing is required to ensure that confidential information in the system, and in other affected systems, is protected against loss, corruption, or misuse, whether by deliberate or accidental actions. The amount of testing needed depends on the risk assessment of the consequences of a breach in security. Tests should focus on, and be limited to, those security features developed as part of the system, but may include previously implemented security functions that are necessary to fully test the system.


Security testing can begin at any time during System Testing, continues in UAT, and is completed in Operability Testing.

Objectives
The objectives are to ensure or verify that:
• The security features cannot be bypassed, altered, or broken
• Security risks are properly identified and accepted, and contingency plans tested
• The security provided by the system functions correctly

Examples
• Attempting a sign-on to the system without authorization, if one is required
• Verifying that passwords are not visible on terminals or on printed output
• Attempting to give yourself authorization to perform restricted tasks
• Attempting to enter unauthorized on-line transactions to ensure that the system can not only identify and prevent such unauthorized access but also report it if required
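The unauthorized sign-on example is, at its core, a set of negative tests: every bypass attempt must fail. The credential store and sign-on function below are hypothetical stand-ins for the system's real security feature.

```python
# Illustrative negative security test: sign-on attempts without proper
# authorization must be rejected. The credential store is hypothetical.

authorized = {"alice": "s3cret"}

def sign_on(user, password):
    """Return True only for a known user with the matching password."""
    return authorized.get(user) == password

# The valid sign-on succeeds; every bypass attempt must fail.
assert sign_on("alice", "s3cret")
assert not sign_on("alice", "wrong")
assert not sign_on("mallory", "s3cret")
assert not sign_on("", "")
```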

5.2.7 Stress / Volume Testing
Description
Stress testing is defined as the processing of a large number of transactions through the system in a defined period of time. It is done to measure the performance characteristics of the system under peak load conditions. Stress factors may apply to different aspects of the system, such as input transactions, report lines, internal tables, communications, computer processing capacity, throughput, disk space, I/O, and so on. Stress testing should not begin until the functions being stressed are tested and stable; it is not, however, necessary to have all functions fully tested in order to start. The need for Stress Testing must be identified in the Design Phase, and testing should commence as soon as operationally stable system units are available. The reason stress testing is started early is so that any design defects can be rectified before the system exits the construction phase.

Objectives
The objectives are to ensure or verify that:
• The production system can process large volumes of transactions within the expected time frame
• The system architecture and construction are capable of processing large volumes of data
• The hardware / software are adequate to process the volumes
• The system has adequate resources to handle the expected turnaround time
• The report processing support functions (e.g., print services) can handle the volume of data output by the system

Examples


• Testing system overflow conditions by entering more volume than can be handled by tables, transaction queues, internal storage facilities, and so on
• Testing communication lines during peak processing simulations
• Using test data generators and multiple terminals, stressing the on-line systems for an extended period of time and stressing the batch system with more than one batch of transactions
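The overflow example can be sketched as below: deliberately feed more entries than an internal table can hold and verify that the excess is rejected in a controlled way rather than crashing the system. The capacity of 100 and the rejection mechanism are assumptions for the example.

```python
# Illustrative overflow stress test: drive an internal table past its
# capacity and verify controlled rejection of the excess.

TABLE_CAPACITY = 100
table, rejected = [], []

for txn in range(250):                 # volume deliberately above capacity
    if len(table) < TABLE_CAPACITY:
        table.append(txn)
    else:
        rejected.append(txn)           # controlled rejection, not a crash

# The table fills to exactly its capacity and every excess transaction
# is accounted for rather than silently lost.
assert len(table) == TABLE_CAPACITY
assert len(rejected) == 250 - TABLE_CAPACITY
```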


5.3 Relationship Between Levels and Types of Testing
The table below illustrates the relationship between testing levels and testing types. The testing levels are the columns and the types are the rows, and the table shows the level(s) where each type of test might be performed. These are only suggestions; each project has different characteristics, which have to be considered when planning the testing process. As an example, attention is drawn to Interface/Inter-System testing. This is clearly a Systems Test function although, given environmental constraints within the development facility, some interface testing may have to be performed as an initial process within User Acceptance Testing.

Levels (columns): Unit, Integration, System, Systems Integration, UAT, Operability
Types (rows): Audit & Control; Conversion; Doc. & Procedure; Error-handling; Function/Requirement; Installation; Interface/Inter-System; Parallel; Regression; Transaction Flow; Usability; Back-up & Recovery; Contingency; Job Stream; Operational; Performance; Security; Stress/Volume


6. Test Data Set Up
Test data set-up is a tedious and time-consuming process. After the test case design has been selected, we have to consider all the available sources of data and build new, or modify existing, test data. Frequently, testers tend to look at the functional specifications and set up data to test the specifications. This tends to overlook how the software will be used. It is therefore important that production data are analyzed to understand the types, frequency and characteristics of data so that they can be simulated during testing. Choose from the following sources while setting up test data:
• Production data
• Data from previous testing
• Data from centralized test beds (if available)
• Data used in other systems (for interface testing)

In many situations, data from the selected source(s) have to be supplemented to cover additional conditions and cases. When setting up effective test data, the goal is to provide adequate requirements and conditions coverage. Try to include some of each of the following types:
• Frequently occurring types and characteristics of data that have high risk. For example, deposit or withdrawal transactions at a banking terminal.
• Frequently occurring types of data with very little exposure. For example, maintenance-type transactions such as name & address changes.
• Low-frequency errors that have very little consequence.
• Low-frequency errors resulting in heavy losses. For example, the printing of a few additional zeros in a financial report.

Once the data source has been identified, determine the files to use and their sizes. Always try to use any known standards or conventions for databases and files. Data are then extracted using the appropriate methods. The key to successful testing is to state the expected results when defining the test conditions and documenting the test scripts. Documentation should include:
• What is being tested
• How the testing is done (procedures)
• Where to find the test data
• The expected results
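The four items of test script documentation above can be captured in a simple structured record. This is only an illustrative sketch; the class name, fields and sample values are not part of the methodology.

```python
from dataclasses import dataclass

@dataclass
class TestScript:
    """Minimal record of the four items a documented test script should carry."""
    what_is_tested: str      # the condition or requirement under test
    procedure: list[str]     # how the testing is done, step by step
    test_data_location: str  # where to find the test data
    expected_results: str    # stated BEFORE the test is executed

script = TestScript(
    what_is_tested="Withdrawal above available balance is rejected",
    procedure=["Restore account file",
               "Enter withdrawal for 150.00",
               "Check audit report"],
    test_data_location="testbed/accounts/overdraft.dat",
    expected_results="Transaction rejected with error code E-210",
)
assert script.expected_results  # expected results must be defined up front
```

Stating `expected_results` when the script is written, not after the run, is what makes pass/fail judgments objective later.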

The keys to reducing the effort to create test data are REUSE and RECYCLE!


7. Test Procedure Set Up
The test procedures describe the step-by-step process used to conduct the tests. The primary goal is to define the sequence and contents of the tests and to establish pass/fail criteria for each. In so doing, you will also be establishing the entry and exit criteria for each test. Setting up entry criteria is, in effect, defining the prerequisites and co-requisites. For example, if we want to test the printing of certain audit reports in batch, we may include the entry of on-line transactions and the bringing down of the on-line system as entry criteria. Similarly, if we are testing for a specific error condition to be generated and reported on the audit report, we may specify that a particular entry appear on the report as the exit criteria: we cannot exit until that entry appears as expected.

Typically, test procedures include instructions to set up the test facility. For example, if files have to be restored before a test is run, this becomes the first step in the test procedure. If certain tests fail, back-out steps may have to be performed to undo what was done up to that point, and some restart and recovery steps may be required once the problem has been fixed. All of these steps are included as part of the test procedure. Sometimes it may not be possible to anticipate all error conditions; in that case, steps to handle an unexpected error situation should be described.

Procedure set up may also involve writing the Job Control Language (JCL), or platform-specific equivalent, to be used in production. This is usually tested during the System Test and certified in the UAT and Operability Test environments. If the JCL is being set up for a new job, it will actually be tested during Integration and System testing before being promoted to UAT. Follow all relevant JCL standards when setting up the production JCL procedures.
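A test procedure with entry criteria, ordered steps, back-out handling and exit criteria can be sketched as a small harness. All function names and the sample steps here are illustrative assumptions, not part of the methodology.

```python
def run_procedure(steps, entry_ok, exit_ok, back_out):
    """Run steps in order; back out and report failure if anything goes wrong."""
    if not entry_ok():                  # prerequisites / co-requisites not met
        return "blocked"
    try:
        for step in steps:
            step()
    except Exception:
        back_out()                      # undo what was done up to this point
        return "failed"
    # Exit criteria: e.g. the expected entry appears on the audit report.
    return "passed" if exit_ok() else "failed"

log = []
result = run_procedure(
    steps=[lambda: log.append("restore files"),
           lambda: log.append("enter on-line transactions"),
           lambda: log.append("bring down on-line system"),
           lambda: log.append("run batch audit reports")],
    entry_ok=lambda: True,
    exit_ok=lambda: "run batch audit reports" in log,
    back_out=lambda: log.append("back out"),
)
assert result == "passed"
```

The point of the sketch is that setup, back-out and exit checks are part of the procedure itself, not afterthoughts.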


8. Managing and Controlling the Test Effort
8.1 Introduction
Although testing is a part of the development process, the activities that must be managed by the test manager and staff can be quite distinct and critical to the overall success of the project. Outlined below are the major factors that are most important to testing. While some are exclusive to testing, others are shared as part of overall project management.

8.2 Project Plans and Schedules
In larger projects, the test manager will assume responsibility for the portion of the project plans that relate specifically to testing activities. It is important for the test team to institute a tracking and control system to measure the schedule's progress and identify potential bottlenecks. As an example, test plans are written to schedule and control test runs. These can be as complex and extensive as those that are used by the product developers. This activity can demand a level of planning skill on par with a project manager. It is critical that the testing plans (schedule, costs, and resources) be fully integrated with the development plans and be managed as an overall plan for all sizes of projects. This ensures that coordination is achieved throughout the project.

8.3 Test Team Composition & User Involvement
A key step in managing a successful test project is to identify the key participants who will be involved in testing. It is good practice to appoint a Test Manager. Typically the Test Manager comes from the development team, but may also be someone from the users' organization with experience in this or related areas. The role of the Test Manager is to:
• Customize the generic testing methodology
• Define the test responsibilities / tasks that need to be performed
• Identify individuals who can perform those responsibilities / tasks
• Plan, coordinate, manage and report results of the testing process

The test team staff is responsible for:
• Producing test package material (test cases, test data, test procedures)
• Executing test cases, reporting results, and retesting fixes

The technical staff is responsible for:
• Providing the technical guidance required to test and fix the new application

Personnel involved in testing activities can typically be drawn from any of the following groups:
• Developers
• Users
• User management
• Operations


• Independent test group specialists

Effective management and control of the test effort requires that a test team leader and staff be assigned and their roles and responsibilities defined. For each level of testing, clearly communicate and document the following:
• Who creates the test data
• Who runs the tests or enters the data
• Who checks the results
• Who decides what corrections are to be made
• Who makes the required corrections
• Who builds and maintains the test facilities
• Who operates the testing facilities and procedures (e.g., output handling)

8.4 Measurement and Accountability
You cannot manage what you cannot measure. The test manager will track the typical project measures, such as schedules and resources, that project teams are accustomed to. There should also be an objective set of measurements to track and control the progress of the testing activities and their success. Many of these measurements are plain common sense and have been validated by many major project experiences. They can be applied at each level of testing. Among the questions the test team must be able to answer are:
• What parts have I completed testing successfully? (coverage)
• What parts have I attempted to test? (coverage)
• How many variances can I expect? (defect prediction)
• How many variances have I already discovered? (problem management)
• How many variances have I fixed, and how many are left to fix? (problem management)
• What is the impact (seriousness) of the outstanding variances? (problem management)
• At what rate is the team still finding variances? (problem management)

While the list of questions may not be exhaustive, it does represent the most useful information the test team needs to know. Using the measures that answer these questions, together with the history of previous projects, the test planner can make more reliable schedule, resource, and test plans, and use the ongoing results to control and report on the success of testing. The accountability of the team will rest on the outcome of these measures, but the risk will be managed over the full project life cycle. The specific measures are not discussed here; the methodology training refers to some of the more popular measures recommended, and the measures selected by your project will depend on your needs.

8.5 Reporting
Progress reporting to management must be regular and useful. The format and the frequency of the reports are the responsibility of the test manager or, in the absence of that position, the project manager.


To be useful, progress should be reported in terms of factual and objective measures. The measures that answer the questions posed in the measurement section above can be presented directly, or combined to answer other related management questions. Graphic representation of the data improves the clarity and impact of the information. The level of detail should be appropriate to the audience to which you are reporting. The Test Team should both contribute to and receive the testing progress reports; the success of testing then becomes a visible incentive.

8.6 Problem Management
Once a work product is transferred from its author and becomes part of the test process, it becomes subject to problem management procedures. This means that any problem reported will be documented, tracked, and reported until it is resolved. There is a tendency not to report very small problems. This must be resisted, since the impacts of the problem or its fix may not be fully analyzed, and the problem may lose visibility for retest and subsequent analysis.

The status of problems discovered and outstanding must be tracked and reported regularly. Using backlog measures (where outstanding problems are accumulated, counted and reported), you can assess the ability of the project team to cope with the volume and/or severity of problems. If the backlog continues to increase in relation to the number of problems discovered, this information can assist in determining the need to change the project and test schedule. Similarly, if severe problems remain unresolved for prolonged periods, this might indicate design issues that need addressing.

Another benefit of maintaining a history of problems is the capability it gives the organization to improve the testing or development processes through causal analysis. Causal analysis is a process whereby every problem's cause is tracked and categorized. For every problem, cause category codes can be assigned and later analyzed for patterns of failures on which to base corrective actions. Very often a single situation causes multiple problems that manifest themselves in different forms and in different places in a system. If these causes are not tracked, significant time can be wasted throughout a testing exercise; if they are recorded and categorized, identification of common problem-causing situations is facilitated.

If the project has a Problem Management system, it should be used. Otherwise, one should be created and maintained, whether supported by automated tools or not.
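The backlog measure described above is simply cumulative problems discovered minus cumulative problems resolved, per reporting period. A minimal sketch (the weekly figures are invented for illustration):

```python
def backlog_trend(discovered, resolved):
    """Return the open-problem backlog at the end of each reporting period."""
    backlog, open_count = [], 0
    for found, fixed in zip(discovered, resolved):
        open_count += found - fixed   # problems opened minus problems closed
        backlog.append(open_count)
    return backlog

# Four weekly periods in which the team finds problems faster than it fixes them.
trend = backlog_trend(discovered=[10, 12, 11, 13], resolved=[8, 7, 9, 8])
assert trend == [2, 7, 9, 14]   # a steadily rising backlog
```

A backlog that rises while the discovery rate stays steady is exactly the signal, described above, that the project and test schedule may need to change.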

8.7 Change Management
When the system requirements or specifications are approved at the phase ends, they become candidates for change control. As part of the evaluation process for a change, the Test Team should be involved since the change may have a significant effect on the direction and cost of testing. It may also affect the test build strategy. Failure to strictly adhere to formal change management has been the reason for the failure of many projects. Inadequate change control leads to runaway requirements, elongated test schedules and inflated project costs. If the project has a Change Management system, it should be used. Otherwise, one should be created and maintained whether supported with automated tools or not.

8.8 Configuration Management (Version control)


As development progresses, the interim work products (e.g., design, modules, documentation, and test packages) go through levels of integration and quality. Testing is the process of removing defects from components under development at one testing level and then promoting the new defect-free work product to the next test level. It is absolutely critical that the highest level of test not be corrupted with incomplete or defective work products; recovering from such corruption might require the retest of some or all of the work products at that level to regain confidence in their quality. A test team member usually (but not always) acts as the "gatekeeper" to the next level of test and assures the quality of the interim work products before they are promoted. Update access by the authors is restricted. Control may be enforced by procedures and/or automation (such as software library control systems). These controls are key to preventing rework, or defective work slipping by uncontrolled. A configuration or version control process and tools must be present and proactively managed throughout the project.

8.9 Reuse
This testing methodology assumes a process of creating an initial work product and then refining it to subsequent levels of detail as you progress through the stages of development. A Master Test Plan is first created with a minimal level of detail known at the time and is used as the template and source of basic information for the Detailed Test Plans. The test sets, cases, and data are created and successively refined as they are used through the design, code/unit test, and integration test for use in the System Test and the UAT. Even then, they are not discarded but are recycled as the System Test and/or UAT regression test packages. The deliverables produced at one phase by the test team should be considered as candidates to be reused fully or in part by subsequent phases. The final product of all the testing should be the regression test package delivered as part of the implementation to those who will maintain the system.


Appendices
Appendix A. Full Lifecycle Testing Models

[Figure: Full Lifecycle Testing Model. Only the labels are recoverable.
Phases: Definition, Design, Generation, Validation, Deployment.
Supporting disciplines: Quality Planning, Quality Management, Project Management, Solution Change Management, Defect Management, Configuration Management, Tools & Techniques.
Testing activities: Static Testing, Test Planning, Test Preparation, Unit Test, Integration Test, System Test, User Acceptance, Operations Acceptance, Systems Integration, Implementation.
Development stages and associated content: Project Objectives (objectives, customer expectations, operating environment, initial business plan); Requirements (user scenarios, external systems, business/product function matrices; Master Test Plan, Acceptance Test Plan, Systems Integration Test Plan, test cases); External Design (System Test Plan, test cases); Internal Design (internals, functions, interfaces; Integration Test Plan, test cases); Code (logic, code; Unit Test Plan, test cases).
Test execution levels: Unit Test, Integration Test, System Test, Acceptance / Systems Integration Test, POD Operability Test.]


Test Work Products Dependency Diagram

[Figure: the dependency relationships among the test work products are not recoverable. The work products shown are: Operational Model, Release Plan, Quality Plan, Static Test Plan, Project Workbook Outline, Use Case Model, Test Strategy, Non-functional Requirements, Master Test Plan, Test Environment, Configuration Management Procedures, Acceptance Test Plan, Usability Requirements, Detailed Test Plans (Unit, Integration, System, Systems Integration, Operability, Performance), Test Specification, Test Execution Plan, Test Results, Test Report, Build Procedures.]


Appendix B. Integration Approaches

Introduction
A critical decision, which impacts the implementation phase of the development process more than any other, is the strategy that determines the sequence for developing and integrating system components such as modules, programs, and sub-systems. There are three strategies:
• Top down
• Bottom up
• A combination of both

A conscious decision must be made by the project/test team, early in the development life cycle, to merge the development and testing strategies so as to optimize the project resources and deliverables. There are opportunities to exploit advantages through an integration approach when you consider system units which:
• Form a stand-alone subsystem, and those which interface with other units.
• Must exist before other units can be tested.
• Must be delivered first (because they are critical to the success of the project), or which are on the critical path of the project.

Top Down
The Top Down approach is a strategy that starts with construction of the top-level or control modules and then integrates lower-level modules within that structure. In order to test top down, you will need to construct program and module stubs.

Stubs

The purpose of a stub, or dummy program, is to simulate the existence of a program or module until the real module can replace it. This use of temporary code is an example of "scaffolding." Each module integrated into the program or job stream requires stubs to represent the modules it calls until they are built. Testing can occur on the overall structure of modules itself, and on individual modules as part of the whole structure as they are integrated and come on-line.

Advantages
• Tends to encourage the overlapping of design and testing.
• Performs incremental testing on the whole unit at each integration.

Disadvantages
• Extra construction of stubs (dummies) is required.
• Design deficiencies may not become apparent until lower-level modules are tested.
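A stub can be illustrated with a small sketch: the top-level control module is tested first, with a dummy standing in for an interest-calculation module that has not yet been built. The module names and the fixed return value are illustrative assumptions.

```python
def calculate_interest_stub(account_id, balance):
    """Temporary scaffolding: returns a fixed, known value until the real
    interest module is built and can replace this stub."""
    return 1.00

def month_end_report(accounts, calculate_interest=calculate_interest_stub):
    """Top-level control module under test; it calls the (stubbed) module."""
    return {acct: balance + calculate_interest(acct, balance)
            for acct, balance in accounts.items()}

# The control logic is exercised with predictable stub output.
report = month_end_report({"A-100": 500.00, "A-200": 250.00})
assert report == {"A-100": 501.00, "A-200": 251.00}
```

Because the stub's output is fixed and known, a test failure here implicates the control module, not the unbuilt lower-level code.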


Bottom Up
In the Bottom Up approach, modules or program units are written and tested completely, starting at the lowest level. Successively higher levels are added and tested until the program or programs are complete.

Drivers

A driver is a program that simulates the calls to one or more modules under development, for the purpose of testing those modules. For example, a driver may generate the data required for the module under test, or read a file of test data which is then formatted and passed to the module. Usually, but not always, a trace or debug report is created showing the data sent to and returned from the module. If the module being tested, and any modules that it calls, are fully coded, then the use of drivers is an effective approach to testing. Development of driver modules or programs is sometimes thought to be a tedious process and is often avoided. However, if such facilities are developed and fully documented, they can provide payback in future test exercises.

Advantages
• Encourages reusable code.
• Drivers can be modified for other situations.

Disadvantage
• Requires the construction of drivers (additional programming).
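A minimal driver can be sketched as follows: it feeds prepared records to the module under test and keeps a trace of the data sent and returned. The record layout and function names are illustrative assumptions.

```python
def parse_record(raw):
    """Module under test: splits a fixed-format record into fields
    (illustrative layout: 4-character id followed by a 6-digit amount)."""
    return {"id": raw[0:4].strip(), "amount": int(raw[4:10])}

def driver(module, raw_records):
    """Feed each record to the module and trace inputs and outputs."""
    trace = []
    for raw in raw_records:
        result = module(raw)
        trace.append((raw, result))   # data sent to and returned from module
    return trace

trace = driver(parse_record, ["A001000150", "B002000275"])
assert trace[0][1] == {"id": "A001", "amount": 150}
assert trace[1][1] == {"id": "B002", "amount": 275}
```

As noted above, if the driver and its data file are documented, the same harness can be reused in later test exercises.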

Top Down and Bottom Up
The best features of Top Down testing and Bottom Up testing can sometimes be combined:
• Top Down testing is effective in iterative or incremental development strategies.
• Bottom Up is effective when a number of common modules are used by different systems; these are normally coded and tested first.
• A combination of the two strategies may be best, depending on the considerations above.


Appendix C. Black Box/White Box Testing Techniques

Introduction
Two techniques are commonly used in the development of test conditions: Black Box and White Box. An overview of the two techniques is given below; more detailed information on how and when to use each technique is covered in the Test Process Model.

Black Box Testing
In the Black Box approach, the testers have an "outside" view of the system: they are concerned with what is done, not how it is done. Black Box testing is requirements oriented and is used at all test levels. The system is defined and viewed functionally: inputs are entered into the system and the outputs from the system are examined. Both valid and invalid inputs are used to test the system. Because it is rarely practical to enter every possible input and input combination, Black Box testing uses the following techniques to select them:

Equivalence Partitioning

Equivalence partitioning is a method for developing test cases by analyzing each possible class of values. In equivalence partitioning you can select any element from:
• a valid equivalence class
• an invalid equivalence class

Identify equivalence classes by partitioning each input or external condition into valid and invalid classes.

Example 1. First character must be alphabetic.
• Valid class: any one of the 26 alphabetic characters
• Invalid classes: any special character; any numeric character

It is not necessary to test with every member of a valid or invalid class.

Example 2. Input X will range in value from 0 to 40.
• Valid class: 0 <= X <= 40
• Invalid classes: X < 0; X > 40

Boundary Value Analysis

Boundary value analysis is one of the most useful test case design methods and is a refinement of equivalence partitioning. Boundary conditions are those situations directly on, above, and beneath the edges of input and output equivalence classes. In boundary analysis, one or more elements must be selected to test each edge, with a focus on both input conditions and output results.

Example: If amount greater than $1.00
• Test case 1: Amount = 0.99 (false)
• Test case 2: Amount = 1.00 (false)
• Test case 3: Amount = 1.01 (true)
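The two examples above can be expressed directly as test code. The function names are illustrative; the chosen values come from the examples themselves.

```python
def x_is_valid(x):
    """Condition from Example 2: input X ranges in value from 0 to 40."""
    return 0 <= x <= 40

# Equivalence partitioning: one representative per class is enough.
assert x_is_valid(20)        # valid class: 0 <= X <= 40
assert not x_is_valid(-5)    # invalid class: X < 0
assert not x_is_valid(55)    # invalid class: X > 40

def amount_qualifies(amount):
    """Condition from the boundary example: amount greater than $1.00."""
    return amount > 1.00

# Boundary value analysis: test directly on, below and above the edge.
assert amount_qualifies(0.99) is False   # test case 1
assert amount_qualifies(1.00) is False   # test case 2
assert amount_qualifies(1.01) is True    # test case 3
```

Note how the boundary cases sit as tightly as possible around the edge at 1.00; off-by-one defects in the comparison operator are caught only by such values.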

White Box Testing
In the White Box approach, the testers have an inside view of the system: they are concerned with how it is done, not what is done. White Box testing is logic oriented; the testers are concerned with the execution of all possible paths of control flow through the program. The White Box approach is essentially a unit test method (sometimes also used in the Integration test) and is almost always performed by technical staff. Each White Box technique addresses only one facet of the overall problem of completely testing all logic paths and their content. The techniques for White Box testing are the following:

Statement Coverage

Statement Coverage is a White Box technique used to verify that every statement in the program is executed at least once. Consider the following program segment:

    A = B
    If (A = 3) Then
        B = X + Y
    End-If-Then
    While (A > 0) Do
        Read (X)
        A = A - 1
    End-While-Do

Executing every statement does not, by itself, ensure complete logic coverage. If "(A = 3)" is tested, only the true condition is followed, not the false condition "A not = 3": statement coverage only confirms that each statement has been reached; it does not cover the path taken when the condition is false. Both true and false outcomes must be considered.

Decision Coverage

Decision Coverage is an improvement over Statement Coverage in that all true and false branches are tested. Decision Coverage is also known as Branch Coverage. Decision Coverage is achieved when all branches have been traversed and every entry point has been invoked at least once. However, once a compound condition is satisfied, Decision Coverage will not necessarily exercise the other conditions included in the statement.


Example:

    If A < 10 or A > 20 Then
        B = X + Y

If "A < 10" is tested, the other condition, "A > 20," is not tested; this would require another decision test.

Condition Coverage

The objective of Condition Coverage testing is that all conditions within a statement, not just each decision outcome, are tested to ensure full coverage. Consider the following program segment:

    A = X
    If (A > 3) or (A < B) Then
        B = X + Y
    End-If-Then
    While (A > 0) and (Not EOF) Do
        Read (X)
        A = A - 1
    End-While-Do

Condition coverage is achieved when each condition has been true at least once and false at least once, and every entry point has been invoked at least once.

Multiple Condition Coverage

The purpose is to ensure that all combinations of conditions are tested. Decision tables are a good tool for determining the number of unique combinations in a program.

Path Coverage

Path Coverage ensures that all possible combinations of flow paths, based upon all possible combinations of condition outcomes in each decision, from all points of entry to all exit points, are invoked at least once.

Selection Guidelines

Black Box testing:
• Consider each possible class of values.
• Consider the inside and outside edges of the boundaries of the range.
• Consider expected input and output values.

White Box testing:
• Select appropriate techniques.
• Identify all decisions, conditions and paths.
• Determine the number of test cases needed.
• Identify the specific combinations of conditions needed.
• Determine the variable values required.
• Select specific test case values (input and output).
• Inspect for completeness and correctness.
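The distinction between Decision Coverage and Condition Coverage, described above for the decision "A < 10 or A > 20," can be illustrated with a small sketch. The function and the chosen values are illustrative.

```python
def classify(a):
    """The compound decision from the example: A < 10 or A > 20."""
    return "outside" if a < 10 or a > 20 else "inside"

# Decision coverage: two cases suffice (whole decision true once, false once).
# Note that a = 5 satisfies the decision without "A > 20" ever being true.
assert classify(5) == "outside"    # decision true (via A < 10)
assert classify(15) == "inside"    # decision false

# Condition coverage: each sub-condition must be true and false at least once.
assert classify(5) == "outside"    # A < 10 true,  A > 20 false
assert classify(25) == "outside"   # A < 10 false, A > 20 true
assert classify(15) == "inside"    # both conditions false
```

With short-circuit evaluation, the decision-coverage pair never evaluates "A > 20" as true, so a defect in that comparison would go undetected; the condition-coverage set catches it.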


Appendix D. Test Case Design
A general list of test conditions is shown below. These can be used as an aid in developing test cases.

Transaction Test Cases

The following sections include some of the key considerations to keep in mind while designing test cases, based on the type of processing (on-line, batch or transaction flow) and the level of testing (unit or system test, and so on). Also consider the use of:
• Inspections and walkthroughs to check for completeness and consistency.
• Automation to aid in administration, documentation, regression testing and operation.

General Conditions
• Adding records
  − Add a record using valid data.
  − Add an existing record.
• Updating records
  − Update a record using valid data.
  − Update a record that does not exist.
  − Update a closed record.
  − Make multiple updates to the same record.
• Closing records
  − Close (delete flag or move to history) using valid data.
  − Re-close a closed record.
  − Close a non-existent record.
• All records
  − Add, update and close a record in the same test.
  − Use all possible data entry methods.

For Batch Transactions
• Try transactions singly and in combinations.
• Pass duplicate transactions.
• Pass combinations of valid and invalid values, and multiple errors.
• Pass a data set with sequence errors.
• Pass batches with invalid dates, duplicates, missing headers/trailers, and incorrect totals/control data.
• Run with no data at all.
• Run two or more days' data as one run.
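The record-lifecycle conditions above (add, update, close, and their error cases) can be expressed as a table-driven test against a toy record store. The class, its rules and the keys are illustrative assumptions, not part of the methodology.

```python
class RecordStore:
    """Toy store used only to demonstrate the test conditions."""
    def __init__(self):
        self.records = {}            # key -> {"data": ..., "closed": bool}

    def add(self, key, data):
        if key in self.records:
            raise KeyError("record exists")        # add an existing record
        self.records[key] = {"data": data, "closed": False}

    def update(self, key, data):
        rec = self.records[key]      # KeyError: update a non-existent record
        if rec["closed"]:
            raise ValueError("record closed")      # update a closed record
        rec["data"] = data

    def close(self, key):
        rec = self.records[key]      # KeyError: close a non-existent record
        if rec["closed"]:
            raise ValueError("already closed")     # re-close a closed record
        rec["closed"] = True

store = RecordStore()
store.add("R1", "valid data")                      # add using valid data
# Error conditions from the list above, as an (operation, expected error) table:
for op, err in [(lambda: store.add("R1", "dup"), KeyError),
                (lambda: store.update("R9", "x"), KeyError),
                (lambda: store.close("R9"), KeyError)]:
    try:
        op()
        raise AssertionError("expected failure")
    except err:
        pass
store.update("R1", "new data")                     # update using valid data
store.close("R1")                                  # close using valid data
```

This also exercises the "add, update and close a record in the same test" condition, since all three operations run against the same record.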

For On-line Transactions
• Process a null request for all transactions.
• Try interrupting a transaction without completing it.

All Transactions - Invalid Data
• Test all types of transactions using valid and invalid data.
• Set up test records so that balances will go high, low, zero, negative, and negative then positive.
• Try transactions against closed and non-existent records.
• Check ranges high and low, within and outside.
• Check arithmetic fields for overflow, negative values, rounding, truncating, zero divide, and alignment.
• Try numeric fields with alphanumeric characters, blanks, leading blanks, trailing blanks, and embedded blanks.
• Try date fields with invalid dates (months, days, and years).
• Try special characters or keys - *, ?, ½, ¼, EOF, etc.

Screen/Panel/Window Test Cases
• Screen flow.
• Test the number of display lines for 0 entries, minimum, maximum, minimum - 1, and maximum + 1 entries. For example, if an order display screen can display up to 20 items, test it against orders with no items, 1 item, 20 items and 21 items.
• Force continuation of all output display screens.
• Test that all program function keys perform the desired action on all applicable screens.
• Check that consistency between screens, format, use of colors, use of function keys and action codes is identical on all screens.
• Check the availability and consistency of help screens.
• Check for appropriate return from help screens.


Glossary of Testing Terms and Definitions
This testing glossary is intended to provide a set of common terms and definitions as used in IBM's testing methodology. These definitions originate in many different industry standards and sources, such as the British Standards Institute, IEEE, and other IBM program development documents. Many of these terms are in common use and therefore may have a slightly different meaning elsewhere. Where more than one definition is in common use, both have been included as appropriate.

Acceptance Criteria
The definition of the results expected from the test cases used for acceptance testing. The product must meet these criteria before implementation can be approved.

Acceptance Testing
(1) Formal testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the client to determine whether or not to accept the system. (2) Formal testing conducted to enable a user, client, or other authorized entity to determine whether to accept a system or component.

Acceptance Test Plan
Describes the steps the client will use to verify that the constructed system meets the acceptance criteria. It defines the approach to be taken for acceptance testing activities. The plan identifies the items to be tested, the test objectives, the acceptance criteria, the testing to be performed, test schedules, entry/exit criteria, staff requirements, reporting requirements, evaluation criteria, and any risks requiring contingency planning.

Adhoc Testing
A loosely structured testing approach that allows test developers to be creative in their test selection and execution. Adhoc testing is targeted at known or suspected problem areas.

Audit and Controls Testing
A functional type of test that verifies the adequacy and effectiveness of controls and the completeness of data processing results.

Auditability
A test focus area defined as the ability to provide supporting evidence to trace processing of data.

Backup and Recovery Testing
A structural type of test that verifies the capability of the application to be restarted after a failure.

Black Box Testing
Evaluation techniques that are executed without knowledge of the program's implementation. The tests are based on an analysis of the specification of the component without reference to its internal workings.

Bottom-up Testing
Approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. This process is repeated until the component at the top of the hierarchy is tested. See "Top-down."

Boundary Value Analysis
A test case selection technique that selects test data that lie along "boundaries" or extremes of input and output possibilities. Boundary Value Analysis can apply to parameters, classes, data structures, variables, loops, etc.

Branch Testing
A white box testing technique that requires each branch or decision point to be taken once.

Build
(1) An operational version of a system or component that incorporates a specified subset of the capabilities that the final product will provide. Builds are defined whenever the complete system cannot be developed and delivered in a single increment. (2) A collection of programs within a system that are functionally independent. A build can be tested as a unit and can be installed independent of the rest of the system.
(1) The process of identifying and defining the configuration items in a system, controlling the release and change of these items throughout the system life cycle, recording and reporting the status of configuration items and change requests, and verifying the completeness and correctness of configuration items. (2) A discipline applying technical and administrative direction and surveillance to (a) identify and document the functional and physical characteristics of a configuration items, (b) control changes to those characteristics, and (c) record and report change processing and implementation status. A functional type of test that verifies the compatibility of converted

Business Function

Capability Maturity Model (CMM)

Causal Analysis Change Control Change Management

Change Request Condition Testing Configuration Management

Conversion testing
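The difference between branch testing and condition testing above can be made concrete with a small sketch (the function and its inputs are invented for illustration): branch testing exercises each outcome of the decision once, while condition testing makes each individual condition within the decision evaluate both true and false.

```python
def grant_discount(age, is_member):
    # One decision point containing two conditions combined by "or".
    if age >= 65 or is_member:
        return True
    return False

# Branch testing: the decision as a whole is taken true once and false once.
branch_tests = [(70, False), (30, False)]

# Condition testing: each individual condition is made true and false once.
condition_tests = [(70, False),   # age >= 65 true,  is_member false
                   (30, True),    # age >= 65 false, is_member true
                   (30, False)]   # both conditions false

branch_results = [grant_discount(a, m) for a, m in branch_tests]
condition_results = [grant_discount(a, m) for a, m in condition_tests]
```

Note that the two branch tests already execute every statement, yet never observe `is_member` deciding the outcome on its own; condition testing closes that gap.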

26/09/2006 16:03:56

Page 61 of 71

Coverage: The extent to which test data tests a program's functions, parameters, inputs, paths, branches, statements, conditions, modules, or data flow paths.

Coverage Matrix: A documentation procedure to indicate the testing coverage of test cases compared to possible elements of a program environment (i.e., inputs, outputs, parameters, paths, cause-effects, equivalence partitioning, etc.).

Continuity of Processing: A test focus area defined as the ability to continue processing if problems occur, including the ability to back up and recover after a failure.

Correctness: A test focus area defined as the ability to process data according to prescribed rules. Controls over transactions and data field edits provide assurance of the accuracy and completeness of data.

Data Flow Testing: Testing in which test cases are designed based on variable usage within the code.

Debugging: The process of locating, analyzing, and correcting suspected faults. Compare with Testing.

Decision Coverage: The percentage of decision outcomes that have been exercised through (white box) testing.

Defect: A variance from expectations. See also Fault.

Defect Management: A set of processes to manage the tracking and fixing of defects found during testing and to perform causal analysis.

Documentation and Procedures Testing: A functional type of test that verifies that the interface between the system and the people works and is usable. It also verifies that the instruction guides are helpful and accurate.

Design Review: (1) A formal meeting at which the preliminary or detailed design of a system is presented to the user, customer, or other interested parties for comment and approval. (2) The formal review of an existing or proposed design for the purpose of detecting and remedying design deficiencies that could affect fitness-for-use and environmental aspects of the product, process, or service, and/or for identifying potential improvements of performance, safety, and economic aspects.

Desk Check: Testing of software by the manual simulation of its execution. One of the static testing techniques.

Detailed Test Plan: The detailed plan for a specific level of dynamic testing. It defines what is to be tested and how it is to be tested. The plan typically identifies the items to be tested, the test objectives, the testing to be performed, test schedules, personnel requirements, reporting requirements, evaluation criteria, and any risks requiring contingency planning. It also includes the testing tools and techniques, test environment set-up, entry and exit criteria, and administrative procedures and controls.


Driver: A program that exercises a system or system component by simulating the activity of a higher-level component.

Dynamic Testing: Testing that is carried out by executing the code. Dynamic testing is a process of validation by exercising a work product and observing the behavior of its logic and its response to inputs.

Entry Criteria: A checklist of activities or work items that must be complete or exist, respectively, before the start of a given task within an activity or sub-activity.

Environment: See Test Environment.

Equivalence Partitioning: A portion of the component's input or output domain for which the component's behavior is assumed, from the component's specification, to be the same.

Error: (1) A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition. (2) A human action that results in software containing a fault, including omissions, misinterpretations, etc. See Variance.

Error Guessing: A test case selection process that identifies test cases based on the knowledge and ability of the individual to anticipate probable errors.

Error Handling Testing: A functional type of test that verifies the system function for detecting and responding to exception conditions. Completeness of error handling determines the usability of a system and ensures that incorrect transactions are properly handled.

Execution Procedure: A sequence of manual or automated steps required to carry out part or all of a test design or to execute a set of test cases.

Exit Criteria: (1) Actions that must happen before an activity is considered complete. (2) A checklist of activities or work items that must be complete or exist, respectively, prior to the end of a given process stage, activity, or sub-activity.

Expected Results: Predicted output data and file conditions associated with a particular test case. Expected results, if achieved, indicate whether the test was successful. They are generated and documented with the test case prior to execution of the test.

Fault: (1) An accidental condition that causes a functional unit to fail to perform its required functions. (2) A manifestation of an error in software. A fault, if encountered, may cause a failure. Synonymous with bug.
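The equivalence partitioning entry can be illustrated with a hypothetical example (the quantity field and its 1–99 range are invented for this sketch): the input domain is divided into classes whose members the component should treat identically, and one representative per class suffices.

```python
# Assumed spec for illustration: a quantity field accepts 1 to 99 inclusive.
def accept_quantity(q):
    return 1 <= q <= 99

# The input domain divided into equivalence classes.
partitions = {
    "below_range": [-5, 0],      # invalid: too small
    "valid":       [1, 50, 99],  # valid class
    "above_range": [100, 500],   # invalid: too large
}

# One representative per class covers the partition.
representatives = {name: values[0] for name, values in partitions.items()}
outcomes = {name: accept_quantity(v) for name, v in representatives.items()}
```

Error guessing would then supplement these representatives with suspect values an experienced tester anticipates, such as boundary values (0, 1, 99, 100) or non-numeric input.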


Full Lifecycle Testing: The process of verifying the consistency, completeness, and correctness of software and related work products (such as documents and processes) at each stage of the development life cycle.

Function: (1) A specific purpose of an entity, or its characteristic action. (2) A set of related control statements that perform a related operation. Functions are sub-units of modules.

Function Testing: A functional type of test which verifies that each business function operates according to the detailed requirements and the external and internal design specifications.

Functional Testing: Selecting and executing test cases based on specified functional requirements, without knowledge or regard of the program structure. Also known as black box testing. See Black Box Testing.

Functional Test Types: Those kinds of tests used to assure that the system meets the business requirements, including business functions, interfaces, usability, audit and controls, error handling, etc. See also Structural Test Types.

Implementation: (1) A realization of an abstraction in more concrete terms; in particular, in terms of hardware, software, or both. (2) The process by which a software release is installed in production and made available to end users.

Inspection: (1) A group review quality improvement process for written material, consisting of two aspects: product improvement (of the document itself) and process improvement (of both document production and inspection). (2) A formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, and other problems. Contrast with Walkthrough.

Installation Testing: A functional type of test that verifies that the hardware, software, and applications can be easily installed and run in the target environment.

Integration Testing: A level of dynamic testing that verifies the proper execution of application components; it does not require that the application under test interface with other applications.

Interface / Intersystem Testing: A functional type of test that verifies that the interconnection between applications and systems functions correctly.

JAD: An acronym for Joint Application Design. Formal sessions involving clients and developers, used to develop and document consensus on work products such as client requirements and design specifications.

Level of Testing: Refers to the progression of software testing through static and dynamic testing. Examples of static testing levels are Project Objectives Review, Requirements Walkthrough, Design (External and Internal) Review, and Code Inspection. Examples of dynamic testing levels are Unit Testing, Integration Testing, System Testing, Acceptance Testing, Systems Integration Testing, and Operability Testing. Also known as a test level.


Lifecycle: The software development process stages: Requirements, Design, Construction (code/program, test), and Implementation.

Logical Path: A path that begins at an entry or decision statement and ends at a decision statement or exit.

Maintainability: A test focus area defined as the ability to locate and fix an error in the system. Can also be the ability to make dynamic changes to the system environment without making system changes.

Master Test Plan: A plan that addresses testing from a high-level system viewpoint. It ties together all levels of testing (unit test, integration test, system test, acceptance test, systems integration, and operability). It includes test objectives, test team organization and responsibilities, high-level schedule, test scope, test focus, test levels and types, test facility requirements, and test management procedures and controls.

Operability: A test focus area defined as the effort required (of support personnel) to learn and operate a manual or automated system. Contrast with Usability.

Operability Testing: A level of dynamic testing in which the operations of the system are validated in the real or closely simulated production environment. This includes verification of production JCL, installation procedures, and operations procedures. Operability testing considers such factors as performance, resource consumption, and adherence to standards, and is normally performed by Operations to assess the readiness of the system for implementation in the production environment.

Operational Testing: A structural type of test that verifies the ability of the application to operate at an acceptable level of service in a production-like environment.

Parallel Testing: A functional type of test which verifies that the same input produces the same results on the "old" and "new" systems. It is more of an implementation strategy than a testing strategy.

Path Testing: A white box testing technique that requires all code or logic paths to be executed once. Complete path testing is usually impractical and often uneconomical.

Performance: A test focus area defined as the ability of the system to perform certain functions within a prescribed time.

Performance Testing: A structural type of test that verifies that the application meets the expected level of performance in a production-like environment.
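The impracticality noted under path testing can be shown with a minimal sketch (the function is invented for illustration): two independent sequential decisions already yield four paths, even though two test cases achieve full branch coverage, and the path count doubles with every further decision.

```python
def classify(order_total, is_priority):
    # Two independent decisions executed in sequence.
    band = "large" if order_total > 1000 else "small"   # decision 1
    queue = "rush" if is_priority else "standard"       # decision 2
    return band, queue

# Branch coverage needs only 2 cases (each decision true once, false once).
branch_suite = [(2000, True), (10, False)]

# Path testing requires every combination of outcomes: 2 x 2 = 4 paths.
path_suite = [(2000, True), (2000, False), (10, True), (10, False)]

paths_exercised = {classify(t, p) for t, p in path_suite}
```

With n such decisions the path count is 2^n, which is why complete path testing is usually reserved for small, critical routines.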


Portability: A test focus area defined as the ability of a system to operate in multiple operating environments.

Problem: (1) A call or report from a user; the call or report may or may not be defect oriented. (2) A software or process deficiency found during development. (3) The inhibitors and other factors that hinder an organization's ability to achieve its goals and critical success factors. (4) An issue that a project manager has the authority to resolve without escalation. Compare with Defect and Error.

Quality Plan: A document which describes the organization, activities, and project factors that have been put in place to achieve the target level of quality for all work products in the application domain. It defines the approach to be taken when planning and tracking the quality of the application development work products, to ensure conformance to specified requirements and to ensure the client's expectations are met.

Regression Testing: A functional type of test which verifies that changes to one part of the system have not caused unintended adverse effects to other parts.

Reliability: A test focus area defined as the extent to which the system will provide the intended function without failing.

Requirement: (1) A condition or capability needed by the user to solve a problem or achieve an objective. (2) A condition or capability that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document. The set of all requirements forms the basis for subsequent development of the system or system component.

Review: A process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users, or other interested parties for comment or approval.

Root Cause Analysis: See Causal Analysis.

Scaffolding: Temporary programs needed to create or receive data from the specific program under test.

Security: A test focus area defined as the assurance that the system and data resources will be protected against accidental and/or intentional modification or misuse.

Security Testing: A structural type of test that verifies that the application provides an adequate level of protection for confidential information and data belonging to other systems.

Software Quality: (1) The totality of features and characteristics of a software product that bear on its ability to satisfy given needs; for example, conformance to specifications. (2) The degree to which software possesses a desired combination of attributes. (3) The degree to which a customer or user perceives that software meets his or her composite expectations. (4) The composite characteristics of software that determine the degree to which the software in use will meet the expectations of the customer.


Software Reliability: (1) The probability that software will not cause the failure of a system for a specified time under specified conditions. The probability is a function of the inputs to and use of the system, as well as of the existence of faults in the software; the inputs to the system determine whether existing faults, if any, are encountered. (2) The ability of a program to perform a required function under stated conditions for a stated period of time.

Statement Testing: A white box testing technique that requires all code or logic statements to be executed at least once.

Static Testing: (1) The detailed examination of a work product's characteristics against an expected set of attributes, experiences, and standards. The product under scrutiny is static, not exercised, and therefore its behavior in response to changing inputs and environments cannot be assessed. (2) The process of evaluating a program without executing it. See also Desk Check, Inspection, Walkthrough.

Stress / Volume Testing: A structural type of test that verifies that the application has acceptable performance characteristics under peak load conditions.

Structural Function: Structural functions describe the technical attributes of a system.

Structural Test Types: Those kinds of tests that may be used to assure that the system is technically sound.

Stub: (1) A dummy program element or module used during the development and testing of a higher-level element or module. (2) A program statement substituting for the body of a program unit and indicating that the unit is or will be defined elsewhere. The inverse of scaffolding.

Sub-system: (1) A group of assemblies or components, or both, combined to perform a single function. (2) A group of functionally related components that is defined as an element of a system but not separately packaged.

System: A collection of components organized to accomplish a specific function or set of functions.

Systems Integration Testing: A dynamic level of testing which ensures that the systems integration activities appropriately address the integration of application subsystems, the integration of applications with the infrastructure, and the impact of change on the current live environment.

System Testing: A dynamic level of testing in which all the components that comprise a system are tested together to verify that the system functions as a whole.

Test Bed: (1) A test environment containing the hardware, instrumentation tools, simulators, and other support software necessary for testing a system or system component. (2) A set of test files (including databases and reference files), in a known state, used with input test data to test one or more test conditions, measuring against expected results.
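The stub and driver entries fit together in a typical unit-test set-up, sketched below with invented names: the stub stands in for an unbuilt lower-level component the unit depends on, while the driver simulates the higher-level caller that does not exist yet.

```python
def tax_service_stub(amount):
    # Stub: replaces the real (unbuilt) tax component with a canned 10% rate.
    return round(amount * 0.10, 2)

def compute_invoice_total(amount, tax_fn):
    # The unit under test: depends on a lower-level tax component.
    return round(amount + tax_fn(amount), 2)

def driver():
    # Driver: simulates the higher-level component that would call the unit,
    # feeding it inputs and collecting results.
    return [compute_invoice_total(a, tax_service_stub) for a in (100.0, 250.0)]
```

In a top-down integration the stub is later replaced by the real tax component; in a bottom-up integration the driver is replaced by the real caller.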


Test Case: (1) A set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement. (2) The detailed objectives, data, procedures, and expected results needed to conduct a test or part of a test.

Test Condition: A functional or structural attribute of an application, system, network, or component thereof to be tested.

Test Conditions Matrix: A worksheet used to formulate the test conditions that, if met, will produce the expected result. A tool used to assist in the design of test cases.

Test Conditions Coverage Matrix: A worksheet used for planning and for illustrating that all test conditions are covered by one or more test cases. Each test set has a Test Conditions Coverage Matrix; rows list the test conditions and columns list all test cases in the test set.

Test Coverage Matrix: A worksheet used to plan and cross-check that all requirements and functions are covered adequately by test cases.

Test Data: The input data and file conditions associated with a specific test case.

Test Environment: The external conditions or factors that can directly or indirectly influence the execution and results of a test, including the physical as well as the operational environment. Examples of what is included in a test environment: I/O and storage devices, data files, programs, JCL, communication lines, access control and security, databases, and version-controlled reference tables and files.

Test Focus Areas: Those attributes of an application that must be tested in order to assure that the business and structural requirements are satisfied.

Test Level: See Level of Testing.

Test Log: A chronological record of all relevant details of a testing activity.

Test Matrices: A collection of tables and matrices used to relate the functions to be tested to the test cases that test them; worksheets used to assist in the design and verification of test cases.

Test Objectives: The tangible goals for assuring that the test focus areas previously selected as relevant to a particular business or structural function are validated by the test.

Test Plan: A document prescribing the approach to be taken for intended testing activities. The plan typically identifies the items to be tested, the test objectives, the testing to be performed, test schedules, entry and exit criteria, personnel requirements, reporting requirements, evaluation criteria, and any risks requiring contingency planning.
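The relationship between test cases, expected results, a test log, and variances can be sketched as a minimal harness (the email check and case identifiers are invented for illustration): expected results are documented with each case before execution, and any mismatch between actual and expected is recorded as a variance.

```python
# Test cases: inputs paired with expected results, documented before execution.
test_cases = [
    {"id": "TC-01", "input": "alice@example.com", "expected": True},
    {"id": "TC-02", "input": "not-an-address",    "expected": False},
]

def is_plausible_email(s):
    # The (invented) unit under test.
    return "@" in s and "." in s.split("@")[-1]

def run(cases, fn):
    log = []  # chronological test log of the run
    for case in cases:
        actual = fn(case["input"])
        log.append({"id": case["id"],
                    "actual": actual,
                    "variance": actual != case["expected"]})
    return log
```

Each logged variance then feeds defect management, where it is analyzed as an error in the item under test, an incorrect expected result, or invalid test data.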


Test Procedure: Detailed instructions for the set-up, operation, and evaluation of results for a given test. A set of associated procedures is often combined to form a test procedures document.

Test Report: A document describing the conduct and results of the testing carried out for a system or system component.

Test Run: A dated, time-stamped execution of a set of test cases.

Test Scenario: A high-level description of how a given business or technical requirement will be tested, including the expected outcome; later decomposed into sets of test conditions, each in turn containing test cases.

Test Script: A sequence of actions that executes a test case. Test scripts include detailed instructions for set-up, execution, and evaluation of results for a given test case.

Test Set: A collection of test conditions. Test sets are created for purposes of test execution only; a test set is created such that its size is manageable to run and its grouping of test conditions facilitates testing. The grouping reflects the application build strategy.

Test Sets Matrix: A worksheet that relates the test conditions to the test set in which each condition is to be tested. Rows list the test conditions and columns list the test sets; a checkmark in a cell indicates the test set will be used for the corresponding test condition.

Test Specification: A set of documents that define and describe the actual test architecture, elements, approach, data, and expected results. The test specification draws on the various functional and non-functional requirement documents along with the quality and test plans, and provides the complete set of test cases and all supporting detail needed to achieve the objectives documented in the detailed test plan.

Test Strategy: A high-level description of the major system-wide activities which collectively achieve the overall desired result, as expressed by the testing objectives, given the constraints of time and money and the target level of quality. It outlines the approach to be used to ensure that the critical attributes of the system are tested adequately.

Test Type: See Type of Testing.

Testability: (1) The extent to which software facilitates both the establishment of test criteria and the evaluation of the software with respect to those criteria. (2) The extent to which the definition of requirements facilitates analysis of the requirements to establish test criteria.

Testing: The process of exercising or evaluating a program, product, or system, by manual or automated means, to verify that it satisfies specified requirements and to identify differences between expected and actual results.


Testware: The elements that are produced as part of the testing process. Testware includes plans, designs, test cases, test logs, test reports, etc.

Top-down: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower-level components simulated by stubs. Tested components are then used to test lower-level components, and the process is repeated until the lowest-level components have been tested.

Transaction Flow Testing: A functional type of test that verifies the proper and complete processing of a transaction from the time it enters the system to the time of its completion or exit from the system.

Type of Testing: Tests a functional or structural attribute of the system, e.g., Error Handling or Usability. Also known as a test type.

Unit Testing: The first level of dynamic testing: the verification of new or changed code in a module to determine whether all new or modified paths function correctly.

Usability: A test focus area defined as the end-user effort required to learn and use the system. Contrast with Operability.

Usability Testing: A functional type of test that verifies that the final product is user-friendly and easy to use. See Acceptance Testing.

User Acceptance Testing: See Acceptance Testing.

Validation: (1) The act of demonstrating that a work item is in compliance with the original requirement. For example, the code of a module would be validated against the input requirements it is intended to implement. Validation answers the question "Is the right system being built?" (2) Confirmation, by examination and provision of objective evidence, that the particular requirements for a specific intended use have been fulfilled. See Verification.

Variance: A mismatch between the actual and expected results occurring in testing. It may result from errors in the item being tested, incorrect expected results, invalid test data, etc. See Error.

Verification: (1) The act of demonstrating that a work item is satisfactory by checking it against its predecessor work item. For example, code is verified against the module-level design. Verification answers the question "Is the system being built right?" (2) Confirmation, by examination and provision of objective evidence, that specified requirements have been fulfilled. See Validation.

Walkthrough: A review technique characterized by the author of the object under review guiding the progression of the review. Observations made in the review are documented and addressed. A less formal evaluation technique than an inspection.
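The top-down entry describes a stepwise replacement of stubs by real components; a minimal sketch, with all component names invented, might look like this:

```python
def pricing_stub(item):
    # Canned response standing in for the not-yet-built pricing component.
    return 10.0

def pricing_real(item):
    # The real lower-level component, integrated in a later step.
    prices = {"pen": 2.5, "book": 12.0}
    return prices[item]

def checkout(items, pricing):
    # Top-level component under test; delegates pricing to a lower component.
    return sum(pricing(i) for i in items)

# Step 1: the top-level component is verified against stubs.
step1 = checkout(["pen", "book"], pricing_stub)
# Step 2: the stub is replaced by the real lower-level component and the
# same top-level tests are repeated, integrating one level at a time.
step2 = checkout(["pen", "book"], pricing_real)
```

The same pattern repeats down the hierarchy: each newly integrated component may itself call stubs for components one level below it.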


White Box Testing: Evaluation techniques that are executed with knowledge of the implementation of the program. The objective of white box testing is to test the program's statements, code paths, conditions, or data flow paths.

Work Item: A software development lifecycle work product.

Work Product: (1) The result produced by performing a single task or many tasks. A work product, also known as a project artifact, is part of a major deliverable that is visible to the client. Work products may be internal or external: an internal work product may be produced as an intermediate step for future use within the project, while an external work product is produced for use outside the project as part of a major deliverable. (2) As related to test, a software deliverable that is the object of a test; a test work item.

Bibliography
A Structured Approach to Systems Testing, William E. Perry, Prentice-Hall Inc., New Jersey, 1983.
Design and Code Inspections to Reduce Errors in Program Development, M. E. Fagan, IBM Systems Journal, Vol. 15, No. 3, 1976.
Experiences with Defect Prevention, R. G. Mays, C. L. Jones, G. J. Holloway, D. P. Studinski, IBM Systems Journal, Vol. 29, No. 1, 1990.
Experience with Inspection in Ultralarge-Scale Developments, Glen W. Russell, Bell-Northern Research, IEEE Software, Vol. 8, No. 1, January 1991.
Programming Process Architecture, Ver. 2.1, Document Number ZZ27-1989-1, Poughkeepsie, New York.
Quality and Productivity, STL Programming Development Handbook, Vol. 8, Process, Quality, and Productivity, Santa Teresa Laboratory, Programming Systems, IBM Corporation, January 1989.
Software Testing Techniques, Boris Beizer, Van Nostrand Reinhold Company, New York, 1983.
The Art of Software Testing, G. J. Myers, John Wiley and Sons, New York, 1979.
The Complete Guide to Software Testing, Bill Hetzel, QED Information Sciences, Inc., Massachusetts, 1988.

Revision History
Version 1.1, October 2000 – Removed IBM Confidential from header.
Version 1.0, September 1999 – Base version.

