What is Quality?
"The characteristic identified by the following:
• satisfies or exceeds an agreed-upon set of requirements, and
• is assessed using agreed-upon measures and criteria, and
• is produced using an agreed-upon process."
Who Owns Quality?
• A common misconception is that quality is owned by, or is the responsibility of, one group.
• Quality is, and should be, the responsibility of everyone.
• Each worker contributes to the achievement of quality in the following ways:
  – Product quality: the contribution to the overall achievement of quality in each artifact being produced.
  – Process quality: the achievement of quality in the process activities in which they are involved.
Common Misconceptions about Quality
• Quality can be added to, or 'tested into', the product.
• Quality is a single dimension.
  – In reality, quality is not a single dimension; it is measured in several ways, including the following:
• Progress: such as use cases demonstrated or milestones completed.
• Variance: differences between planned and actual schedules, budgets, staffing requirements, etc.
• Reliability: resistance to failure (crashing, hanging, memory leaks).
• Function: the artifacts implement and execute the required use cases as intended.
• Performance: the artifacts execute and respond in a timely manner.
Managing Quality in RUP
• Managing quality in the Requirements workflow includes analyzing the requirements artifact set for consistency (between artifact standards and other artifacts), clarity (clearly communicates information to all stakeholders and other workers), and precision (appropriate level of detail and accuracy).
• In the Analysis & Design workflow, managing quality includes assessment of the design artifact set, including the consistency of the design model, its translation from the requirements artifacts, and its translation into the implementation artifacts.
• In the Implementation workflow, managing quality includes assessing the implementation artifacts and evaluating the source code / executable artifacts against the appropriate requirements.
• The Test workflow is highly focused on the management of quality, as most of the effort expended in the workflow addresses the purposes of managing quality identified above.
Overview of Implementation Workflow
Purpose of the Implementation workflow:
• to define the organization of the code, in terms of implementation subsystems organized in layers,
• to implement classes and objects in terms of components (source files, binaries, executables, and others),
• to test the developed components as units, and
• to integrate the results produced by individual implementers (or teams) into an executable system.
Implementation – Concepts
• Build – A build is an operational version of a system or part of a system that demonstrates a subset of the capabilities provided in the final product.
• Software Integration – The term "integration" refers to a software development activity in which separate software components are combined into a whole. Integration is done at several levels and stages of the implementation:
  – Integrating the work of a team working in the same implementation subsystem, before releasing the subsystem to system integrators.
  – Integrating subsystems into a complete system.
Implementation – Concepts (Contd.)
• A stub is a component (or complete implementation subsystem) containing functionality for testing purposes. There are two styles used here (see the sketch below):
  – Stubs that are simply "dummies" with no other functionality than being able to return a pre-defined value.
  – Stubs that are more intelligent and can simulate more complex behavior.
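A minimal C++ sketch of the two stub styles; the PricingService interface and its values are hypothetical illustrations, not from the original material:

#include <string>

// Interface that the target-of-test depends on (hypothetical example).
class PricingService {
public:
    virtual ~PricingService() = default;
    virtual double priceOf(const std::string& item) = 0;
};

// Style 1: a "dummy" stub whose only functionality is returning a pre-defined value.
class DummyPricingStub : public PricingService {
public:
    double priceOf(const std::string&) override { return 9.99; }
};

// Style 2: a more intelligent stub that simulates more complex behavior.
class SimulatedPricingStub : public PricingService {
public:
    double priceOf(const std::string& item) override {
        // Simulate a tiny catalog instead of calling the real subsystem.
        if (item == "book") return 12.50;
        if (item == "pen")  return 1.25;
        return 0.0;  // unknown items priced at zero in this simulation
    }
};

The dummy style is enough when the target of test only needs a call to succeed; the simulating style helps when the behavior of the missing subsystem affects the scenario being tested.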
Implementation – Structure the Implementation Model
Purpose:
• To establish the structure in which the implementation will reside.
• To assign responsibilities for Implementation Subsystems and their contents.
Structure the Implementation Model (Contd.)
Design packages will have corresponding Implementation Subsystems, which will contain one or more components and all related files needed to implement the components. Note that both classes and possibly design subsystems in the Design Model are mapped to components in the Implementation Model, although not necessarily one to one. The mapping from the Design Model to the Implementation Model may change as each Implementation Subsystem is allocated to a specific layer in the architecture.
Implementation – Plan the Integration
Purpose: Plan which subsystems should be implemented, and the order in which the subsystems should be integrated, in the current iteration.
Plan the Integration – Plan the System Integration
Purpose: To plan the integration of the system
  – Identify Subsystems
  – Define Build Sets
  – Define the Series of Builds
Plan the Integration – Identify Subsystems
The iteration plan specifies all use cases and scenarios that should be implemented in this iteration. Study the use-case realizations' sequence diagrams, collaboration diagrams, and so on. Identify which implementation subsystems participate in the use cases and scenarios for the current iteration. Also identify which other implementation subsystems are needed to make it possible to compile, that is, to create builds.
Plan Integration – Define Build Sets
In large systems, where you may have up to a hundred implementation subsystems, it becomes a complex task to plan the integration. To facilitate integration planning and manage complexity, you need to reduce the number of things you need to think about. It is recommended that you define meaningful sets of subsystems (build sets, or towers) that belong together from an integration point of view.
Plan Integration – Define Build Series
You define a series of builds to incrementally integrate the system. For each build, define which subsystems should go into it, and which other subsystems must be available as stubs. This is typically done bottom-up in the layered structure of subsystems in the implementation model, as in the example below.
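As a hypothetical illustration (not from the original material): Build 1 might integrate the base and middleware subsystems, with the application subsystems available only as stubs; Build 2 adds the application-logic subsystems, stubbing only the user-interface subsystem; Build 3 adds the user-interface subsystem, completing the build series for the iteration.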
Implementation – Implement Components
Activities:
• Implement a Component
• Perform Unit Test
• Fix a Defect
Implement Components – Implement a Component
• Implement Operations
• Implement States
• Implement Associations, Aggregations, Attributes
• Provide Feedback to Design
  – If a design error is discovered in any of the steps, rework feedback has to be provided to the design.
Implement Components – Perform Unit Test
Purpose:
• To verify the specification of a unit.
• To verify the internal structure of a unit.
Perform Unit Test
A unit means not only a class in an object-oriented language, but also free subprograms, such as functions in C++.
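A minimal sketch of such a unit test, here for a hypothetical free function clamp, using plain assert (the original material does not prescribe any particular test framework):

#include <cassert>

// Unit under test: a free subprogram (hypothetical example).
int clamp(int value, int low, int high) {
    if (value < low)  return low;
    if (value > high) return high;
    return value;
}

int main() {
    // Verify the specification: expected outputs for chosen inputs.
    assert(clamp(5, 0, 10) == 5);    // value within range
    assert(clamp(-3, 0, 10) == 0);   // value below range
    assert(clamp(42, 0, 10) == 10);  // value above range
    // Together these inputs also exercise every branch of the unit,
    // addressing the internal-structure purpose of unit test.
    return 0;
}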
Steps in Perform Unit Test
• Execute Unit Test
• Evaluate the Execution of Test
• Verify Test Results
• Recover from Halted Tests
Execute Unit Test
• Set up the test environment to ensure that all the needed elements (hardware, software, tools, data, etc.) have been implemented and are in the test environment.
• Initialize the test environment to ensure all components are in the correct initial state for the start of testing.
• Execute the test procedures.
Evaluate Execution of Test
• Normal: all the test procedures (or scripts) execute as intended.
  – If testing terminates normally, continue with Verify Test Results.
• Abnormal or premature: the test procedures (or scripts) did not execute completely or as intended. When testing ends abnormally, the test results may be unreliable. The cause of termination needs to be identified, corrected, and the tests re-executed before additional test activities are performed.
  – If testing terminates abnormally, continue with Recover from Halted Tests.
Verify Test Results
Upon the completion of testing, the test results should be reviewed to ensure that they are reliable and that reported failures, warnings, or unexpected results were not caused by external influences (to the target-of-test), such as improper set-up or data. If the reported failures are due to errors identified in the test artifacts, or due to problems with the test environment, the appropriate corrective action should be taken and the testing re-executed.
Recover from Halted Tests
• There are two major types of halted tests:
  – Fatal errors: the system fails (network failures, hardware crashes, etc.).
  – Test Script Command Failures: specific to automated testing; this is when a test script cannot execute a command (or line of code).
• Both types of abnormal termination of testing may exhibit the same symptoms:
  – unexpected actions, windows, or events occur while the test script is executing
  – the test environment appears unresponsive or in an undesirable state (such as hung or crashed).
• To recover from halted tests, do the following:
  – determine the actual cause of the problem
  – correct the problem
  – re-set-up the test environment
  – re-initialize the test environment
  – re-execute the tests
Fix a Defect
• Stabilize the defect – The first step is to stabilize the defect (i.e., a symptom), to make it occur reliably. If you can't make the defect occur reliably, it will be almost impossible to locate the fault.
• Locate the fault – The next step is to execute the test cases that cause the defect to occur and try to identify where in the code the source of the fault is.
Fix a Defect (Contd.)
• Fix the fault – When the fault has been located, it is time to fix it. This should be the easy part. Here are some guidelines to keep in mind:
  – Make sure you understand the problem and the program before you make the fix.
  – Fix the problem, not the symptom: the focus should be on fixing the underlying problem in the code.
  – Make one change at a time: because fixing faults is in itself an error-prone activity, it is important to implement the fixes incrementally, to make it easy to locate where any new faults originate.
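A hypothetical C++ illustration of fixing the problem rather than the symptom (not from the original material): suppose a report prints garbage totals because an accumulator was never initialized.

#include <vector>

// Underlying fix: initialize the accumulator where the fault actually lives.
double sumOrders(const std::vector<double>& orders) {
    double total = 0.0;  // the defect was an uninitialized 'total'
    for (double v : orders) total += v;
    return total;
}

// A symptomatic "fix" would instead clamp the garbage output downstream,
// e.g. 'if (total > 1e9) total = 0;'. That hides the fault without
// removing it, and it will resurface elsewhere.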
Implementation – Integrate Each Subsystem
Activity: Integrate Subsystem
• Subsystem integration proceeds according to the artifact Integration Build Plan, in which the order of component and subsystem integration has been planned.
• It is recommended that you integrate the implemented classes (components) incrementally, bottom-up in the compilation-dependency hierarchy. At each increment you add one, or a few, components to the system.
• After the final increment, when the implementation subsystem is ready and the associated build has been integration tested, the implementation subsystem is delivered into the system integration workspace.
Implementation – Integrate System
When this activity begins, implementation subsystems have been delivered to satisfy the requirements of the next (the 'target') build described in the Artifact: Integration Build Plan, recalling that the Integration Build Plan may define the need for several builds in an iteration.
Test Workflow
The purposes of testing are:
• To verify the interaction between objects.
• To verify the proper integration of all components of the software.
• To verify that all requirements have been correctly implemented.
• To identify and ensure defects are addressed prior to the deployment of the software.
Concepts
• Product Quality
• Quality Dimensions
• The Lifecycle of Testing
• Key Measures of Test
• Test Strategy
• Types of Test
• Stages of Test
• Performance Testing
• Structure Testing
• Acceptance Testing
• Test Automation and Tools
Product Quality
Product quality is the quality of the product being produced by the process. In software development the product is the aggregation of many artifacts, including:
• Deployed, executable code (application, system, etc.) – perhaps the most visible of the artifacts; this is the primary product that provides value to the customer (end-users, stakeholders, etc.), for it is typically this artifact for which the project existed.
• Non-deployed executables, such as the implementation set of artifacts, including the test scripts and development tools created to support implementation.
• Deployed non-executable artifacts, including artifacts such as user manuals and course materials.
• Non-deployed, non-executable artifacts, such as the implementation plans, test plans, and various models.
Quality Dimensions
Quality is not a simple concept to describe; there is no single perspective of what quality is or how it's measured. In the Rational Unified Process, when our focus turns to the discussion of testing to identify quality, we address this issue by stating that quality has the following dimensions:
• Reliability: software robustness and reliability (resistance to failures, such as crashes, memory leaks, etc.), and code integrity and structure (technical compliance to language and syntax).
• Function: ability to execute the specified use cases as intended and required.
• Performance: the timing profiles and operational characteristics of the target-of-test. The timing profiles include the code's execution flow, data access, function calls, and system calls. Operational characteristics for performance include those characteristics related to production load, such as response time, resource usage, operational reliability, and operational limits such as load capacity or stress.
The Lifecycle of Testing
In the software development lifecycle, software is refined through iterations. There is no frozen software specification and there are no frozen tests. In this environment, the testing lifecycle must also follow an iterative approach, with each build being a target for testing. This approach implies reworking the tests throughout the process, just as the software itself is revised. Additions and refinements are made to the tests that are executed for each build, accumulating a body of tests which are used for regression testing at later stages.
Lifecycle of Testing (Contd.)
This iterative approach places a strong focus on regression testing. Most tests of iteration X are used as regression tests in iteration X+1. In iteration X+2, you would use most tests from iterations X and X+1 as regression tests, and the same principle would be followed in subsequent iterations. Because the same test is repeated several times, it is well worth the effort to automate the tests; it becomes necessary to effectively automate your tests to meet your deadlines.
Lifecycle of Testing (Contd.)
Look at the lifecycle of testing without the rest of the project in the same picture. This is the way the different activities of testing are interconnected if you view them in a non-iterative view:
Lifecycle of Testing (Contd.)
This lifecycle has to be integrated with the iterative approach, which means that each iteration will have a test cycle following that pattern.
Lifecycle of Testing (Contd.)
• The testing lifecycle is a part of the software lifecycle; they should start at the same time.
• If not started early enough, the tests will either be deficient, or cause a long testing and bug-fixing schedule to be appended to the development schedule, which defeats the goals of iterative development.
• Problems found during evaluation can be solved within the same iteration, or postponed to the next iteration. The earlier these are resolved, the lower the impact on the overall schedule.
• Furthermore, the test planning and design activities can expose faults or flaws in the application definition.
• There is always some "requirements creep" from iteration to iteration; this is something you need to be aware of and able to manage.
Key Measures of Test
The key measures of a test include:
• Coverage – Test coverage is the measurement of testing completeness, and is based on the coverage of testing, expressed either by the coverage of test requirements and test cases, or by the coverage of executed code.
• Quality – Quality is a measure of the reliability, stability, and performance of the target-of-test (system or application-under-test).
Coverage Measures
• Requirements-Based Test Coverage – measured several times during the test lifecycle, providing the identification of the test coverage at a milestone in the testing lifecycle (such as the planned, implemented, executed, and successful test coverage).
• Code-Based Test Coverage – measures how much code has been executed during the test, compared to how much code is left to execute. Code coverage can be based either on control flows (statement, branch, or paths) or on data flows.
Requirements-Based Test Coverage
Test coverage is calculated by the following equation:

    Test Coverage = T(p,i,x,s) / RfT

where:
• T is the number of Tests (planned, implemented, executed, or successful), expressed as test procedures or test cases.
• RfT is the total number of Requirements for Test.
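For example (hypothetical numbers): if there are 500 requirements for test and test procedures have been executed for 400 of them, executed test coverage is T(x) / RfT = 400 / 500 = 80%; if only 350 of those executions succeeded, successful test coverage is 350 / 500 = 70%.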
Code-Based Test Coverage
Code-based test coverage is calculated by the following equation:

    Test Coverage = Ie / TIic

where:
• Ie is the number of items executed, expressed as code statements, code branches, code paths, data state decision points, or data element names.
• TIic is the total number of items in the code.
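For example (hypothetical numbers): if the code contains 2,000 statements and the tests executed 1,500 of them, statement coverage is Ie / TIic = 1500 / 2000 = 75%.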
Quality Measure
• An evaluation of defects discovered during testing provides the best indication of software quality.
• Quality is the indication of how well the software meets the requirements, so in this context defects are identified as a type of change request in which the target-of-test failed to meet the requirements.
Example of Defect Density Report .
Test Strategy
A strategy for the testing portion of a project describes the general approach and objectives of the test activities. It includes:
• which stages of testing (unit, integration, and system) are to be addressed,
• which kinds of testing (function, load, stress, performance, etc.) are to be performed, and
• the testing techniques and tools to be employed.
Types of Test
In the Introduction to Test, it was stated that there is much more to testing software than testing only the functions, interface, and response time characteristics of a target-of-test. Additional tests must focus on characteristics / attributes such as the target-of-test's:
• integrity (resistance to failure)
• ability to be installed / executed on different platforms
• ability to handle many requests simultaneously
• …
Quality Dimensions (Revisited)
• Reliability: software robustness and reliability (resistance to failures, such as crashes, memory leaks, etc.), and code integrity and structure (technical compliance to language and syntax).
• Function: ability to execute the specified use cases as intended and required.
• Performance: the timing profiles and operational characteristics of the target-of-test. The timing profiles include the code's execution flow, data access, function calls, and system calls. Operational characteristics for performance include those characteristics related to production load, such as response time, resource usage, operational reliability, and operational limits such as load capacity or stress.
Types of Test Under Reliability Dimension
• Integrity test: Tests which focus on assessing the target-of-test's robustness (resistance to failure) and technical compliance to language, syntax, and resource usage. This test is implemented and executed against different targets-of-test, including units and integrated units.
• Structure test: Tests that focus on assessing the target-of-test's adherence to its design and formation. Typically, this test is done for web-enabled applications, ensuring that all links are connected, appropriate content is displayed, and there is no orphaned content.
Types of Test Under Function Dimension
• Configuration test: Tests focused on ensuring the target-of-test functions as intended on different hardware and / or software configurations. This test may also be implemented as a system performance test.
• Function test: Tests focused on verifying that the target-of-test functions as intended, providing the required service(s), method(s), or use case(s). This test is implemented and executed against different targets-of-test, including units, integrated units, application(s), and systems.
• Installation test: Tests focused on ensuring the target-of-test installs as intended on different hardware and / or software configurations and under different conditions (such as insufficient disk space or power interrupt). This test is implemented and executed against application(s) and systems.
• Security test: Tests focused on ensuring the target-of-test, data, (or systems) is accessible to only those actors intended. This test is implemented and executed against various targets-of-test.
• Volume test: Testing focused on verifying the target-of-test's ability to handle large amounts of data, either as input and output or resident within the database. Volume testing includes test strategies such as creating queries that [would] return the entire contents of the database, or have so many restrictions that no data is returned, or data entry of the maximum amount of data in each field.
Types of Test Under Performance Dimension
• Benchmark test: A type of performance test that compares the performance of a [new or unknown] target-of-test to a known reference workload and system.
• Contention test: Tests focused on verifying that the target-of-test can acceptably handle multiple actor demands on the same resource (data records, memory, etc.).
• Load test: A type of performance test to verify and assess the acceptability of the operational limits of a system under varying workloads while the system-under-test remains constant. Measurements include the characteristics of the workload and response time. When systems incorporate distributed architectures or load balancing, special tests are performed to ensure the distribution and load balancing methods function appropriately.
• Performance profile: A test in which the target-of-test's timing profile is monitored, including execution flow, data access, and function and system calls, to identify and address performance bottlenecks and inefficient processes.
• Stress test: A type of performance test that focuses on ensuring the system functions as intended when abnormal conditions are encountered. Stresses on the system may include extreme workloads, insufficient memory, unavailable services / hardware, or diminished shared resources.
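A minimal C++ sketch of the measurement idea behind these tests; the handleRequest operation and the workload size are hypothetical, and real performance testing would use a dedicated tool:

#include <chrono>
#include <iostream>

// Hypothetical target operation whose response time is being profiled.
void handleRequest() { /* work under test */ }

int main() {
    using Clock = std::chrono::steady_clock;
    const int iterations = 1000;  // fixed workload, as in a benchmark test

    auto start = Clock::now();
    for (int i = 0; i < iterations; ++i)
        handleRequest();
    auto elapsedMs = std::chrono::duration_cast<std::chrono::milliseconds>(
                         Clock::now() - start).count();

    std::cout << "total: " << elapsedMs << " ms, mean: "
              << static_cast<double>(elapsedMs) / iterations
              << " ms/request\n";
    return 0;
}

Varying the workload while keeping the system fixed turns this into a load test; pushing it past expected limits turns it into a stress test.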
Concepts – Stages of Test
Testing is usually applied to different types of targets in different stages of the software's delivery cycle. These stages progress from testing small components (unit testing) to testing completed systems (system testing). There are four stages of test:
– Unit Test
– Integration Test
– System Test
– Acceptance Test
Stages of Test – Unit Test
• Unit test is implemented early in the iteration and focuses on verifying the smallest testable elements of the software.
• Unit testing is applied to components in the implementation model to verify that control flows and data flows are covered and function as expected.
• These expectations are based on how the component participates in executing a use case, which you find from the sequence diagrams for that use case.
• The Implementer performs unit test as the unit is developed.
• The details of unit test are described in the Implementation workflow.
Stages of Test – Integration Test
• Integration testing is performed to ensure that the components in the implementation model operate properly when combined to execute a use case.
• The target-of-test is a package or a set of packages in the implementation model.
Stages of Test – System Test
• System testing is done when the software is functioning as a whole, or when well-defined subsets of its behavior are implemented.
• The target, in this case, is the whole implementation model for the system.
Stages of Test – Acceptance Test
• Acceptance testing is the final test action prior to deploying the software.
• The goal of acceptance testing is to:
  – verify that the software is ready and can be used by the end-users, and
  – verify that it performs the functions and tasks the software was built to do.
Strategies of Acceptance Test
The three strategies of Acceptance Test are:
• Formal Acceptance
• Informal Acceptance or Alpha Test
• Beta Test
Formal Acceptance Test
• Formal acceptance testing is a highly managed process and is often an extension of the system test.
• The tests are planned and designed as carefully, and in the same detail, as system testing.
• The test cases chosen should be a subset of those performed in system test.
• The activities and artifacts are the same as for system testing.
• Acceptance testing is completely performed by the end-user organization, or by a group of people chosen from the end-user organization and from the development organization.
Advantages & Disadvantages of Formal Acceptance Test
• The benefits of this form of testing are:
  – The functions and features to be tested are known.
  – The details of the tests are known and can be measured.
  – The tests can be automated, which permits regression testing.
  – The progress of the tests can be measured and monitored.
  – The acceptability criteria are known.
• The disadvantages include:
  – Requires significant resources and planning.
  – The tests may be a re-implementation of system tests.
  – It may not uncover subjective defects in the software, since you are only looking for defects you expect to find.
Informal Acceptance / Alpha Testing
• In informal acceptance testing, the test procedures for performing the test are not as rigorously defined as for formal acceptance testing.
• The business processes are identified and documented, but there are no particular test cases to follow.
• The individual tester determines what to do.
• This approach to acceptance testing is not as controlled as formal testing.
• It is done by the end-user organization.
Benefits & Disadvantages of Alpha Testing
• The benefits of this form of testing are:
  – The functions and features to be tested are known.
  – The progress of the tests can be measured and monitored.
  – The acceptability criteria are known.
• The disadvantages include:
  – Resources, planning, and management resources are required.
  – You have no control over which test cases are used.
  – End users may conform to the way the system works and not see the defects.
  – End users may focus on comparing the new system to a legacy system, rather than looking for defects.
  – Resources for acceptance testing are not under the control of the project and may be constricted.
Beta Test
• Beta testing is the least controlled of the three acceptance test strategies.
• In beta testing, the amount of detail, the data, and the approach taken are entirely up to the individual tester.
• Each tester is responsible for creating their own environment, selecting their data, and determining what functions, features, or tasks to explore.
• Each tester is responsible for identifying their own criteria for whether to accept the system in its current state or not.
. – Increases customer satisfaction to those who participate. – You will uncover more subjective defects than with formal or informal acceptance testing. • The disadvantages include: – Not all functions and / or features may be tested.Benefits & Disadvantages of Beta Test • The benefits of this form of testing are: – Testing is implemented by end users. – End users may conform to the way the system works and not see or report the defects. – Test progress is difficult to measure. – Large volumes of potential test resources. – Acceptability criteria are not known. – Resources for acceptance testing are not under the control of the project and may be constricted.
Test Workflow
Workflow Detail – Plan Test
• Identify and describe the testing that will be implemented and executed.
• Generate a test plan which contains the requirements for test and the test strategies.
• Test Plan Strategy:
  – A single test plan may be developed, describing all the different types of tests to be implemented and executed, or
  – One test plan per type of test may be developed.
Plan Test – Overview
Steps in Plan Test
• Identify Requirements for Test
• Develop Test Strategy
• Identify Resources
• Create Schedule
• Generate Test Plan
etc. that. therefore it is important. Similar kinds of requirements must be grouped together • • . and the scope and role of the test effort.) The following is performed to identify requirements for test: – Review All Materials: The requirements for test may be identified from many sources. – Indicate Requirements for Test: The requirements for test must be document.Plan Test – Identify Requirements for Test • The requirements for test identify what is being tested. as the first step. test design. Requirements for test are used to determine the overall test effort (for scheduling. all the materials available for the application / system to be developed should be reviewed.
Plan Test – Develop Test Strategy
• Identify the tools to be used.
• Identify the types of testing to be done:
  – Reliability
  – Functional
  – Performance, etc.
Plan Test – Identify Resources
• Human Resources
• Non-Human Resources
  – Physical Environment (workspace for testing)
  – Software
    • Interfaces to other systems, such as a legacy system
    • Other desktop app(s), like MS Office, Lotus Notes…
    • Network protocols
  – Tools
    • Testing Tools
  – Data
Plan Test – Create Schedule
• Estimate Test Efforts
  – Identify the productivity / skills / knowledge of the professionals working on the project.
  – Identify parameters about the application, such as the number of windows, components, data entities, and relationships.
• Generate Test Schedule
  – A test project schedule can be built from the work estimates and resource assignments.
  – In the iterative development environment, all test activities are repeated in every iteration, so a separate test project schedule is needed for each iteration.
Test Schedule – Guidelines
Plan Test – Generate Test Plan
• Document the outcome of the steps performed in a Test Plan document (see the template provided…)
Design Test – Overview
Design Test – Purpose
• To identify a set of verifiable test cases for each build.
• To identify test procedures that show how the test cases will be realized.
Steps in Design Test
• Identify & Describe Test Cases
• Identify & Structure Test Procedures
What is a Test Case?
A test case is a set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement. It includes:
• Test Case Description: A description of the condition, program path, or objective that this set of data implements / executes.
• Test Inputs: The objects or fields the actor interacts with and the specific data values entered (or object states created) by the actor when executing this test case.
• Expected Results: The resulting state or data received upon completion of executing this test case.
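As a concrete illustration (hypothetical, not from the original material), a test case for a 'Log In' use case might read: Test Case Description: verify that a registered user can log in with valid credentials. Test Inputs: user name "jsmith" and password "s3cret" entered in the login window. Expected Results: the login window closes and the main application window appears.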
Design Test – Identify & Describe Test Cases
The primary inputs for identifying test cases are:
• The use cases that, at some point, traverse your target-of-test (system, subsystem, or component).
• The design model.
• Any technical or supplemental requirements.
• The target-of-test application map (as generated by an automated test script generation tool).
Describe the test cases by stating:
• The test condition (or object or application state) being tested.
• The use case, use-case scenario, or technical or supplemental requirement the test case is derived from.
• The expected result in terms of the output state, condition, or data value(s).
and evaluation of results for a given test case (or set of test cases). . execution.What is a Test Procedure ? • A test procedure is a set of detailed instructions for the set-up.
Outline of Test Procedure
• Steps / Actions – A series of succinct statements indicating the steps or actions taken by the actor when executing a test case (for a given use case).
• Input Values / Test Case – The actual values input by the actor at each step / action in the test procedure. This may also be used as a reference to a test case.
• Expected Result(s) – The expected response from the application for a given step / action. This may be stated as a machine / application state (such as a window appears or a pushbutton is enabled) or as data (such as the result of a calculation or a retrieved record).
• Verification Method(s) – The method used to compare the expected and actual results. The verification method may be a simple visual observation (window appeared) or a more complex combination of visual and non-visual properties (window appeared in a given state at a given location within a given time).
Design Test – Structure the Test Procedure
Revise and modify the described test procedures to include, at a minimum, the following information:
• Set-up: how to create the condition(s) for the test case(s) being tested and what data is needed (either as input or within the test database).
• Starting condition, state, or action for the structured test procedure.
• Instructions for execution: the detailed steps / actions taken by the tester to implement and execute the tests (to the degree of stating the object or component).
• Data values entered (or referenced test case).
• Expected result (condition or data, or referenced test case) for each action / step.
• Evaluation of results: the method and steps used to analyze the actual results obtained, comparing them with the expected results.
• Ending condition, state, or action for the structured test procedure.
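Continuing the hypothetical 'Log In' example, a structured test procedure might specify: Set-up: user "jsmith" exists in the test database. Starting condition: the login window is displayed. Instructions for execution: enter the user name and password, then press OK. Data values entered: "jsmith" / "s3cret". Expected result: the main application window appears. Evaluation of results: visual observation that the main window appeared. Ending condition: the user is logged in.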
Implement Test – Workflow Detail
Activities:
• Implement Test
• Design Test Packages & Classes
• Implement Test Packages & Classes
Activity – Implement Test
• Record (Robot), Generate (Test Factory), or Program Test Scripts (Write Scripts)
  – maximize test script reuse
  – minimize test script maintenance
  – use existing scripts when feasible
  – use test tools to create test scripts instead of programming them (when feasible)
• Identify Test-Specific Functionality in the Design & Implementation Model
  – Identify Stubs (decide the stub strategy – dummy / with behavior)
• Create / Maintain External Data Sets
Result artifact: Test Script (Complete)
Activity – Design Test Packages & Classes
• Based on input from the test designer, identify and specify test-specific classes and packages in the design model.
• A driver or stub of a design class has the same methods as the original class, but no behavior is defined for the methods other than providing input (to the target for test) or returning a pre-defined value (to the target for test).
• A driver or stub of a design package contains simulated classes for the classes that form the public interface of the original package.
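A minimal C++ sketch of such a stub of a design class; the Account class is a hypothetical design class, not from the original material:

// Interface of the original design class (hypothetical example).
class Account {
public:
    virtual ~Account() = default;
    virtual double balance() const = 0;
    virtual void deposit(double amount) = 0;
};

// Stub: same methods as the original class, but the only behavior is to
// accept input from the target for test or return a pre-defined value.
class AccountStub : public Account {
public:
    double balance() const override { return 100.0; }  // pre-defined value
    void deposit(double) override {}  // accepts input, performs no real update
};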
Implement Test Components & Subsystems
Code the test classes.
Execute Tests – In Integration Test Stage
Purpose of Integration Test
• To ensure that the assembled components of the system collaborate as intended.
• The system integrator compiles and links the system in increments.
• Each increment needs to go through testing of the functionality that has been added, as well as all tests the previous builds went through (regression tests).
Activity – Execute Test
Steps:
• Execute Test Procedures
• Evaluate Execution of Test
• Verify Test Results
• Recover from Halted Test
Execute Test – Execute Test Procedures
To execute the tests, the following steps should be followed:
• Set up the test environment to ensure that all the needed components (hardware, software, tools, data, etc.) have been implemented and are in the test environment.
• Initialize the test environment to ensure all components are in the correct initial state for the start of testing.
• Execute the test procedures.
Execute Test – Evaluate Execution of Test
The execution of testing ends or terminates in one of two conditions:
• Normal: all the test procedures (or scripts) execute as intended and to completion.
  – If testing terminates normally, continue with Verify Test Results.
• Abnormal or premature: the test procedures (or scripts) did not execute completely or as intended. When testing ends abnormally, the test results may be unreliable. The cause of the abnormal / premature termination needs to be identified, corrected, and the tests re-executed before any additional test activities are performed.
  – If testing terminates abnormally, continue with Recover from Halted Test.
Execute Test – Verify Test Results
Upon the completion of testing, the test results should be reviewed to ensure that they are reliable and that reported failures, warnings, or unexpected results were not caused by external influences (to the target-of-test), such as improper set-up or data. If the test results indicate the failures are genuinely due to the target-of-test, the Execute Test activity is complete; otherwise, continue with Recover from Halted Test.
Execute Test – Recover from Halted Test
To recover from halted tests, do the following:
• determine the actual cause of the problem
• correct the problem
• re-set-up the test environment
• re-initialize the test environment
• re-execute the tests
Execute Test – In System Test Stage
• The purpose of the System Test stage is to ensure that the complete system functions as intended.
• The system integrator compiles and links the system in increments.
• Each increment needs to go through testing of the functionality that has been added, as well as all tests the previous builds went through (regression tests).
Evaluate Test
• The purpose of evaluating test is to generate and deliver the test evaluation summary.
• This is accomplished by:
  – reviewing and evaluating the test results
  – identifying and logging change requests
Steps in Evaluate Test
• Analyze Test Results & Submit Change Requests
• Evaluate Requirements-Based Test Coverage
• Evaluate Code-Based Test Coverage
• Analyze Defects
• Determine if Test Completion & Success Criteria are achieved