
Unified Software Development Process (Implementation and Test)
Contents
- Core Workflow – Implementation
  - Introduction
  - Concepts
  - Workflow Detail – Structure the Implementation Model
  - Workflow Detail – Plan the Integration
  - Workflow Detail – Implement a Component
  - Workflow Detail – Integrate Each Subsystem
  - Workflow Detail – Integrate the System
- Core Workflow – Test
  - Introduction
  - Concepts Related to the Test Workflow
  - Concepts – Quality
  - Concepts – Quality Dimensions
  - Concepts – The Life Cycle of Testing
  - Concepts – Key Measures of Test
  - Concepts – Types of Tests
  - Concepts – Stages in Test
  - Concepts – Performance Test
  - Concepts – Structure Test
  - Concepts – Acceptance Test
  - Concepts – Test Automation and Tools
  - Workflow Detail – Plan Test
  - Workflow Detail – Design Test
  - Workflow Detail – Implement Test
  - Workflow Detail – Execute Test in Integration Test Stage
  - Workflow Detail – Execute Test in System Test Stage


Core Workflow – Implementation

Introduction
The purposes of implementation are:
- To plan the system integrations required in each iteration. The approach to this is incremental, which results in a system that is implemented as a succession of small and manageable steps.
- To define the organization of the code, in terms of implementation subsystems organized in layers.
- To implement classes and objects in terms of components (source files, binaries, executables, and others).
- To test the developed components as units.
- To integrate the results produced by individual implementers (or teams) into an executable system.

Implementation is the focus during the construction iterations. Implementation is also done during elaboration, to create the executable architectural baseline, and during transition, to handle late defects such as those found when beta releasing the system. The implementation must be maintained throughout the software life cycle.


Concepts
Build
A build is an operational version of a system or part of a system that demonstrates a subset of the capabilities provided in the final product.

Software Integration
The term "integration" refers to a software development activity in which separate software components are combined into a whole. Integration is done at several levels and stages of the implementation:
- Integrating the work of a team working in the same implementation subsystem, before releasing the subsystem to system integrators.
- Integrating subsystems into a complete system.
The Unified Process approach to integration is that the software is integrated incrementally. Incremental integration means that code is written and tested in small pieces, and combined into a working whole by adding one piece at a time. It is important to understand that integration occurs (at least once) within each and every iteration. An iteration plan defines which use cases to design, and thus which classes to implement. The focus of the integration strategy is to determine the order in which classes are implemented and combined.

Stubs

A stub is a component (or complete implementation subsystem) containing functionality for testing purposes. When you use an incremental integration strategy you select a set of components to be integrated into a build. These components may need other components to be able to compile the source code, and execute the tests. This is specifically needed in integration test, where you need to build up test specific functionality that can act as stubs for things not included or not yet implemented. There are two styles used here: o Stubs that are simply "dummies" with no other functionality than being able to return a pre-defined value. o Stubs that are more intelligent and can simulate a more complex behavior.

The second style should be used with discretion, because it takes more resources to implement, so you need to be sure it adds value. You may end up in situations where your stubs also need to be carefully tested, which is very time consuming.
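The difference between the two stub styles can be illustrated with a small sketch. The sketch below is illustrative only; the component and interface names (CreditCheckStub, place_order) are hypothetical and not part of the Unified Process.

    # Illustrative sketch of the two stub styles (hypothetical names).

    class CreditCheckStub:
        """Style 1: a 'dummy' stub that only returns a pre-defined value."""
        def approve(self, customer_id, amount):
            return True  # always approve, regardless of input


    class SimulatedCreditCheckStub:
        """Style 2: a more intelligent stub that simulates simple behavior.
        It takes more effort to write and may itself need testing."""
        def __init__(self, credit_limits):
            self.credit_limits = credit_limits  # e.g. {"C-42": 500.0}

        def approve(self, customer_id, amount):
            limit = self.credit_limits.get(customer_id, 0.0)
            return amount <= limit


    def place_order(customer_id, amount, credit_service):
        """Component under test; the real credit service is not yet implemented."""
        if credit_service.approve(customer_id, amount):
            return "accepted"
        return "rejected"


    if __name__ == "__main__":
        # Dummy stub: enough to compile/run the component and exercise one path.
        print(place_order("C-42", 100.0, CreditCheckStub()))        # accepted
        # Simulating stub: allows testing of both the accept and reject paths.
        stub = SimulatedCreditCheckStub({"C-42": 500.0})
        print(place_order("C-42", 100.0, stub))                     # accepted
        print(place_order("C-42", 900.0, stub))                     # rejected

The dummy stub is cheap and sufficient when the caller only needs a plausible return value; the simulating stub earns its cost only when the test needs the stub to react differently to different inputs.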

Workflow Detail – Structure the implementation model
Activity: Structure the Implementation Model
Inputs From: Design Model; Software Architecture Document
Resulting Artifacts: Implementation Model

Activity – Structure the Implementation Model
Purpose:
- To establish the structure in which the implementation will reside.
- To assign responsibilities for Implementation Subsystems and their contents.

Design packages will have corresponding Implementation Subsystems, which will contain one or more components and all related files needed to implement those components. The mapping from the Design Model to the Implementation Model may change as each Implementation Subsystem is allocated to a specific layer in the architecture. Note that both classes and, possibly, design subsystems in the Design Model are mapped to components in the Implementation Model, although not necessarily one to one.

Workflow Detail – Plan the Integration
Activity: Plan System Integration
Inputs From: Design Model; Use Case Realization
Resulting Artifacts: Integration Build Plan

Activity – Plan System Integration
Purpose:
- To plan the system integration.
Steps:
- Identify subsystems.
- Define build sets.
- Define the series of builds.
- Evaluate the Integration Build Plan.
One way such a plan might be captured is sketched below.
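As an illustration only, the build sets and series of builds for one iteration might be captured as simple structured data, as in the following sketch; the subsystem names, build identifiers, and iteration label are hypothetical.

    # Hypothetical sketch of an Integration Build Plan for one iteration.
    # Each build lists the subsystems (build set) it adds and the tests to run.
    integration_build_plan = {
        "iteration": "E2",
        "builds": [
            {"build": "B1", "add_subsystems": ["persistence", "domain"],
             "tests": ["minimal integration test"]},
            {"build": "B2", "add_subsystems": ["application_logic"],
             "tests": ["minimal integration test"]},
            {"build": "B3", "add_subsystems": ["user_interface"],
             "tests": ["all tests in the Iteration Test Plan"]},
        ],
    }

    # Evaluate the plan: every subsystem should appear in exactly one build set.
    all_subsystems = [s for b in integration_build_plan["builds"]
                      for s in b["add_subsystems"]]
    assert len(all_subsystems) == len(set(all_subsystems)), "subsystem planned twice"
    print(f"{len(integration_build_plan['builds'])} builds planned for iteration "
          f"{integration_build_plan['iteration']}")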


Workflow details – Implement a component
Activities: Implement Component; Fix a Defect; Perform Unit Test; Review Code
Inputs From: Design Model; Test Cases; Test Procedures; Workload Analysis Document; Change Request; Component
Resulting Artifacts: Component; Review Record

Activity – Implement Component

Activity – Fix a Defect
Steps:

Stabilize the Defect
The first step is to stabilize the defect (that is, a symptom), to make it occur reliably. If you can't make the defect occur reliably, it will be almost impossible to locate the fault. Then try to narrow down the test case by identifying which of the factors in the test case make the defect occur, and which factors are irrelevant to the defect. To find out if a factor is irrelevant, execute the test case and change the factor in question; if the defect still occurs, this factor can probably be eliminated. If successful, you should finish with at least one test case that causes the defect to occur, and also some idea of which factors are related to the occurrence of the defect.

Locate the Fault
The next step is to execute the test cases that cause the defect to occur and try to identify where in the code the source of the fault is. Examples of ways to locate a fault are:
- Narrow down the suspicious region of the code. Test a smaller piece of the code: remove a piece of the code and rerun the test cases. Continue to remove pieces of code as long as the defect still occurs. Eventually you will have identified where the fault can be found.
- Use the debugger to step through the code line by line, and monitor the values of interesting variables.
- Let someone else review the code.
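To make the earlier stabilization step concrete - re-running the scenario while changing one factor at a time to see which factors matter - consider the following sketch. The function, factor names, and expected values are hypothetical and chosen only to illustrate the technique.

    # Hypothetical sketch: narrowing down which factors make a defect occur.
    # Suppose a report total is wrong only under certain conditions.

    def report_totals(values, locale="en", rounding=True):
        # Deliberately buggy example: rounding each value loses the cents.
        # (locale is accepted but does not affect the calculation.)
        total = 0.0
        for v in values:
            total += round(v) if rounding else v
        return total

    def defect_occurs(**factors):
        """Re-run the stabilized scenario with the given factors."""
        result = report_totals([0.4, 0.4, 0.4], **factors)
        return abs(result - 1.2) > 1e-9   # expected total is 1.2

    # Baseline: the defect occurs reliably.
    print(defect_occurs())                 # True

    # Change one factor at a time to find which ones are irrelevant.
    print(defect_occurs(locale="fr"))      # True  -> locale is irrelevant
    print(defect_occurs(rounding=False))   # False -> rounding is the related factor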

Fix the Fault
When the fault has been located, it is time to fix it. This should be the easy part. Here are some guidelines to keep in mind:
- Make sure you understand the problem and the program before you make the fix. The focus should be on fixing the underlying problem in the code, not the symptom.
- Make one change at a time, because fixing faults is in itself an error-prone activity.
- When the fault has been fixed, add a special test case that verifies this particular fault.
- It is important to implement the fixes incrementally, to make it easy to locate where any new faults are coming from.

Activity – Perform Unit Test
Purpose:
- To verify the specification of a unit.
- To verify the internal structure of a unit.

Execute Unit Test
To execute unit test, the following steps should be followed:
- Set up the test environment to ensure that all the needed elements (hardware, software, tools, data, etc.) have been implemented and are in the test environment.
- Initialize the test environment to ensure all components are in the correct initial state for the start of testing.
- Execute the test procedures.
Note: executing the test procedures will vary depending upon whether testing is automated or manual, and whether test components are needed either as drivers or stubs.
- Automated testing: the test scripts created during the Implement Test step are executed.
- Manual execution: the structured test procedures developed during the Structure Test Procedures activity are used to manually execute the test.
A minimal sketch of these steps follows.
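The sketch below shows the set-up, initialize, and execute steps using Python's standard unittest module. The unit under test (discounted_price) and the expected values are hypothetical; the point is only the shape of a unit test that checks both the specification and an internal branch of the unit.

    # Minimal unit test sketch (hypothetical unit and expected values).
    import unittest

    def discounted_price(price, percent):
        """Hypothetical unit under test."""
        if not 0 <= percent <= 100:
            raise ValueError("percent out of range")
        return price * (100 - percent) / 100

    class DiscountedPriceTest(unittest.TestCase):
        def setUp(self):
            # Set up / initialize the test environment: put the needed test
            # data into a known initial state before each test procedure runs.
            self.price = 200.0

        def test_specification(self):
            # Verifies the specification of the unit (expected input/output).
            self.assertAlmostEqual(discounted_price(self.price, 25), 150.0)

        def test_internal_structure(self):
            # Exercises an internal branch of the unit (the invalid-input path).
            with self.assertRaises(ValueError):
                discounted_price(self.price, 150)

    if __name__ == "__main__":
        unittest.main()  # execute the test procedures and report the results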

Evaluate Execution of Test
The execution of testing ends or terminates in one of two conditions:
- Normal: all the test procedures (or scripts) execute as intended. If testing terminates normally, continue with Verify Test Results.
- Abnormal or premature: the test procedures (or scripts) did not execute completely or as intended. When testing ends abnormally, the test results may be unreliable. The cause of termination needs to be identified, corrected, and the tests re-executed before additional test activities are performed. If testing terminates abnormally, continue with Recover from Halted Tests.
Both types of abnormal termination may exhibit the same symptoms:
- Unexpected actions, windows, or events occur while the test script is executing.
- The test environment appears unresponsive or in an undesirable state (such as hung or crashed).

Verify Test Results
Upon the completion of testing, the test results should be reviewed to ensure that they are reliable, and that reported failures, warnings, or unexpected results were not caused by influences external to the target-of-test, such as improper set-up or data. If the reported failures are due to errors identified in the test artifacts, or due to problems with the test environment, the appropriate corrective action should be taken and the testing re-executed; for additional information, see Recover from Halted Tests below. If the test results indicate the failures are genuinely due to the target-of-test, then the Execute Test activity is complete and the next activity is to Evaluate Test.

Recover from Halted Tests
There are two major types of halted tests:
- Fatal errors - the system fails (network failures, hardware crashes, etc.).
- Test script command failures - specific to automated testing, this is when a test script cannot execute a command (or line of code).
To recover from halted tests, do the following:
- Determine the actual cause of the problem
- Correct the problem
- Re-set-up the test environment
- Re-initialize the test environment
- Re-execute the tests

Activity – Review Code


Workflow Detail – Integrate Each Subsystem

Activity: Integrate Subsystem
Inputs From: Components (from implementer X); Components (from implementer Y)
Resulting Artifacts: Build; Implementation Subsystem

Activity – Integrate Subsystem
Purpose:
- To integrate the components in an implementation subsystem, then deliver the implementation subsystem for system integration.
Explanation:
- Subsystem integration proceeds according to the Artifact: Integration Build Plan, in which the order of component and subsystem integration has been planned.
- It is recommended that you integrate the implemented classes (components) incrementally, bottom-up in the compilation-dependency hierarchy. At each increment you add one, or a few, components to the system.
- If two or more implementers are working in parallel on the same subsystem, their work is integrated through a subsystem integration workspace, into which the implementers deliver components from their private development workspaces, and from which the integrator will construct builds.
- If a team of several individuals works in parallel on the same subsystem, it is important that the team members share their results frequently, not waiting until late in the process to integrate the team's work.

Workflow Detail – Integrate the System

Activity: Integrate System
Inputs From: Implementation Subsystems (new versions)
Resulting Artifacts: Build

Activity – Integrate System
Purpose:
- To integrate the implementation subsystems piecewise into a build.
Steps:
- Accept Subsystems and Produce Intermediate Builds

When this activity begins, implementation subsystems have been delivered to satisfy the requirements of the next (the 'target') build described in the Artifact: Integration Build Plan, recalling that the Integration Build Plan may define the need for several builds in an iteration. Depending on the complexity and number of subsystems to be integrated, it is often more efficient to produce the target build in a number of steps, adding more subsystems with each step and producing a series of intermediate 'mini' builds; thus, each build planned for an iteration may, in turn, have its own sequence of transient intermediate builds.

The integrator accepts delivered subsystems incrementally into the system integration workspace, making sure that the versions of the subsystems are consistent, taking imports into consideration, and resolving any merge conflicts in the process. It is recommended that this be done bottom-up with respect to the layered structure. Some subsystems are only needed as stubs, to make it possible to compile and link the other subsystems and to provide the essential minimal run-time behavior.

The increment of subsystems is compiled and linked into an intermediate build. These intermediate builds are subjected to a minimal integration test (usually a subset of the tests described in the Integration Build Plan for this target build) to ensure that what is added is compatible with what already exists in the system integration workspace. It should be easier to isolate and diagnose problems using this approach. The accompanying diagram shows a build produced in three increments.

The final increment of a sequence produces the target build, which is then provided to the tester to execute a minimal system integration test. The nature and depth of this testing will be as planned in the Integration Build Plan, with the final build of an iteration being subjected to all the tests defined in the Iteration Test Plan.

When this has been minimally tested, an initial or provisional baseline is created for this build, invoking the Activity: Create Baselines in the Configuration Management workflow. The build is now made available to the tester for complete system testing.
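One way to derive a bottom-up integration order from compilation dependencies is a simple topological sort, sketched below with hypothetical subsystem names and dependencies; this is an illustration of the idea, not a prescribed tool of the Unified Process.

    # Hypothetical sketch: derive a bottom-up integration order from
    # compilation dependencies (subsystem -> subsystems it depends on).
    from graphlib import TopologicalSorter

    dependencies = {
        "user_interface": {"application_logic"},
        "application_logic": {"domain", "persistence"},
        "domain": set(),
        "persistence": set(),
    }

    # TopologicalSorter yields nodes only after their dependencies, i.e. the
    # lower layers are integrated before the layers that import them.
    order = list(TopologicalSorter(dependencies).static_order())
    print(order)  # e.g. ['domain', 'persistence', 'application_logic', 'user_interface']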

Core Workflow – Test

Introduction
The purposes of testing are:
- To verify the interaction between objects.
- To verify the proper integration of all components of the software.
- To verify that all requirements have been correctly implemented.
- To identify and ensure that defects are addressed prior to the deployment of the software.

Explanation:
- In many organizations, software testing accounts for 30 to 50 percent of software development costs. Yet most people believe that software is not well tested before it is delivered. This contradiction is rooted in two clear facts:
  o First, testing software is enormously difficult. The different ways a given program can behave are unquantifiable.
  o Second, testing is typically done without a clear methodology and without the required automation or tool support.
- For "safety-critical" systems where a failure can harm people (such as air-traffic control, missile guidance, or medical delivery systems), high-quality software is essential for the success of the system produced.
- For a typical MIS system, this situation is not as painfully obvious, but the impact of a defect can be very expensive, taking forms such as poor user productivity, data entry and calculation errors, and unacceptable functional behavior.
- Many MIS systems are "mission-critical"; that is, companies cannot fulfill their functions and experience massive losses when failures occur - for example, banks or transportation companies. Mission-critical systems must be tested using the same rigorous approaches used for safety-critical systems.
- A well-conceived methodology and the use of state-of-the-art tools can greatly improve the productivity and effectiveness of software testing.
- Well-performed tests, initiated early in the software lifecycle, will significantly lower the cost of completing and maintaining the software. They will also greatly reduce the risks or liabilities associated with deploying poor-quality software.

Concepts Related to the Test Workflow

Concepts – Quality
Introduction
- If the question is asked - What is quality? - the common answers you get are:
  o "I don't know how to describe it, but I'll know it when I see it."
  o "Meeting requirements."
- Quality is not a single dimension, but many.
- The actual definition (as per the Unified Software Development Process) is the characteristic identified by the following:
  o Satisfies or exceeds an agreed upon set of requirements,
  o Assessed using agreed upon measures and criteria, and
  o Produced using an agreed upon process.
- Achieving quality is not simply "meeting requirements" or producing a product that meets user needs or expectations. Rather, quality includes identifying the measures and criteria to demonstrate the achievement of quality, and implementing a process to ensure that the product created by the process has achieved the desired degree of quality (and that this can be repeated and managed).
- There are four major aspects of quality:
  o Product Quality
  o Process Quality
  o Measuring Quality
  o Evaluating Quality

Product Quality
Product quality is the quality of the product being produced by the process. In software development the product is the aggregation of many artifacts, including:
- Deployed executable code (application, system, etc.). Perhaps the most visible of the artifacts, this is the primary product that provides value to the customer (end-users, stakeholders, shareholders, etc.), for it is typically this artifact for which the project existed.
- Deployed non-executable artifacts, such as user manuals and course materials.
- Non-deployed executables, such as the implementation set of artifacts, including the test scripts and development tools created to support implementation.
- Non-deployed, non-executable artifacts, such as the implementation plans, test plans, and various models.

Process Quality
Process quality refers to the degree to which an acceptable process, including measurements and criteria for quality, has been implemented and adhered to in order to produce the artifacts. Software development requires a complex web of sequential and parallel steps. As the scale of the project increases, more steps must be included to manage the complexity of the project. All processes consist of product activities and overhead activities: product activities result in tangible progress toward the end product, while overhead activities have an intangible impact on the end product and are required for the many planning, management, and assessment tasks.

The objectives of measuring and assessing process quality are to:
- Manage profitability and resources
- Manage and resolve risk
- Manage and maintain budgets, schedules, and quality
- Capture data for process improvement

Process quality is measured not only by the degree to which the process was adhered to, but also by the degree of quality achieved in the products produced by the process.

To aid in your evaluation of process and product quality, the Rational Unified Process includes pages such as:
- Activity: a description of the activity to be performed and the steps required to perform the activity.
- Work Guideline: techniques and practical advice useful for performing the activity.
- Artifact Guidelines and Checkpoints: information on how to develop, evaluate, and use the artifact.
- Templates: models or prototypes of the artifact that provide structure and guidance for content.

Measuring Quality
- The measurement of quality, whether of product or process, requires the collection and analysis of information, usually stated in terms of measurements and metrics.
- Measurements are made primarily to gain control of a project, and therefore to be able to manage it. They are also used to evaluate how close to or far from the objectives set in the plan we are, in terms of completion, quality, and compliance to requirements.
- Metrics are used to attain two kinds of goals, knowledge and change (or achievement):
  o Knowledge goals: expressed by the use of verbs like evaluate, predict, and monitor. You want to better understand your development process. For example, you may want to assess product quality, obtain data to predict testing effort, monitor test coverage, or track requirements changes.
  o Change or achievement goals: expressed by the use of verbs such as increase, reduce, improve, or achieve. You are usually interested in seeing how things change or improve over time, from one iteration to another, or from one project to another.

- All metrics require criteria to identify and determine the degree or level at which acceptable quality is attained. The acceptance criteria may be stated in many ways and may include more than one measure. Common acceptance criteria may include the following measures:
  o Defect counts and/or trends, such as the number of defects identified, fixed, or remaining open (not fixed).
  o Test coverage, such as the percentage of code, or of use cases planned or implemented, that is executed (by a test). Test coverage is usually used in conjunction with the defect criteria identified above.
  o Performance, such as the time required for a specified action (use case, operation, or other event) to occur. This criterion is commonly used for performance testing, fail-over and recovery testing, or other tests in which time criticality is essential.
  o Compliance. This criterion indicates the degree to which an artifact or process activity / step must meet an agreed upon standard or guideline.
  o Acceptability or satisfaction. This criterion is usually used with subjective measures, such as usability or aesthetics.

Measuring Product Quality
- Stating the requirements in a clear, concise, and testable fashion is only part of achieving product quality. It is also necessary to identify the measures and criteria that will be used to identify the desired level of quality and to determine whether it has been achieved.
- Measuring the product quality of an executable artifact is achieved using one or more measurement techniques, such as:
  o reviews / walkthroughs
  o inspection
  o execution

Measuring Process Quality
- The measurement of process quality is achieved by collecting both knowledge and achievement measures, such as:
  o progress - such as use cases demonstrated or milestones completed
  o variance - differences between planned and actual schedules, budgets, staffing requirements, etc.
  o status / state of the current process implementation compared to the planned implementation
  o the degree of adherence to the standards, guidelines, and implementation of an accepted process
  o the quality of the artifacts produced (using the product quality measures described above)
  o product quality measures and metrics (as described in the Measuring Product Quality section above)
- Measuring process quality is achieved using one or more of these measurement techniques.

etc. or other interested party for comments and approval.).   Inspections. such as at the end of a phase. and other problems. Described below are the different evaluations that occur during the lifecycle. Conducting these should be done in a meeting format. Status assessments are periodic efforts to assess ongoing progress throughout an iteration and/or phase. style. or set of artifacts are presented to the user. Reviews. questions. Walkthroughs Milestones and Status Assessments Each phase and iteration in the Rational Unified Process results in the release (internal or external) of an executable product or subset of the final product under development. . Milestones and Status Assessments Inspections. The three kinds of efforts are defined as follows:  Review: A formal meeting at which an artifact. and other problems Managing Quality in Unified Software Development Process. and Walkthroughs Inspections. The evaluation of quality may occur when a major event occurs. possible errors.  Walkthrough: A review process in which a developer leads one or more members of the development team through a segment of an artifact that he or she has written while the other members ask questions and make comments about technique. such as a code walkthrough.  Inspection: A formal evaluation technique in which artifacts are examined in detail by a person or group other than the author to detect errors.16 Evaluating Quality Throughout the product lifecycle. and a second worker recording notes (change requests. or may occur when an artifact is produced. Reviews. with one worker acting as a facilitator. Reviews. There are four major Milestones: Minor milestones occur at the conclusion of each iteration and focus on verifying that the objectives of the iteration have been achieved. at which time assessments are made for the following purposes:  Demonstrate achievement of the requirements (and criteria)  Synchronize expectations  Synchronize related artifacts into a baseline  Identify risks Major milestones occur at the end of each of the four Rational Unified Process phases and verify that the objectives of the phase have been achieved. violations of development standards. violation of development standards. customer. issues. to manage quality. Managing Quality is done for the following purposes:  To identify appropriate indicators (metrics) of acceptable quality. and Walkthroughs are specific techniques focused on evaluating artifacts and are a powerful method of improving the quality and productivity of the development process. measurements and assessments of the process and product quality are performed.

- To identify appropriate measures to be used in the evaluation and assessment of quality.
- To identify and appropriately address issues affecting quality as early and effectively as possible.

In the Unified Process, managing quality is implemented throughout all workflows, phases, and iterations. In general, managing quality throughout the lifecycle means implementing, measuring, and assessing both process quality and product quality. Highlighted below are some of the efforts expended in each workflow to manage quality:
- Managing quality in the Requirements workflow includes the analysis of the requirements artifact set for consistency (between artifact standards and other artifacts), clarity (clearly communicates information to all shareholders, stakeholders, and other workers), and precision (appropriate level of detail and accuracy).
- In the Analysis & Design workflow, managing quality includes assessment of the design artifact set, including the consistency of the design model, its translation from the requirements artifacts, and its translation into the implementation artifacts.
- In the Implementation workflow, managing quality includes assessing the implementation artifacts and evaluating the source code / executable artifacts against the appropriate requirements.
- The Test workflow is highly focused towards the management of quality, as most of the efforts expended in the workflow address the purposes of managing quality identified above.
- Managing quality in the Deployment workflow includes assessing the implementation, deployment, and test artifacts needed to deliver the product to the end-customer, and evaluating the executable and deployment artifacts against the appropriate requirements.
- The Project Management workflow includes the overview of many efforts for managing quality, including the reviews and audits required to assess the implementation, adherence, and progress of the development process.
- The Environment workflow, like test, includes many efforts addressing the purposes of managing quality. Here you can find guidance on how to best configure your process to meet your needs.

Concepts: Quality Dimensions
- When our focus turns to testing to identify quality, there is no single perspective of what quality is or how it is measured. In the Unified Process we address this issue by stating that quality has the following dimensions:
  o Reliability: software robustness and reliability (resistance to failures such as crashes, memory leaks, etc.), and code integrity and structure (technical compliance to language and syntax).
  o Function: the ability to execute the specified use cases as intended and required.
  o Performance: the timing profiles and operational characteristics of the target-of-test. The timing profiles include the code's execution flow, data access, function calls, and system calls. Operational characteristics for performance include those characteristics related to production load, such as response time, operational reliability (MTTF - Mean Time to Failure), and operational limits such as load capacity or stress.

Concepts – The Life Cycle of Testing
- In the software development lifecycle, software is refined through iterations; therefore the testing lifecycle must also follow an iterative approach, with each build being a target for testing. Additions and refinements are made to the tests that are executed for each build, accumulating a body of tests which are used for regression testing at later stages.
- There is no frozen software specification and there are no frozen tests. This approach implies reworking the tests throughout the process, just as the software itself is revised.
- The iterative approach puts a high focus on regression testing. Most tests of iteration X are used as regression tests in iteration X+1. In iteration X+2, you would use most tests from iteration X and iteration X+1 as regression tests, and the same principle would be followed in subsequent iterations (a small sketch of this accumulation follows at the end of this section).
- Because the same test is repeated several times, it is well worth the effort to automate the tests. It becomes necessary to effectively automate your tests to meet your deadlines.
- Viewed non-iteratively - looking at the lifecycle of testing without the rest of the project in the same picture - the different activities of testing are interconnected as shown in the testing lifecycle diagram.

- This lifecycle has to be integrated with the iterative approach, which means that each iteration will have a test cycle following this pattern. Execution covers both the execution of the new tests and regression testing using previous tests.
- The testing lifecycle is a part of the software lifecycle; they should start at the same time. If not started early enough, the tests will either be deficient, or will cause a long testing and bug-fixing schedule to be appended to the development schedule, which defeats the goals of iterative development. Furthermore, the test planning and design activities can expose faults or flaws in the application definition; the earlier these are resolved, the lower the impact on the overall schedule.
- One of the major tasks in evaluation is to measure how complete the iteration is by verifying which requirements have been implemented.
- How much you invest in testing depends on how you evaluate quality and tolerate risk in your particular environment. The ways in which you will perform tests will depend on several factors:
  o your budget
  o your company policy
  o your risk tolerance
  o and your staff.
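The accumulation of regression tests across iterations described above can be pictured with a small sketch; the iteration labels and test-suite names are hypothetical.

    # Hypothetical sketch: accumulating regression tests across iterations.
    new_tests_per_iteration = {
        "X":   ["login_tests", "order_entry_tests"],
        "X+1": ["payment_tests"],
        "X+2": ["reporting_tests"],
    }

    suite = []
    for iteration, new_tests in new_tests_per_iteration.items():
        # Tests from all previous iterations are re-run as regression tests.
        regression = list(suite)
        suite.extend(new_tests)
        print(f"Iteration {iteration}: run {new_tests} plus regression {regression}")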

Concepts – Key Measures of Test
Introduction
- The key measures of a test are coverage and quality.
- Test coverage is the measurement of testing completeness, and is based on:
  o the coverage of testing, expressed by the coverage of test requirements and test cases, or
  o the coverage of executed code.
- Both measures can be derived manually (the equations are given below), or may be calculated by test automation tools.

Coverage Measures
- The most commonly used coverage measures are:
  o requirements-based test coverage (verification of use cases), and
  o code-based test coverage (execution of all lines of code).
- If requirements-based coverage is applied, test strategies are formulated in terms of how much of the requirements are fulfilled by the system.
- If code-based coverage is applied, test strategies are formulated in terms of how much of the source code has been executed by tests. This type of test coverage strategy is very important for safety-critical systems.

Requirements-based test coverage
Requirements-based test coverage is measured several times during the test life cycle and provides the identification of the test coverage at a milestone in the testing life cycle (such as the planned, implemented, executed, and successful test coverage). Test coverage is calculated by the following equation:

    Test Coverage = T(p,i,x,s) / RfT

where T is the number of tests (planned, implemented, executed, or successful), expressed as test procedures or test cases, and RfT is the total number of Requirements for Test.

Code-based test coverage
Code-based test coverage measures how much code has been executed during the test, compared to how much code remains to be executed. Code coverage can be based either on control flows (statement, branch, or path) or on data flows.
- In control-flow coverage, the aim is to test lines of code, branch conditions, paths through the code, or other elements of the software's flow of control.
- In data-flow coverage, the aim is to test that data states remain valid throughout the operation of the software - for example, that a data element is defined before it is used.
Code-based test coverage is calculated by the following equation:

    Test Coverage = Ie / TIic

where Ie is the number of items executed (expressed as code statements, code branches, code paths, data state decision points, or data element names) and TIic is the total number of items in the code.
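The two coverage equations above can be applied directly; the sketch below uses illustrative numbers only.

    # Minimal sketch of the two coverage equations (illustrative numbers only).

    def requirements_based_coverage(tests, requirements_for_test):
        """Test Coverage = T(p,i,x,s) / RfT"""
        return tests / requirements_for_test

    def code_based_coverage(items_executed, total_items_in_code):
        """Test Coverage = Ie / TIic"""
        return items_executed / total_items_in_code

    # e.g. 42 of 60 test requirements executed, 560 of 800 code statements executed
    print(f"requirements-based: {requirements_based_coverage(42, 60):.0%}")  # 70%
    print(f"code-based:         {code_based_coverage(560, 800):.0%}")        # 70%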

Quality Measures
While the evaluation of test coverage provides the measure of testing completion, an evaluation of defects discovered during testing provides the best indication of software quality. Defect analysis provides an indication of the reliability of the software. Defect analysis means analyzing the distribution of defects over the values of one or more of the parameters associated with a defect. For defect analysis, four main defect parameters are commonly used:
- Status: the current state of the defect (open, being fixed, closed, etc.).
- Priority: the relative importance of this defect having to be addressed and resolved.
- Severity: the relative impact of this defect - the impact on the end-user, an organization, third parties, etc.
- Source: where and what is the originating fault that results in this defect, or what component will be fixed to eliminate the defect.
Defects included in an analysis of this kind have to be confirmed defects. Not all reported defects describe an actual flaw: some may be enhancement requests, out of the scope of the project, or may describe an already reported defect. However, there is value in looking at and analyzing why many defects are being reported that are either duplicates or not confirmed defects.

Defect Reports
The Unified Process provides defect evaluation in the form of three classes of reports. These types of analysis provide a perspective on the trends or distribution of defects that reveal the software's reliability:
- Defect distribution (density) reports allow defect counts to be shown as a function of one or two defect parameters. For example, in a Defect Density report, defect counts can be reported as a function of one or more defect parameters, such as severity or status (a small counting sketch follows this list). Defect counts can also be reported based on their origin in the implementation model, allowing detection of "weak modules" or "hot spots" - parts of the software that keep being fixed again and again, indicating some more fundamental design flaw.
- Defect trend reports show defect counts, by status (new, open, or closed), as a function of time, creating a Defect Trend diagram or report. Trend reports can be cumulative or non-cumulative. As testing and fixing progress, it is expected that defect discovery rates will eventually diminish. A threshold can be established below which the software can be deployed.
- Defect age reports are a special type of defect distribution report: they show how long a defect has been in a particular state, such as Open. In any age category, defects can also be sorted by another attribute, such as Owner.
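As a small illustration of a defect distribution report, the sketch below counts confirmed defects as a function of two defect parameters, and separately by source; the defect records are hypothetical.

    # Hypothetical sketch: a defect distribution (density) report, counting
    # confirmed defects as a function of defect parameters.
    from collections import Counter

    defects = [
        {"id": 101, "status": "open",   "severity": "major", "source": "billing"},
        {"id": 102, "status": "closed", "severity": "fatal", "source": "billing"},
        {"id": 103, "status": "open",   "severity": "minor", "source": "reports"},
        {"id": 104, "status": "open",   "severity": "major", "source": "billing"},
    ]

    by_status_and_severity = Counter((d["status"], d["severity"]) for d in defects)
    by_source = Counter(d["source"] for d in defects)  # highlights "weak modules"

    print(by_status_and_severity)  # Counter({('open', 'major'): 2, ...})
    print(by_source)               # Counter({'billing': 3, 'reports': 1})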

In addition, test results and progress reports show the results of test procedure execution over a number of iterations and test cycles for the application-under-test.

Many of these reports are valuable in assessing software quality. The usual test criteria include a statement about the allowable number of open defects in particular categories, such as severity class. This criterion is easily checked with a defect distribution evaluation. By filtering or sorting on test requirements, this evaluation can be focused on different sets of requirements. To be effective, producing reports of this kind normally requires tool support.

Defect Density Reports

Defect Status Versus Priority
Each defect should be given a priority; usually it is practical to have four priority levels:
1. Resolve immediately
2. High priority
3. Normal queue
4. Low priority
Criteria for a successful test could be expressed in terms of how the distribution of defects over these priority levels should look. For example, a successful test criterion might be that no Priority 1 defects, and fewer than five Priority 2 defects, are open. A defect distribution diagram of this kind should be generated; in the referenced example diagram it is clear that the criterion has not been met. Note that such a diagram needs to include a filter to show only open defects, as required by the test criterion.

Defect Status Versus Severity
Defect severity reports show how many defects there are of each severity class (for example: fatal error, major function not performed, minor annoyance).

Defect Status Versus Location in the Implementation Model
Defect source reports show the distribution of defects over elements in the implementation model.

Defect Aging Reports
Defect age analyses provide good feedback on the effectiveness of the testing and the defect removal activities. For example, if the majority of older, unresolved defects are in a pending-validation state, it probably means that not enough resources are applied to the re-testing effort.

Defect Trend Reports
Trend reports identify defect rates and provide a particularly good view of the state of the testing. Defect trends follow a fairly predictable pattern in a testing cycle: early in the cycle the defect rates rise quickly, then they reach a peak and fall at a slower rate over time. The project schedule can be reviewed in light of this trend. For example, if the defect rates are still rising in the third week of a four-week test cycle, the project is clearly not on schedule.
This simple trend analysis assumes that defects are being fixed promptly and that the fixes are being tested in subsequent builds, so that the rate of closing defects should follow the same profile as the rate of finding defects. When this does not happen, it indicates a problem with the defect-resolution process: the defect-fixing resources, or the resources to re-test and validate fixes, might be inadequate. Trend analysis of this kind can be used to find such problems.

The trend reflected in a typical example report shows that new defects are discovered and opened quickly at the beginning of the project, and that they decrease over time. The trend for open defects is similar to that for new defects, but lags slightly behind. The trend for closed defects increases over time as open defects are fixed and verified. These trends depict a successful effort. If your trends deviate dramatically from these, they may indicate a problem and identify when additional resources may need to be applied to specific areas of development or testing.
When combined with the measures of test coverage, the defect analyses provide a very good assessment on which to base the test completion criteria.

Performance Measures
Several measures are used for assessing the performance behaviors of the target-of-test; they focus on capturing data related to behaviors such as response time, timing profiles, execution flow, operational reliability, and limits. Primarily, these measures are assessed in the Evaluate Test activity, but there are also performance measures that are used during the Execute Test activity to evaluate test progress and status.
The primary performance measures include:
- Dynamic monitoring - real-time capturing and display of the status and state of each test script being executed during the test execution.
- Response Time / Throughput - measurement of the response times or throughput of the target-of-test for specified actors and / or use cases.
- Percentile Reports - percentile measurement / calculation of the collected data values (a small sketch follows this list).
- Comparison Reports - differences or trends between two (or more) sets of data representing different test executions.
- Trace Reports - details of the messages / conversations between the actor (test script) and the target-of-test.
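As an illustration of a percentile report, the sketch below computes percentile response times from hypothetical measurements, using a simple nearest-rank method; the values and the choice of method are assumptions for illustration only.

    # Hypothetical sketch: a simple percentile report over measured response times.
    def percentile(values, pct):
        """Return the value below which pct percent of the measurements fall
        (nearest-rank method)."""
        ordered = sorted(values)
        rank = max(1, round(pct / 100 * len(ordered)))
        return ordered[rank - 1]

    response_times_ms = [120, 135, 150, 180, 210, 230, 250, 400, 600, 950]
    for pct in (50, 90, 95):
        print(f"{pct}th percentile: {percentile(response_times_ms, pct)} ms")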

Dynamic Monitoring
Dynamic monitoring provides real-time display / reporting, typically in the form of a histogram or graph. The report is used to monitor or assess performance test execution while the test is running, by displaying the current state, status, and progress of the test scripts being executed.
For example, consider a histogram in which 80 test scripts execute the same use case. In this display, 14 test scripts are in the Idle state, 12 in Query, 34 in SQL Execution, 4 in SQL Connect, and 16 in the Other state. This output would be typical of a test execution that is running normally and is in the middle of the execution. As the test progresses, we would expect to see the number of scripts in each state change. However, if during test execution the test scripts remain in one state or do not show changes, this could indicate a problem with the test execution or the need to implement or evaluate other performance measures.

Concepts – Types of Tests
There is much more to testing software than testing only the functions, interface, and response time characteristics of a target-of-test. Additional tests must focus on characteristics / attributes such as the target-of-test's:
- integrity (resistance to failure),
- ability to be installed / executed on different platforms,
- ability to handle many requests simultaneously,
- and so on.
To achieve this, many different types of tests are implemented and executed, each with a specific test objective and each focused on testing only one characteristic or attribute of the target-of-test. The following entries relate each type of test to the quality dimension it addresses.

Quality Dimension: Reliability
- Integrity test: tests that focus on assessing the target-of-test's robustness (resistance to failure) and technical compliance to language, syntax, and resource usage. This test is implemented and executed against different targets-of-test, including units and integrated units.

- Structure test: tests that focus on assessing the target-of-test's adherence to its design and formation. Typically, this test is done for web-enabled applications, ensuring that all links are connected, appropriate content is displayed, and there is no orphaned content. This test is implemented and executed against applications and systems.

Quality Dimension: Function
- Function test: tests focused on verifying that the target-of-test functions as intended, providing the required service(s), method(s), or use case(s). This test is implemented and executed against different targets-of-test, including units, integrated units, applications, and systems.
- Security test: tests focused on ensuring that the target-of-test, data, or systems are accessible only to those actors intended.
- Volume test: testing focused on verifying the target-of-test's ability to handle large amounts of data, either as input and output or resident within the database. Volume testing includes test strategies such as creating queries that would return the entire contents of the database, or that have so many restrictions that no data is returned, or data entry of the maximum amount of data in each field.
- Configuration test: tests focused on ensuring that the target-of-test functions as intended on different hardware and / or software configurations.
- Installation test: tests focused on ensuring that the target-of-test installs as intended on different hardware and / or software configurations and under different conditions (such as insufficient disk space or a power interrupt).

Quality Dimension: Performance
- Benchmark test: a type of performance test that compares the performance of a (new or unknown) target-of-test to a known reference workload and system.
- Contention test: tests focused on verifying that the target-of-test can acceptably handle multiple actor demands on the same resource (data records, memory, etc.).
- Load test: a type of performance test used to verify and assess the acceptability of the operational limits of a system under varying workloads while the system-under-test remains constant. Measurements include the characteristics of the workload and response time. This test may also be implemented as a system performance test. When systems incorporate distributed architectures or load balancing, special tests are performed to ensure that the distribution and load balancing methods function appropriately.

- Performance profile: a test in which the target-of-test's timing profile is monitored, including execution flow, data access, and function and system calls, to identify and address performance bottlenecks and inefficient processes.
- Stress test: a type of performance test that focuses on ensuring the system functions as intended when abnormal conditions are encountered. Stresses on the system may include extreme workloads, insufficient memory, unavailable services / hardware, or diminished shared resources.

Concepts – Stages in Test
Testing is usually applied to different types of targets in different stages of the software's delivery cycle. These stages progress from testing small components (unit testing) to testing completed systems (system testing).

Unit Test
- Unit test focuses on verifying the smallest testable elements of the software.
- Unit testing is typically applied to components in the implementation model to verify that control flows and data flows are covered and function as expected. These expectations are based on how the component participates in executing a use case, which you find from the sequence diagrams for that use case.
- The Implementer performs unit test as the unit is developed. The details of unit test are described in the Implementation workflow.

Integration Test
- Integration testing is performed to ensure that the components in the implementation model operate properly when combined to execute a use case.
- The target-of-test is a package or a set of packages in the implementation model. Often the packages being combined come from different development teams. Integration testing exposes incompleteness or mistakes in the packages' interface specifications.

System Test
- System testing is done when the software is functioning as a whole, or when well-defined subsets of its behavior are implemented early in the iteration.
- The target is the whole implementation model for the system.

Acceptance Test
- Acceptance testing is the final test action prior to deploying the software. The goal of acceptance testing is to verify that the software is ready and can be used by the end-users to perform those functions and tasks the software was built to do.

Concepts – Performance Test
Included in performance testing are the following types of tests:
- Benchmark testing - compares the performance of a new or unknown target-of-test to a known reference standard, such as existing software or measurement(s).
- Contention testing - verifies that the target-of-test can acceptably handle multiple actor demands on the same resource (data records, memory, etc.).
- Load testing - verifies the acceptability of the target-of-test's performance behavior under varying operational conditions (such as number of users, number of transactions, etc.) while the configuration remains constant.
- Performance profiling - verifies the acceptability of the target-of-test's performance behavior using varying configurations while the operational conditions remain constant.
- Stress testing - verifies the acceptability of the target-of-test's performance behavior when abnormal or extreme conditions are encountered, such as diminished resources or an extremely high number of users.

Concepts – Structure Test
Web-based applications are typically constructed from a series of documents (both HTML text documents and GIF/JPEG graphics) connected by many static links and a few active, or program-controlled, links. These applications may also include "active content", such as forms, Java scripts, plug-in-rendered content, or Java applications. Frequently this active content is used for output only, such as for audio or video presentation; however, it may also be used as a navigation aid, helping the user traverse the application (web-site). This free-form nature of web-based applications (via their links), while being a great strength, is also a tremendous weakness, as structural integrity can easily be damaged.
Structure testing is implemented and executed to verify that all links (static or active) are properly connected. These tests include:
- Verifying that the proper content (text, graphics, etc.) for each link is displayed. Different types of links are used to reference target-content in web-based applications, such as bookmarks, hyperlinks to other target-content (in the same or a different web-site), and hot-spots. Each link should be verified to ensure that the correct target-content is presented to the user.
- Ensuring there are no broken links. Broken links are those links for which the target-content cannot be found. Links may be broken for many reasons, including moving, removing, or renaming the target-content files. Links may also be broken due to the use of improper syntax, such as missing slashes, colons, or letters.
- Verifying there is no orphaned content. Orphaned content consists of those files for which there is no "inbound" link in the current web-site; that is, there is no way to access or present the content. Care must be taken to investigate orphaned content to determine the cause: is it orphaned because it is truly no longer needed? Is it orphaned due to a broken link? Or is it accessed by a link external to the current web-site? Once this is determined, the appropriate action should be taken (remove the content file, repair the broken link, or ignore the orphan).
A small sketch of the broken-link and orphaned-content checks follows.
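The following sketch applies the broken-link and orphaned-content checks to a hypothetical static site map (a dictionary rather than a real crawler); the page names are invented for illustration.

    # Hypothetical sketch: structure-test checks on a small static site map.
    # pages maps each file in the web-site to the files it links to.
    pages = {
        "index.html":    ["products.html", "about.html"],
        "products.html": ["index.html", "order.html"],
        "about.html":    ["index.html"],
        "old_news.html": [],                      # no inbound link -> orphaned
    }

    existing_files = set(pages)

    # Broken links: link targets that cannot be found among the site's files.
    broken = [(src, target) for src, targets in pages.items()
              for target in targets if target not in existing_files]

    # Orphaned content: files with no inbound link (apart from the home page).
    linked_to = {target for targets in pages.values() for target in targets}
    orphaned = [f for f in existing_files - linked_to if f != "index.html"]

    print("broken links:", broken)        # [('products.html', 'order.html')]
    print("orphaned content:", orphaned)  # ['old_news.html']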

Concepts – Acceptance Test
- Acceptance testing is the final test action prior to deploying the software. The goal of acceptance testing is to verify that the software is ready and can be used by the end-users to perform those functions and tasks the software was built to do.
- There are three common strategies for implementing an acceptance test:
  o formal acceptance,
  o informal acceptance or alpha test, and
  o beta test.
- The strategy you select is often based on the contractual requirements, organizational and corporate standards, and the application domain.

Formal Acceptance Testing
- Formal acceptance testing is a highly managed process and is often an extension of the system test.
- The tests are planned and designed as carefully, and in the same detail, as system testing. The activities and artifacts are the same as for system testing.
- Acceptance testing is performed completely by the end-user organization, or by an objective group of people chosen by the end-user organization.
The benefits of this form of testing are:
- The functions and features to be tested are known.
- The details of the tests are known and can be measured.
- The tests can be automated, which permits regression testing.
- The progress of the tests can be measured and monitored.
- The acceptability criteria are known.
The disadvantages include:
- It requires significant resources and planning.
- The tests may be a re-implementation of system tests.
- It may not uncover subjective defects in the software, since you are only looking for defects you expect to find.

Informal Acceptance Testing
- In informal acceptance testing, the test procedures for performing the test are not as rigorously defined as for formal acceptance testing.
- There are no particular test cases to follow; the individual tester determines what to do.
- Informal acceptance testing is most frequently performed by the end-user organization.

The benefits of this form of testing are:
- The functions and features to be tested are known.
- The progress of the tests can be measured and monitored.
- The acceptability criteria are known.
- You will uncover more subjective defects than with formal acceptance testing.
The disadvantages include:
- Resources, planning, and management resources are required.
- You have no control over which test cases are used.
- End users may conform to the way the system works and not see the defects.
- End users may focus on comparing the new system to a legacy system, rather than looking for defects.
- Resources for acceptance testing are not under the control of the project and may be constricted.

Beta Testing
- Beta testing is the least controlled of the three acceptance test strategies. In beta testing, the amount of detail, the data, and the approach taken are entirely up to the individual tester. Each tester is responsible for identifying his or her own criteria for whether to accept the system in its current state or not.
- Beta testing is implemented by end users, often with little or no management from the development (or other non end-user) organization.
- Beta testing is the most subjective of all the acceptance test strategies.
The benefits of this form of testing are:
- Testing is implemented by end users.
- There are large volumes of potential test resources.
- It increases customer satisfaction for those who participate.
- You will uncover more subjective defects than with formal or informal acceptance testing.
The disadvantages include:
- Not all functions and / or features may be tested.
- Test progress is difficult to measure.
- Acceptability criteria are not known.
- You need increased support resources to manage the beta testers.
- End users may conform to the way the system works and not see or report the defects.
- End users may focus on comparing the new system to a legacy system, rather than looking for defects.
- Resources for acceptance testing are not under the control of the project and may be constricted.

Concepts – Test Automation and Tools

Test automation tools are increasingly being brought to the market to automate the activities in test. A number of tools exist, and to date, no single tool is capable of automating all the activities in test. In fact, most tools are specific to one or a few activities, and some are so focused they only address a part of an activity. When evaluating different tools for test automation, it is necessary that you understand the type of tool it is, the limitations of the tool, and what activities the tool addresses and automates. Below are descriptions regarding the classification of test automation tools.

White-box versus Black-box
Test tools are often characterized as either white-box or black-box, based upon the manner in which the tool is used. White-box tools rely upon knowledge of the code, design model(s), or other source material to implement and execute the tests. Black-box tools rely only upon the use cases or functional description of the target-of-test. Whereas white-box tools have knowledge of how the target-of-test processes the request, black-box tools rely upon the input and output conditions to evaluate the test.

Function
Test tools may be categorized by the function they perform. Typical function designations for tools include:
 Data acquisition tools that acquire data to be used in the test activities. The data may be acquired through conversion, extraction, transformation, or capture of existing data, or through generation from use cases or supplemental specifications.
 Static measurement tools that analyze information contained in the design model(s), source code, or other fixed sources. The analysis yields information on the logic flow, data flow, or quality metrics, such as complexity, maintainability, or lines of code.
 Dynamic measurement tools that perform an analysis during the execution of the code. The measurements include the run-time operation of the code, such as memory error detection and performance.
 Simulators or drivers that perform activities that are not, or could not be, available for testing purposes, for reasons of timing, expense, or safety.
 Record/Playback tools that combine data acquisition with dynamic measurement. Test data is acquired during the recording of events (during test implementation). Later, during test execution, the data is used to "play back" the test script, which is used to evaluate the execution of the target-of-test.
 Test management tools that assist in the planning, design, implementation, execution, evaluation, and management of the test activities or artifacts.

Specialization of Test Tools
In addition to the broad classifications of tools presented above, tools may also be classified by specialization.

 Quality metric tools are static measurement tools that perform a static analysis of the design model(s) or source code to establish a set of parameters describing the target-of-test's quality. The parameters may indicate reliability, complexity, maintainability, or other measures of quality.
 Test case generators automate the generation of test data. They use either a formal specification of the target-of-test's data inputs or the design model(s) / source code to produce test data that tests the nominal inputs, error inputs, and limit and boundary cases (a minimal sketch follows this list).
 Coverage monitor tools indicate the completeness of testing by identifying how much of the target-of-test was covered, in some dimension, during testing. Typical classes of coverage include requirements-based (use cases), logic branch or node (code-based), data state, and function points.
 Comparator tools compare test results with reference results and identify differences. Comparators differ in their specificity to particular data formats. For example, comparators may be pixel-based (to compare bitmap images) or object-based (to compare object properties or data).
 Data extractors provide inputs for test cases from existing sources. Sources include data streams in a communication system, databases, reports, or design model(s) / source code.
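To make the test-case-generator idea concrete, here is a minimal sketch in Java, not tied to any particular commercial tool, that derives nominal, limit / boundary, and error values from a simple "formal specification" of a numeric input range. The InputSpec type and the chosen boundary offsets are illustrative assumptions, not features of any specific product.

import java.util.ArrayList;
import java.util.List;

/** Illustrative sketch: generate nominal, boundary, and error values for a numeric input range. */
public class TestCaseGenerator {

    /** A toy "formal specification" of one numeric input: its valid range. */
    record InputSpec(String name, int min, int max) { }

    /** Produce candidate test values: nominal, limit / boundary, and error inputs. */
    static List<Integer> generateValues(InputSpec spec) {
        List<Integer> values = new ArrayList<>();
        values.add((spec.min() + spec.max()) / 2);  // nominal input
        values.add(spec.min());                     // lower limit
        values.add(spec.max());                     // upper limit
        values.add(spec.min() - 1);                 // error input just below the range
        values.add(spec.max() + 1);                 // error input just above the range
        return values;
    }

    public static void main(String[] args) {
        InputSpec age = new InputSpec("customerAge", 18, 120);
        System.out.println(age.name() + " test values: " + generateValues(age));
    }
}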

Workflow Detail – Plan Test

Activity: Plan Test
Inputs From:
 Use Case Model
 Design Model
 Integration Build Plan
Resulting Artifacts:
 Test Plan

Purpose:
 To collect and organize test-planning information
 To create the test plan
The steps of the activity are explained below.

Identify Requirements for Test
Identifying the requirements for test is the start of the test planning activity. The scope and role of the test effort is identified.
 Requirements for test are used to determine the overall test effort (for scheduling, test design, etc.).
 Requirements for test are used as the basis for test coverage.
 Items that are to be identified as requirements for test must be verifiable. They must have an observable, measurable outcome. A requirement that is not verifiable is not a requirement for test.
The following is performed to identify requirements for test:
 Review the material
The most common sources of requirements for test include:
o Existing requirement lists
o Use cases
o Use-case models
o Use-case realizations
o Supplemental specifications
o Design requirements
o Business cases
o Interviews with end-users
o Review of existing systems
 Generate a hierarchy of requirements for test
The hierarchy is a logical grouping of the requirements for test. The hierarchy may be based upon an existing hierarchy, or newly generated. Common methods include grouping the items by:
o Use-case
o Business case

o Type of test (functional, performance, etc.)
or a combination of these. The output of this step is a report (the hierarchy) identifying those requirements that will be the target of test.

Assess Risk
Purpose:
 To maximize test effectiveness and prioritize test efforts
 To establish an acceptable test sequence
To assess risk, perform the following:
 Identify and justify a risk factor
The most important requirements for test are those that reflect the highest risk. Risk can be viewed from several perspectives:
o Effect - the impact or consequences of the use case (requirement, etc.) failing
o Cause - identifying an undesirable outcome and determining what use case or requirement(s), should they fail, would result in the undesirable outcome
o Likelihood - the probability of a use case or requirement failing
Each requirement for test should be reviewed and a risk factor identified (such as high, medium, or low).
 Identify and justify an operational profile factor
o Not only are the highest risk requirements for test tested, but also those that are frequently used (as these often have the highest end-user visibility).
o This is accomplished by reviewing the business case(s) or by conducting interviews with end-users and their managers.
o Identify an operational profile factor for each requirement for test, and a statement justifying why a specific factor value was identified.
 Identify and justify a test priority factor
o A test priority factor should be identified and justified.
o The test priority factor identifies the relative importance of the test requirement and the order or sequence in which it will be tested.
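The factors above can be combined into a single test priority per requirement for test. The weighting scheme in the sketch below (risk and operational-profile levels valued 1 to 3, priority as their product) is an assumed example to show the mechanics only; the actual factor values and their justifications come from the reviews and interviews described above.

import java.util.List;

/** Illustrative sketch: derive a test priority factor from risk and operational profile. */
public class TestPriority {

    enum Level {
        LOW(1), MEDIUM(2), HIGH(3);
        final int weight;
        Level(int weight) { this.weight = weight; }
    }

    record RequirementForTest(String name, Level risk, Level operationalProfile) {
        // Assumed combination rule: the higher the product, the earlier it is tested.
        int priority() { return risk.weight * operationalProfile.weight; }
    }

    public static void main(String[] args) {
        List<RequirementForTest> reqs = List.of(
            new RequirementForTest("Withdraw cash (use case)", Level.HIGH, Level.HIGH),
            new RequirementForTest("Print monthly statement", Level.MEDIUM, Level.LOW),
            new RequirementForTest("Change PIN", Level.HIGH, Level.LOW));

        // Order the requirements for test by the derived priority factor.
        reqs.stream()
            .sorted((a, b) -> b.priority() - a.priority())
            .forEach(r -> System.out.println(r.priority() + "  " + r.name()));
    }
}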

Develop Test Strategy
Purpose:
 Identifies and communicates the test techniques and tools
 Identifies and communicates the evaluation methods for determining product quality and test completion
 Communicates to everyone how you will approach the testing and what measures you will use to determine the completion and success of testing

 Identify and describe the approach to test
The approach to test is a statement (or statements) describing how the testing will be implemented. The strategy does not have to be detailed, but it should give the reader an indication of how you will test. Sample approach statements:
o Test procedures will be designed and developed for each use case. For each use case, test cases will be identified and executed, including valid and invalid input data.
o Test procedures will be implemented to simulate managing customer accounts over a period of three months. Test procedures will include adding, modifying, and deleting accounts and customers.
o Test procedures will be implemented and test scripts executed by 1500 virtual users, each executing functions A, B, and C, and each using different input data.

 Identify the criteria for test
The criteria for test are objective statements indicating the value(s) used to determine / identify when testing is complete and the quality of the application-under-test. The test criteria may be a series of statements or a reference to another document (such as a process guide or test standards). Test criteria should identify:
o what is being tested (the specific target-of-test)
o how the measurement is being made
o what criteria are being used to evaluate the measurement
Sample test criteria, for each high priority use case:
o All planned test cases and test procedures have been executed.
o All identified defects have been addressed.
o All planned test cases and test procedures have been re-executed and no new defects identified.
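As a concrete illustration of objective test criteria, the sketch below checks the three sample statements against simple execution counts. The record fields and the numbers are assumptions invented for the example; in practice the values would come from the test results and defect records.

/** Illustrative sketch: evaluate sample test completion criteria for one high-priority use case. */
public class TestCriteria {

    record UseCaseTestStatus(String useCase,
                             int plannedTestCases, int executedTestCases,
                             int openDefects, int newDefectsOnReExecution) {

        boolean criteriaMet() {
            boolean allExecuted   = executedTestCases >= plannedTestCases;   // all planned tests run
            boolean defectsClosed = openDefects == 0;                        // all defects addressed
            boolean stableRerun   = newDefectsOnReExecution == 0;            // re-execution found nothing new
            return allExecuted && defectsClosed && stableRerun;
        }
    }

    public static void main(String[] args) {
        UseCaseTestStatus status =
            new UseCaseTestStatus("Manage customer account", 25, 25, 0, 0);
        System.out.println(status.useCase() + " testing complete: " + status.criteriaMet());
    }
}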

Identify Resources
Once it has been identified what is being tested and how, there is the need to identify who will do the testing and what is needed to support the test activities. Identifying resource requirements includes determining what resources are needed, including the following:
 Human resources (number of persons and skills)
 Test environment (includes hardware and software)
 Tools
 Data

Create Schedule
Purpose: To identify and communicate test effort, schedule, and milestones.

Estimate Test Efforts
The following assumptions should be considered when estimating the test effort:
 productivity and skill / knowledge level of the human resources working on the project (such as their ability to use test tools or to program)
 parameters about the application to be built (such as number of windows, components, data entities and relationships, and the percent of re-use)
 test coverage (the acceptable depth to which testing will be implemented and executed); it is not the same to state that each use case / requirement was tested if only one test case will be implemented and executed per use case / requirement, and often many test cases are required to acceptably test a use case / requirement
 testing effort needs to include time for regression testing
The following shows how regression test cases can accumulate over several iterations for the different testing stages:
First iteration:
o Unit: test of this iteration's test cases that target units
o Integration: test of this iteration's test cases that target builds
o System: test of this iteration's test cases that target the system
Following iterations:
o Unit: test of this iteration's test cases, as well as test cases from previous iterations that have been designed for regression testing
o Integration: test of this iteration's test cases, as well as test cases from previous iterations that have been designed for regression testing
o System: test of this iteration's test cases, as well as test cases from previous iterations that have been designed for regression testing
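The accumulation described above can be made concrete with a small calculation: each iteration executes its own new test cases plus the regression test cases carried forward from earlier iterations. The counts, and the simplifying assumption that every earlier test case is kept for regression, are illustrative only.

/** Illustrative sketch: how executed test cases grow per iteration when regression tests accumulate. */
public class RegressionAccumulation {
    public static void main(String[] args) {
        int[] newTestCasesPerIteration = {40, 30, 25, 20};  // assumed new tests designed each iteration
        int regressionPool = 0;                              // tests carried forward from earlier iterations

        for (int i = 0; i < newTestCasesPerIteration.length; i++) {
            int executedThisIteration = newTestCasesPerIteration[i] + regressionPool;
            System.out.printf("Iteration %d: %d new + %d regression = %d executed%n",
                    i + 1, newTestCasesPerIteration[i], regressionPool, executedThisIteration);
            regressionPool += newTestCasesPerIteration[i];   // simplification: all tests are kept for regression
        }
    }
}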

Generate Test Schedule
 A test project schedule can be built from the work estimates and resource assignments.
 In the iterative development environment, a separate test project schedule is needed for each iteration. All test activities are repeated in every iteration.
 Early iterations introduce a larger number of new functions and new tests. As the integration process continues, the number of new tests diminishes, and a growing number of regression tests need to be executed to validate the accumulated functions.
 Consequently, the early iterations require more work on test planning and design, while the later iterations are weighted towards test execution and evaluation.

Generate Test Plan
Purpose: To organize and communicate to others the test-planning information.
To generate a test plan, perform the following:
 Review / refine existing materials
Prior to generating the test plan, a review of all the existing project information should be done to ensure the test plan contains the most current and accurate information. If necessary, test-related information (requirements for test, test strategies, resources, etc.) should be revised to reflect any changes.
 Identify test deliverables
The purpose of the test deliverables section is to identify and define how the test artifacts will be created, maintained, and made available to others. These artifacts include:
o Test Model
o Test Cases
o Test Procedures
o Test Scripts
o Change Requests

 Generate the test plan
The last step in the Plan Test activity is to generate the test plan. This is accomplished by assembling all the test information gathered and generated into a single report. The test plan should be distributed to at least the following:
o All test workers
o Developer representative
o Shareholder representative
o Stakeholder representative
o Client representative
o End-user representative

How to Use Test Manager in Activity – Plan Test
In the Plan Test activity, you identified the test requirements; that is, you identified what you are going to test - the use cases, functions, features, and characteristics of the application that you will implement and execute tests against. Entering these test requirements into TestManager will enable you to automatically generate Test Coverage reports and track your progress in subsequent test activities.
 Create a project using Rational Administrator
 Creating a project using an existing Req. Pro project
 Creating a project by creating a new Req. Pro project
 Starting and selecting a project in TestManager
 Insert a requirement
 Insert a child requirement
 Edit requirement properties
 Delete a requirement
 Using the Attributes tab page in the properties of a requirement to set the priority (High, Medium, Low)

Workflow Detail – Design Test
Purpose:
 To identify a set of verifiable test cases for each build
 To identify test procedures that show how the test cases will be realized

Activity: Design Test
Inputs From:
 Design Guidelines
 Use Cases
 Supplementary Specs
 Implemented Component
Resulting Artifacts:
 Test Cases
 Test Procedures
 Workload Analysis Document

Workload Analysis Document (Performance Testing Only)
Purpose:
 To identify and describe the different variables that affect system use and performance
 To identify the sub-set of use cases to be used for performance testing
Workload analysis is performed to generate a workload analysis document that can be implemented for performance testing. The primary inputs to a workload analysis document include:
 Software development plan
 Use Case Model
 Design Model
 Supplemental Specifications
Workload analysis includes the following:
1. Clarify the objectives of performance testing and the use cases.
2. Review the use cases to be implemented and identify the execution frequency.
3. Select the most frequently invoked use cases and those that generate the greatest load on the system.
4. Identify the use cases to be implemented in the model.
5. Identify the actors and actor characteristics to be simulated / emulated in the performance tests.
6. Identify the workload to be simulated / emulated in the performance tests (in terms of number of actors / actor classes and actor profiles).
7. Determine the performance measures and criteria.
8. Generate test cases for each of the use cases identified in the previous steps.
9. Identify the critical measurement points for each test case.
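A minimal sketch of the selection in steps 2 and 3: given assumed execution frequencies per use case, keep the most frequently invoked ones for the performance-test workload. The use-case names, frequencies, and the 80% cut-off are illustrative assumptions, not values from any real workload model.

import java.util.LinkedHashMap;
import java.util.Map;

/** Illustrative sketch: select the most frequently invoked use cases for the performance-test workload. */
public class WorkloadAnalysis {
    public static void main(String[] args) {
        // Assumed executions per hour, listed from most to least frequent
        // (e.g. gathered from interviews or an existing system's logs).
        Map<String, Integer> executionFrequency = new LinkedHashMap<>();
        executionFrequency.put("Withdraw cash", 900);
        executionFrequency.put("Check balance", 600);
        executionFrequency.put("Transfer funds", 300);
        executionFrequency.put("Change PIN", 40);

        int total = executionFrequency.values().stream().mapToInt(Integer::intValue).sum();

        // Keep the use cases that together make up roughly 80% of the load (assumed cut-off).
        int accumulated = 0;
        for (Map.Entry<String, Integer> e : executionFrequency.entrySet()) {
            boolean selected = accumulated < 0.8 * total;   // still below the cut-off before adding this one
            accumulated += e.getValue();
            System.out.printf("%-15s %4d/hour -> %s%n",
                    e.getKey(), e.getValue(), selected ? "include in workload model" : "omit");
        }
    }
}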

Identify and Describe Test Cases
Purpose:
 To identify and describe the test conditions to be used for testing
 To identify the specific data necessary for testing
 To identify the expected results of test
The following is performed for each requirement for test (identified in the Plan Test activity in the previous workflow detail).

Analyze application workflow
The purpose of this step is to identify and describe the actions and / or steps of the actor when interacting with the system. For each use case or requirement, either:
 review the use case flow of events, or
 walk through and describe the actions / steps the actor takes when interacting with the system.
These early test procedure descriptions should be high-level; that is, the actions should be described as generically as possible, without specific references to actual components or objects. These test procedure descriptions are then used to identify and describe the test cases necessary to test the application.

Identify and describe test cases
The purpose of this step is to establish what test cases are appropriate for the testing of each requirement for test. The primary inputs for identifying test cases are:
 The use cases that, at some point, traverse your target-of-test (system, subsystem, or component)
 The design model
 Any technical or supplemental requirements
 The target-of-test application map (as generated by an automated test script generation tool)
Describe the test cases by stating:
 The test condition (or object or application state) being tested
 The use case, use-case scenario, or technical or supplemental requirement the test case is derived from
 The expected result, in terms of the output state, condition, or data value(s)
Note: If testing of a previous version has already been implemented, there will be existing test cases. These test cases should be reviewed for use and designed for regression testing. Regression test cases should be included in the current iteration and combined with the new test cases that address new behavior.

Identify test case data
Using the matrix created above, review the test cases and identify the actual values that support the test cases. Data for three purposes will be identified during this step:
 Data values used as input
 Data values for the expected results
 Data needed to support the test case, but which is neither used as input nor output for a specific test case
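A minimal sketch of a test case record carrying the three kinds of data identified above. The use case, condition, and values are invented for illustration; in practice they come from the use-case flow of events and the data matrix.

import java.util.List;
import java.util.Map;

/** Illustrative sketch: one test case with input data, expected result, and supporting data. */
public class TestCaseRecord {

    record TestCase(String id,
                    String derivedFrom,                   // use case, scenario, or supplemental requirement
                    String condition,                     // object or application state being tested
                    Map<String, String> inputData,        // data values used as input
                    Map<String, String> expectedResult,   // expected output state / data values
                    List<String> supportingData) { }      // data needed but not used as input or output

    public static void main(String[] args) {
        TestCase tc = new TestCase(
                "TC-WD-03",
                "Use case: Withdraw cash (alternate flow: insufficient funds)",
                "Account exists and is open",
                Map.of("accountId", "1001", "amount", "500.00"),
                Map.of("message", "Insufficient funds", "accountBalance", "120.00 (unchanged)"),
                List.of("Reference customer record for account 1001"));
        System.out.println(tc);
    }
}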


Identify and Structure Test Procedures
Purpose:
 To analyze use case workflows and test cases to identify test procedures
 To identify, in the test model, the relationship(s) between test cases and test procedures, creating the test model

Using Rational TestManager
In the Plan Test activity, you identified the test requirements; that is, what needs to be tested. In test design, you decide how test requirements will be tested. This leads to formal test procedures that identify test set-up, execution instructions, and evaluation methods. You use this information as the specifications for recording or programming your test scripts. Then, with TestManager you link this information to a test script and generate test coverage reports that track your test design progress.
1. Plan script
2. Edit script properties

Review application workflows or application map
Review the application workflow(s) and the previously described test procedures to determine if any changes have been made to the use case workflow that affect the identification and structuring of test procedures. The reviews are done in a similar fashion to the analysis done previously:
 Review the use case flow of events, and / or
 Walk through the steps the actor takes when interacting with the system, and / or
 Review the described test procedures, and / or
 Review the application map.
If utilizing an automated test script generation tool, review the generated application map (used to generate the test scripts) to ensure the hierarchical list of UI objects representing the controls in the user interface of the target-of-test is correct and relevant to your test and / or the use cases being tested.

Develop the test model
The purpose of the test model is to communicate what will be tested, how it will be tested, and how the tests will be implemented. For each described test procedure (or application map and generated test scripts), the following is done to create the test model:
 Identify the relationship or sequence of the test procedure to other test procedures (or of the generated test scripts to each other).
 Identify the start condition or state and the end condition or state for the test procedure.
 Indicate the test cases to be executed by the test procedure (or generated test scripts).
The following should be considered while developing the test model:
 Many test cases are variants of one another, which might mean that they can be satisfied by the same test procedure.
 Many test cases may require overlapping behavior to be executed. To be able to reuse the implementation of such behavior, you can choose to structure your test procedures so that one test procedure can be used for several test cases.

 Many test procedures may include actions or steps that are common to many test cases or other test procedures. In these instances, it should be determined if a separate structured test procedure (for those common steps) should be created, while the test case specific steps remain in a separate structured test procedure. This is done to maximize reuse and minimize test procedure maintenance.
When using an automated test script generation tool, review the application map and generated test scripts to ensure the following is reflected in the test model:
o The appropriate / desired controls are included in the application map and test scripts.
o The controls are exercised in the desired order.
o Test cases are identified for those controls requiring test data.
o The windows or dialog boxes in which the controls are displayed.

 Structure test procedures
The previously described test procedures are insufficient for the implementation and execution of test. Proper structuring of the test procedures includes revising and modifying the described test procedures to include, at a minimum, the following information:
 Set-up: how to create the condition(s) for the test case(s) that is (are) being tested, and what data is needed (either as input or within the test database)
 Starting condition, state, or action for the structured test procedure
 Instructions for execution: the detailed steps / actions taken by the tester to implement and execute the tests (to the degree of stating the object or component), which must be executed in sequence
 Data values entered (or referenced test case)
 Expected result (condition or data, or referenced test case) for each action / step
 Evaluation of results: the method and steps used to analyze the actual results obtained, comparing them with the expected results
 Ending condition, state, or action for the structured test procedure
Test procedures can be manually executed or implemented as test scripts (for automated execution). When a test procedure is automated, the resulting computer-readable file is known as a test script.
Note: a described test procedure, when structured, may become several structured test procedures.
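When a structured test procedure is automated, the resulting test script typically mirrors the same sections: set-up, starting condition, execution steps with data values, expected results, evaluation, and ending condition. The sketch below shows that shape in plain Java; the class and method names are illustrative, not the output of any particular recording tool.

/** Illustrative sketch: a test script mirroring the sections of a structured test procedure. */
public class WithdrawCashTestScript {

    public static void main(String[] args) {
        setUp();                         // set-up: create test data and conditions
        boolean passed = execute();      // instructions for execution + evaluation of results
        tearDown();                      // return to the ending condition / state
        System.out.println("Test " + (passed ? "PASSED" : "FAILED"));
    }

    static void setUp() {
        // Assumed starting condition: test database holds account 1001 with balance 120.00.
        System.out.println("Set-up: initialize test data");
    }

    static boolean execute() {
        // Step 1: enter account id and amount (data values from the referenced test case).
        String actualMessage = targetOfTest("1001", "500.00");
        // Evaluation of results: compare the actual result with the expected result.
        String expectedMessage = "Insufficient funds";
        return expectedMessage.equals(actualMessage);
    }

    static void tearDown() {
        // Ending condition: restore the environment so the script can be re-executed.
        System.out.println("Tear-down: restore initial state");
    }

    /** Stand-in for the real target-of-test; always reports insufficient funds in this sketch. */
    static String targetOfTest(String accountId, String amount) {
        return "Insufficient funds";
    }
}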

Review and Assess Test Coverage
Purpose: To identify and describe the measures of test that will be used to identify the completeness of testing.

Identify test coverage measures
There are two methods of determining test coverage:
 Requirements based coverage
 Code based coverage
Both identify the percentage of the total testable items that will be (or have been) tested, but they are collected or calculated differently.
 Requirements based coverage is based upon using use cases, use case flows, requirements, or test conditions as the measure of total test items, and can be used during test design (see the sketch following this section).
 Code based coverage uses the generated code as the total test item and measures a characteristic of the code that has been executed during testing (such as lines of code executed or the number of branches traversed). This type of coverage measurement can only be implemented after the code has been generated.
Identify the method to be used and state how the measurement will be collected, how the data should be interpreted, and how the metric will be used in the process.

Generate and distribute test coverage reports
Identified in the test plan is the schedule of when test coverage reports are generated and distributed. These reports should be distributed to, at least, the following workers:
 All test workers
 Developer representative
 Shareholder representative
 Stakeholder representative
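A minimal sketch of how requirements-based coverage can be calculated: the percentage of requirements for test that have at least one test case designed (or executed) against them. The requirement names and the traceability map are illustrative assumptions.

import java.util.List;
import java.util.Map;

/** Illustrative sketch: requirements-based test coverage as covered / total testable items. */
public class RequirementsCoverage {
    public static void main(String[] args) {
        // Assumed traceability: requirement for test -> test cases designed against it.
        Map<String, List<String>> traceability = Map.of(
                "Withdraw cash",  List.of("TC-WD-01", "TC-WD-02", "TC-WD-03"),
                "Check balance",  List.of("TC-CB-01"),
                "Transfer funds", List.of(),          // not yet covered
                "Change PIN",     List.of("TC-CP-01"));

        long covered = traceability.values().stream().filter(tcs -> !tcs.isEmpty()).count();
        double coverage = 100.0 * covered / traceability.size();
        System.out.printf("Requirements-based coverage: %.0f%% (%d of %d requirements for test)%n",
                coverage, covered, traceability.size());
    }
}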

Workflow Detail – Implement Test

Activities in this workflow detail: Implement Test, Design Test Packages and Classes, and Implement Test Subsystems and Components.

Activity: Implement Test
Inputs From:
 Implemented Component
 Test Cases (updated)
 Design Model
 Test Procedures (updated)
 Test Packages and Test Classes
 Component Build
Resulting Artifacts:
 Test Scripts
 Test Cases (Updated)
 Test Procedures (Updated)
 Test Packages and Test Classes
 Test Subsystems and Test Components
Purpose:
 To create or generate reusable test scripts
 To maintain traceability of the test implementation artifacts back to the associated test cases and use cases or requirements for test
Steps:
 Record, generate, or program test scripts
 Identify test-specific functionality in the design and implementation models
 Establish external data sets

Record, generate, or program test scripts
Purpose: To create or automatically generate the appropriate test scripts which implement (and execute) the test cases and test procedures as desired.
For each structured test procedure in the test model, at least one test script is created or generated. The following considerations should be addressed when creating, generating, or acquiring test scripts:
1. Maximize test script reuse
2. Minimize test script maintenance
3. Use existing scripts when feasible
4. Use test tools to create test scripts instead of programming them (when feasible)
5. Refer to application GUI objects and actions in the method that is most stable (such as by object name or using mouse clicks)
The following steps are performed to create, generate, or acquire test scripts:
1. Review existing test scripts for potential use
2. Set-up the test environment (including all hardware, software, data, tools, and application build)

3. Initialize the environment (to ensure the environment is in the proper state or condition for the test)
4. Create or acquire the test scripts:
5. Record / capture: for each structured test procedure, execute the test procedure to create a new test script by following the steps / actions identified in the structured test procedure and using the appropriate recording techniques (to maximize reuse and minimize maintenance)
6. Modifying existing scripts: edit the existing scripts manually, or delete the non-required instructions and re-record the new instructions using the recording description above
7. Programming: for each structured test procedure, generate the instructions using the appropriate programming techniques
8. Continue to create, generate, or acquire test scripts until the desired / required test scripts have been created
9. Modify the test scripts as necessary (as defined in the test model)

Test / debug test scripts
Upon the completion of creating, generating, or acquiring test scripts, they should be tested / debugged to ensure the test scripts implement the tests appropriately and execute properly. This step should be performed using the same version of the software build used to create / acquire the test scripts. The following steps are performed to test / debug test scripts:
1. Set-up the test environment (if necessary)
2. Re-initialize the environment
3. Execute the test scripts
4. Evaluate results
5. Determine the appropriate next action:
a. Results as expected / desired: no actions necessary
b. Unexpected results: determine the cause of the problem and resolve it

Review and evaluate test coverage
Upon the completion of creating, generating, or acquiring test scripts, a test coverage report should be generated to verify that the test scripts have achieved the desired test coverage.

Identify test-specific functionality in the design and implementation models
Purpose: To specify the requirements for software functions needed to support the implementation or execution of testing.
Identify the test-specific functionality that should be included in the design model and in the implementation model. The most common use of test-specific functionality is during integration test, where there is the need to provide stubs or drivers for components or systems that are not yet included or implemented. There are two styles:

 Stubs and drivers that are simply "dummies", with no functionality other than being able to enter a specific value (or values) or return a pre-defined value (or values).
 Stubs and drivers that are more intelligent and can "simulate" more complex behavior.
Use the second style prudently, because it takes more resources to implement. A balance is necessary between the value added (by creating a complex stub / driver) and the effort needed to implement and test the stub / driver.

Establish external data sets
Purpose: To create and maintain data, stored externally to the test scripts, that are used by the test scripts during test execution.
External data sets provide value to test in the following ways:
 Data is external to the test script, eliminating hard-coded references in the test script.
 External data can be modified easily with little or no test script impact.
 Additional test cases can easily be added to the test data with little or no test script modifications.
 External data can be shared with many test scripts.
 External data sets can contain data values used to control the test scripts (conditional branching logic).

Activity: Design Test Packages and Classes
Purpose: To design test-specific functionality
Steps:
 Identify Test-Specific Packages and Classes
 Design Interface to Automated Test Tool
 Design Test Procedure Behavior

Identify Test-Specific Packages and Classes
Purpose: To identify and design the classes and packages that will provide the needed test-specific functionality.
Based on input from the test designer, identify and specify test-specific classes and packages in the design model. A driver or stub of a design class has the same methods as the original class, but there is no behavior defined for the methods other than to provide input (to the target-of-test) or return a pre-defined value (to the target-of-test). A driver or stub of a design package contains simulated classes for the classes that form the public interface of the original package.
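The two styles of stubs described in this section can be shown side by side. Assuming the target-of-test calls a CreditCheck interface that is not yet implemented, the first stub only returns a fixed value, while the second "simulates" slightly more behavior. Both the interface and the rule in the intelligent stub are invented for the example.

/** Illustrative sketch: a "dummy" stub versus a more intelligent stub for a missing component. */
public class StubStyles {

    /** Public interface of the component that is not yet available in this build. */
    interface CreditCheck {
        boolean approve(String customerId, double amount);
    }

    /** Style 1: a dummy stub that only returns a pre-defined value. */
    static class DummyCreditCheckStub implements CreditCheck {
        public boolean approve(String customerId, double amount) {
            return true;   // always approve; no behavior beyond the fixed return value
        }
    }

    /** Style 2: a more intelligent stub that simulates simple behavior (assumed rule). */
    static class SimulatedCreditCheckStub implements CreditCheck {
        public boolean approve(String customerId, double amount) {
            return amount <= 1000.00;   // approve small amounts, reject large ones
        }
    }

    public static void main(String[] args) {
        CreditCheck dummy = new DummyCreditCheckStub();
        CreditCheck simulated = new SimulatedCreditCheckStub();
        System.out.println("Dummy stub, 5000.00: " + dummy.approve("C-42", 5000.00));
        System.out.println("Simulated stub, 5000.00: " + simulated.approve("C-42", 5000.00));
    }
}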

Design Interface to Automated Test Tool
Purpose: To identify the interface necessary for the integration of an automated test tool with test-specific functionality.
Identify what behavior is needed to make your test automation tool communicate with your target-of-test in an efficient way, and identify the appropriate design classes and packages.

Design Test Procedure Behavior
Purpose: To automate test procedures for which there is no automated test tool available.
Use the test cases, and the use cases they derive from, as input. Identify and describe the appropriate design classes and packages.

Activity: Implement Test Components and Subsystems
Purpose: To implement test-specific functionality
Steps:
 Implement and Unit Test Drivers / Stubs
 Implement and Unit Test Interface to Automated Test Tool(s)
 Implement and Unit Test Test-Procedure Behavior

Workflow Detail – Execute Test In Integration Test Stage

Activity: Execute Test
Purpose: To execute tests and capture test results.
Inputs From:
 Test Scripts
 Builds
Resulting Artifacts:
 Test Results
 Defects
Steps:
 Execute Test Procedures
 Evaluate Execution of Test
 Verify Test Results
 Recover From Halted Tests

Execute Test Procedures
 Set-up the test environment to ensure that all the needed components (hardware, software, data, tools, etc.) have been implemented and are in the test environment.
 Initialize the test environment to ensure all components are in the correct initial state for the start of testing.

 Execute the test procedures.
o Note: executing the test procedures will vary depending upon whether testing is automated or manual.
o Automated testing: the test scripts created during the Implement Test activity are executed.
o Manual execution: the structured test procedures developed during the Design Test activity are used to manually execute the test.

Evaluate Execution of Test
Purpose:
 To determine whether testing executed to completion or halted
 To determine if corrective action is required
The execution of testing ends or terminates in one of two conditions:
 Normal: all the test procedures (or scripts) execute as intended and to completion. If testing terminates normally, continue with Verify Test Results.
 Abnormal or premature: the test procedures (or scripts) did not execute completely or as intended. When testing ends abnormally, the test results may be unreliable. The cause of the abnormal / premature termination needs to be identified, corrected, and the tests re-executed before any additional test activities are performed. If testing terminates abnormally, continue with Recover From Halted Tests.

Verify Test Results
Purpose:
 To determine if the test results are reliable
 To identify the appropriate corrective action if the test results indicate flaws in the test effort or artifacts
Upon the completion of testing, the test results should be reviewed to ensure that the test results are reliable and that reported failures, warnings, or unexpected results were not caused by influences external to the target-of-test, such as improper set-up or data. The most common failures reported when test procedures and test scripts execute completely, and their corrective actions, are given below:
 Test verification failures - this occurs when the actual result and the expected result do not match. Verify that the verification method(s) used focus only on the essential items and / or properties, and modify them if necessary.
 Unexpected GUI windows - this occurs for several reasons. The most common is when a GUI window other than the expected one is active, or the number of displayed GUI windows is greater than expected. Ensure that the test environment has been set-up and initialized as intended for proper test execution.
 Missing GUI windows - this failure is noted when a GUI window is expected to be available (but not necessarily active) and is not. Verify whether the missing windows were actually removed from the target-of-test. Ensure that the test environment has been set-up and initialized as intended for proper test execution.

If the reported failures are due to errors identified in the test artifacts, or due to problems with the test environment, the appropriate corrective action should be taken and the testing re-executed. If the test results indicate the failures are genuinely due to the target-of-test, then the Execute Test activity is complete. The output artifacts for this activity are the test results. For additional information, see "Recover From Halted Tests" below.

Recover From Halted Tests
Purpose:
 To determine the appropriate corrective action to recover from a halted test
 To correct the problem, recover, and re-execute the tests
There are two major types of halted tests:
 Fatal errors - the system fails (network failures, hardware crashes, etc.)
 Test script command failures - specific to automated testing, this is when a test script cannot execute a command (or line of code)
Both types of abnormal termination to testing may exhibit the same symptoms:
 Many unexpected actions, windows, or events occur while the test script is executing
 The test environment appears unresponsive or in an undesirable state (such as hung or crashed)
To recover from halted tests, do the following:
 Determine the actual cause of the problem
 Correct the problem
 Re-set-up the test environment
 Re-initialize the test environment
 Re-execute the tests

Workflow Detail – Execute Test in System Test Stage
Purpose: The purpose of the System Test Stage is to ensure that the complete system functions as intended.
The system integrator compiles and links the system in increments. Each increment needs to go through testing of the functionality that has been added, as well as all the tests the previous builds went through (regression tests). Within an iteration, you will execute system testing several times until the whole system (as defined by the goal of the iteration) functions as intended and meets the test's success or completion criteria.
Test execution should be done under a controlled environment. This includes:
 A test system that is isolated from non-test influences
 The ability to set-up a known initial state for the test system(s) and return to this state upon the completion of testing (to re-execute tests)
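To tie execution and verification together, here is a minimal sketch of a driver that runs through a set of captured results, compares actual results with expected results, and reports test verification failures. The procedure names, result strings, and the simple string comparison are assumptions made for illustration, not part of any specific test tool.

import java.util.LinkedHashMap;
import java.util.Map;

/** Illustrative sketch: verify actual results against expected results after test execution. */
public class ExecuteAndVerify {

    record ProcedureResult(String expected, String actual) { }

    public static void main(String[] args) {
        // Assumed results captured during test execution.
        Map<String, ProcedureResult> results = new LinkedHashMap<>();
        results.put("TP-01 Withdraw cash", new ProcedureResult("Dispense 100.00", "Dispense 100.00"));
        results.put("TP-02 Check balance", new ProcedureResult("Balance 120.00", "Balance 115.00"));

        int failures = 0;
        for (Map.Entry<String, ProcedureResult> e : results.entrySet()) {
            boolean pass = e.getValue().expected().equals(e.getValue().actual());
            if (!pass) failures++;
            System.out.println(e.getKey() + ": " + (pass ? "PASS" : "test verification failure"));
        }
        System.out.println(failures == 0
                ? "All results as expected"
                : failures + " failure(s) - review the test artifacts and the target-of-test");
    }
}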