
UNIT V TEST AUTOMATION

Software test automation – skill needed for automation – scope of automation – design and
architecture for automation – requirements for a test tool – challenges in automation – Test
metrics and measurements – project, progress and productivity metrics.
SOFTWARE TEST AUTOMATION:
Developing software to test the software is called test automation.

Test automation has several advantages. They are:


1. Automation saves time, as software can execute test cases faster than humans can.
2. Test automation can free the test engineers from mundane tasks and make them focus on
more creative tasks.
3. Automated tests can be more reliable.
4. Automation helps in immediate testing.
5. Automation can protect an organization against attrition of test engineers.
6. Test automation opens up opportunities for better utilization of global resources.
7. Certain types of testing cannot be executed without automation.
8. Automation means end-to-end, not test execution alone.
SKILLS NEEDED FOR AUTOMATION:
There are different generations of automation. The skills required for automation depend on
which generation of automation the company is in or desires to be in the near future.
Automation is classified into three generations:
i. First Generation – Record and Playback: Record and playback avoids the
repetitive nature of executing tests.
A test engineer records the sequence of actions, keyboard characters, or mouse
clicks, and the recorded script can be played back later in the same order as it
was recorded.
Since a recorded script can be played back multiple times, the testing can be done
faster and multiple times.
ii. Second Generation – Data-driven: This method helps in developing test scripts that
generate the set of input conditions and corresponding expected output.
This enables the tests to be repeated for different input and output conditions (a
sketch is given after this list).
iii. Third Generation – Action-driven: All actions that appear on the application are
automatically tested, based on a generic set of controls defined for automation.
The set of actions are represented as objects, and those objects are reused.
The user needs to specify only the operations; everything else needed for those
actions is automatically generated (a sketch is given after this list).
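To make the data-driven idea concrete, the sketch below (in Python, not tied to any
particular tool) keeps the test data apart from the test logic; the add() function is a
hypothetical stand-in for the feature under test:

    # Hypothetical feature under test; in practice this would call the product.
    def add(a, b):
        return a + b

    # Input and expected-output values live in data, not in the script, so new
    # conditions can be added without touching the test logic.
    test_data = [
        (2, 3, 5),
        (-1, 1, 0),
        (0, 0, 0),
    ]

    for a, b, expected in test_data:
        actual = add(a, b)
        status = "PASS" if actual == expected else "FAIL"
        print(f"add({a}, {b}) = {actual}, expected {expected}: {status}")

Similarly, a minimal sketch of the action-driven idea, with made-up action names: the
test case is only a sequence of named operations, and the framework supplies the
reusable code behind each name:

    # Hypothetical reusable actions; a real framework would derive these from
    # the generic set of controls defined for automation.
    def login(user):
        print(f"login as {user}")

    def open_screen(screen):
        print(f"open screen {screen}")

    def verify_title(expected):
        print(f"verify title == {expected!r}")

    ACTIONS = {"login": login, "open_screen": open_screen,
               "verify_title": verify_title}

    # The test case itself is pure data: a sequence of (action, arguments) steps.
    test_steps = [
        ("login", ["admin"]),
        ("open_screen", ["reports"]),
        ("verify_title", ["Monthly Report"]),
    ]

    for action, args in test_steps:
        ACTIONS[action](*args)  # the framework dispatches each step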

Classification of Skills for Automation:


First Generation – skills for test case automation:
   Record-playback tools usage
   Scripting languages
Second Generation – skills for test case automation:
   Scripting languages
   Programming languages
   Knowledge of data generation techniques
   Usage of the product under test
Third Generation – skills for test case automation:
   Programming languages
   Design and architecture of the product under test
Third Generation – skills for framework creation:
   Design and architecture skills for framework creation
   Generic test requirements for multiple products
   Usage of the framework
SCOPE OF AUTOMATION:
The automation requirements define what needs to be automated, considering various aspects.
The specific requirements can vary from product to product, from situation to situation, from
time to time.
The scope of automation can be identified by the aspects given below:
i. Identifying the types of testing amenable to automation
a. Stress, reliability, scalability and performance testing: These types of testing
require the test cases to be run from a large number of different machines for a
time period. Test cases belonging to these testing types become the first candidates
for automation.
b. Regression Tests: They are repetitive in nature. These test cases are executed
multiple times during the product development phases. Test automation will save
time and effort for this type of test.
c. Functional Tests: These kinds of tests may require a complex set up and thus
require specialized skill, which may not be available on an ongoing basis. Automation
can enable less-skilled people to run these tests on an ongoing basis.
ii. Automating Areas Less prone to change:
In a product scenario, the changes in requirements are quite common.
Automation should consider those areas where requirements go through lesser or no
changes.
Changes in requirements tend to impact scenarios and new features, not the
basic functionality of the product. Hence, such basic functionality of the product
has to be considered first for automation, so that it can be used as a “regression test bed”.
iii. Automate Tests that pertain to standards:
One of the tests that products may have to undergo is compliance to standards. For
example, a product providing a JDBC interface should satisfy the standard JDBC
tests.
Automating for standards provides a dual advantage. Test suites developed for
standards are not only used for product testing but can also be sold as test tools for the
market.
Automating for standards creates new opportunities for them to be sold as
commercial tools.
iv. Management Aspects in automation: Adequate effort has to be spent to obtain management
commitment.
The automated test cases need to be maintained till the product reaches obsolescence.
Since it involves significant effort to develop and maintain automated tools, obtaining
management commitment is an important activity.
Return on investment is another aspect to be considered.
Effort estimates for automation should give a clear indication to the management on
the expected return on investment.
DESIGN AND ARCHITECTURE FOR AUTOMATION:
Design and architecture is an important aspect of automation. The design has to represent all requirements
in modules and in the interactions between modules. Both internal interfaces and external interfaces
have to be captured by the design and architecture.
Consider the figure below:

The thin arrows indicate the internal interfaces and the thick arrows indicate the external
interface. Architecture for test automation involves two major heads:
1. A test infrastructure that covers a test case database and
2. A defect database or defect repository.
These are shown as external modules.
1. External Modules:
There are two external modules namely
TCDB (test case database)
Defect DB (defect database).
All the test cases, the steps to execute them and the history of their execution are stored
in the TCDB.
The test cases in TCDB can be manual or automated.
Defect DB or defect database or defect repository contains details of all the defects that
are found in various products that are tested in a particular organization.
It contains defects and all the related information.
Test engineers submit the defects for manual test cases.
For automated test cases, the framework can automatically submit the defects to the
defect db during execution.
2. Scenario and Configuration File Modules:
o A configuration file contains a set of variables that are used in automation. The variables
could be for the test framework or for other modules in automation, such as tools and
metrics.
o A configuration file is important for running the test cases for various execution
conditions and for running the tests for various input and output conditions and states.
o The values of variables in this configuration file can be changed dynamically to achieve
different execution, input, output and state conditions.
3. Test Cases and Test Framework Modules:
o A test framework is a module that combines “what to execute” and “how they have to be
executed”. It picks up the specific test cases that are automated and picks up the
scenarios and executes them.
o The test framework is considered the core of automation design. It subjects the test cases
to different scenarios. The framework monitors the results of every iteration and the
results are stored.
o The test framework contains the main logic for interacting, initiating, and controlling all
modules.
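As a rough sketch of this core logic (with the TCDB, scenarios, and configuration
reduced to in-memory Python values for illustration), the driver of a framework might
look like this:

    # Minimal framework driver sketch. A real framework would pull test cases
    # from the TCDB and scenarios/variables from the scenario and config files.
    def run_framework(test_cases, scenarios, config):
        results = []
        for scenario in scenarios:                  # "how" to execute
            for name, test in test_cases.items():   # "what" to execute
                try:
                    passed = test(scenario, config)
                    results.append((name, scenario, "PASS" if passed else "FAIL"))
                except Exception as exc:            # insulate one failure from the rest
                    results.append((name, scenario, f"ERROR: {exc}"))
        return results  # handed to the results store and report generator

    # Example usage with one trivial test case and two scenarios.
    print(run_framework(
        {"tc_login": lambda scenario, config: True},
        ["normal", "stress"],
        {"TIMEOUT": 30},
    ))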
4. Tools and Result Modules
o When a test framework performs its operations, there are a set of tools that may be
required. For example, when test cases are stored as source code files in TCDB, they
need to be extracted and compiled by build tools.
o In order to run the compiled code, certain runtime components and utilities may be required.
o When a test framework executes a set of test cases with a set of scenarios for the
different values provided by the configuration file, the results for each of the test cases,
along with the scenarios and variable values, have to be stored for future analysis and
action.
o The history of all the previous tests should be recorded and kept as archives.
5. Report Generator and Reports/Metrics Modules
 Once the results of a test run are available, the next step is to prepare the test reports
and metrics.
 There should be customized reports such as an executive report, which gives a very
high-level status, and technical reports, which give a moderate level of detail of the test
runs.
 The module that takes the necessary inputs and prepares a formatted report is called a
report generator.
 Once the results are available, the report generators can generate metrics.
 All the reports and metrics that are generated are stored in the reports/metrics
modules of automation for further use and analysis.
GENERIC REQUIREMENTS FOR TEST TOOL/FRAMEWORK
REQUIREMENT 1: No hard coding in the test suite.
One of the most important requirements for a test suite is to keep all variables separately in a file. By
following this practice, the source code for the test suite need not be modified every time the
tests are run for different values of the variables.
The variables for the test suite are called configuration variables. The file in which all variable
names and their associated values are kept is called the configuration file.
Providing an inline comment for each of the variables will make the test suite more usable and may
avoid improper usage of variables.
Example:
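The snippet below is an illustrative configuration file (written as a Python config
module, with made-up variable names) in the spirit described above:

    # config.py -- illustrative configuration variables with inline comments
    PRODUCT_VERSION = "5.1"      # version of the product under test
    TEST_REPEAT = 3              # number of times each test case is executed
    LOG_LEVEL = "DEBUG"          # one of DEBUG, INFO, ERROR
    TCDB_PATH = "/testdb/tcdb"   # location of the test case database

Because the test suite reads these values instead of hard coding them, the same tests
can be run for different values by editing only this file.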

REQUIREMENT 2: Test case/suite should be expandable.


The test cases go through a large number of changes, and additionally there are situations where
new test cases need to be added to the test suite. Test case modification and new test case insertion
should not result in the existing test cases failing.
REQUIREMENT 3: Reuse of code for different types of testing, test cases.
The functionality of the product when subjected to different scenarios becomes test cases for
different types of testing. This encourages the reuse of code in automation. All the functions that
are needed by more than one test case can be separated and included in libraries. When writing
code for automation, adequate care has to be taken to make it modular by providing functions,
libraries, and include files.
REQUIREMENT 4: Automatic set up and clean up.
Each test program should have a setup program that creates the necessary setup before
executing the test cases. A setup for one test case may work negatively for another test case.
Hence, it is important not only to create the setup but also to undo the setup soon after the test
execution of the test case.
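A minimal sketch of this requirement using Python's unittest module, where setUp and
tearDown run before and after every test case; the helpers here are hypothetical
stand-ins for real environment setup code:

    import unittest

    # Hypothetical helpers standing in for real setup/cleanup code.
    def create_test_user(name):
        print(f"creating user {name}")
        return name

    def delete_test_user(user):
        print(f"deleting user {user}")

    def login(user, password):
        return password == "secret"   # stand-in for the product's login

    class LoginTests(unittest.TestCase):
        def setUp(self):
            # create the setup needed before each test case runs
            self.user = create_test_user("tester")

        def tearDown(self):
            # undo the setup soon after the test case executes, so it
            # cannot work negatively for the next test case
            delete_test_user(self.user)

        def test_valid_login(self):
            self.assertTrue(login(self.user, "secret"))

    if __name__ == "__main__":
        unittest.main()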
REQUIREMENT 5: Independent test cases
The test cases need to be independent not only in the design phase, but also in the execution
phase. To execute a particular test case, it should not expect any other test case to have been
executed before it, nor assume that certain other test cases will be run after it.
REQUIREMENT 6: Test Case Dependency
There may be a need for test cases to depend on others.
When a dependent test case is selected for execution, the test cases it depends on must be
executed before (or after) it.
A test tool or framework should provide both features.
REQUIREMENT 7: Insulating test cases during execution
To avoid test cases failing due to some unforeseen events, the framework should provide an
option for users to block some of the events.
There has to be an option in the framework to specify what events can affect the test suite and
what should not.
REQUIREMENT 8: Coding standards and directory structure
Coding standards and a proper directory structure for a test suite may help new engineers
understand the test suite quickly and help in maintaining it.
REQUIREMENT 9: Selective execution of test cases
The test tools or a framework should have a facility for the test engineer to select a particular test
case or a set of test cases and execute them.
The selection of test cases need not be in any order and any combination should be allowed.
CHALLENGES IN AUTOMATION:
 Automation takes time and effort and pays off in the long run.
 Automation requires significant initial outlay of money as well as a steep learning curve
for the test engineers before it can start paying off.
 The main challenge is that, because of the heavy front-loading of test automation costs,
management starts to look for an early payback.
 Successful test automation endeavors are characterized by unflinching management
commitment, a clear vision of the goals, and the ability to set realistic short-term goals
that track progress with respect to the long-term vision.
TEST METRICS AND MEASUREMENTS –PROJECT, PROGRESS AND
PRODUCTIVITY METRICS
A metric is a quantitative measure of the degree to which a system, system component, or
process possesses a given attribute.
Metrics derive information from raw data with a view to help in decision making.
Metrics can be classified into 1. Product metrics and 2. Process metrics.
Product metrics can be further classified into
i. Project Metrics: A set of metrics that indicates how the project is planned and
executed.
ii. Progress Metrics: A set of metrics that tracks how the different activities of the
project are progressing.
iii. Productivity Metrics: A set of metrics that takes into account various productivity
numbers that can be collected and used for planning and tracking testing activities.
PROJECT METRICS:
A typical project starts with requirements gathering and ends with product release. All the
phases that fall in between these points need to be planned and tracked.
Effort and schedule are the two factors to be tracked for any phase or activity.
a. Effort Variance (Planned vs. Actual)
Effort variance is the deviation of the actual effort from the estimated effort.
Effort variance % = ((Actual effort – Revised estimate) / Revised estimate) * 100
A variance of more than 5% in any of the Software Development Life Cycle (SDLC) phases
indicates scope for improvement in the estimation. The variance can be negative; a negative
variance is an indication of an overestimate.
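For example (with assumed numbers), if the revised estimate for the system testing phase
was 200 person-hours and the actual effort came to 230 person-hours, the effort variance
is ((230 – 200) / 200) * 100 = +15%, indicating an underestimate.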
These variance numbers, along with analysis, can help in better estimation for the next release or
the next revised estimation cycle.
b. Schedule Variance (Planned vs. Actual)
Schedule variance is the deviation of the actual schedule from the estimated schedule. Schedule
variance is calculated at the end of every milestone to find out how well the project is doing with
respect to the schedule.
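By analogy with effort variance, schedule variance can be expressed as:
Schedule variance % = ((Actual duration – Estimated duration) / Estimated duration) * 100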

PROGRESS METRICS:
Any project needs to be tracked from two angles.
i. How well the project is doing with respect to effort and schedule.
ii. How well the product is meeting the quality requirements for the release.
a. Test Defect Metrics: These focus on the number of defects found during testing.
1. Defect find rate: When the total number of defects found in the product is tracked and
plotted at regular intervals from the beginning to the end of a product development cycle, it may
show a pattern of defect arrival. The idea of testing is to find as many defects as possible early in
the cycle. Once the majority of the modules become available and the defects that are blocking the
tests are fixed, the defect arrival rate increases.
After a certain period of defect fixing and testing, the arrival of defects tends to slow down
and a continuation of that trend enables product release.
2. Defect Fix Rate: If the goal of testing is to find defects as early as possible, it is natural to
expect that the goal of development should be to fix defects as soon as they arrive; the defect
fix rate should be equal to the defect arrival rate. When defects are fixed in the product, it opens the
doors for the introduction of new defects. Hence it is a good idea to fix the defects early and test those
fixes thoroughly to find any introduced defects.
3. Outstanding defects rate:
Outstanding defect rate = Total defects found – Total defects fixed
When testing is in progress, the outstanding defects should be kept very close to zero so that
the development team’s bandwidth is available to analyze and fix the issues soon after they arrive.
b. Development Defect Metrics: It focuses on mapping the defects to different
components of the product.
1. Component-wise defect distribution: It is important to map the defects to different
components of the product so that they can be assigned to the appropriate developer to fix
those defects.
Based on the number of defects existing in each of the modules, the effort needed to fix
them, and the availability of skill sets for each of the modules, the project manager assigns
resources accordingly.
The defect classification as well as the total defects corresponding to each component in the
product helps the project manager in assigning and resolving those defects.
2. Defect Density and defect removal rate: The defect density metric maps the defects in the
product to the volume of code.
Defects per KLOC = Total defects found in the product / Total executable AMD lines of code (in KLOC)
where KLOC = kilo (thousand) lines of code and
AMD = added, modified, deleted code
Defect removal rate = ((Defects found by verification activities + Defects found in unit testing) /
Defects found by test teams) * 100
This formula helps in finding the efficiency of verification activities and unit testing which are
normally responsibilities of the development team and compare them to the defects found by the
testing teams.
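As an illustrative example with assumed numbers: if 50 KLOC of code was added, modified,
or deleted in a release and 250 defects were found in total, the defect density is
250 / 50 = 5 defects per KLOC. If verification activities and unit testing together found
150 defects while the test teams found 100, the defect removal rate is
(150 / 100) * 100 = 150%.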
PRODUCTIVITY METRICS:
Productivity metrics combine several measurements and parameters with the effort spent on the
product. They help in finding out the capability of the team and serve other purposes,
such as
 Estimating for the new release
 Finding out how well the team is progressing
 Estimating the number of defects that can be found
1. Defects per 100 Hours of Testing: The metric defects per 100 hours of testing normalizes the
defects found by the effort spent in testing.
Defects per 100 hours of testing = (Total defects found in the product for a period /
Total hours spent to get those defects) * 100
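For example (assumed numbers), if 40 defects were found over 500 hours of testing in a
period, the metric works out to (40 / 500) * 100 = 8 defects per 100 hours of testing.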
2. Test Cases Executed per 100 Hours of Testing: The number of test cases executed by
the test team for a particular duration depends on team productivity and quality of the
product.
Test cases executed per 100 hours of testing = (Total test cases executed for a period
/ Total hours spent in test execution) * 100
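For example (assumed numbers), executing 360 test cases in 600 hours of test execution
gives (360 / 600) * 100 = 60 test cases executed per 100 hours of testing.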

STATUS MEETINGS, REPORTS, AND CONTROL ISSUES


 Measurement-related data, and other useful test-related information such as test
documents and problem reports, should be collected and organized by the testing staff.
 The test manager can then use these items for presentation and discussion at the periodic
meetings used for project monitoring and controlling. These are called project status
meetings.
 Test-specific status meetings can also serve to monitor testing efforts, to report test
progress, and to identify any test-related problems.
 Testers can meet separately and use test measurement data and related documents to
specifically discuss test status. Following this meeting they can then participate in the
overall project status meeting, or they can attend the project meetings as an integral part
of the project team and present and discuss test-oriented status data at that time.
 Another type of project-monitoring meeting is the milestone meeting that occurs when a
milestone has been met.
 A milestone meeting is a mechanism for the project team to communicate with upper
management and in some cases user/client groups. Major testing milestones should also
precipitate such meetings to discuss accomplishments and problems that have occurred in
meeting each test milestone, and to review activities for the next milestone phase. Testing
staff, project managers, SQA staff, and upper managers should attend.
 Typical test milestone meeting attendees are shown in Figure below:

Test milestone meetings, participants, inputs, and outputs


 It is important that all test-related information be available at the meeting, for example,
measurement data, test designs, test logs, test incident reports, and the test plan itself.
 Status meetings usually result in some type of status report published by the project
manager that is distributed to upper management.
 Test managers should produce similar reports to inform management of test progress.
The reports should be brief and contain the following items:
• Activities and accomplishments during the reporting period. All tasks that were attended to
should be listed, as well as which are complete. Progress made since the last reporting period
should also be described.
• Problems encountered since the last meeting period. The report should include a discussion
of the types of new problems that have occurred, their probable causes, and how they impact on
the project. Problem solutions should be described.
• Problems solved. At previous reporting periods problems were reported that have now been
solved. Those should be listed, as well as the solutions and the impact on the project.
• Outstanding problems. These have been reported previously, but have not been solved to
date. Report on any progress.
• Current project (testing) state versus plan. This is where graphs using process measurement
data play an important role. Examples will be described below. These plots show the current
state of the project (testing) and trends over time.
• Expenses versus budget. Plots and graphs are used to show budgeted versus actual expenses.
Earned value charts and plots are especially useful here.
• Plans for the next time period. List all the activities planned for the next time period as well
as the milestones.
An example bar graph for monitoring purposes is shown in Figure below.

Graph showing trends in test execution


 The bar graph shows the numbers for tests that were planned, available, executed, and
passed during the first 6 weeks of the testing effort.
 The number of tests executed and the number passed have gone up over the 6 weeks, and the
number passed is approaching the number executed.
 The graph indicates to the manager that the number of executed tests is approaching the
number of tests available, and that the number of tests passed is also approaching the
number available, but not quite as quickly. All are approaching the number planned.
 If one extrapolates, the numbers should eventually converge at some point in time. The
bar graph, or a plot, allows the manager to identify the time frame in which this will
occur. Managers can also compare the number of test cases executed each week with the
number that was planned for execution.
 The Figure below shows another graph based on defect data.
Sample plot for monitoring fault detection during test.
 The total number of faults found is plotted against weeks of testing effort.
 In this plot the number tapers off after several weeks of testing. The number of defects
repaired is also plotted. It lags behind defect detection since the code must be returned to
the developers who locate the defects and repair the code.
 In many cases this will be a very time-consuming process.
The agenda for a status meeting on testing includes a discussion of the work in progress
since the last meeting period.
Measurement data is presented, graphs are produced, and progress is evaluated. Test logs
and incident reports may be examined to get a handle on the problems occurring. If there
are problem areas that need attention, they are discussed and solutions are suggested to
get the testing effort back on track (control it).
Problems currently occurring may be closely associated with risks identified by the test
manager through the risk analysis done in test planning.
As testing progresses, status meeting attendees have to make decisions about whether to
stop testing or to continue on with the testing efforts, perhaps developing additional tests
as part of the continuation process.
They need to evaluate the status of the current testing efforts as compared to the expected
state specified in the test plan. In order to make a decision about whether testing is
complete the test team should refer to the stop-test criteria included in the test plan.
If they decide that the stop-test criteria have been met, then the final status report for
testing, the test summary report should be prepared. This is a summary of the testing
efforts, and becomes a part of the project’s historical database.
At project postmortems the test summary report can be used to discuss successes and
failures that occurred during testing.
CRITERIA FOR TEST COMPLETION
In the test plan the test manager describes the items to be tested, test cases, tools needed,
scheduled activities, and assigned responsibilities.
As the testing effort progresses many factors impact on planned testing schedules and
tasks in both positive and negative ways. For example, although a certain number of test
cases were specified, additional tests may be required.
This may be due to changes in requirements, failure to achieve coverage goals, and
unexpected high numbers of defects in critical modules.
Other unplanned events that impact on test schedules are, for example, laboratories that
were supposed to be available are not (perhaps because of equipment failures) or testers
who were assigned responsibilities are absent (perhaps because of illness or assignments
to other higher priority projects).
Given these events and uncertainties, test progress does not often follow the plan. Test
managers and staff should do their best to take actions to get the testing effort on track.
Since it is not possible to determine with certainty that all defects have been identified,
the decision to stop testing always carries risks.
If we stop testing now, we do save resources and are able to deliver the software to our
clients. However, there may be remaining defects that will cause catastrophic failures, so
if we stop now we will not find them. As a consequence, clients may be unhappy with
our software and may not want to do business with us in the future.
Part of the task of monitoring and controlling the testing effort is making this decision
about when testing is complete under conditions of uncertainty and risk. Managers should
not have to use guesswork to make this critical decision. The test plan should have a set
of quantifiable stop-test criteria to support decision making.
The weakest stop test decision criterion is to stop testing when the project runs out of
time and resources. TMM level 1 organizations often operate this way and risk client
dissatisfaction for many projects.
TMM level 2 organizations plan for testing and include stop-test criteria in the test plan.
They have very basic measurements in place to support management when they need to
make this decision. The stop-test criteria are as follows.
1. All the Planned Tests That Were Developed Have Been Executed and Passed.
This may be the weakest criterion. It does not take into account the actual dynamics of the testing
effort, for example, the types of defects found and their level of severity.
2. All Specified Coverage Goals Have Been Met.
An organization can stop testing when it meets its coverage goals as specified in the test plan.
For example, using white box coverage goals we can say that we have completed unit test when
we have reached 100% branch coverage for all units.
3. The Detection of a Specific Number of Defects Has Been Accomplished.
This approach requires defect data from past releases or similar projects. The defect distribution
and total defects are known for these projects, and are applied to make estimates of the number and
types of defects for the current project.

Some possible stop-test criteria


4. The Rates of Defect Detection for a Certain Time Period Have Fallen Below a Specified
Level.
The manager can use graphs that plot the number of defects detected per unit time. When the rate
of detection of defects above a specified severity level falls below a given rate threshold, testing
can be stopped.
5. Fault Seeding Ratios Are Favorable.
The technique is based on intentionally inserting a known set of defects into a program. This
provides support for a stop-test decision. It is assumed that the inserted set of defects are typical
defects; that is, they are of the same type, occur at the same frequency, and have the same impact
as the actual defects in the code. The technique works as follows. Several members of the test
team insert (or seed) the code under test with a known set of defects. The other members of the
team test the code to try to reveal as many of the defects as possible. The number of undetected
seeded defects gives an indication of the number of total defects remaining in the code (seeded
plus actual).
A ratio can be set up as follows:

    Detected seeded defects / Total seeded defects ≈ Detected actual defects / Total actual defects
Using this ratio we can say, for example, if the code was seeded with 100 defects and 50 have
been found by the test team, it is likely that 50% of the actual defects still remain and the testing
effort should continue. When all the seeded defects are found the manager has some confidence
that the test efforts have been completed.
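A small sketch of this estimate in Python (the function name and this simple
proportional model are illustrative, not from the source):

    # Estimate how many actual defects remain, assuming seeded defects are
    # found at the same rate as actual defects.
    def remaining_actual_defects(seeded_total, seeded_found, actual_found):
        found_fraction = seeded_found / seeded_total       # e.g. 50/100 = 0.5
        estimated_total_actual = actual_found / found_fraction
        return estimated_total_actual - actual_found       # still undetected

    # Example: 100 seeded defects, 50 of them found, 80 actual defects found.
    print(remaining_actual_defects(100, 50, 80))  # -> 80.0 still remaining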

SOFTWARE CONFIGURATION MANAGEMENT


o Software systems are constantly undergoing change during development and
maintenance.
o By software systems we include all software artifacts such as requirements and design
documents, test plans, user manuals, code, and test cases.
o Different versions, variations, builds, and releases exist for these artifacts. Organizations
need staff, tools, and techniques to help them track and manage these artifacts and
changes to the artifacts that occur during development and maintenance.
o To control and monitor the testing process, testers and test managers also need access to
configuration management tools and staff.
o There are four major activities associated with configuration management. They are:
1. Identification of the Configuration Items
The items that will be under configuration control must be selected, and the relationships between
them must be formalized. An example relationship is “part-of” which is relevant to composite
items. Relationships are often expressed in a module interconnection language (MIL).
Figure 5.5 shows four configuration items, a design specification, a test specification, an object
code module, and source code module.
Figure 5.5 Sample configuration items.
The arrows indicate links or relationships between them. In addition to identification of
configuration items, procedures for establishment of baseline versions for each item must be in
place. Baselines are formally reviewed and agreed upon versions of software artifacts, from
which all changes are measured. They serve as the basis for further development and can be
changed only through formal change procedures. Baselines plus approved changes from those
baselines constitute the correct configuration identification for the item.
2. Change Control : There are two aspects of change control—one is tool-based, the other team-based.
The team involved is called a configuration control board. This group oversees changes in the software
system. The members of the board should be selected from SQA staff, test specialists, developers, and
analysts. It is this team that oversees, gives approval for, and follows up on changes. They develop
change procedures and the formats for change request forms.
3. Configuration status reporting
These reports help to monitor changes made to configuration items. They contain a history of all
the changes and change information for each configuration item. Each time an approved change
is made to a configuration item, a configuration status report entry is made. These reports are
kept in the CMS database and can be accessed by project personnel so that all can be aware of
changes that are made. The reports can answer questions such as:
• who made the change;
• what was the reason for the change;
• what is the date of the change;
• what is affected by the change.
Reports for configuration items can be distributed to project members and discussed at status
meetings.
4. Configuration audits
After changes are made to a configuration item, a configuration audit is done to ensure the changes
have been done properly. The audit is usually conducted by the SQA group or members of the
configuration control board. They focus on issues that are not covered in a technical review. A
checklist of items to cover can serve as the agenda for the audit. For each configuration item the
audit should cover the following:
o Compliance with software engineering standards.
o The configuration change procedure.
o Related configuration items.
o Reviews.
A review is a group meeting whose purpose is to evaluate a software artifact or a set of
software artifacts.
The general goals for the reviewers are to:
 identify problem components or components in the software artifact that need improvement;
 identify components of the software artifact that do not need improvement;
 identify specific errors or defects in the software artifact (defect detection);
 ensure that the artifact conforms to organizational standards.

Role of reviews in testing software deliverables.


Types of Reviews
Reviews can be formal or informal. They can be technical or managerial. Managerial reviews usually focus
on project management and project status. The technical reviews are used to:
 verify that a software artifact meets its specification;
 to detect defects; and
 check for compliance to standards.
The two most widely used types of reviews are discussed below:
Inspections as a Type of Technical Review
Inspections are a type of review that is formal in nature and requires prereview preparation on
the part of the review team. Several steps are involved in the inspection process as outlined in
Figure below.
Steps in the Inspection Process

 The responsibility for initiating and carrying through the steps belongs to the inspection
leader (or moderator) who is usually a member of the technical staff or the software
quality assurance team.
 The inspection leader plans for the inspection, sets the date, invites the participants,
distributes the required documents, runs the inspection meeting, appoints a recorder to
record results, and monitors the followup period after the review.
 The key item that the inspection leader prepares is the checklist of items that serves as
the agenda for the review. The checklist varies with the software artifact being
inspected. It contains items that inspection participants should focus their attention on,
check, and evaluate.
 The inspection participants address each item on the checklist. The recorder records any
discrepancies, misunderstandings, errors, and ambiguities; in general, any problems
associated with an item. The completed checklist is part of the review summary
document.
 The inspection process begins when inspection preconditions are met as specified in the
inspection policies, procedures, and plans.
 The Inspection leader announces the inspection meeting and distributes the items to be
inspected, the checklist, and any other auxiliary material to the participants usually a day
or two before the scheduled meeting.
 Participants must do their homework and study the items and the checklist.
 When the actual meeting takes place the document under review is presented by a
reader, and is discussed as it is read.
 Attention is paid to issues related to quality, adherence to standards, testability,
traceability, and satisfaction of the users'/clients' requirements.
 All the items on the checklist are addressed by the group as a whole, and the problems
are recorded. Inspection metrics are also recorded.
 The recorder documents all the findings and the measurements.
Walkthrough as a Type of Technical Review
 Walkthroughs are a type of technical review where the producer of the reviewed material
serves as the review leader and actually guides the progression of the review.
 Walkthroughs have traditionally been applied to design and code.
 In the case of detailed design or code walkthroughs, test inputs may be selected and
review participants then literally walk through the design or code with the set of inputs in
a line-by-line manner.
 If the presenter gives a skilled presentation of the material, the walkthrough participants
are able to build a comprehensive mental model of the detailed design or code and are
able to both evaluate its quality and detect defects.
 Walkthroughs may be used for material other than code, for example, data descriptions,
reference manuals, or even specifications.
Developing a Review Program
Reviews are an effective tool used along with execution-based testing to support defect
detection, increased software quality, and customer satisfaction.
Reviews should be used for evaluating newly developing products as well as new releases or
versions of existing software.
If reviews are conducted on the software artifacts as they are developed throughout the software
life cycle, they serve as filters for each artifact.
1. When testing software, unexpected behavior is observed because of a defect(s) in the
code. The symptomatic information is what the developer works with to find and repair
(fix) the defect.
 Reviews are a more orderly process. They concentrate on stepping through the reviewed
item focusing on specific areas.
 During a review there is a systematic process in place for building a real-time mental
model of the software item.
 The reviewers step through this model building process as a group. Reviewers know
exactly where they are focused in the document or code and where the problem has
surfaced.
 They can basically carry out defect detection and defect localization tasks at the same
time.
2. Reviews also have the advantage of a two-pass approach for defect detection. Pass 1 has
individuals first reading the reviewed item and pass 2 has the item read by the group as a whole.
If one individual reviewer did not identify a defect or a problem, others in the group are likely to
find it. The group evaluation also makes false identification of defects/ problems less likely.
3. Inspectors have the advantage of the checklist which calls their attention to specific areas that
are defect prone. These are important clues. Testers/developers may not have such information
available.
Components of Review Plans
Reviews are development and maintenance activities that require time and resources. They
should be planned so that there is a place for them in the project schedule. An organization
should develop a review plan template that can be applied to all software projects. The template
should specify the following items for inclusion in the review plan.
• review goals;
• items being reviewed;
• preconditions for the review;
• roles, team size, participants;
• training requirements;
• review steps;
• checklists and other related documents to be distributed to participants;
• time requirements;
• the nature of the review log and summary report;
• rework and follow-up.
Review Goals
As in the test plan or any other type of plan, the review planner should specify the goals to be
accomplished by the review. Some general review goals are
(i) identification of problem components or components in the software artifact that need
improvement
(ii) identification of specific errors or defects in the software artifact
(iii) ensuring that the artifact conforms to organizational standards, and
(iv) communication to the staff about the nature of the product being developed.
Preconditions and Items to Be Reviewed
Given the principal goals of a technical review—early defect detection, identification of problem
areas, and familiarization with software artifacts— many software items are candidates for
review. In many organizations the items selected for review include:
• requirements documents
• design documents
• code
• test plans
• user manuals
• training manuals
• standards documents
The preconditions need to be described in the review policy statement and specified in the
review plan for an item. General preconditions for a review are:
(i) the review of an item(s) is a required activity in the project plan
(ii) a statement of objectives for the review has been developed
(iii) the individuals responsible for developing the reviewed item indicate readiness for the
review
(iv) the review leader believes that the item to be reviewed is sufficiently complete for the review
to be useful.
Roles, Participants, Team Size and Time Requirements
Two major roles that need filling for a successful review are (i) a leader or moderator, and (ii) a
recorder. These are shown in Figure below.
REVIEW ROLES
The success of the review depends on the experience and expertise of the moderator.
Reviewing a software item is a tedious process and requires great attention to details. The
moderator needs to be sure that all are prepared for the review and that the review
meeting stays on track.
The moderator/planner must ensure that a time period is selected that is appropriate for
the size and complexity of the item under review.
Review sessions can be scheduled over 2-hour time periods separated by breaks. The
time allocated for a review should be adequate enough to ensure that the material under
review can be adequately covered.
The review recorder has the responsibility for documenting defects, and recording review
findings and recommendations.
Other roles may include a reader who reads or presents the item under review. Readers
are usually the authors or preparers of the item under review.
The author(s) is responsible for performing any rework on the reviewed item. In a
walkthrough type of review, the author may serve as the moderator, but this is not true for
an inspection. All reviewers should be trained in the review process.
The size of the review team will vary depending on the type, size, and complexity of the item
under review. The minimal team size of 3 ensures that the review will be public.

Review team membership constituency.


The testers should take part in all major milestone reviews to ensure:
 effective test planning;
 traceability between tests, requirements, design and code elements;
 discussion, and support of testability issues;
 support for software product quality issues;
 the collection and storage of review defect data;
 support for adequate testing of “trouble-prone” areas.
Review procedures
For each type of review that an organization wishes to implement, there should be a set of
standardized steps that define the given review procedure.
For example, the steps for an inspection are initiation, preparation, inspection meeting,
reporting results, and rework and follow-up.
For each step in the procedure the activities and tasks for all the reviewer participants
should be defined.
The review plan should refer to the standardized procedures where applicable.
Review Training
Review participants need training to be effective.
Responsibility for reviewer training classes usually belongs to the internal technical
training staff.
Alternatively, an organization may decide to send its review trainees to external training
courses run by commercial institutions.
Review participants, and especially those who will be review leaders, need the training.
Test specialists should also receive review training.

Topics for review training sessions


1. Review of Process Concepts
Reviewers should understand basic process concepts, the value of process improvement, and the
role of reviews as a product and process improvement tool.
2. Review of Quality Issues
Reviewers should be made familiar with quality attributes such as correctness, testability,
maintainability, usability, security, portability, and so on, and how these can be evaluated in a
review.
3. Review of Organizational Standards for Software Artifacts
Reviewers should be familiar with organizational standards for software artifacts. For example,
what items must be included in a software document; what is the correct order and degree of
coverage of topics expected; what types of notations are permitted. Good sources for this
material are IEEE standards and guides.
4. Understanding the Material to Be Reviewed
Concepts of understanding and how to build mental models during comprehension of code and
software-related documents should be covered.
5. Defect and Problem Types
Review trainees need to become aware of the most frequently occurring types of problems or
errors that are likely to occur during development.
They need to be aware of what their causes are, how they are transformed into defects, and where
they are likely to show up in the individual deliverables.
The trainees should become familiar with the defect type categories, severity levels, and numbers
and types of defects found in past deliverables of similar systems.
6. Communication and Meeting Management Skills
These topics are especially important for review leaders. It is their responsibility to communicate
with the review team, the preparers of the reviewed document, management, and in some cases
clients/user group members. Review leaders need to have strong oral and written communication
skills and also learn how to conduct a review meeting.
7. Review Documentation and Record Keeping.
Review leaders need to learn how to prepare checklists, agendas, and logs for review meetings.
Checklists for inspections should be appropriate for the item being inspected. Checklists in
general should focus on the following issues:
 most frequent errors;
 completeness of the document;
 correctness of the document;
 adherence to standards.
8. Special Instructions
During review training there may be some topics that need to be covered with the review
participants. For example, there may be interfaces with hardware that involve the reviewed item,
and reviewers may need some additional background discussion to be able to evaluate those
interfaces.
9. Practice Review Sessions
Review trainees should participate in practice review sessions. These are very instructive and
essential. One option is for instructors to use existing documents that have been reviewed in the
past and have the trainees do a practice review of these documents.
Review Checklists:
Inspections formally require the use of a checklist of items that serves as the focal point for
review examinations and discussions on both the individual and group levels.
Checklists are very important for inspectors. They provide structure and an agenda for the review
meeting. They guide the review activities, identify focus areas for discussion and evaluation,
ensure all relevant items are covered, and help to frame review record keeping and measurement.
Reviews are really a two-step process:
(i) reviews by individuals, and
(ii) reviews by the group.
The checklist plays its important role in both steps. The first step involves the individual
reviewer and the review material. Prior to the review meeting each individual must be provided
with the materials to review and the checklist of items. It is his responsibility to do his homework
and individually inspect that document using the checklist as a guide, and to document any
problems he encounters. When they attend the group meeting, which is the second review step,
each reviewer should bring his or her individual list of defects/problems, and as each item on the
checklist is discussed they should comment.
Each item that undergoes a review requires a different checklist that addresses the special issues
associated with quality evaluation for that item. However each checklist should have components
similar to those shown in Table below:

Requirements Reviews
In addition to covering the items on the general document checklist as shown in Table 10.2, the
following items should be included in the checklist for a requirements review.
 completeness (have all functional and quality requirements described in the problem
statement been included?);
 correctness (do the requirements reflect the user’s needs? are they stated without error?);
 consistency (do any requirements contradict each other?);
 clarity (it is very important to identify and clarify any ambiguous requirements);
 relevance (is the requirement pertinent to the problem area? Requirements should not be
superfluous);
 redundancy (a requirement may be repeated; if it is a duplicate it should be combined
with an equivalent one);
 testability (can each requirement be covered successfully with one or more test cases?
can tests determine if the requirement has been satisfied?);
Table : A Sample general review checklist for software documents
Design Reviews
Designs are often reviewed in one or more stages. It is useful to review the high level
architectural design at first and later review the detailed design. At each level of design it is
important to check that the design is consistent with the requirements and that it covers all the
requirements. Again the general checklist is applicable with respect to clarity, completeness,
correctness and so on. Some specific items that should be checked for at a design review are:
 a description of the design technique used;
 an explanation of the design notation used;
 evaluation of design alternatives;
 quality of the user interface;
 quality of the user help facilities;
 identification of execution criteria and operational sequences;
 clear description of interfaces between this system and other software and hardware systems;
 coverage of all functional requirements by design elements;
 coverage of all quality requirements, for example, ease of use, portability,
maintainability, security, readability, adaptability, performance requirements (storage,
response time) by design elements;
 reusability of design components;
 testability (how will the modules, and their interfaces be tested? How will they be
integrated and tested as a complete system?).
For reviewing detailed design the following focus areas should also be revisited:
 encapsulation, information hiding and inheritance;
 module cohesion and coupling;
 quality of module interface description;
 module reuse.
Code Reviews : Code reviews are useful tools for detecting defects and for evaluating code quality. Some
organizations require a clean compile as a precondition for a code review. Code review checklists can have
both general and language-specific components. The general code review checklist can be used to review
code written in any programming language. There are common quality features that should be checked no
matter what implementation language is selected. Table below shows a list of items that should be included
in a general code checklist.
Test Plan Reviews
Test plans are also items that can be reviewed. Some organizations will review them along with
other related documents. For example, a master test plan and an acceptance test plan could be
reviewed with the requirements document, the integration and system test plans reviewed with
the design documents, and unit test plans reviewed with detailed design documents.
Other testing products such as test design specifications, test procedures, and test cases can also
be reviewed. These reviews can be held in conjunction with reviews of other test-related items or
other software artifacts.
Reporting Review Results
Several information-rich items result from technical reviews. These items are listed below. The
items can be bundled together in a single report or distributed over several distinct reports.
Review policies should indicate the formats of the reports required. The review reports should
contain the following information.
1. For inspections—the group checklist with all items covered and comments relating to each
item.
2. For inspections—a status, or summary, report signed by all participants.
3. A list of defects found, classified by type and frequency. Each defect should be
cross-referenced to the line, page, or figure in the reviewed document where it occurs.
4. Review metric data.
The inspection report on the reviewed item is a document signed by all the reviewers. It may
contain a summary of defects and problems found and a list of review attendees, and some
review measures such as the time period for the review and the total number of major/minor
defects. The reviewers are responsible for the quality of the information in the written report.
There are several status options available to the review participants on this report. These are:
1. Accept: The reviewed item is accepted in its present form or with minor rework required that
does not need further verification.
2. Conditional accept: The reviewed item needs rework and will be accepted after the moderator
has checked and verified the rework.
3. Reinspect: Considerable rework must be done to the reviewed item.
The inspection needs to be repeated when the rework is done. Before signing their name to such
an inspection report reviewers need to be sure that all checklist items have been addressed, all
defects recorded, and all quality issues discussed. This is important for several reasons.
A milestone meeting is usually held, and clients are notified of the completion of the milestone.
If the software item is given a conditional accept or a reinspect, a follow-up period occurs where the
authors must address all the items on the problem/defect list. The moderator reviews the rework in the case
of a conditional accept. Another inspection meeting is required to reverify the items in the case of a
“reinspect” decision.
IEEE standards suggest that the inspection report contain vital data such as:
(i) number of participants in the review;
(ii) the duration of the meeting;
(iii) size of the item being reviewed (usually LOC or number of pages);
(iv) total preparation time for the inspection team;
(v) status of the reviewed item;
(vi) estimate of rework effort and the estimated date for completion of the rework.
This data will help an organization to evaluate the effectiveness of the review process and to
make improvements.
