Software test automation – skill needed for automation – scope of automation – design and
architecture for automation – requirements for a test tool – challenges in automation – Test
metrics and measurements – project, progress and productivity metrics.
SOFTWARE TEST AUTOMATION:
Developing software to test the software is called test automation.
The thin arrows indicate the internal interfaces and the thick arrows indicate the external interfaces. Architecture for test automation involves two major heads:
1. A test infrastructure that covers a test case database and
2. A defect database or defect repository.
These are shown as external modules.
1. External Modules:
There are two external modules namely
TCDB (test case database)
Defect DB (defect database)
All the test cases, the steps to execute them and the history of their execution are stored
in the TCDB.
The test cases in TCDB can be manual or automated.
Defect DB or defect database or defect repository contains details of all the defects that
are found in various products that are tested in a particular organization.
It contains defects and all the related information.
Test engineers submit the defects for manual test cases.
For automated test cases, the framework can automatically submit the defects to the
defect db during execution.
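As a sketch of this idea in Python, automatic defect submission on test failure might look like the following. The `DefectDB` class and its record fields are illustrative assumptions, not the API of any real defect-tracking tool.

```python
# Illustrative in-memory stand-in for a defect repository; a real
# framework would call the API of an actual defect-tracking system.
class DefectDB:
    def __init__(self):
        self.defects = []

    def submit(self, test_id, summary, details):
        """File a defect record and return its id."""
        defect = {"id": len(self.defects) + 1,
                  "test_id": test_id,
                  "summary": summary,
                  "details": details,
                  "status": "open"}
        self.defects.append(defect)
        return defect["id"]


def run_and_report(test_id, test_fn, defect_db):
    """Run one automated test; on failure, file a defect automatically."""
    try:
        test_fn()
        return "pass"
    except AssertionError as exc:
        defect_db.submit(test_id, f"{test_id} failed", str(exc))
        return "fail"
```

With this shape, a failing automated run leaves an "open" defect record in the repository without any manual submission step.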
2. Scenario and Configuration File Modules:
o A configuration file contains a set of variables that are used in automation. The variables could be for the test framework or for other modules in automation, such as tools and metrics.
o A configuration file is important for running the test cases for various execution
conditions and for running the tests for various input and output conditions and states.
o The values of variables in this configuration file can be changed dynamically to achieve
different execution, input, output and state conditions.
3. Test Cases and Test Framework Modules:
o A test framework is a module that combines “what to execute” and “how they have to be
executed”. It picks up the specific test cases that are automated and picks up the
scenarios and executes them.
o The test framework is considered the core of automation design. It subjects the test cases
to different scenarios. The framework monitors the results of every iteration and the
results are stored.
o The test framework contains the main logic for interacting, initiating, and controlling all
modules.
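The core loop described above can be sketched as follows. The shape of the TCDB entries and the scenario list are illustrative assumptions, not the interface of any particular framework.

```python
def run_framework(tcdb, scenarios):
    """Pick up automated test cases and execute each one under
    every scenario, recording the result of every iteration."""
    results = []
    for case in tcdb:
        if not case.get("automated"):
            continue  # manual test cases are not run by the framework
        for scenario in scenarios:
            try:
                case["run"](scenario)
                outcome = "pass"
            except AssertionError:
                outcome = "fail"
            # Store case id, scenario, and outcome for later analysis.
            results.append({"case": case["id"],
                            "scenario": scenario,
                            "outcome": outcome})
    return results
```

The returned result records are exactly what the results module archives for future analysis and what report generation later consumes.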
4. Tools and Result Modules
o When a test framework performs its operations, there are a set of tools that may be
required. For example, when test cases are stored as source code files in TCDB, they
need to be extracted and compiled by build tools.
o In order to run the compiled code, certain runtime and utilities may be required.
o When a test framework executes a set of test cases with a set of scenarios for the different values provided by the configuration file, the results for each of the test cases, along with the scenarios and variable values, have to be stored for future analysis and action.
o The history of all the previous tests should be recorded and kept as archives.
5. Report Generator and Reports/Metrics Modules
Once the results of a test run are available, the next step is to prepare the test reports
and metrics.
There should be customized reports such as an executive report, which gives a very high-level status, and technical reports, which give a moderate level of detail about the test runs.
The module that takes the necessary inputs and prepares a formatted report is called a
report generator.
Once the results are available, the report generators can generate metrics.
All the reports and metrics that are generated are stored in the reports/metrics
modules of automation for further use and analysis.
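As an illustration, a minimal report generator over stored results might look like this; the result-record fields used here are illustrative assumptions, not a standard format.

```python
def executive_report(results):
    """High-level summary for an executive report."""
    total = len(results)
    passed = sum(1 for r in results if r["outcome"] == "pass")
    rate = round(100.0 * passed / total, 1) if total else 0.0
    return {"total": total, "passed": passed, "pass_rate": rate}


def technical_report(results):
    """Per-iteration detail lines for a technical report."""
    lines = ["case | scenario | outcome"]
    for r in results:
        lines.append(f'{r["case"]} | {r["scenario"]} | {r["outcome"]}')
    return "\n".join(lines)
```

Both reports are derived from the same stored results, which is why archiving every iteration's outcome matters.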
GENERIC REQUIREMENTS FOR TEST TOOL/FRAMEWORK
REQUIREMENT 1: No hard coding in the test suite.
One of the most important requirements for a test suite is to keep all variables in a separate file. By following this practice, the source code for the test suite need not be modified every time the tests are run with different values of the variables.
The variables for the test suite are called configuration variables. The file in which all variable
names and their associated values are kept is called configuration file.
Providing an inline comment for each of the variables will make the test suite more usable and may prevent improper use of the variables.
Example:
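A minimal sketch of such a configuration file, in INI style with an inline comment per variable and parsed here with Python's standard-library configparser. The variable names (browser, timeout, log_level) are illustrative assumptions, not a fixed schema.

```python
import configparser

# Illustrative configuration file: one commented variable per line.
CONFIG_TEXT = """
[automation]
browser = firefox    ; browser the test suite drives
timeout = 30         ; per-test timeout in seconds
log_level = INFO     ; verbosity of framework logging
"""


def load_config(text):
    """Parse the configuration text, stripping ';' inline comments."""
    parser = configparser.ConfigParser(inline_comment_prefixes=(";",))
    parser.read_string(text)
    return parser["automation"]


cfg = load_config(CONFIG_TEXT)
# Test code reads cfg["browser"] instead of hard-coding "firefox",
# so editing the file re-targets the suite without code changes.
```

Because the suite reads values from the file at run time, the same test code serves many execution conditions.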
PROGRESS METRICS:
Any project needs to be tracked from two angles.
i. How well the project is doing with respect to effort and schedule.
ii. How well the product is meeting the quality requirements for the release.
a. Test Defect Metrics: These metrics focus on the number of defects.
1. Defect find rate: When the total number of defects found in the product is tracked and plotted at regular intervals from the beginning to the end of a product development cycle, it shows a pattern of defect arrival. The idea of testing is to find as many defects as possible early in the cycle. Once the majority of the modules become available and the defects that are blocking the tests are fixed, the defect arrival rate increases.
After a certain period of defect fixing and testing, the arrival of defects tends to slow down, and a continuation of that trend enables product release.
2. Defect Fix Rate: If the goal of testing is to find defects as early as possible, it is natural to expect that the goal of development should be to fix defects as soon as they arrive; ideally, the defect fix rate should equal the defect arrival rate. When defects are fixed in the product, the fixes may introduce new defects. Hence it is a good idea to fix defects early and to test those fixes thoroughly to find any introduced defects.
3. Outstanding defects rate:
Outstanding defects = Total defects found – Total defects fixed
When testing is in progress, the outstanding defects should be kept very close to zero so that
the development team’s bandwidth is available to analyze and fix the issues soon after they arrive.
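These defect metrics can be computed from interval data; a minimal sketch (the weekly counts in the usage example are illustrative):

```python
def outstanding(found_per_interval, fixed_per_interval):
    """Cumulative outstanding defects (total found - total fixed)
    after each tracking interval."""
    out, total_found, total_fixed = [], 0, 0
    for found, fixed in zip(found_per_interval, fixed_per_interval):
        total_found += found
        total_fixed += fixed
        out.append(total_found - total_fixed)
    return out


# Four weeks of illustrative find/fix counts:
trend = outstanding([5, 12, 8, 3], [2, 10, 11, 4])  # -> [3, 5, 2, 1]
```

A trend that stays near zero, as here, indicates the development team is keeping pace with defect arrival.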
b. Development Defect Metrics: It focuses on mapping the defects to different
components of the product.
1. Component-wise defect distribution: It is important to map the defects to different
components of the product so that they can be assigned to the appropriate developer to fix
those defects.
Based on the number of defects existing in each of the modules, the effort needed to fix
them, and the availability of skill sets for each of the modules, the project manager assigns
resources accordingly.
The defect classification as well as the total defects corresponding to each component in the
product helps the project manager in assigning and resolving those defects.
2. Defect Density and Defect Removal Rate: The defect density metric maps the defects in the product to the volume of code:
Defects per KLOC = Total defects found in the product / Total executable AMD lines of code (in KLOC)
where KLOC = kilo (thousand) lines of code and AMD = added, modified, and deleted code.
Defect removal rate = ((Defects found by verification activities + Defects found in unit testing) / Defects found by test teams) * 100
This formula helps in finding the efficiency of verification activities and unit testing, which are normally the responsibility of the development team, and in comparing them with the defects found by the testing teams.
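A small sketch computing both development defect metrics from plain counts (the numbers in the usage lines are illustrative):

```python
def defects_per_kloc(total_defects, amd_kloc):
    """Defect density: defects per thousand added/modified/deleted
    executable lines of code."""
    return total_defects / amd_kloc


def defect_removal_rate(verification_defects, unit_test_defects,
                        test_team_defects):
    """Percentage comparing defects removed by the development team's
    verification and unit testing against defects found by test teams."""
    return (100.0 * (verification_defects + unit_test_defects)
            / test_team_defects)


density = defects_per_kloc(120, 40.0)        # 3.0 defects per KLOC
removal = defect_removal_rate(30, 20, 25)    # 200.0 percent
```

A removal rate above 100% means verification and unit testing together caught more defects than the test teams did, which is the desired direction.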
PRODUCTIVITY METRICS:
Productivity metrics combine several measurements and parameters with the effort spent on the product. They help in finding out the capability of the team and serve other purposes, such as
Estimating for the new release
Finding out how well the team is progressing
Estimating the number of defects that can be found
1. Defects per 100 Hours of Testing: The metric defects per 100 hours of testing factors in the effort spent in testing.
Defects per 100 hours of testing = (Total defects found in the product for a period / Total hours spent to get those defects) * 100
2. Test Cases Executed per 100 Hours of Testing: The number of test cases executed by
the test team for a particular duration depends on team productivity and quality of the
product.
Test cases executed per 100 hours of testing = (Total test cases executed for a period
/ Total hours spent in test execution) * 100
A related measure is based on defect seeding: for example, if the code was seeded with 100 defects and 50 of them have been found by the test team, it is likely that 50% of the actual defects still remain and the testing effort should continue. When all the seeded defects have been found, the manager has some confidence that the test effort is complete.
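The seeding arithmetic can be sketched as follows, under the stated assumption that real defects are found at the same rate as seeded ones:

```python
def estimated_remaining_fraction(seeded_total, seeded_found):
    """Fraction of actual defects estimated to remain, based on the
    fraction of seeded defects not yet found."""
    return 1.0 - seeded_found / seeded_total


def estimated_total_defects(real_found, seeded_total, seeded_found):
    """Estimate of the total number of real defects, assuming real
    defects are found at the same rate as seeded defects."""
    return real_found * seeded_total / seeded_found


remaining = estimated_remaining_fraction(100, 50)   # 0.5
estimate = estimated_total_defects(80, 100, 50)     # 160.0
```

So in the text's example, finding 50 of 100 seeded defects suggests half the real defects are still in the product.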
The responsibility for initiating and carrying through the steps belongs to the inspection
leader (or moderator) who is usually a member of the technical staff or the software
quality assurance team.
The inspection leader plans for the inspection, sets the date, invites the participants,
distributes the required documents, runs the inspection meeting, appoints a recorder to
record results, and monitors the follow-up period after the review.
The key item that the inspection leader prepares is the checklist of items that serves as the agenda for the review. The checklist varies with the software artifact being inspected. It contains items that inspection participants should focus their attention on, check, and evaluate.
The inspection participants address each item on the checklist. The recorder records any
discrepancies, misunderstandings, errors, and ambiguities; in general, any problems
associated with an item. The completed checklist is part of the review summary
document.
The inspection process begins when inspection preconditions are met as specified in the
inspection policies, procedures, and plans.
The Inspection leader announces the inspection meeting and distributes the items to be
inspected, the checklist, and any other auxiliary material to the participants usually a day
or two before the scheduled meeting.
Participants must do their homework and study the items and the checklist.
When the actual meeting takes place, the document under review is presented by a reader and is discussed as it is read.
Attention is paid to issues related to quality, adherence to standards, testability, traceability, and satisfaction of the users'/clients' requirements.
All the items on the checklist are addressed by the group as a whole, and the problems
are recorded. Inspection metrics are also recorded.
The recorder documents all the findings and the measurements.
Walkthrough as a Type of Technical Review
Walkthroughs are a type of technical review where the producer of the reviewed material
serves as the review leader and actually guides the progression of the review.
Walkthroughs have traditionally been applied to design and code.
In the case of detailed design or code walkthroughs, test inputs may be selected and
review participants then literally walk through the design or code with the set of inputs in
a line-by-line manner.
If the presenter gives a skilled presentation of the material, the walkthrough participants
are able to build a comprehensive mental model of the detailed design or code and are
able to both evaluate its quality and detect defects.
Walkthroughs may be used for material other than code, for example, data descriptions,
reference manuals, or even specifications.
Developing a Review Program
Reviews are an effective tool used along with execution-based testing to support defect
detection, increased software quality, and customer satisfaction.
Reviews should be used for evaluating newly developing products as well as new releases or
versions of existing software.
If reviews are conducted on the software artifacts as they are developed throughout the software
life cycle, they serve as filters for each artifact.
1. When testing software, unexpected behavior is observed because of a defect(s) in the
code. The symptomatic information is what the developer works with to find and repair
(fix) the defect.
Reviews are a more orderly process. They concentrate on stepping through the reviewed
item focusing on specific areas.
During a review there is a systematic process in place for building a real-time mental
model of the software item.
The reviewers step through this model building process as a group. Reviewers know
exactly where they are focused in the document or code and where the problem has
surfaced.
They can basically carry out defect detection and defect localization tasks at the same
time.
2. Reviews also have the advantage of a two-pass approach for defect detection. Pass 1 has
individuals first reading the reviewed item and pass 2 has the item read by the group as a whole.
If one individual reviewer did not identify a defect or a problem, others in the group are likely to
find it. The group evaluation also makes false identification of defects/ problems less likely.
3. Inspectors have the advantage of the checklist which calls their attention to specific areas that
are defect prone. These are important clues. Testers/developers may not have such information
available.
Components of Review Plans
Reviews are development and maintenance activities that require time and resources. They
should be planned so that there is a place for them in the project schedule. An organization
should develop a review plan template that can be applied to all software projects. The template
should specify the following items for inclusion in the review plan.
• review goals;
• items being reviewed;
• preconditions for the review;
• roles, team size, participants;
• training requirements;
• review steps;
• checklists and other related documents to be distributed to participants;
• time requirements;
• the nature of the review log and summary report;
• rework and follow-up.
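As an illustration only, the template could be captured as a structured record so that every project's review plan fills the same fields. The field names mirror the bulleted template above; the structure and default values are assumptions, not a standard.

```python
import copy

# Organization-wide template; each project copies and fills it in.
REVIEW_PLAN_TEMPLATE = {
    "goals": [],
    "items_reviewed": [],
    "preconditions": [],
    "roles": {"moderator": None, "recorder": None, "reader": None},
    "team_size": 3,  # minimal size that keeps the review public
    "training_requirements": [],
    "review_steps": [],
    "checklists": [],
    "time_requirements": "2-hour sessions separated by breaks",
    "outputs": ["review log", "summary report"],
    "rework_and_followup": "",
}


def new_review_plan(**overrides):
    """Start a project review plan from the template, overriding
    only the fields supplied."""
    plan = copy.deepcopy(REVIEW_PLAN_TEMPLATE)
    plan.update(overrides)
    return plan
```

Deep-copying the template keeps project plans independent: filling in one plan never mutates the shared template.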
Review Goals
As in the test plan or any other type of plan, the review planner should specify the goals to be
accomplished by the review. Some general review goals are
(i) identification of problem components or components in the software artifact that need
improvement
(ii) identification of specific errors or defects in the software artifact
(iii) ensuring that the artifact conforms to organizational standards, and
(iv) communication to the staff about the nature of the product being developed.
Preconditions and Items to Be Reviewed
Given the principal goals of a technical review—early defect detection, identification of problem
areas, and familiarization with software artifacts— many software items are candidates for
review. In many organizations the items selected for review include:
• requirements documents
• design documents
• code
• test plans
• user manuals
• training manuals
• standards documents
The preconditions need to be described in the review policy statement and specified in the
review plan for an item. General preconditions for a review are:
(i) the review of an item(s) is a required activity in the project plan
(ii) a statement of objectives for the review has been developed
(iii) the individuals responsible for developing the reviewed item indicate readiness for the
review
(iv) the review leader believes that the item to be reviewed is sufficiently complete for the review
to be useful.
Roles, Participants, Team Size, and Time Requirements
Two major roles that need filling for a successful review are (i) a leader or moderator, and (ii) a
recorder. These are shown in Figure below.
REVIEW ROLES
The success of the review depends on the experience and expertise of the moderator.
Reviewing a software item is a tedious process and requires great attention to details. The
moderator needs to be sure that all are prepared for the review and that the review
meeting stays on track.
The moderator/planner must ensure that a time period is selected that is appropriate for
the size and complexity of the item under review.
Review sessions can be scheduled over 2-hour time periods separated by breaks. The time allocated for a review should be sufficient to ensure that the material under review can be adequately covered.
The review recorder has the responsibility for documenting defects, and recording review
findings and recommendations.
Other roles may include a reader who reads or presents the item under review. Readers
are usually the authors or preparers of the item under review.
The author(s) is responsible for performing any rework on the reviewed item. In a walkthrough type of review, the author may serve as the moderator, but this is not true for an inspection. All reviewers should be trained in the review process.
The size of the review team will vary depending on the type, size, and complexity of the item under review. A minimal team size of three ensures that the review will be public.
Requirements Reviews
In addition to covering the items on the general document checklist as shown in Table 10.2, the
following items should be included in the checklist for a requirements review.
completeness (have all functional and quality requirements described in the problem
statement been included?);
correctness (do the requirements reflect the user’s needs? are they stated without error?);
consistency (do any requirements contradict each other?);
clarity (it is very important to identify and clarify any ambiguous requirements);
relevance (is the requirement pertinent to the problem area? Requirements should not be
superfluous);
redundancy (a requirement may be repeated; if it is a duplicate it should be combined
with an equivalent one);
testability (can each requirement be covered successfully with one or more test cases?
can tests determine if the requirement has been satisfied?);
Table: A sample general review checklist for software documents
Design Reviews
Designs are often reviewed in one or more stages. It is useful to review the high level
architectural design at first and later review the detailed design. At each level of design it is
important to check that the design is consistent with the requirements and that it covers all the
requirements. Again the general checklist is applicable with respect to clarity, completeness,
correctness and so on. Some specific items that should be checked for at a design review are:
a description of the design technique used;
an explanation of the design notation used;
evaluation of design alternatives
quality of the user interface;
quality of the user help facilities;
identification of execution criteria and operational sequences;
clear description of interfaces between this system and other software and hardware systems;
coverage of all functional requirements by design elements;
coverage of all quality requirements, for example, ease of use, portability,
maintainability, security, readability, adaptability, performance requirements (storage,
response time) by design elements;
reusability of design components;
testability (how will the modules, and their interfaces be tested? How will they be
integrated and tested as a complete system?).
For reviewing detailed design the following focus areas should also be revisited:
encapsulation, information hiding and inheritance;
module cohesion and coupling;
quality of module interface description;
module reuse.
Code Reviews: Code reviews are useful tools for detecting defects and for evaluating code quality. Some
organizations require a clean compile as a precondition for a code review. Code review checklists can have
both general and language-specific components. The general code review checklist can be used to review
code written in any programming language. There are common quality features that should be checked no
matter what implementation language is selected. Table below shows a list of items that should be included
in a general code checklist.
Test Plan Reviews
Test plans are also items that can be reviewed. Some organizations will review them along with
other related documents. For example, a master test plan and an acceptance test plan could be
reviewed with the requirements document, the integration and system test plans reviewed with
the design documents, and unit test plans reviewed with detailed design documents.
Other testing products such as test design specifications, test procedures, and test cases can also be reviewed. These reviews can be held in conjunction with reviews of other test-related items or other software artifacts.
Reporting Review Results
Several information-rich items result from technical reviews. These items are listed below. The
items can be bundled together in a single report or distributed over several distinct reports.
Review policies should indicate the formats of the reports required. The review reports should
contain the following information.
1. For inspections—the group checklist with all items covered and comments relating to each
item.
2. For inspections—a status, or summary, report signed by all participants.
3. A list of defects found, and classified by type and frequency. Each defect should be cross-
referenced to the line, pages, or figure in the reviewed document where it occurs.
4. Review metric data.
The inspection report on the reviewed item is a document signed by all the reviewers. It may
contain a summary of defects and problems found and a list of review attendees, and some
review measures such as the time period for the review and the total number of major/minor
defects. The reviewers are responsible for the quality of the information in the written report.
There are several status options available to the review participants on this report. These are:
1. Accept: The reviewed item is accepted in its present form or with minor rework required that
does not need further verification.
2. Conditional accept: The reviewed item needs rework and will be accepted after the moderator
has checked and verified the rework.
3. Reinspect: Considerable rework must be done to the reviewed item.
The inspection needs to be repeated when the rework is done. Before signing their name to such
an inspection report reviewers need to be sure that all checklist items have been addressed, all
defects recorded, and all quality issues discussed. This is important for several reasons.
A milestone meeting is usually held, and clients are notified of the completion of the milestone.
If the software item is given a conditional accept or a reinspect, a follow-up period occurs where the
authors must address all the items on the problem/defect list. The moderator reviews the rework in the case
of a conditional accept. Another inspection meeting is required to reverify the items in the case of a
“reinspect” decision.
IEEE standards suggest that the inspection report contain vital data such as:
(i) number of participants in the review;
(ii) the duration of the meeting;
(iii) size of the item being reviewed (usually LOC or number of pages);
(iv) total preparation time for the inspection team;
(v) status of the reviewed item;
(vi) estimate of rework effort and the estimated date for completion of the rework.
This data will help an organization to evaluate the effectiveness of the review process and to
make improvements.