Software Testing Made Easy
Page 1 of 128
Table of Contents
1.Testing Fundamentals....................................................................................................... 7
1.1.Definition................................................................................................................... 7
1.2.Objective.................................................................................................................... 7
1.3.Benefits of Testing..................................................................................................... 7
2.Quality Assurance, Quality Control, Verification & Validation.......................................8
2.1.Quality Assurance...................................................................................................... 8
2.2.Quality Control.......................................................................................................... 8
2.3.Verification................................................................................................................ 8
2.4.Validation...................................................................................................................8
3.SDLC & STLC..................................................................................................................9
3.1.STLC Software Testing Life Cycle ........................................................................9
3.2.Models of SDLC & STLC......................................................................................... 9
3.2.1.V-Model.............................................................................................................. 9
3.2.2.W-Model........................................................................................................... 10
3.2.3.Waterfall Model................................................................................................ 11
3.2.4.Extreme Programming Model...........................................................................11
3.2.5.Spiral Model......................................................................................................12
4.Testing Standards............................................................................................................13
4.1.SW CMM:.............................................................................................................13
4.2.SW TMM.............................................................................................................. 14
4.2.1.Levels of SW TMM........................................................................................ 14
4.2.1.1.Level 1: Initial............................................................................................ 14
4.2.1.2.Level 2: Phase Definition...........................................................................14
4.2.1.3.Level 3: Integration.................................................................................... 14
4.2.1.4.Level 4: Management and Measurement................................................... 14
4.2.1.5.Level 5: Optimization / Defect Prevention and Quality Control............... 15
4.2.2.Need to use SW-TMM......................................................................................15
4.2.3.SW-TMM Assessment Process.........................................................................15
4.2.4.SW-TMM Summary......................................................................................... 16
4.3.ISO : International Organisation for Standardisation...............................................16
4.4.ANSI / IEEE Standards............................................................................................16
4.5.BCS - SIGIST.......................................................................................................... 16
5.Testing Techniques......................................................................................................... 17
5.1.Static Testing Techniques........................................................................................ 17
5.1.1.Review - Definition...........................................................................................17
5.1.2.Types of Reviews..............................................................................................17
5.1.2.1.Walkthrough.............................................................................................. 17
5.1.2.2.Inspection................................................................................................... 17
5.1.2.3.Informal Review........................................................................................ 18
5.1.2.4.Technical Review.......................................................................................18
5.1.3.Activities performed during review.................................................................. 18
10.7.6.Approach.........................................................................................................57
10.7.7.Item Pass/Fail Criteria.....................................................................................57
10.7.8.Suspension Criteria and Resumption Requirements.......................................57
10.7.9.Test Deliverables............................................................................................ 57
10.7.10.Testing Tasks................................................................................................ 57
10.7.11.Environmental Needs....................................................................................58
10.7.12.Responsibilities............................................................................................. 58
10.7.13.Staffing and Training Needs......................................................................... 58
10.7.14.Schedule........................................................................................................ 58
10.7.15.Risks and Contingencies............................................................................... 58
10.7.16.Approvals...................................................................................................... 59
10.8.High Level Test Conditions / Scenario.................................................................. 59
10.8.1.Processing logic.............................................................................................. 59
10.8.2.Data Definition ...............................................................................................59
10.8.3.Feeds Analysis................................................................................................ 60
10.9.Test Case................................................................................................................ 60
10.9.1.Expected Results.............................................................................................61
10.9.1.1.Single Expected Result............................................................................ 61
10.9.1.2.Multiple Expected Result.........................................................................61
10.9.2.Pre-requirements............................................................................................. 61
10.9.3.Data definition................................................................................................ 62
11.Test Execution Process................................................................................................. 63
11.1.Pre- Requirements .................................................................................................63
11.1.1.Version Identification Values......................................................................... 63
11.1.2.Interfaces for the application...........................................................................63
11.1.3.Unit testing sign off........................................................................................ 63
11.1.4.Test Case Allocation....................................................................................... 64
11.2.Stages of Testing: ..................................................................................................64
11.2.1.Comprehensive Testing - Round I.................................................................. 64
11.2.2.Discrepancy Testing - Round II...................................................................... 64
11.2.3.Sanity Testing - Round III...............................................................................64
12.Defect Management...................................................................................................... 65
12.1.Defect Definition................................................................................................ 65
12.2.Types of Defects.................................................................................................... 66
12.3.Defect Reporting ................................................................................................... 66
12.4.Tools Used............................................................................................................. 66
12.4.1.ClearQuest (CQ)............................................................................................. 66
12.4.2.TestDirector (TD):.......................................................................................... 67
12.4.3.Defect Tracker.................................................................................................67
12.5.Defects Meetings....................................................................................................67
12.6.Defects Publishing................................................................................................. 67
12.7.Defect Life Cycle................................................................................................... 68
13.Test Closure Process..................................................................................................... 69
13.1.Sign Off..................................................................................................................69
13.2.Authorities..............................................................................................................69
13.3.Deliverables........................................................................................................... 69
13.4.Metrics................................................................................................................... 69
13.4.1.Defect Metrics.................................................................................................69
13.4.2.Defect age: ..................................................................................................... 69
13.4.3.Defect Analysis: ............................................................................................. 69
13.4.4.Test Management Metrics...............................................................................70
13.4.5.Debriefs With Test Team................................................................................70
14.Testing Activities & Deliverables.................................................................................71
14.1.Test Initiation Phase...............................................................................................71
14.2.Test Planning Phase............................................................................................... 71
14.3.Test Design Phase.................................................................................................. 71
14.4.Test Execution & Defect Management Phase........................................................72
14.5.Test Closure Phase................................................................................................. 72
15.Maveric Systems Limited............................................................................................. 73
15.1.Overview................................................................................................................73
15.2.Leadership Team....................................................................................................73
15.3.Quality Policy.........................................................................................................73
15.4.Testing Process / Methodology..............................................................................74
15.4.1.Test Initiation Phase........................................................................................75
15.4.2.Test Planning Phase........................................................................................ 76
15.4.3.Test Design Phase........................................................................................... 78
15.4.4.Execution and Defect Management Phase......................................................79
15.4.4.1.Test Execution Process............................................................................ 79
15.4.4.2.Defect Management Process.................................................................... 79
15.4.5.Test Closure Phase.......................................................................................... 81
15.5.Test Deliverables Template................................................................................... 82
15.5.1.Project Details Form...................................................................................... 82
15.5.2.Minutes of Meeting.........................................................................................84
15.5.3.Top Level Project Checklist............................................................................85
15.5.4.Test Strategy Document..................................................................................86
15.5.5.Configuration Management and Quality Plan.................................................86
15.5.6.Test Environment Request.............................................................................. 87
15.5.7.Risk Analysis Document.................................................................................88
15.5.8.Clarification Document...................................................................................88
15.5.9.Test condition / Test Case Document............................................................. 88
15.5.10.Test Script Document................................................................................... 88
15.5.11.Traceability Matrix....................................................................................... 89
15.5.12.Daily Status Report....................................................................................... 89
15.5.13.Weekly Status Report....................................................................................91
15.5.14.Defect Report................................................................................................ 92
15.5.15.Final Test Checklist...................................................................................... 93
15.5.16.Final Test Report...........................................................................................94
15.5.17.Project De-brief Form................................................................................... 94
16.Q & A............................................................................................................................95
16.1.General................................................................................................................... 95
16.2.G.E. Interview Questions ................................................................................... 99
17.Glossary...................................................................................................................... 119
18.References...................................................................................................................128
1. Testing Fundamentals
1.1.Definition
The process of exercising software to verify that it satisfies specified requirements and to detect
errors
BS7925-1
Testing is the process of executing a program with the intent of finding errors
Glen Myers
Testing identifies faults whose removal increases the software quality by increasing the software's potential reliability. Testing is the measurement of software quality: we measure how closely we have achieved quality by testing the relevant factors such as correctness, reliability, usability, maintainability, reusability and testability.
1.2.Objective
A good test is one that has a high probability of finding an as-yet-undiscovered error.
Testing should also aim at suggesting changes or modifications if required, thus adding
value to the entire process.
The objective is to design tests that systematically uncover different classes of errors and
do so with a minimum amount of time and effort.
Software reliability and software quality can be estimated based on the data collected during testing.
1.3.Benefits of Testing
Cost reduction
Time reduction
Defect reduction
2. Quality Assurance, Quality Control, Verification & Validation
2.1.Quality Assurance
A planned and systematic pattern for all actions necessary to provide adequate confidence that
the item or product conforms to established technical requirements
2.2.Quality Control
QC is a process by which product quality is compared with applicable standards, and action is taken when nonconformance is detected.
Quality Control is defined as a set of activities or techniques whose purpose is to ensure that all
quality requirements are being met. In order to achieve this purpose, processes are monitored
and performance problems are solved.
2.3.Verification
The process of evaluating a system or component to determine whether the products of the given
development phase satisfy the conditions imposed at the start of that phase.
[IEEE]
2.4.Validation
Determination of the correctness of the products of software development with respect to the user
needs and requirements.
BS7925-1
Difference Table:

Quality Assurance | Study of the process followed in project development
Quality Control   | Study of the project for its function and specification
Verification      | Process of determining whether the output of one phase of development conforms to its previous phase
Validation        | Process of determining whether a fully developed system conforms to its SRS document

Verification is concerned with phase containment of errors.
3. SDLC & STLC
3.2.1.V-Model
The figure shows the brief description of the V-Model kind of testing. Every phase of the STLC in
this model corresponds to some activity in the SDLC. The Requirement Analysis would
correspondingly have an acceptance testing activity at the end. The design has Integration
Testing (IT) and the System Integration Testing (SIT) and so on.
The V-Model is a model in which testing is done in parallel with development. The left side of the V reflects the development activities that provide input for the corresponding testing activities on the right side.
In the Software Development Life Cycle, both the development activity and the testing activities start almost at the same time with the same information in hand. The development team applies "do" procedures to achieve the goals, and the testing team applies "check" procedures to verify them. It is a parallel process that finally arrives at a product with almost no bugs or errors.
The V-Model is one of the SDLC/STLC models; it includes testing from the unit level up to the business level. After coding is complete, the tester first tests the code against the design-phase documents to check that all modules have been integrated, then verifies that the system conforms to the requirements, and finally moves to business scenarios, where the customer can validate the system through alpha and beta testing. At the end of this, the product can be judged complete and stable.
The V-Model maps the development cycle stages to testing cycles, but it fails to address how to start all these test levels in parallel with development. Testing as a parallel activity gives the tester domain knowledge and enables more value-added, higher-quality testing with greater efficiency. It also reduces time, since the test plans, test cases and test strategy are prepared during the development stage itself.
3.2.2.W-Model
From the view of testing, all of the models presented previously are deficient in various ways. The
test activities first start after the implementation:
The connection between the various test stages and the basis for the test is not clear
The tight link between test, debug and change tasks during the test phase is not clear
In the following, the W-model is presented. It is based on the general V-model and removes the disadvantages mentioned above.
3.2.3.Waterfall Model
One of the first models for software development is the so-called waterfall model by B. W. Boehm. The individual phases (activities) defined there are found in nearly all models proposed since. The model set out that each activity in software development must be completed before the next phase begins; a return in the development process was only possible to the immediately previous phase.
In the waterfall model, testing directly follows implementation. The model suggests that testing activities can only be started after implementation, and preparatory tasks for testing are not addressed. A further disadvantage is that testing, as the last activity before release, can relatively easily be shortened or omitted altogether; in practice this is unfortunately all too common. In this model, the expense of removing the faults and defects found only becomes visible through a return to the implementation phase.
3.2.5.Spiral Model
In the spiral-model a cyclical and prototyping view of software development was shown. Tests
were explicitly mentioned (risk analysis, validation of requirements and of the development) and
the test phase was divided into stages. The test activities included module, integration and
acceptance tests. However, in this model the testing also follows the coding. The exception to this
is that the test plan should be constructed after the design of the system. The spiral model also
identifies no activities associated with the removal of defects.
4. Testing Standards
Testing of software is defined very differently by different people and different corporations. There are process standards bodies, like ISO, SPICE, IEEE, etc., that attempt to impose a process on whatever types of development projects you do (be it hardware, software, embedded systems, etc.), and some of that will, by proxy, speak to testing. However, these are there to guide the process rather than the testing itself. So, for example, IEEE will give you ideas for templates for such things as test case specifications, test plans, etc. That may help you out.
On the other hand, those IEEE templates tell you nothing about actually testing the product itself; they basically just show you how to document that you are testing the product. The same thing pretty much applies to ISO. ISO is the standard for international projects, and yet it, like IEEE, does not really force or even advocate a particular "testing standard." You also have other process- and project-oriented concepts, such as the Capability Maturity Model (CMM).
Some of the organizations and models that define testing standards are:
BS - British Standards
ISO - International Organization for Standardization
CMM - Capability Maturity Model
SPICE - Software Process Improvement and Capability Determination
NIST - National Institute of Standards and Technology
DoD - Department of Defense
4.1.SW CMM:
SEI - Software Engineering Institute, Carnegie Mellon University.
CMM - Capability Maturity Model
Software Process
A software process can be defined as a set of activities, methods, practices, and transformations
that people use to develop and maintain software and the associated products
Software Process Capability
Software Process Capability describes the range of expected results that can be achieved by following a software process. The software process capability of an organization provides one means of predicting the most likely outcomes to be expected from the next software project the organization undertakes.
Software Process Maturity
Software Process Maturity is the extent to which a specific process is explicitly defined, managed, measured, controlled, and effective. Maturity implies a potential growth in capability and indicates both the richness of an organization's software process and the consistency with which it is applied in projects throughout the organization.
The five levels of SW-CMM:
Level 1: Initial
Level 2: Repeatable
Level 3: Defined
Level 4: Managed
Level 5: Optimizing
4.2.SW TMM
SW-TMM is a testing process improvement tool that can be used either in conjunction with the
SW-CMM or as a stand-alone tool.
4.2.1.Levels of SW TMM
4.2.1.1.Level 1: Initial
A chaotic process
Not distinguished from debugging and ill defined
The tests are developed ad hoc after coding is complete
Usually lack a trained professional testing staff and testing tools
The objective of testing is to show that the system and software work
4.2.1.3.Level 3: Integration
It is best to implement the improvements either in a pilot project or in phases, tracking progress and achievements prior to expanding organization-wide. A limited application also makes it easier to fine-tune the new process prior to expanded implementation.
4.5.BCS - SIGIST
A meeting of the Specialist Interest Group on Software Testing was held in January 1989 (this
group was later to affiliate with the British Computer Society). This meeting agreed that existing
testing standards are generally good standards within the scope they cover, but that they describe the importance of good test case selection without being specific about how to choose and develop test cases.
The SIG formed a subgroup to develop a standard which addresses the quality of testing
performed. Draft 1.2 was completed by November 1990 and this was made a semi-public release
5. Testing Techniques
5.1.Static Testing Techniques
5.1.1.Review - Definition
Review is a process or meeting during which a work product or set of work products, is presented
to project personnel, managers, users, customers, or other interested parties for comment or
approval.
[IEEE]
5.1.2.Types of Reviews
There are four general classes of reviews:
5.1.2.1.Walkthrough
A review of requirements, designs or code characterized by the author of the material under
review guiding the progression of the review.
[BS 7925-1]
A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no
preparation is usually required.
These are led by the author of the document and are educational in nature; communication is therefore predominantly one-way. Typically they entail dry runs of designs, code and scenarios/test cases.
5.1.2.2.Inspection
A group review quality improvement process for written material. It consists of two aspects: product improvement (of the document itself) and process improvement (of both document production and inspection).
[BS 7925-1]
An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a
moderator, reader, and a recorder to take notes. The subject of the inspection is typically a
5.1.2.3.Informal Review
5.1.2.4.Technical Review
Technical reviews are also known as peer reviews, as it is vital that participants are drawn from the 'peer group' rather than including managers.
Documented
Defined fault detection process
Includes peers and technical experts
No management participant
Comparison of review types:

Review type      | Primary purpose                   | Led by      | Participants                        | Degree of formality
Walkthrough      | Education                         | Author      | Peers                               | Presentational
Inspection       | -                                 | -           | Reader, Recorder, Author, Inspector | Formal, defined inspection process
Informal review  | Find problems quickly and cheaply | Not defined | Not defined                         | Largely unplanned and undocumented
Technical review | Finding faults                    | Chairperson | Peers and technical experts         | -
These activities lay the foundations for a manageable and high-quality test process: a test strategy is determined after a risk evaluation, a cost estimate and test plan are developed, and progress monitoring and reporting are established. During the development process all plans must be updated and completed, and all decisions must be checked for validity.
In a mature development process, reviews and inspections are carried out throughout the whole process. The review of the requirements document answers questions like: Are all customer requirements fulfilled? Are the requirements complete and consistent? And so on. It is a look back to fix problems before going on in development. But just as important is a look forward, asking questions like: Are the requirements testable? Are they testable with defensible expenditure? If the answer is no, then there will be problems implementing these requirements. If you have no idea how to test some requirements, then it is likely that you have no idea how to implement them. At this stage of the development process all the knowledge for the acceptance tests is available and to hand, so this is the best place for doing all the planning and preparation for acceptance testing.
For example one can
Establish priorities of the tests depending on criticality
Specify (functional and non-functional) test cases
Specify and - if possible - provide the required infrastructure
At this stage all of the acceptance test preparation can be completed.
As with the acceptance test preparation, all of the system test preparation can likewise be finished at this early development stage.
During the review of the architectural design one can look forward and ask questions like: What about the testability of the design? Are the components and interfaces testable? Are they testable with defensible expenditure? If the components are too expensive to test, a re-work of the architectural design has to be done before going further in the development process. At this stage all the knowledge for integration testing is also available. All preparation, like specifying control
Black-box test design treats the system as a "black-box", so it does not explicitly use knowledge
of the internal structure. Black box testing is based solely on the knowledge of the system
requirements. Black-box test design is usually described as focusing on testing functional
requirements. In comparison, White-box testing allows one to peek inside the "box", and it
focuses specifically on using internal knowledge of the software to guide the selection of test data.
Black box testing focuses on testing the function of the program or application against its
specifications. Specifically, this technique determines whether combinations of inputs and
operations produce expected results.
Test Case design Techniques under Black Box Testing:
A few general guidelines for determining the equivalence classes can be given.
For a function that computes the square root of an integer in the range 1 to 5000, test cases must include the boundary values {0, 1, 5000, 5001}.
The C&E diagram is also known as the Fishbone/Ishikawa diagram because it was drawn to
resemble the skeleton of a fish, with the main causal categories drawn as "bones" attached to the
spine of the fish, as shown below
Example C&E diagram for a Server crash issue:
Advantages
5.2.1.4.Comparison Testing
5.2.2.White-Box Testing:
Test case selection that is based on an analysis of the internal structure of the component.
BS7925-1
Testing based on an analysis of the internal workings and structure of a piece of software. Also known as Structural Testing / Glass Box Testing / Clear Box Testing. Tests are based on coverage of code statements, branches, paths and conditions.
Aims to establish that the code works as designed
Examines the internal structure and implementation of the program
Targets specific paths through the program
Needs accurate knowledge of the design, implementation and code
Statement coverage
Branch coverage
Condition coverage
Path coverage
Data flow-based testing
5.2.2.1.Statement Coverage:
A test case design technique for a component in which test cases are designed to execute
statements.
BS7925-1
Design test cases so that every statement in a program is executed at least once. Unless a statement is executed, we have no way of knowing whether an error exists in that statement.
Example:
Euclid's GCD computation algorithm:
int f1(int x, int y){
    while (x != y){
        if (x > y)
            x = x - y;
        else
            y = y - x;
    }
    return x;
}
By choosing the test set {(x=3, y=3), (x=4, y=3), (x=3, y=4)} all statements are executed at least
once.
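As a sketch, the same algorithm in Python with the statement-coverage test set from the text; the expected GCD values in the assertions are mine:

```python
def gcd(x, y):
    """Euclid's GCD by repeated subtraction (mirrors the C fragment above)."""
    while x != y:
        if x > y:
            x = x - y   # executed by (x=4, y=3)
        else:
            y = y - x   # executed by (x=3, y=4)
    return x            # reached by every case, e.g. (x=3, y=3)

# The test set {(3, 3), (4, 3), (3, 4)} executes every statement at least once.
for (x, y), expected in [((3, 3), 3), ((4, 3), 1), ((3, 4), 1)]:
    assert gcd(x, y) == expected
print("all statements executed")
```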
5.2.2.2.Branch Coverage:
Branch : A conditional transfer of control from any statement to any other statement in a
component, or an unconditional transfer of control from any statement to any other statement in
the component except the next statement, or when a component has more than one entry point, a
transfer of control to an entry point of the component.
Branch Testing: A test case design technique for a component in which test cases are designed
to execute branch outcomes.
BS7925-1
Branch testing guarantees statement coverage
Example
Test cases for branch coverage can be: {(x=3, y=3), (x=4, y=3), (x=3, y=4)}
5.2.2.3.Condition Coverage:
BS7925-1
Test cases are designed such that:
Each component of a composite conditional expression is given both true and false values.
Example
Consider the conditional expression ((c1.and.c2).or.c3):
Each of c1, c2 and c3 is exercised at least once, i.e. given both true and false values.
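A sketch of one way to check condition coverage for ((c1 and c2) or c3): record the truth values each condition takes across a test set and verify that every condition has seen both outcomes. The two test vectors chosen here are illustrative:

```python
def expr(c1, c2, c3):
    """The composite condition from the text: (c1 and c2) or c3."""
    return (c1 and c2) or c3

# An illustrative test set in which each condition takes both True and False.
test_set = [
    (True, True, False),
    (False, False, True),
]

seen = {"c1": set(), "c2": set(), "c3": set()}
for c1, c2, c3 in test_set:
    expr(c1, c2, c3)  # evaluate the decision under test
    seen["c1"].add(c1)
    seen["c2"].add(c2)
    seen["c3"].add(c3)

# Condition coverage holds when every condition has been both True and False.
covered = all(vals == {True, False} for vals in seen.values())
print(covered)  # → True
```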
5.2.2.4.Path Coverage:
Path: A sequence of executable statements of a component, from an entry point to an exit point.
Path testing: A test case design technique in which test cases are designed to execute paths of
a component.
BS7925-1
Flow graph notations exist for the basic control constructs:
Sequence
If
While
Until
Case
On a flow graph:
Arrows called edges represent flow of control
Circles called nodes represent one or more actions
Areas bounded by edges and nodes are called regions
A predicate node is a node containing a condition
Any procedural design can be translated into a flow graph. Note that compound Boolean expressions in tests generate at least two predicate nodes and additional arcs.
Cyclomatic Complexity:
The Cyclomatic complexity gives a quantitative measure of the logical complexity.
Introduced by Thomas McCabe in 1976, it measures the number of linearly-independent
paths through a program module. This measure provides a single ordinal number that can
be compared to the complexity of other programs. Cyclomatic complexity is often
referred to simply as program complexity, or as McCabe's complexity.
This value gives the number of independent paths in the Basis set, and an upper bound for
the number of tests to ensure that each statement is executed at least once. An
independent path is any path through a program that introduces at least one new set of
processing statements or a new condition (i.e., a new edge)
Cyclomatic complexity (CC) = E - N + 2P
Where
E = the number of edges of the graph
N = the number of nodes of the graph
P = the number of connected components
The example flow graph (nodes 1 to 8, with node 7 split into 7a and 7b) has a cyclomatic complexity of 4.
Independent paths:
1. 1, 8
2. 1, 2, 3, 7b, 1, 8
3. 1, 2, 4, 5, 7a, 7b, 1, 8
4. 1, 2, 4, 6, 7a, 7b, 1, 8
Cyclomatic complexity provides an upper bound for the number of tests required to guarantee coverage of all program statements. As one of the more widely accepted software metrics, it is intended to be independent of language and language format.
Deriving Test Cases
1. Using the design or code, draw the corresponding flow graph.
2. Determine the Cyclomatic complexity of the flow graph.
3. Determine a basis set of independent paths.
4. Prepare test cases that will force execution of each path in the basis set.
Note: Some paths may only be able to be executed as part of another test.
BS7925-1
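The numbers above can be checked mechanically. A short sketch that computes McCabe's CC = E - N + 2P for the example graph; the edge list is reconstructed from the four independent paths listed in the text:

```python
# Edges reconstructed from the independent paths in the text.
edges = [
    ("1", "8"), ("1", "2"), ("2", "3"), ("3", "7b"), ("7b", "1"),
    ("2", "4"), ("4", "5"), ("5", "7a"), ("7a", "7b"),
    ("4", "6"), ("6", "7a"),
]

def cyclomatic_complexity(edges, components=1):
    """McCabe's measure: CC = E - N + 2P."""
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2 * components

# 11 edges, 9 nodes, 1 component: 11 - 9 + 2 = 4
print(cyclomatic_complexity(edges))  # → 4
```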
Grey-box testing is a newer term, which evolved due to the different architectural usage of systems. It is a combination of both black-box and white-box testing: the tester should have knowledge of both the internals and externals of the function, i.e. a good knowledge of white-box testing and a complete knowledge of black-box testing.
Grey-box testing is especially important with Web and Internet applications, because the Internet is built around loosely integrated components that connect via relatively well-defined interfaces.
6.Difference Tables
6.1.Quality Vs Testing
Quality
Testing
6.2.Testing Vs Debugging
Testing: Testing is done to find bugs.
Debugging: Debugging is the art of fixing bugs.
Quality Control Vs Validation:
Quality Control: A study of the project for its function and specification. QC is a process by which product quality is compared with applicable standards, and action is taken when nonconformance is detected.
Validation: The process of determining whether a fully developed system conforms to its SRS document.
IST Vs UAT:
IST: compared with the Functional Specification; simulated test data; controlled test environment; component level; tested by the testing firm; a verification activity.
UAT: compared with the Business Requirement; live data; simulated-live test environment; business level; tested by the testing firm / users; a validation activity.
IST needs an integrated system of the various unit-level independent functionalities; it checks their workability after integration and compares it with the behaviour before integration.
Alpha testing Vs Beta testing:
Alpha testing: test data is simulated; the test environment is controlled; aims to achieve functionality; tested by testers only; supporting document used: Functional Specification.
Beta testing: test data is live; the test environment is uncontrolled; aims to achieve user needs; tested by testers and end-users; supporting document used: Customer Requirement Specification.
7.Levels of Testing
7.1.Unit Testing
The testing of individual software components.
BS7925-1
7.1.2.Pre-requisites
Before component testing may begin the component test strategy (2.1.1) and project component
test plan (2.1.2) shall be specified.
Component test strategy
The component test strategy shall specify the techniques to be employed in the design of test
cases and the rationale for their choice. Selection of techniques shall be according to clause 3. If
techniques not described explicitly in this clause are used they shall comply with the 'Other
Testing Techniques' clause (3.13).
The component test strategy shall specify criteria for test completion and the rationale for their
choice. These test completion criteria should be test coverage levels whose measurement shall
be achieved by using the test measurement techniques defined in clause 4. If measures not
described explicitly in this clause are used they shall comply with the 'Other Test Measurement
Techniques' clause (4.13).
The component test strategy shall document the degree of independence required of personnel
designing test cases from the design process, such as:
a) the test cases are designed by the person(s) who writes the component under test;
b) the test cases are designed by another person(s);
c) the test cases are designed by a person(s) from a different section;
d) the test cases are designed by a person(s) from a different organisation;
e) the test cases are not chosen by a person.
The component test strategy shall document whether the component testing is carried out using
isolation, bottom-up or top-down approaches, or some mixture of these.
The component test strategy shall document the environment in which component tests will be
executed. This shall include a description of the hardware and software environment in which all
component tests will be run.
The component test strategy shall document the test process that shall be used for component
testing.
The test process documentation shall define the testing activities to be performed and the inputs
and outputs of each activity.
Standard for Software Component Testing
This Figure illustrates the generic test process described in clause 2.1.1.8. Component Test
Planning shall begin the test process and Checking for Component Test Completion shall end it;
these activities are carried out for the whole component. Component Test Specification,
Component Test Execution, and Component Test Recording may however, on any one iteration,
be carried out for a subset of the test cases associated with a component. Later activities for one
test case may occur before earlier activities for another. Whenever an error is corrected by
making a change or changes to test materials or the component under test, the affected activities
shall be repeated.
7.2.Integration Testing
Testing performed to expose faults in the interfaces and in the interaction between integrated
components
BS7925-1
Entry criteria:
The integration team is adequately staffed and trained in software integration testing.
The integration environment is ready.
The first two software components have:
o Passed unit testing.
o Been ported to the integration environment.
o Been integrated.
Documented evidence that the component has successfully completed unit testing.
Adequate program or component documentation is available.
Verification that the correct version of the unit has been turned over for integration.
Exit criteria:
A test suite of test cases exists for each interface between software components.
All software integration test suites successfully execute (i.e., the tests completely execute
and the actual test results match the expected test results).
Successful execution of the integration test plan
No open severity 1 or 2 defects
Component stability
Guidelines:
The iterative and incremental development cycle implies that software integration testing
is regularly performed in an iterative and incremental manner.
Software integration testing must be automated if adequate regression testing is to occur.
Software integration testing can elicit failures produced by defects that are difficult to
detect during system or launch testing once the system has been completely integrated.
7.2.1.Incremental Testing
7.2.1.1.Top-down Integration
[Figure: top-down integration testing sequence. The Level 1 (control) module is tested first, with subordinate Level 2 and Level 3 modules replaced by stubs until they are integrated.]
Steps:
The main control module is used as the test driver, with stubs for all subordinate modules.
Replace stubs either depth-first or breadth-first.
Replace stubs one at a time.
Test after each module is integrated.
Use regression testing (conducting all or some of the previous tests) to ensure new errors are not introduced.
This verifies major control and decision points early in the design process.
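The steps above can be sketched in miniature. All module names here are hypothetical: the top-level module is exercised while a subordinate module is replaced by a stub returning a canned answer, which is later swapped for the real implementation:

```python
def tax_stub(amount):
    """Stub for a not-yet-integrated subordinate module: returns a canned value."""
    return 0.0

def tax_real(amount):
    """The real subordinate module, integrated later."""
    return amount * 0.25

def compute_invoice(amount, tax_module=tax_stub):
    """Top-level control module under test; the subordinate is injected."""
    return amount + tax_module(amount)

# Top-down step 1: test the control module with the stub in place.
assert compute_invoice(100.0) == 100.0
# Later step: replace the stub with the real module and re-test (regression).
assert compute_invoice(100.0, tax_module=tax_real) == 125.0
print("control module verified with stub, then with real module")
```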
7.2.1.2.Bottom up Integration
An approach to integration testing where the lowest level components are tested first, then used
to facilitate the testing of higher level components. The process is repeated until the component at
the top of the hierarchy is tested.
BS7925-1
Begin construction and testing with atomic modules (lowest level modules).
[Figure: bottom-up integration testing sequence. Level N module clusters are exercised by test drivers; clusters are then combined and testing moves upwards to Level N-1.]
Steps:
Low-level modules are combined into clusters (builds) that perform specific software sub-functions.
A driver program is developed to test each cluster, and the cluster is tested.
Driver programs are removed and clusters are combined, moving upwards in the program structure.
Bottom-up Vs Top-down:
Bottom-up: At any given point, more code has been written and tested than with top-down testing. Some people feel that bottom-up is a more intuitive test philosophy.
Top-down: The control program is tested first. An early working program raises morale and helps convince management that progress is being made. However, it is hard to maintain a pure top-down strategy in practice.
7.2.2.Non-Incremental Testing
7.2.2.1.Big Bang Integration
Integration testing where no incremental testing takes place prior to all the system's components
being combined to form the system.
BS7925-1
7.2.2.2.Validation Testing
Validation testing aims to demonstrate that the software functions in a manner that can be
reasonably expected by the customer.
Tests conformance of the software to the Software Requirements Specification, which should contain a section 'Validation criteria' that is used to develop the validation tests.
When validation tests fail, it may be too late to correct the error prior to scheduled delivery, so a method of resolving deficiencies needs to be negotiated with the customer.
7.2.2.3.Configuration review
An audit to ensure that all elements of the software configuration are properly developed, catalogued, and have the necessary detail to support maintenance.
7.3.System Testing
System testing is the process of testing an integrated system to verify that it meets specified requirements.
BS7925-1
It is further sub-divided into functional and non-functional testing. Exit criteria include:
Successful execution of the system test cases, and documentation that shows coverage of requirements and high-risk system components
System meets pre-defined quality goals
100% of total system functionality delivered
7.3.1.Functional Testing
7.3.1.1.Requirement based Testing
BS7925-1
Requirements testing must verify that the system can perform its function correctly and
that the correctness can be sustained over a continuous period of time. Unless the system
can function correctly over an extended period of time management will not be able to
rely upon the system. The system can be tested for correctness throughout the lifecycle,
but it is difficult to test the reliability until the program becomes operational.
Objectives:
Every application should be requirements tested. The process should begin in the
requirements phase, and continue through every phase of the life cycle into operations
and maintenance. It is not a question as to whether requirements must be tested but,
rather, the extent and methods used in requirements testing.
BS7925-1
Non-Functional testing types:
Configuration
Compatibility
Conversion
Disaster Recovery
Interoperability
Installability
Memory Management
Maintainability
Portability
Performance
Procedure
Reliability
Recovery
Stress
Security
Usability
7.3.2.1.Recovery testing
Testing aimed at verifying the system's ability to recover from varying degrees of failure.
BS7925-1
Recovery is the ability to restart operations after the integrity of the application has been
lost. The process normally involves reverting to a point where the integrity of the system
is known, and then reprocessing transactions up until the point of failure. The importance
of recovery will vary from application to application.
Objectives:
Recovery testing is used to ensure that operations can be continued after a disaster. It verifies not only the recovery process, but also the effectiveness of the component parts of that process.
7.3.2.2.Security testing
Testing whether the system meets its specified security objectives.
BS7925-1
Security is a protection system that is needed both to secure confidential information and, for competitive purposes, to assure third parties that their data will be protected. Protecting the confidentiality of the information is designed to protect the resources of the organization. Security testing objectives include:
Determining that adequate attention has been devoted to identifying security risks
Determining that a realistic definition and enforcement of access to the system has been implemented
Determining that sufficient expertise exists to perform adequate security testing
Conducting reasonable tests to ensure that the implemented security measures function properly
7.3.2.3.Stress testing
Testing conducted to evaluate a system or component at or beyond the limits of its specified
requirements.
BS7925-1
Stress testing is designed to test the software in abnormal situations. It attempts to find the limits at which the system will fail through an abnormal quantity or frequency of inputs.
7.3.2.4.Performance testing
Testing conducted to evaluate the compliance of a system or component with specified
performance requirements.
IEEE
Performance testing is designed to test the run-time performance of software within the context of an integrated system. It is not until all system elements are fully integrated and certified as free of defects that the true performance of a system can be ascertained.
Performance tests are often coupled with stress testing and often require both hardware and software instrumentation; that is, it is necessary to measure resource utilization in an exacting fashion. External instrumentation can monitor execution intervals and log events. By instrumenting the system, the tester can uncover situations that lead to degradation and possible system failure.
7.4.Acceptance Testing
Beta testing: Operational testing at a site not otherwise involved with the software developers.
BS7925-1
This is testing of an operational nature once the software seems stable. It should be
conducted by people who represent the software vendor's market, and who will use the
product in the same way as the final version once it is released. The benefit of this type of
acceptance testing is that it will bring out operational issues from potential customers
prepared to comment on the software before it is officially released.
Alpha testing is conducted at the developer's site by a customer. The customer uses the software with the developer 'looking over the shoulder', recording errors and usage problems. Alpha testing is conducted in a controlled environment.
Beta testing is conducted at one or more customer sites by end users. It is 'live' testing in an
environment not controlled by the developer. The customer records and reports difficulties and
errors at regular intervals.
7.4.1.Entry Criteria
7.4.2.Exit Criteria
All test scenarios/conditions will be executed, and reasons will be provided for untested conditions arising out of the following situations:
Non-availability of the functionality
Deferred to a future release
All defects reported are in the Closed or Deferred status. The client team should sign off the Deferred defects.
Benefits of test automation:
Ensure consistency
Speed up testing to accelerate releases
Allow testing to happen more frequently
Reduce costs of testing by reducing manual labor
Improve the reliability of testing
Define the testing process and reduce dependence on the few who know it
8.Types of Testing
8.1.Compliance Testing
8.2.Intersystem Testing
Intersystem testing is designed to check and verify that the interconnections between applications function correctly.
Applications are frequently interconnected to other systems. The interconnection may be data coming into the system from another application or leaving for another application, frequently in multiple cycles.
Intersystem testing involves the operation of multiple systems under test. The basic need for an intersystem test arises whenever there is a change in parameters between application systems, or where multiple systems are integrated in cycles.
8.3.Parallel Testing
The process of comparing test results of processing production data concurrently in both
the old and new systems.
Process in which both the old and new modules run at the same time so that performance
and outcomes can be compared and corrected prior to deployment; commonly done with
modules like Payroll.
Testing a new or an alternate data processing system with the same source data that is used in another system; the other system is considered the standard of comparison.
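The parallel run described above can be sketched as follows; the payroll functions and the record format are invented for illustration. Both the old and new modules are fed the same source data and any mismatches are reported before deployment:

```python
def old_payroll(hours, rate):
    """Legacy module: the standard of comparison."""
    return round(hours * rate, 2)

def new_payroll(hours, rate):
    """Replacement module under parallel test."""
    return round(hours * rate, 2)

def parallel_run(records):
    """Run both systems on the same source data; collect mismatches."""
    mismatches = []
    for hours, rate in records:
        old, new = old_payroll(hours, rate), new_payroll(hours, rate)
        if old != new:
            mismatches.append((hours, rate, old, new))
    return mismatches

source_data = [(40, 12.5), (37.5, 20.0), (45, 9.75)]
print(parallel_run(source_data))  # → [] when the two systems agree
```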
8.4.Database Testing
The database component is a critical piece of any data-enabled application. Todays intricate mix
of client-server and Web-enabled database applications is extremely difficult to Test productively.
Testing at the data access layer is the point at which your application communicates with the
database. Tests at this level are vital to improve not only your overall Test strategy, but also your
products quality.
Database testing includes the process of validation of database stored procedures, database
triggers; database APIs, backup, recovery, security and database conversion.
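A small sketch of a data-access-layer test using Python's built-in sqlite3 module; the schema and query are invented for illustration. Known rows are inserted, then the query the application relies on is asserted to return exactly what is expected:

```python
import sqlite3

# An in-memory database stands in for the application's real backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [(1, 100.0), (2, 250.0), (3, 0.0)])

def get_funded_accounts(conn):
    """The data-access call under test: accounts with a positive balance."""
    rows = conn.execute(
        "SELECT id FROM accounts WHERE balance > 0 ORDER BY id").fetchall()
    return [r[0] for r in rows]

# Validate the query against the known data set.
assert get_funded_accounts(conn) == [1, 2]
print("data access layer query validated")
```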
8.5.Manual Support Testing
Manual support testing involves, first, the evaluation of the adequacy of the process and, second, the execution of the process. The method of testing may vary, but the objective remains the same.
8.6.Ad-hoc Testing
Testing carried out using no recognised test case design technique.
BS7925-1
Testing without a formal test plan or outside of a test plan. With some projects this type of testing
is carried out as an adjunct to formal testing. If carried out by a skilled tester, it can often find
problems that are not caught in regular testing. Sometimes, if testing occurs very late in the
development cycle, this will be the only kind of testing that can be performed. Sometimes ad hoc
testing is referred to as exploratory testing.
8.7.Configuration Testing
Testing to determine how well the product works with a broad range of hardware/peripheral
equipment configurations as well as on different operating systems and software.
8.8.Pilot Testing
Testing that involves the users just before actual release to ensure that users become familiar with the release contents and ultimately accept it. It is often considered a Move-to-Production activity for ERP releases or a beta test for commercial products. It typically involves many users, is conducted over a short period of time and is tightly controlled.
8.9.Automated Testing
Software testing that utilizes a variety of tools to automate the testing process, reducing the need for a person to test manually. Automated testing still requires a skilled quality assurance professional, with knowledge of the automation tool and the software being tested, to set up the tests.
8.10.Mutation Testing:
The software is first tested using an initial testing method based on the white-box strategies we already discussed. After the initial testing is complete, mutation testing is taken up. The idea behind mutation testing is to make a few arbitrary small changes to the program, one at a time; each time the program is changed, the changed program is called a mutated program and the change is called a mutant.
A mutated program is tested against the full test suite of the program. If there is at least one test case in the test suite for which a mutated program gives an incorrect result, the mutant is said to be dead. If a mutant remains alive even after all test cases have been exhausted, the test suite is enhanced to kill the mutant.
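A sketch of the idea, with an invented function and hand-written mutants (a real mutation tool generates these automatically): each mutant applies one small change, and a mutant is killed when some test case produces a wrong result.

```python
def max_of(a, b):
    """Original program under test (illustrative, not from the text)."""
    return a if a >= b else b

# Each entry is the program with one small arbitrary change applied.
mutants = {
    "'>=' changed to '<='": lambda a, b: a if a <= b else b,
    "always return first operand": lambda a, b: a,
    "true/false branches swapped": lambda a, b: b if a >= b else a,
}

test_suite = [((3, 5), 5), ((5, 3), 5), ((4, 4), 4)]

def killed(mutant):
    """A mutant is dead if at least one test case yields an incorrect result."""
    return any(mutant(*args) != expected for args, expected in test_suite)

for change, mutant in mutants.items():
    print(change, "->", "killed" if killed(mutant) else "ALIVE: enhance the suite")
```

A surviving mutant signals a weakness in the test suite, which is then extended until the mutant is killed.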
8.11.Load Testing
Load Testing involves stress testing applications under real-world conditions to predict system
behavior and performance and to identify and isolate problems. Load testing applications can
emulate the workload of hundreds or even thousands of users, so that you can predict how an
application will work under different user loads and determine the maximum number of concurrent
users accessing the site at the same time.
Testing with the intent of determining how well a product performs when a load is placed on the
system resources that nears and then exceeds capacity.
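A toy sketch of the idea using threads to emulate concurrent users; the request handler and user count are invented, and real load tests use dedicated tools rather than in-process threads:

```python
import threading
import time

def handle_request(results, user_id):
    """Stand-in for one user's request against the system under test."""
    start = time.perf_counter()
    time.sleep(0.01)          # simulated service time
    results[user_id] = time.perf_counter() - start

def run_load(concurrent_users):
    """Emulate a workload of N concurrent users; return per-user response times."""
    results = {}
    threads = [threading.Thread(target=handle_request, args=(results, u))
               for u in range(concurrent_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

times = run_load(50)
print(len(times), "users served; worst response %.3fs" % max(times.values()))
```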
8.12.Volume Testing
Volume Testing, as its name implies, is testing that purposely subjects a system (both hardware
and software) to a series of tests where the volume of data being processed is the subject of the
test. Such systems can be transactions processing systems capturing real time sales or could be
database updates and or data retrieval.
8.13.Usability Testing
Testing the ease with which users can learn and use a product.
BS7925-1
9.Roles and Responsibilities
9.1.Test Associate
Reporting To:
Team Lead of a project
Responsibilities:
Design and develop test conditions and cases with associated test data, based upon requirements
Design test scripts
Execute the testware (conditions, cases, test scripts, etc.) with the test data generated
Review testware, record defects, retest and close defects
Prepare reports on test progress
9.2.Test Engineer
Reporting To:
Team Lead of a project
Responsibilities:
Design and develop test conditions and cases with associated test data, based upon requirements
Design test scripts
Execute the testware (conditions, cases, test scripts, etc.) with the test data generated
Review testware, record defects, retest and close defects
Prepare reports on test progress
Responsible for collecting requirements from the users, evaluating them and sending them out for team discussion
Preparation of the high-level design document, incorporating the feedback received, and initiation of the low-level design document
Assist in the preparation of test strategy document drawing up the test plan
Preparation of business scenarios, supervision of test cases preparation based on the
business scenarios
Maintaining the run details of the test execution, Review of test condition/cases, test
scripts
Defect Management
Preparation of test deliverable documents and defect metrics analysis report
9.4.Test Lead
Reporting To:
Test Manager
Responsibilities:
Technical leadership of the test project including test approach and tools to be used
Preparation of test strategy
9.5.Test Manager
Reporting To:
Management
Responsibilities:
10.
10.1.Baseline Documents
Construction of an application and its testing are done using certain documents. These documents are written in sequence, each derived from the previous one.
10.1.2.Functional Specification
The document that describes in detail the characteristics of the product with regard to its
intended capability.
BS7925-1
The Functional Specification document describes the functional needs, design of the flow and
user maintained parameters. It is primarily derived from Business requirement document, which
specifies the client's business needs. The proposed application should adhere to the
specifications specified in the document. This is used henceforth to develop further documents for
software construction, validation and verification of the software.
10.1.3.Design Specification
The Design Specification document is prepared based on the functional specification. It contains
the system architecture, table structures and program specifications. This is ideally prepared and
used by the construction team. The test team should also have a detailed understanding of the
design specification in order to understand the system architecture.
10.1.4.System Specification
The System Specification document is a combination of Functional specification and design
specification. This is used in case of small application or an enhancement to an application.
Case Study on each document and reverse presentation
10.2.Traceability
10.2.1.BR and FS
The requirements specified by the users in the business requirement document may not be exactly translated into a functional specification. Therefore, the specifications in the functional specification and the business requirements are traced against each other on a one-to-one basis. This helps in finding the gaps between the documents. These gaps are then closed by the author of the FS, or deferred after discussions.
Testers should understand these gaps and use them as an addendum to the FS, after getting this signed off by the author of the FS. The final form of the FS may vary from the original, as deferring or taking in a gap may have a ripple effect on the application. Sometimes these ripple effects may not be reflected in the FS, and addendums may affect the entire system and the test case development.
10.3.Gap Analysis
This is the terminology used for finding the difference between "what it should be" and "what it is". As explained, it is done on the business requirements against the FS, and on the FS against the test conditions. It then follows that the business requirements, which are the users' needs, are tested, since the business requirements and the test conditions are matched.
Simplifying the above,
A=Business requirement
B=Functional Specification
C=Test conditions
A=B, B=C, Therefore A=C
Another way of looking at this process is that it eliminates as many mismatches as possible at every stage, thereby giving the customer an application which will satisfy their needs.
In the case of UAT, there is a direct translation of specifications from the business requirements to test conditions, leaving a smaller loss of understandability.
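The A=B, B=C chain above can be sketched as a simple traceability check over requirement identifiers; the IDs and document contents here are invented:

```python
# Requirement IDs carried by each document in the chain.
business_reqs   = {"BR-1", "BR-2", "BR-3"}   # A: business requirements
functional_spec = {"BR-1", "BR-2"}           # B: BR-3 not yet translated
test_conditions = {"BR-1", "BR-2"}           # C: derived from the FS

def gaps(upstream, downstream):
    """Specifications present upstream but missing downstream."""
    return upstream - downstream

print("BR -> FS gaps:", gaps(business_reqs, functional_spec))    # → {'BR-3'}
print("FS -> TC gaps:", gaps(functional_spec, test_conditions))  # → set()
# Once both gap sets are empty, A=B and B=C, and therefore A=C:
# every business requirement is covered by a test condition.
```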
The testing technique varies based on the projects and risks involved in the project.
It is determined by the criticality and risks involved with the Application under Test (AUT).
The technique used for testing will be chosen based on the organizational needs of the end user and on the critical risk factors or test factors that impact the system.
The technique adopted will also depend on the phases of testing
The two factors that determine the test technique are
Test factors: the risks that need to be addressed in testing
Test Phases: the phase of the systems development life cycle in which testing will
occur.
It also depends on the time and money spent on testing.
10.5.Error Guessing
A test case design technique where the experience of the tester is used to postulate what faults
might occur, and to design tests specifically to expose them.
BS7925-1
10.6.Error Seeding
10.7.Test Plan
This is a summary of the ANSI/IEEE Standard 829-1983. It describes a test plan as:
A document describing the scope, approach, resources, and schedule of intended testing
activities. It identifies test items, the features to be tested, the testing tasks, who will do each task,
and any risks requiring contingency planning.
(ANSI/IEEE Standard 829-1983)
This standard specifies the following test plan outline:
A unique identifier
10.7.2.Introduction
10.7.3.Test Items
10.7.4.Features to be Tested
10.7.5.Features Not to be Tested
All features and significant combinations of features which will not be tested
10.7.6.Approach
For each major group of features or combinations of features, specify the approach
Specify major activities, techniques, and tools which are to be used to test the groups
Identify significant constraints on testing, such as test-item availability, testing-resource availability, and deadlines
10.7.7.Item Pass/Fail Criteria
Specify the criteria to be used to determine whether each test item has passed or failed testing
10.7.9.Test Deliverables
Identify the deliverable documents: test plan, test design specifications, test case
specifications, test procedure specifications, test item transmittal reports, test logs,
test incident reports, test summary reports
10.7.10.Testing Tasks
10.7.11.Environmental Needs
Identify the source for all needs which are not currently available
10.7.12.Responsibilities
Identify groups responsible for providing the test items identified in the Test Items
section
Identify groups responsible for providing the environmental needs identified in the
Environmental Needs section
10.7.14.Schedule
Specify the names and titles of all persons who must approve the plan
10.8.1.Processing logic
It may not be possible to segment the specifications into the above categories in all applications. It
is left to the test team to decide on the application segmentation. For the segments identified by
the test team, the possible condition types that can be built are
Positive condition - Polarity of the value given for test is to comply with the condition
existence.
Negative condition - Polarity of the value given for test is not to comply with the
condition existence.
Boundary condition - Polarity of the value given for test is to assess the extreme values
of the condition.
User perspective condition - Polarity of the value given for test is to analyze the
practical usage of the condition.
10.8.2.Data Definition
In order to test the conditions and values, the application should be populated with data.
There are two ways of populating data into the tables of the application.
Intelligent: Data is tailor-made for every condition and value, with reference to the
condition. Such data will aid in triggering specific actions in the application. By constructing
intelligent data, a few data records will suffice for the testing process.
Example:
Business rule: if the interest to be paid is more than 8 % and the tenor of the deposit
exceeds one month, then the system should give a warning.
To populate the interest-to-be-paid field of a deposit, we can give 9.5478 and make the
tenor two months for a particular deposit. This will trigger the warning in the
application.
Unintelligent: Data is populated in mass, corresponding to the table structures. Its values
are chosen at random, without reference to the conditions derived. This type of
population can be used for testing the performance of the application and its behavior on
random data. It will be difficult for the tester to identify the records he requires from the
mass of data.
Example:
Using the above example, finding a suitable record with interest exceeding 8 % and the
tenor being more than two months is difficult.
Having now understood the difference between intelligent and unintelligent data, and
having by this point a good idea of the application, the tester should be able to design
intelligent data for his test conditions.
An application may have its own hierarchy of interconnected data structures.
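The distinction between intelligent and unintelligent data can be sketched in a few lines. The rule, field names and record counts below are illustrative, taken from the interest/tenor example above:

```python
import random

# Hedged sketch of the business rule from the example above (field
# names are illustrative): warn when interest > 8 % and the tenor
# of the deposit exceeds one month.
def needs_warning(interest_rate, tenor_months):
    """Business rule under test."""
    return interest_rate > 8.0 and tenor_months > 1

# Intelligent data: one record tailor-made to trigger the warning.
intelligent_record = {"interest_rate": 9.5478, "tenor_months": 2}

# Unintelligent data: mass records with random values; only some of
# them, by chance, will exercise the rule.
random.seed(42)
unintelligent_records = [
    {"interest_rate": round(random.uniform(0, 15), 4),
     "tenor_months": random.randint(1, 12)}
    for _ in range(1000)
]

assert needs_warning(**intelligent_record)  # guaranteed trigger
hits = sum(needs_warning(**r) for r in unintelligent_records)
print(f"{hits} of 1000 random records trigger the warning")
```

The single intelligent record is guaranteed to exercise the rule, while with random data the tester must search the mass of records for one that happens to qualify.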
10.8.3.Feeds Analysis
Most applications are fed with inputs at periodic intervals, such as end of day or every hour. Some
applications may be stand-alone, i.e., all processing happens within their own database and no external
inputs of processed data are required.
Applications that have feeds receive them from other machines in a predefined format. These
feeds, at the application end, are processed by local programs and populated into the respective
tables.
It is therefore essential for testers to understand the data mapping between the feeds and the
database tables of the application. Usually, a document is published in this regard.
The high-level data designed previously should be translated into the feed formats, in order to
populate the application database.
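The feed-to-table mapping can be sketched as below. The fixed-width layout, field names and widths are all invented for illustration; in practice they come from the published mapping document:

```python
# Hypothetical fixed-width deposit feed layout: (column name, start, end).
# The names and widths are assumptions for illustration only.
FEED_LAYOUT = [
    ("account_no", 0, 10),
    ("interest_rate", 10, 17),
    ("tenor_months", 17, 20),
]

def parse_feed_line(line):
    """Map one feed record onto the application's table columns."""
    row = {}
    for name, start, end in FEED_LAYOUT:
        row[name] = line[start:end].strip()
    # The local programs would typically convert types before
    # populating the tables.
    row["interest_rate"] = float(row["interest_rate"])
    row["tenor_months"] = int(row["tenor_months"])
    return row

row = parse_feed_line("DEP0000123 9.5478  2")
print(row)
```

A tester who understands this mapping can translate the high-level data designed earlier into feed lines and know exactly which table columns each value will land in.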
10.9.Test Case
A set of inputs, execution preconditions, and expected outcomes developed for a particular
objective, such as to exercise a particular program path or to verify compliance with a specific
requirement.
BS7925-1
Test cases are written based on the test conditions. A test case is the phrased form of a test
condition, made readable and understandable by all. The language used in the expected results
should not be ambiguous; the results expressed should be clear and allow only one
interpretation. It is advisable to use the term "should" in the expected results.
There are three headings under which a test case is written, namely:
Description: Here the details of the test on a specification or a condition are written.
Data and Pre-requirements: Here either the data for the test or its specification is
mentioned. Pre-requirements for the test to be executed should also be clearly
mentioned.
Expected results: The expected result of executing the instruction in the
description is mentioned. In general, it should reflect in detail the result of the test
execution.
While writing a test case, to make the test cases explicit, the tester should include the following:
Reference to the rules and specifications under test, in words, with minimal technical
jargon.
Checks on data shown by the application should refer to the table names where possible.
Names of the fields and screens should also be explicit.
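One way to phrase a test case under the three headings described above is as a plain record. The layout and field values here are invented for illustration:

```python
# A test case phrased under the three headings described above; this
# dictionary layout is just one possible representation.
test_case = {
    "id": "TC_DEP_001",
    "description": "Enter the deposit interest rate on the New Deposit "
                   "screen and save the record",
    "data_and_prerequisites": {
        "data": {"interest_rate": 9.5478, "tenor_months": 2},
        "prerequisites": "A client record must already exist",
    },
    # Note the unambiguous 'should' phrasing in the expected result.
    "expected_result": "The system should display a warning, since the "
                       "interest exceeds 8 % and the tenor exceeds one month",
}

for heading in ("description", "data_and_prerequisites", "expected_result"):
    assert heading in test_case  # all three headings are present
```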
10.9.1.Expected Results
The outcome of executing an instruction would have a single or multiple impacts on the
application. The resultant behavior of the application after execution is the expected result.
10.9.2.Pre-requirements
Dates that are to be maintained (pre-dated or post-dated) in the database before testing, as
it is sometimes not possible to predict the dates of testing and populate certain date fields
that are to trigger certain actions in the application.
o Example: The maturity date of a deposit should be the date of the test, so it is difficult to
give the value of the maturity date while designing data or preparing test cases.
10.9.3.Data definition
Data for executing the test cases should be clearly defined in the test cases. They should indicate
the values that will be entered into the fields and also indicate default values of the field.
Example:
Description: Enter Client's name
Data: John Smith
(OR)
Description: Check the default value of the interest for the deposit
Data: $ 400
In the case of calculations, the test cases should indicate the calculated value in the
expected results of the test case.
Example:
Description: Check the default value of the interest for the deposit
Data: $ 400
This value ($400) should have been calculated, using the formula specified well in advance,
during data design.
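Computing the expected figure in advance, rather than guessing it during execution, can be sketched as follows. The simple-interest formula and the amounts are assumptions for illustration, not taken from the original example:

```python
# Hedged sketch: derive the expected default interest in advance so the
# figure in the test case ($400) is calculated, not guessed. The
# simple-interest formula here is an assumption for illustration.
def expected_interest(principal, annual_rate_pct, tenor_months):
    return principal * (annual_rate_pct / 100) * (tenor_months / 12)

# e.g. a $60,000 deposit at 8 % for one month -> $400
value = expected_interest(60_000, 8, 1)
print(value)  # 400.0
```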
11.Test Execution
The preparation to test the application is now over. The test team should next plan the execution
of the test on the application.
11.1.Pre- Requirements
11.1.1.Version Identification Values
The application will contain several program files in order to function. The version of these
files and a unique checksum number for each file is a must for change management.
These numbers will be generated for every program file on transfer from the development
machine to the test environment. The number attributed to each program file is unique and if any
change is made to the program file between the time it is transferred to the test environment and
the time when it is transferred back to the development for correction, it can be detected by using
these numbers. These identification methods vary from one client to another.
These values have to be obtained from the development team by the test team. This helps in
identifying unauthorized transfers or usage of application files by both parties involved.
The responsibility of acquiring, comparing and tracking these values before and after software
transfer lies with the test team.
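The version-identification idea above can be sketched in a few lines: take a checksum of every program file at transfer time, and a later mismatch reveals an unauthorized change. SHA-256 stands in here for whatever method a particular client actually mandates, and the file name is invented:

```python
import hashlib
import os
import tempfile
from pathlib import Path

def checksum(path):
    """Checksum of one program file (SHA-256 as a stand-in method)."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def snapshot(paths):
    """Record file -> checksum at the moment of transfer."""
    return {p: checksum(p) for p in paths}

def changed_files(before, after):
    """Files whose checksum differs between two snapshots."""
    return [p for p in before if before[p] != after.get(p)]

# Demo with a throwaway file standing in for a program file.
with tempfile.TemporaryDirectory() as d:
    prog = os.path.join(d, "prog.cbl")
    Path(prog).write_text("MOVE A TO B.")
    before = snapshot([prog])               # taken on transfer to test
    Path(prog).write_text("MOVE B TO A.")   # change made in between
    after = snapshot([prog])
    tampered = changed_files(before, after)

print(f"{len(tampered)} file(s) changed between transfers")
```

Comparing the two snapshots is how the test team detects transfers or edits that happened between the hand-over and the return of the software.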
To begin an integrated test on the application, the development team should have completed
tests on the software at unit level.
Unit testing focuses verification effort on the smallest unit of software design. Using the
design specification as a guide, important control paths and field validations are tested.
The client and the development team must sign off this stage, and hand over the signed-off
application with the defect report to the testing team.
o Test Plan Internal
o Test Execution Sequence
Test cases can be executed either in a random order or in a sequential fashion. Some
applications have concepts that require sequencing of the test cases before actual
execution. The details of the execution are documented in the test plan.
Sequencing can also be done on the modules of the application, as one module would populate or
formulate information required for another.
K. Muthuvel
11.2.Stages of Testing:
11.2.1.Comprehensive Testing - Round I
All the test scripts developed for testing are executed. In some cases the application may not have
certain module(s) ready for test; these will be covered comprehensively in the next pass. The
testing here should cover not only all the test cases but also the business cycles as defined in the
application.
12.Defect Management
12.1.Defect Definition
Error: A human action that produces an incorrect result.
[IEEE]
BS7925-1
Bug:
Any missing functionality, or any action performed by the system that is not
supposed to be performed, is a bug.
A bug is an error found BEFORE the application goes into production.
Any of the following may be the reason for the birth of a bug:
1. Wrong functionality
2. Missing functionality
3. Extra or unwanted functionality
Defect:
A defect is a variance from the desired attribute of a system or application.
A defect is an error found AFTER the application goes into production.
Defects are commonly categorized into two types:
1. Variance from product specification
2. Variance from customer/user expectation
Failure:
The absence of an expected response to a request; any expected action that is
supposed to happen but does not can be referred to as a failure.
Fault:
A term generally used in hardware terminology: a problem which causes the system not
to perform its task or objective.
12.3.Defect Reporting
Defects or bugs detected in the application by the tester must be duly reported through an
automated tool. The particulars to be filled in by the tester are:
Defect Id: Number associated with a particular defect, and henceforth referred by its ID
Date of execution: The date on which the test case which resulted in a defect was
executed
Defect Category: These are explained in the next section, ideally decided by the test
leader
Severity: As explained, it can be Major, Minor or Show-stopper
Module ID: Module in which the defect occurred
Status: Raised, Authorized, Deferred, Fixed, Re-raised and Closed
Defect description: Description as to how the defect was found, the exact steps that
should be taken to simulate the defect, other notes and attachments if any.
Test Case Reference No: The number of the test case and script in combination which
resulted in the defect.
Owner: The name of the tester who executed the test cases
Test case description: The instructions in the test cases for the step in which the error
occurred
Expected Result: The expected result after the execution of the instructions in the test
case descriptions
Attachments: The screen shot showing the defect should be captured and attached
Responsibility: Identified team member of the development team for fixing the defect.
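The defect record described above can be sketched as a data structure. The field names follow the particulars listed, but any real defect-tracking tool defines its own schema:

```python
from dataclasses import dataclass
from datetime import date

# One possible shape for the defect record described above; the field
# names mirror the particulars listed, but real tools differ.
@dataclass
class Defect:
    defect_id: int
    date_of_execution: date
    category: str
    severity: str           # e.g. "Show-stopper", "Major" or "Minor"
    module_id: str
    status: str = "Raised"  # Raised, Authorized, Deferred, Fixed, Re-raised, Closed
    description: str = ""
    test_case_ref: str = ""
    owner: str = ""
    expected_result: str = ""
    responsibility: str = ""

# Illustrative defect, using the interest/tenor example from earlier.
d = Defect(101, date(2024, 1, 15), "Functional", "Major", "Deposits",
           description="Warning not shown for 9.5 % interest, 2-month tenor")
assert d.status == "Raised"  # every new defect starts in Raised status
```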
12.4.Tools Used
Tools that are used to track and report defects are,
12.4.1.ClearQuest (CQ)
It belongs to the Rational test suite and is an effective tool for defect management. CQ
functions on a native Access database and maintains a common database of defects. With CQ
the entire defect process can be customized. For example, a process can be designed in such a
manner that a defect, once raised, must be authorized and then fixed before it attains
the status of retesting. Such a systematic defect-flow process can be established and its history
maintained. Graphs and reports can be customized and metrics can be
derived from the maintained defect repository.
12.4.3.Defect Tracker
Defect Tracker is a tool developed for defect management by Maveric Systems Ltd., an
independent software testing company in Chennai. The testing team uses this tool to manage,
track and report defects effectively.
12.5.Defects Meetings
Meetings are conducted at the end of every day between the test team and the development team to
discuss test execution and defects. Here, defect categorization is done.
Before meetings with the development team, the test team should hold internal discussions with the
test lead on the defects reported. This process ensures that all defects are
accurate and authentic to the best knowledge of the test team.
12.6.Defects Publishing
Defects that are authorized are published in a mutually accepted medium, such as the Internet or
email.
Reports that are published are:
Daily status report
Summarized defect report for the individual domain / product, if any
Final defect report
Test Down Times:
During the execution of the test, schedules prepared earlier may slip due to certain factors.
Time lost due to these should be duly recorded by the test team.
Server problems
o The test team may come across problems with the server on which the application is
deployed. Possible causes for the problems are:
o The main server on which the application runs may have problems with the number of
instances on it, slowing down the system
o The network to the main server, or the internal network, may go down
Software compatibility with the application and middleware, if any, may cause concerns
delaying the test start
New versions of databases or middleware may not be fully compatible with the applications
Improper installation of system applications may cause delays
Interfaces with applications may not be compatible with the existing hardware setup
Problems on the testing side / development side
Delays can also arise from the test or development teams, such as:
Data designed may not be sufficient or compatible with the application (missing some
parameters of the data)
Maintenance of the parameters may not be sufficient for the application to function
The version transferred for testing may not be the right one
13.
13.1.Sign Off
Sign off Criteria: In order to acknowledge the completion of the test process and certify the
application, the following has to be completed.
All defects raised during the test execution have either been closed or deferred
13.2.Authorities
The following personnel have the authority to sign off the test execution process
13.3.Deliverables
The following are the deliverables to the Clients
Test Strategy
Traceability Matrix
13.4.Metrics
13.4.1.Defect Metrics
Analysis of the defect report is done for management and client information. The analyses
are categorized as follows.
13.4.2.Defect age:
Defect age is the time duration between the point of introduction of a defect and the point of closure
of the defect. This gives a fair idea of the defect set to be included in the smoke test during
regression.
13.4.3.Defect Analysis:
The analysis of the defects can be done based on the severity, occurrence and category of the
defects. As an example, defect density is a metric which gives the ratio of defects in specific
modules to the total defects in the application. Further analysis and derivation of metrics can be
done based on the various components of defect management.
Schedule: Schedule variance is a metric determined by the ratio of the planned duration to the
actual duration of the project.
Effort: Effort variance is a metric determined by the ratio of the planned effort to the actual
effort exercised for the project.
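The metrics defined above can be sketched as small functions. The definitions follow the text (planned-to-actual ratios); the sample inputs are illustrative:

```python
from datetime import date

def defect_age(raised, closed):
    """Days between introduction (detection) and closure of a defect."""
    return (closed - raised).days

def defect_density(module_defects, total_defects):
    """Ratio of a module's defects to all defects in the application."""
    return module_defects / total_defects

def schedule_variance(planned_duration, actual_duration):
    """Ratio of planned to actual project duration, per the definition above."""
    return planned_duration / actual_duration

def effort_variance(planned_effort, actual_effort):
    """Ratio of planned to actual effort, per the definition above."""
    return planned_effort / actual_effort

# Illustrative values only.
assert defect_age(date(2024, 3, 1), date(2024, 3, 11)) == 10
assert defect_density(12, 48) == 0.25       # module holds a quarter of all defects
assert schedule_variance(30, 40) == 0.75    # project ran over schedule
```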
14.
The process of testing involves a sequence of phases with several activities in each phase.
Each phase of testing has various documents to be maintained that track the progress of testing
activities and help with future reference.
The testing deliverables of different phases are significant for monitoring the testing process and
for process improvement. It plays a significant role in
15.
15.1.Overview
Maveric Systems is an independent, specialist software testing service provider. We
significantly enhance the functionality, usability, and performance of technology solutions
deployed by our clients. We bring in a fresh perspective and rigor to testing engagements by
virtue of our dedicated focus.
Complementing our core service offering in testing is a strong capability in strategic technical
writing and risk management (verification) services.
Our forte lies in banking, financial services and insurance verticals. We have also delivered
projects in telecom, CRM, and healthcare domains. Domain expertise in chosen areas
enables us to understand client requirements quickly and completely to offer a value-added
testing service.
A core group of anchor clients have propelled us to become a key independent software
testing firm in India within a short span of three years.
15.2.Leadership Team
Exceptional management bandwidth is a unique strength that has fuelled Maveric's
aggressive growth. Our five founding Directors of Maveric took to entrepreneurship while
they were at the prime of their careers in business and technology consulting.
A Maveric spirit and collective vision is nurturing our unique culture, and driving us to
relentlessly deliver value to our clients.
Ranga Reddy - CEO
Mahesh VN - Director
Venkatesh P - Director
Subramanian NN - Director
Kannan Sethuraman - Principal
P K Bandyopadhyay - Manager - Testing
Rosario Regis - Manager - Testing
Hari Narayana - Manager - Testing
AP Narayanan - Manager - Technical Writing
Sajan CK - Manager - Fulfillment
15.3.Quality Policy
We will significantly enhance the functionality, usability, and performance of IT
solutions deployed by our clients.
[Process flow key: Input, Output, Signoff]
Input
Signed proposal
Procedure
Arrange internal kick-off meeting among the team members, Test Manager, Lead
- Commercial and Lead - Operations
Distribute the baseline documents based on the individual roles defined.
Prepare a micro level schedule indicating the roles allocated for all team members
with timelines in MPP
Fill in the Project Details form and the Top Level Project Checklist
Output
Minutes of meeting
MPP to be attached to Test Strategy (Test Planning process)
Project Details form
Top Level Project Checklist to Test Delivery Management
Input
Baseline documents
MPP
Procedure
Team members understand the functionality from baseline documents
Raise and obtain resolution of clarifications
Internal presentation and reverse presentation to the client
TL defines the test environment and requests the client for the same
TL identifies associated risks and monitors the project-level risks throughout the
project. The risks at the project delivery level are maintained by the Lead - Ops
TL prepares the Test Strategy; review is by Lead - Commercial, Lead - Ops and Test
Manager
AM revises commercials if there is a marked difference between the Test Strategy and the
Proposal
TL prepares the Configuration Management and Quality Plan; review is by Lead - Ops and Test Manager
Output
Clarification document
Test Environment Request to client
Risk Analysis document to Test Delivery Management
Input
Test Strategy
Procedure
Team members prepare the Test Conditions, Test Cases and Test Script
TL prepares the Test Scenarios (free form)
Output
Test Condition/Test Case document, Test Script, Test Scenarios (free form)
Traceability matrix to the client
Daily and Weekly status reports to client, Test Delivery Management and Account
Management
Input
Test Conditions/ Test Cases document
Test Scenarios document
Traceability matrix
Procedure
Validate test environment and deal with any issues
Execute first rounds of testing
Update the Test Condition/Test case document (and the Test Scripts, if prepared)
with actual result and status of test
Log defects in the Defect Report, consolidate all defects and send to the client and Test Manager;
conduct defect review meetings with the client (as specified in the Test Strategy)
Consolidate the Test Conditions/Test Cases to be executed in the subsequent
round in a separate document. No review or sign-off required
Carry out the test in the new version of the application
Changes to baseline or scope of work escalated to Lead - Ops
Complete rounds/stages of testing as agreed in the Test Strategy
Send daily and weekly status reports to clients and the Test Delivery Management
team
Escalate any changes in baseline documents to Delivery Management team.
Output
Defect Report
Daily and Weekly status reports to the client, Test Delivery Management and
Account Management
Input
Consolidated Defect Report
Procedure
Team Lead, in consultation with Lead - Ops, decides about closure of a project
(both complete and incomplete)
In case of incomplete testing, the decision whether to close and release deliverables
is taken by the delivery management team
The team prepares the quantitative measurements
TL prepares the Final Testing Checklist and Lead - Ops approves the same
TL prepares the Final Test Report; Lead - Ops and Test Manager review the same.
Internal review records and client review records are also stored.
Test Manager, Lead - Ops, Lead - Comm., Account Manager, TL and team members
carry out the de-brief meeting. Effort variances and % compliance to schedule are
documented in the Project De-brief form. If required, inputs are given to the Quality
Department for process improvements
Output
Final Testing Checklist and Final Test Report to the client
Project De-brief to Quality Department
Name of Project
Client
Location
Contact at Client Location
Name:
Designation:
Email:
Phone:
Mobile:
Name:
Designation:
Email:
Phone:
Mobile:
Name:
Designation:
Email:
Phone:
Mobile:
Duration of Testing
Test Summary
From
To
Level of Testing
White Box Testing
Black Box Testing
Manual Testing
Automation Testing
Type of Testing
Functional Testing
Regression Testing
Performance Testing
Automation Tools, if any
Defect tracking / Project management
tools used, if any
Application Summary
Application Overview
OS with version
Database Backend
Middleware
Front-end
Software languages used
Module Summary
Module Name
Description
Testers Summary
Meeting Date and Time:
Participants:
Absentees:
Previous Meeting Follow up:
Prepared By:
Approved By:
Date:
Date:
Details
Version
Client side
Hardware Details like RAM, Hard Disk Capacity etc.
Operating System
Client software to be installed
Front end language, tools
Browser support
Internet connection
Automation Tools to be installed
Utility software like Toad, MS Project etc.
Server side Middle Tier
Hardware Platform.
No. of CPU
RAM
Hard Disk capacity
Operating System
Software
Number of Servers
Location of Server
Server side Database
Hardware Platform.
No. of CPU
RAM
Hard Disk capacity
Operating System
Software
CPU
RAM
Hard Disk capacity
Number of Database Servers
Location of Database Servers
Project Code:
Project Name:
Phase of the Testing Life Cycle:
Application Version No:
Round:
Report Date:
Highlights of the Day:
A1. Design Phase
Module | S No | No of Test Conditions Designed | No of Test Cases Designed | Remarks
Status | Total no of Conditions in the module | No. Executed during the day (Planned / Actual) | Remarks
B. Defect Distribution
Critical | Major | Minor | Total
Defects Raised Today (A)
Defects Raised Till Yesterday (B)
Total (A + B)
Defects Closed (C)
Balance Open (A + B - C)
Fixed but to be re-tested
Rejected
C. Environment Non-availability
From Time | To Time | Time Lost | Man-hours Lost | Total
Action Proposed | Proposed Target Date | Responsibility | Remarks
E. General Remarks:
Prepared By:
Date:
Approved By:
Date:
Project Code:
Project Name:
Phase of the Testing Life Cycle:
Application Version No:
Report Date:
Report for Week:
A. Life Cycle/Process
Planned: Start Date | End Date | Man-months
Revised: Start Date | End Date
Actual: Start Date | Man-months utilized till date
Reasons | Projected man-months till closure | Projected End Date
No of Test Conditions Designed | No of Test Cases Designed | Remarks
SRs/Transaction | Status | Total no of conditions in the module
No of conditions executed during the week (Planned / Actual)
Total executed till date (Planned / Actual) | Remarks
C. Defect Distribution
Show-stopper | Critical | Major | Minor | Total
Open Defects
Break-up of Open Defects:
Pending Clarifications
Fixed but to be re-tested
Re-raised
Being Fixed
Rejected
D. Environment Non-availability
Total man-hours lost during the week:
Action Proposed | Proposed Target Date | Responsibility | Remarks
F. General Remarks
Prepared By:
Date:
Approved By:
Date:
15.5.14.Defect Report
Project Code:
Project Name:
Prepared By:
Date:
Approved By:
Date:
Check
Have all modules been tested?
Have all conditions been tested?
Have all rounds been completed?
Are all deliverables named according to the naming convention?
Are all increases in scope and timelines tracked in the Top Level Project Checklist,
Test Strategy and design documents?
Have all agreed-upon changes been carried out
(change in scope, client review comments etc.)?
Have all defects been re-tested?
Are all defects in closed or deferred status?
Are all deliverables ready for delivery?
Are all deliverables taken from the Configuration Management tool?
Are all soft copy deliverables checked to be virus free?
Status (Y/N)
Remarks
Project Code:
Project Name:
Prepared By:
Date:
Approved By:
Date:
Overview of the Application:
Key Challenges faced during Design or Execution:
Lessons learnt:
Suggested Corrective Actions:
16.Q&A
16.1.General
Q1: Why does software have bugs?
Ans:
Time pressures - scheduling of software projects is difficult at best, often requiring a lot of
guesswork. When deadlines loom and the crunch comes, mistakes are made.
Poorly documented code - it is tough to maintain and modify code that is badly written or
poorly documented; the result is bugs.
Software development tools - various tools often introduce their own bugs or are poorly
documented, resulting in added bugs.
Testing the fixed code to verify that the bug is really fixed
Q3: What will happen about bugs that are already known?
Ans:
When a program is sent for testing (or a website is given), a list of any known bugs should
accompany it. If a bug is found, the list is checked to ensure that it is not
a duplicate. Any bug not found on the list is assumed to be new.
Q4: What's the big deal about 'requirements'?
Ans:
Requirements are the details describing an application's externally perceived functionality and
properties. Requirements should be clear and documented, complete, reasonably detailed,
cohesive, attainable, and testable. A non-testable requirement would be, for example,
'user-friendly' (too subjective). Without such documentation, there is no clear-cut way to
determine whether a software application is performing correctly.
Low stability (bugs are expected to be easy to find, indicating that the program has
not been tested or has only been very lightly tested)
High stability (bugs are expected to be difficult to find, indicating already well tested)
Focus - Using a dedicated and expert test team frees the development team to focus
on sharpening their core skills in design and development, in their domain areas.
Independent assessment - Independent test team looks afresh at each test project
while bringing with them the experience of earlier test assignments, for different
clients, on multiple platforms and across different domain areas.
Save time - Testing can go in parallel with the software development life cycle to
minimize the time needed to develop the software.
Reduce Cost - Outsourcing testing offers the flexibility of having a large test team,
only when needed. This reduces the carrying costs and at the same time reduces the
ramp up time and costs associated with hiring and training temporary personnel.
Q12: What steps are needed to develop and run software tests?
Ans:
The following are some of the steps needed to develop and run software tests:
Obtain requirements, functional design, and internal design specifications and other
necessary documents
Identify application's higher-risk aspects, set priorities, and determine scope and
limitations of tests
Determine test approaches and methods - unit, integration, functional, system, load,
usability tests, etc.
Determine test input data requirements
Identify tasks, those responsible for tasks, and labor requirements
Determine input equivalence classes, boundary value analyses, error classes
Prepare test plan document and have needed reviews/approvals
Prepare test environment and test ware, obtain needed user manuals/reference
documents/configuration guides/installation guides, set up test tracking processes,
set up logging and archiving processes, set up or obtain test input data
Perform tests
Retest as needed
Maintain and update test plans, test cases, test environment, and test ware through
life cycle
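The "equivalence classes, boundary value analyses" step in the list above falls out mechanically once a field's valid range is known. The range below (a 1-12 month field) is an invented example:

```python
# Hedged illustration of deriving boundary values and equivalence
# classes for a field that accepts 1..12 (e.g. tenor in months).
def boundary_values(low, high):
    """Classic BVA: values at, just inside and just outside each edge."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def equivalence_classes(low, high):
    """Partition the input space into one valid and two invalid classes."""
    return {
        "invalid_below": range(low - 1, low),   # representative: low - 1
        "valid": range(low, high + 1),
        "invalid_above": range(high + 1, high + 2),
    }

assert boundary_values(1, 12) == [0, 1, 2, 11, 12, 13]
assert 6 in equivalence_classes(1, 12)["valid"]
```

One representative from each class, plus the six boundary values, gives a small but systematic input set for the field.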
An Acceptance Test Plan, describing the plan for acceptance testing of the software.
This would usually be published as a separate document, but might be published with
the system test plan as a single document.
A System Test Plan, describing the plan for system integration and testing. This
would also usually be published as a separate document, but might be published with
the acceptance test plan.
A Software Integration Test Plan, describing the plan for integration of tested software
components. This may form part of the Architectural Design Specification.
Unit Test Plan(s), describing the plans for testing of individual units of software.
These may form part of the Detailed Design Specifications.
The objective of each test plan is to provide a plan for verification, by testing the software, that the
software produced fulfils the requirements or design statements of the appropriate software
specification. In the case of acceptance testing and system testing, this means the Requirements
Specification.
Should be perfectionist
Should be tactful and diplomatic
Should be innovative and creative
Should be relentless
Should possess negative thinking with good judgment skills
Should possess the attitude to break the system
Performance Testing
Recovery testing
Sanity Test
Security Testing
Smoke testing
Web Testing
Decision table
Equivalence Partitioning Method
Boundary Value Analysis
Cause Effect Graphing
State Transition Testing
Syntax Testing
White box
Black box
Gray Box
[Diagram: an individual unit passes from Coding & Debugging, through Unit Testing, to Integration Testing]
Unit Testing: Testing of a single unit of code, module or program. It is usually done by the
developer of the unit. It validates that the software performs as designed. The deliverable of
unit testing is a software unit ready for testing with other system components.
Integration Testing: Testing of related programs, modules or units of code. It validates that
multiple parts of the system perform as per specification.
The deliverable of integration testing is parts of the system ready for testing with other portions
of the system.
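The unit-testing definition above can be illustrated with a minimal example. The function under test and its expected values are invented for illustration:

```python
import unittest

def maturity_amount(principal, rate_pct, years):
    """Unit under test (invented example): simple-interest maturity amount."""
    return principal + principal * rate_pct / 100 * years

# A unit test exercises this single unit in isolation, validating that
# it performs as designed before it meets other system components.
class MaturityAmountTest(unittest.TestCase):
    def test_simple_case(self):
        self.assertEqual(maturity_amount(1000, 8, 1), 1080)

    def test_zero_tenor(self):
        self.assertEqual(maturity_amount(1000, 8, 0), 1000)

# Run the unit tests programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(MaturityAmountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Once such tests pass, the unit is the "software unit ready for testing with other system components" that the definition describes.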
47. What is the difference between Volume & Load?
Load testing: the data is kept constant and the number of users is increased till the
saturation point is reached.
Volume testing: the number of users is kept constant and the volume of data is increased
till the saturation point is reached.
Static testing: Testing performed without executing the application, so no response to any
specific request is expected; it is done based on structures, algorithms, logic, etc.
Dynamic testing: Performed on a system that responds to specific requests; above all, this
testing cannot be done without executing the application.
56. What is the difference between alpha testing and beta testing?
Component | Alpha testing | Beta testing
Test data | Simulated | Live
Test Environment | Controlled | Uncontrolled
To Achieve | Functionality | User needs
Tested by | Only testers | Testers and End-users
Supporting Document Used | Functional Specification | Customer Requirement Specification
Baseline Documents.
Stable application.
Enough hardware and software support (e.g. browsers, servers, and tools)
Optimum maintenance of resource
To test how the system can defend itself from external attacks.
How much the system can withstand before breaking while performing its assigned
task.
Many critical software applications and services have integrated security measures
against malicious attacks. The purposes of security testing of these systems include
identifying and removing software flaws that may potentially lead to security violations,
and validating the effectiveness of security measures. Simulated security attacks can be
performed to find vulnerabilities.
62. What is database testing?
To demonstrate the backend response to front-end requests.
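The idea of verifying the backend response to a front-end request can be sketched as below. sqlite3 stands in for the real backend, and the table and column names are assumptions for illustration:

```python
import sqlite3

# In-memory database standing in for the application's real backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE deposits (account_no TEXT, interest_rate REAL)")

def front_end_save(account_no, interest_rate):
    """What the application would do when the user presses Save."""
    conn.execute("INSERT INTO deposits VALUES (?, ?)",
                 (account_no, interest_rate))
    conn.commit()

# Front-end action under test.
front_end_save("DEP0000123", 9.5478)

# The database test: read the row back and compare with the input.
row = conn.execute(
    "SELECT interest_rate FROM deposits WHERE account_no = ?",
    ("DEP0000123",)).fetchone()
assert row[0] == 9.5478  # backend stored exactly what the front end sent
```

This is the check the text describes: the value submitted through the front end is compared against what the backend tables actually hold.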
System study
Understanding the application
Test environment setup
Functions,
Conditions
Loops
Arrays
Structures
77. Can test condition, test case & test script help you in performing the static testing?
Static testing is done based on functions, conditions, loops, arrays and structures,
so these documents are hardly needed; static testing can be done without them.
78. What does dynamic testing mean?
Testing a dynamic application, i.e. a system that responds to user requests, by
executing it is called dynamic testing.
79. Is the dynamic testing a functional testing?
Yes. Regardless of static or dynamic, if the application's functionalities are attacked
with the aim of achieving the need, it comes under functional testing.
80. Is the Static testing a functional testing?
Yes. Regardless of static or dynamic, if the application's functionalities are attacked
with the aim of achieving the need, it comes under functional testing.
81. What is the functional testing you perform?
I have done conformance testing, regression testing, workability testing, function
validation and field-level validation testing.
82. What is meant by Alpha Testing?
Alpha testing is testing of a product or system at the developer's site by the customer.
83. What kind of Document you need for going for a Functional testing?
The functional specification is the ultimate document, as it expresses all the functionalities
of the application; other documents such as the user manual and the BRS are also needed for
functional testing.
A gap analysis document will add value in understanding the expected and existing systems.
84. What is meant by Beta Testing?
Beta testing is user acceptance testing done with the objective of meeting all user needs;
usually users or testers are involved in performing it.
E.g.: after completion, a product is given to customers for trial as a beta version, and
feedback and important suggestions that will add quality are incorporated before release.
85. At what stage the unit testing has to be done?
Unit testing can be done after the coding of each individual functionality is complete.
E.g.: if an application has 5 functionalities that work together and they have been
developed individually, unit testing can be carried out on each before their integration.
Who can perform the Unit Testing?
Both developers and testers can perform this unit level testing
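For illustration, a minimal unit test sketch using Python's unittest module (the `apply_discount` function is a made-up unit): one individually developed functionality is tested in isolation before integration.

```python
import unittest

def apply_discount(price, percent):
    """Unit under test: one individually developed functionality."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    # exit=False so the test run does not terminate the interpreter
    unittest.main(argv=["apply-discount-tests"], exit=False)
```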
86. When will the Verification & Validation be done?
Software Development Phase | How to use Verification | How to use Validation
Requirements gathering     | Verify completeness of requirements | Not usable
Project Planning           | Verify vendor capability, if applicable; verify completeness of the project test plan | Not usable
Project Implementation     | Verify correctness and completeness of interim deliverables; verify the contingency plan | Validate correctness of changes; validate regression; validate that user acceptance criteria are met; validate the supplier's software process; validate software interfaces
86A. What is the testing that a tester performs at the end of Unit Testing?
Change              | New Bug | No New Bug
Successful Change   | Bad     | Good
Unsuccessful Change | Bad     | Bad
Because of the high probability that one of the bad outcomes will result from a change to
the system, it is necessary to do regression testing.
107. When we prefer Regression & what are the stages where we go for Regression Testing?
We Prefer regression testing to provide confidence that changes are correct & has not
affected the flow or Functionality of an application which got Modified or bugs got fixed in
it.
Stages where we go for regression testing are:
Minimization approaches seek to satisfy structural coverage criteria by identifying a
minimal set of tests that must be retested, to identify whether the application works fine.
Coverage approaches are also based on coverage criteria, but do not require
minimization of the test set. Instead, they seek to select all tests that exercise
changed or affected program components.
Safe approaches instead attempt to select every test that will cause the modified
program to produce different output than the original program.
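The coverage approach above can be sketched as follows (module and test names are invented for illustration): keep every test whose covered components intersect the changed components.

```python
# Hypothetical map from each test to the modules it covers.
coverage_map = {
    "test_login":    {"auth", "session"},
    "test_transfer": {"accounts", "ledger"},
    "test_report":   {"ledger", "reports"},
}

def select_regression_tests(changed_modules, coverage_map):
    """Coverage approach: select every test that exercises a changed module."""
    return sorted(test for test, covered in coverage_map.items()
                  if covered & changed_modules)

print(select_regression_tests({"ledger"}, coverage_map))
# → ['test_report', 'test_transfer']
```

A minimization approach would instead pick the smallest subset of these tests that still covers every changed module.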
108. What is performance testing?
An important phase of the system test is often called the load, volume or performance
test; stress tests try to determine the failure point of a system under extreme pressure. Stress
tests are most useful when systems are being scaled up to larger environments or being
implemented for the first time. Web sites, like any other large-scale system that requires
multiple accesses and processing, contain vulnerable nodes that should be tested before
deployment. Unfortunately, most stress testing can only simulate loads on various points
of the system and cannot truly stress the entire network, as the users would experience it.
Testing Type | Data                                        | User
Load         | Constant                                    | Increased till the saturation point is reached
Volume       | Increased till the saturation point is reached | Constant
Stress       | Pushed beyond specified limits              | Pushed beyond specified limits
Emergency
High (Very High & high)
Medium
Low (Very Low & Low)
Testers rate severity based on the defect found in the application. Severity can
be rated as Critical, Major or Minor; it is mostly done based on the nature of the defect
found in the application.
E.g.: when the user is not able to proceed or the system crashes, so that the tester cannot
proceed with further testing (these bugs are rated as Critical).
E.g.: when the user adds a record and then tries to view the same record, and the details
displayed in the fields are not the same values the user provided (these bugs are rated
as Major).
The Business Requirement Document is required for the testers to perform UAT.
The application should be stable (meaning all modules should have been tested at least
once after integrating the modules).
17. Glossary
Testing
The process of exercising software to verify that it satisfies specified requirements and
to detect errors
Quality Assurance
A planned and systematic pattern for all actions necessary to provide adequate
confidence that the item or product conforms to established technical requirements
Quality Control
QC is a process by which product quality is compared with applicable standards, and the
action taken when nonconformance is detected.
Verification
The process of evaluating a system or component to determine whether the products of
the given development phase satisfy the conditions imposed at the start of that phase.
Validation
Determination of the correctness of the products of software development with respect to
the user needs and requirements.
Static Testing Techniques
Analysis of a program carried out without executing the program.
Review - Definition
Review is a process or meeting during which a work product or set of work products, is
presented to project personnel, managers, users, customers, or other interested parties for
comment or approval.
Walkthrough
A review of requirements, designs or code characterized by the author of the material
under review guiding the progression of the review.
Inspection
A group review quality improvement process for written material. It consists of two
aspects; product (document itself) improvement and process improvement (of both
document production and inspection).
Branch : A conditional transfer of control from any statement to any other statement in a
component, or an unconditional transfer of control from any statement to any other
statement in the component except the next statement, or when a component has more
than one entry point, a transfer of control to an entry point of the component.
Path Testing
Path: A sequence of executable statements of a component, from an entry point to an exit
point.
Path testing: A test case design technique in which test cases are designed to execute
paths of a component.
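A tiny illustration of the technique (the `classify` function is hypothetical): the component has two entry-to-exit paths, and one test case is designed per path so that every path is executed at least once.

```python
def classify(n):
    """Component with two entry-to-exit paths: n < 0 and n >= 0."""
    if n < 0:
        return "negative"
    return "non-negative"

# One test case per path, so every path executes at least once.
path_cases = [(-5, "negative"), (7, "non-negative")]
for value, expected in path_cases:
    assert classify(value) == expected
print("both paths exercised")
```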
Data Flow-Based Testing:
Testing in which test cases are designed based on variable usage within the code.
Unit Testing
The testing of individual software components.
Integration Testing
Testing performed to expose faults in the interfaces and in the interaction between
integrated components
Incremental Integration Testing
Integration testing where system components are integrated into the system one at a time
until the entire system is integrated
Top Down Integration
An approach to integration testing where the component at the top of the component
hierarchy is tested first, with lower level components being simulated by stubs. Tested
components are then used to test lower level components. The process is repeated until
the lowest level components have been tested.
Bottom up Integration
An approach to integration testing where the lowest level components are tested first,
then used to facilitate the testing of higher level components. The process is repeated
until the component at the top of the hierarchy is tested.
Stubs:
Stubs are program units that are stand-ins for the other (more complex) program units
that are directly referenced by the unit being tested.
Drivers:
Drivers are programs or tools that allow a tester to exercise/examine in a controlling
manner the unit of software being tested.
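The two ideas can be sketched together in Python (the payment-gateway names are invented for illustration): the stub stands in for a more complex unit that the code under test references, and the driver exercises the unit under test in a controlling manner.

```python
class PaymentGatewayStub:
    """Stub: a simple stand-in for a complex unit referenced by the code
    under test (here, an imaginary payment gateway)."""
    def charge(self, amount):
        return {"status": "approved", "amount": amount}

def checkout(cart_total, gateway):
    """Unit under test: depends on a gateway only through its interface."""
    result = gateway.charge(cart_total)
    return result["status"] == "approved"

def driver():
    """Driver: exercises the unit under test in a controlled manner."""
    assert checkout(49.99, PaymentGatewayStub()) is True
    print("checkout behaves as expected against the stub")

driver()
```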
Big Bang Integration
Integration testing where no incremental testing takes place prior to all the system's
components being combined to form the system.
Validation Testing
Validation testing aims to demonstrate that the software functions in a manner that can be
reasonably expected by the customer.
Configuration review
An audit to ensure that all elements of the software configuration are properly developed,
catalogued, and have the necessary detail to support maintenance.
System Testing
System testing is the process of testing an integrated system to verify that it meets
specified requirements.
Requirement based Testing
Designing tests based on objectives derived from requirements for the software
component (e.g., tests that exercise specific functions or probe the non-functional
constraints such as performance or security)
Business-Process based Non-Functional Testing
Testing of those requirements that do not relate to functionality. I.e. performance,
usability, etc.
Recovery testing
Testing aimed at verifying the system's ability to recover from varying degrees of
failure.
Security testing
Testing whether the system meets its specified security objectives.
Stress testing
Testing conducted to evaluate a system or component at or beyond the limits of its
specified requirements.
Performance testing
Testing conducted to evaluate the compliance of a system or component with specified
performance requirements.
Alpha and Beta testing
Alpha testing: Simulated or actual operational testing at an in-house site not otherwise
involved with the software developers.
Beta testing: Operational testing at a site not otherwise involved with the software
developers.
User Acceptance Testing
Acceptance testing: Formal testing conducted to enable a user, customer, or other
authorized entity to determine whether to accept a system or component
Regression Testing and Re-testing
Retesting of a previously tested program following modification to ensure that faults
have not been introduced or uncovered as a result of the changes made.
Ad-hoc Testing
Testing carried out using no recognised test case design technique.
Load Testing
Load Testing involves stress testing applications under real-world conditions to predict
system behavior and performance and to identify and isolate problems. Load testing
applications can emulate the workload of hundreds or even thousands of users, so that
you can predict how an application will work under different user loads and determine the
maximum number of concurrent users accessing the site at the same time.
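A very rough sketch of workload emulation with Python threads (the 10 ms `fake_request` is an invented stand-in for a real server call, not a real load tool): many concurrent simulated users are launched and their response times collected.

```python
import threading
import time

def fake_request(latencies, lock):
    """Stand-in for one simulated user's request to the system under test."""
    start = time.perf_counter()
    time.sleep(0.01)  # pretend the server takes ~10 ms to respond
    with lock:
        latencies.append(time.perf_counter() - start)

latencies, lock = [], threading.Lock()
users = [threading.Thread(target=fake_request, args=(latencies, lock))
         for _ in range(50)]  # 50 concurrent simulated users
for t in users:
    t.start()
for t in users:
    t.join()

print(f"{len(latencies)} simulated users, slowest response "
      f"{max(latencies) * 1000:.1f} ms")
```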
Test Plan
A record of the test planning process detailing the degree of tester independence, the test
environment, the test case design techniques and test measurement techniques to be used,
and the rationale for their choice. - BS
A document describing the scope, approach, resources, and schedule of intended testing
activities. It identifies test items, the features to be tested, the testing tasks, who will do
each task, and any risks requiring contingency planning. - IEEE
Test Case
A set of inputs, execution preconditions, and expected outcomes developed for a
particular objective, such as to exercise a particular program path or to verify compliance
with a specific requirement.
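As a sketch, the definition above maps naturally onto a small record (the withdrawal example and field names are invented): inputs, preconditions and an expected outcome recorded together for one particular objective.

```python
# A test case as defined above: inputs, execution preconditions and an
# expected outcome, recorded for one particular objective.
test_case = {
    "id": "TC-101",
    "objective": "verify a withdrawal within the available balance",
    "preconditions": {"balance": 500.0},
    "inputs": {"amount": 200.0},
    "expected": {"balance": 300.0},
}

def withdraw(balance, amount):
    """Hypothetical function under test."""
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

actual = withdraw(test_case["preconditions"]["balance"],
                  test_case["inputs"]["amount"])
assert actual == test_case["expected"]["balance"]
print(test_case["id"], "passed")
```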
Comprehensive Testing - Round I
All the test scripts developed for testing are executed. Some cases the application may not
have certain module(s) ready for test; hence they will be covered comprehensively in the
next pass. The testing here should not only cover all the test cases but also business cycles
as defined in the application.
Discrepancy Testing - Round II
All the test cases that have resulted in a defect during the comprehensive pass should be
executed. In other words, all defects that have been fixed should be retested. Function
points that may be affected by the defect should also be taken up for testing. This type of
testing is called as Regression testing. Defects that are not fixed will be executed only
after they are fixed.
Sanity Testing - Round III
This is final round in the test process. This is done either at the client's site or at Maveric
depending on the strategy adopted. This is done in order to check whether the system is sane
enough for the next stage, i.e. UAT or production as the case may be, under an isolated
environment. Ideally, the defects fixed in the previous phases are rechecked, and freedom
testing is done to ensure integrity.
Defect Definition
Error: A human action that produces an incorrect result.
Fault: A manifestation of an error in software. A fault, if encountered may cause a
failure.
Failure: Deviation of the software from its expected delivery or service.
A deviation from expectation that is to be tracked and resolved is termed as a defect.
Defects Classification
Showstopper
A Defect which may be very critical in terms of affecting the schedule, or it may be a
show stopper that is, it stops the user from using the system further
Major
A defect where functionality or data is affected significantly, but which does not cause a show-stopping condition or a block in the test process cycles.
Minor
A Defect which is isolated or does not stop the user from proceeding, but causes
inconvenience. Cosmetic Errors would also feature in this category
Severity:
How much the Bug found is supposed to affect the systems Function/Performance,
Usually we divide as Emergency, High, Medium, and Low.
Priority:
Which bug should be solved first for the benefit of the system's health. Normally it starts
from Emergency as the first priority down to Low as the last priority.
Test Bed
Before starting the actual testing, the elements that support the testing activity, such as
test data and data guidelines, are collectively called the test bed.
Data Guidelines
Data Guidelines are used to specify the data required to populate the test bed and prepare
test scripts. It includes all data parameters that are required to test the conditions derived
from the requirement / specification. The documents that support the preparation of test
data are called data guidelines.
Test script
A Test Script contains the Navigation Steps, Instructions, Data and Expected Results
required to execute the test case(s).
Any test script should describe how to navigate through the application, even for a
new user.
Test data
The values given at expected places (fields) in a system to verify its functionality,
made ready in a separate document, are called test data.
Test environment
A description of the hardware and software environment in which the tests will be run,
and any other software with which the software under test interacts when under test
including stubs and test drivers.
Traceability Matrix
Throughout the testing life cycle of the project, a traceability matrix is maintained
to ensure that the verification and validation of the testing are complete.
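One lightweight way to hold such a matrix (requirement and test-case ids are illustrative): map each requirement to the test cases that verify it, so that uncovered requirements are easy to flag.

```python
# Each requirement id maps to the test cases that verify it; an empty
# list immediately flags an uncovered requirement.
traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],
}

uncovered = [req for req, tests in traceability.items() if not tests]
print("requirements without tests:", uncovered)  # → ['REQ-003']
```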
18. References