
Software Testing – made easy

Prepared By

K. Muthuvel, B.Com.,M.C.A.
E.P.G.D.S.T * (Software Testing)


History

Version   Description / Changes   Author        Approver   Effective Date
1.0       Baseline version        K. Muthuvel              10th Aug. 2005

For “Maveric Systems” Internal Use Only

No part of this volume may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, or by any information storage or retrieval system, except as may be expressly permitted.


This book is dedicated to

Lord Vignesh


Table of Contents
1.Testing Fundamentals....................................................................................................... 9
1.1.Definition................................................................................................................... 9
1.2.Objective.................................................................................................................... 9
1.3.Benefits of Testing..................................................................................................... 9
2.Quality Assurance, Quality Control, Verification & Validation.....................................10
2.1.Quality Assurance.................................................................................................... 10
2.2.Quality Control........................................................................................................ 10
2.3.Verification.............................................................................................................. 10
2.4.Validation.................................................................................................................10
3.SDLC & STLC................................................................................................................11
3.1.STLC – Software Testing Life Cycle ......................................................................11
3.2.Models of SDLC & STLC....................................................................................... 11
3.2.1.V-Model............................................................................................................ 11
3.2.2.W-Model........................................................................................................... 12
3.2.3.Waterfall Model................................................................................................ 13
3.2.4.Extreme Programming Model...........................................................................13
3.2.5.Spiral Model......................................................................................................14
4.Testing Standards............................................................................................................15
4.1.SW – CMM:.............................................................................................................15
4.2.SW – TMM.............................................................................................................. 16
4.2.1.Levels of SW –TMM........................................................................................ 16
4.2.1.1.Level 1: Initial............................................................................................ 16
4.2.1.2.Level 2: Phase Definition...........................................................................16
4.2.1.3.Level 3: Integration.................................................................................... 16
4.2.1.4.Level 4: Management and Measurement................................................... 16
4.2.1.5.Level 5: Optimization / Defect Prevention and Quality Control............... 16
4.2.2.Need to use SW-TMM......................................................................................17
4.2.3.SW-TMM Assessment Process.........................................................................17
4.2.4.SW-TMM Summary......................................................................................... 17
4.3.ISO : International Organisation for Standardisation...............................................18
4.4.ANSI / IEEE Standards............................................................................................18
4.5.BCS - SIGIST.......................................................................................................... 18
5.Testing Techniques......................................................................................................... 19
5.1.Static Testing Techniques........................................................................................ 19
5.1.1.Review - Definition...........................................................................................19
5.1.2.Types of Reviews..............................................................................................19
5.1.2.1.Walkthrough.............................................................................................. 19
5.1.2.2.Inspection................................................................................................... 19
5.1.2.3.Informal Review........................................................................................ 20
5.1.2.4.Technical Review.......................................................................................20
5.1.3.Activities performed during review.................................................................. 20
5.1.4.Review of the Specification / Planning and Preparing System Test................. 21
5.1.5.Roles and Responsibilities................................................................................ 22
5.2.Dynamic Testing Techniques...................................................................................23

5.2.1.Black Box Testing: .......................................... 23
5.2.1.1.Equivalence Class Partitioning.................................................................. 24
5.2.1.2.Boundary Value Analysis...........................................................................24
5.2.1.3.Cause and Effect Graphs............................................................................25
5.2.1.4.Comparison Testing................................................................................... 26
5.2.2.White-Box Testing:...........................................................................................27
5.2.2.1.Statement Coverage:.................................................................................. 27
5.2.2.2.Branch Coverage:.......................................................................................28
5.2.2.3.Condition Coverage:.................................................................................. 28
5.2.2.4.Path Coverage:........................................................................................... 29
5.2.2.5.Data Flow-Based Testing:..........................................................................31
5.2.2.6.Mutation Testing:.......................................................................................32
5.2.3.Grey Box Testing.............................................................................................. 32
6.Difference Tables............................................................................................................ 33
6.1.Quality Vs Testing................................................................................................... 33
6.2.Testing Vs Debugging............................................................................................. 33
6.3.Quality Assurance Vs Quality Control.................................................................... 33
6.4.Verification & Validation........................................................................................ 34
6.5.Black Box Testing & White Box Testing................................................................ 34
6.6.IST & UAT.............................................................................................................. 34
6.7.SIT & IST.................................................................................................................35
6.8. Alpha Testing & Beta Testing................................................................................ 35
6.9.Test Bed and Test Environment ..............................................................................35
6.10.Re-testing and Regression Testing.........................................................................35
7.Levels of Testing.............................................................................................................36
7.1.Unit Testing............................................................................................................. 36
7.1.1.Benefits of Unit Testing....................................................................................36
7.1.2.Pre-requisites.....................................................................................................36
7.2.Integration Testing................................................................................................... 38
7.2.1.Incremental Integration Testing........................................................................ 39
7.2.1.1.Top Down Integration................................................................................ 39
7.2.1.2.Bottom up Integration................................................................................ 40
7.2.1.3.Stub and Drivers........................................................................................ 41
7.2.2.Non-Incremental Testing.................................................................................. 42
7.2.2.1.Big Bang Integration.................................................................................. 42
7.2.2.2.Validation Testing......................................................................................42
7.2.2.3.Configuration review................................................................................. 42
7.3.System Testing......................................................................................................... 42
7.3.1.Functional Testing............................................................................................ 43
7.3.1.1.Requirement based Testing........................................................................43
7.3.2.Business-Process based Non-Functional Testing............................................. 44
7.3.2.1.Recovery testing.........................................................................................44
7.3.2.2.Security testing...........................................................................................45
7.3.2.3.Stress testing.............................................................................................. 45
7.3.2.4.Performance testing....................................................................................46
7.3.3.Alpha and Beta testing...................................................................................... 46
7.4.User Acceptance Testing......................................................................................... 47

7.4.1.Entry Criteria.....................................................................................................47
7.4.2.Exit Criteria.......................................................................................................47
7.5.Regression Testing and Re-testing...........................................................................48
7.5.1.Factors favour Automation of Regression Testing........................................... 48
7.5.2.Tools used in Regression testing ......................................................................48
8.Types of Testing..............................................................................................................49
8.1.Compliance Testing................................................................................................. 49
8.2.Intersystem Testing / Interface Testing.................................................................... 49
8.3.Parallel Testing........................................................................................................ 49
8.4.Database Testing...................................................................................................... 49
8.5.Manual support Testing........................................................................................... 50
8.6.Ad-hoc Testing.........................................................................................................50
8.7.Configuration Testing.............................................................................................. 50
8.8.Pilot Testing............................................................................................................. 50
8.9.Automated Testing...................................................................................................50
8.10.Load Testing.......................................................................................................... 51
8.11.Stress and Volume Testing.................................................................................... 51
8.12.Usability Testing.................................................................................................... 51
8.13.Environmental Testing...........................................................................................51
9.Roles & Responsibilities.................................................................................................52
9.1.Test Associate.......................................................................................................... 52
9.2.Test Engineer........................................................................................................... 52
9.3.Senior Test Engineer................................................................................................52
9.4.Test Lead..................................................................................................................53
9.5.Test Manager............................................................................................................53
10.Test Preparation & Design Process...............................................................................54
10.1.Baseline Documents...............................................................................................54
10.1.1.Business Requirement.....................................................................................54
10.1.2.Functional Specification................................................................................. 54
10.1.3.Design Specification....................................................................................... 54
10.1.4.System Specification.......................................................................................54
10.2.Traceability ........................................................................................................... 54
10.2.1.BR and FS....................................................................................................... 54
10.2.2.FS and Test conditions....................................................................................55
10.3.Gap Analysis.......................................................................................................... 55
10.4.Choosing Testing Techniques................................................................................55
10.5.Error Guessing....................................................................................................... 56
10.6.Error Seeding......................................................................................................... 56
10.7.Test Plan.................................................................................................................56
10.7.1.Test Plan Identifier..........................................................................................56
10.7.2.Introduction.....................................................................................................56
10.7.3.Test Items........................................................................................................56
10.7.4.Features to be Tested.......................................................................................57
10.7.5.Features Not to Be Tested............................................................................... 57
10.7.6.Approach.........................................................................................................57
10.7.7.Item Pass/Fail Criteria.....................................................................................57
10.7.8.Suspension Criteria and Resumption Requirements.......................................57

10.7.9.Test Deliverables............................................................................................ 57
10.7.10.Testing Tasks................................................................................................ 58
10.7.11.Environmental Needs....................................................................................58
10.7.12.Responsibilities............................................................................................. 58
10.7.13.Staffing and Training Needs......................................................................... 58
10.7.14.Schedule........................................................................................................ 58
10.7.15.Risks and Contingencies............................................................................... 59
10.7.16.Approvals...................................................................................................... 59
10.8.High Level Test Conditions / Scenario.................................................................. 59
10.8.1.Processing logic.............................................................................................. 59
10.8.2.Data Definition ...............................................................................................59
10.8.3.Feeds Analysis................................................................................................ 60
10.9.Test Case................................................................................................................ 61
10.9.1.Expected Results.............................................................................................61
10.9.1.1.Single Expected Result............................................................................ 61
10.9.1.2.Multiple Expected Result.........................................................................61
10.9.2.Pre-requirements............................................................................................. 62
10.9.3.Data definition................................................................................................ 62
11.Test Execution Process................................................................................................. 63
11.1.Pre- Requirements .................................................................................................63
11.1.1.Version Identification Values......................................................................... 63
11.1.2.Interfaces for the application...........................................................................63
11.1.3.Unit testing sign off........................................................................................ 63
11.1.4.Test Case Allocation....................................................................................... 64
11.2.Stages of Testing: ..................................................................................................64
11.2.1.Comprehensive Testing - Round I.................................................................. 64
11.2.2.Discrepancy Testing - Round II...................................................................... 64
11.2.3.Sanity Testing - Round III...............................................................................64
12.Defect Management...................................................................................................... 65
12.1.Defect – Definition................................................................................................ 65
12.2.Types of Defects.................................................................................................... 66
12.3.Defect Reporting ................................................................................................... 66
12.4.Tools Used............................................................................................................. 66
12.4.1.ClearQuest (CQ)............................................................................................. 66
12.4.2.TestDirector (TD):.......................................................................................... 67
12.4.3.Defect Tracker.................................................................................................67
12.5.Defects Meetings....................................................................................................67
12.6.Defects Publishing................................................................................................. 67
12.7.Defect Life Cycle................................................................................................... 68
13.Test Closure Process..................................................................................................... 69
13.1.Sign Off..................................................................................................................69
13.2.Authorities..............................................................................................................69
13.3.Deliverables........................................................................................................... 69
13.4.Metrics................................................................................................................... 69
13.4.1.Defect Metrics.................................................................................................69
13.4.2.Defect age: ..................................................................................................... 69
13.4.3.Defect Analysis: ............................................................................................. 69

13.4.4.Test Management Metrics...............................................70
13.4.5.Debriefs With Test Team................................................70
14.Testing Activities & Deliverables.................................................................................71
14.1.Test Initiation Phase...............................................................................................71
14.2.Test Planning Phase............................................................................................... 71
14.3.Test Design Phase.................................................................................................. 71
14.4.Test Execution & Defect Management Phase........................................................72
14.5.Test Closure Phase................................................................................................. 72
15.Maveric Systems Limited............................................................................................. 73
15.1.Overview................................................................................................................73
15.2.Leadership Team....................................................................................................73
15.3.Quality Policy.........................................................................................................73
15.4.Testing Process / Methodology..............................................................................74
15.4.1.Test Initiation Phase........................................................................................75
15.4.2.Test Planning Phase........................................................................................ 76
15.4.3.Test Design Phase........................................................................................... 78
15.4.4.Execution and Defect Management Phase......................................................79
15.4.4.1.Test Execution Process............................................................................ 79
15.4.4.2.Defect Management Process.................................................................... 79
15.4.5.Test Closure Phase.......................................................................................... 81
15.5.Test Deliverables Template................................................................................... 82
15.5.1.Project Details Form...................................................................................... 82
15.5.2.Minutes of Meeting.........................................................................................84
15.5.3.Top Level Project Checklist............................................................................85
15.5.4.Test Strategy Document..................................................................................86
15.5.5.Configuration Management and Quality Plan.................................................86
15.5.6.Test Environment Request.............................................................................. 87
15.5.7.Risk Analysis Document.................................................................................88
15.5.8.Clarification Document...................................................................................88
15.5.9.Test condition / Test Case Document............................................................. 88
15.5.10.Test Script Document................................................................................... 88
15.5.11.Traceability Matrix....................................................................................... 89
15.5.12.Daily Status Report....................................................................................... 89
15.5.13.Weekly Status Report....................................................................................91
15.5.14.Defect Report................................................................................................ 92
15.5.15.Final Test Checklist...................................................................................... 93
15.5.16.Final Test Report...........................................................................................94
15.5.17.Project De-brief Form................................................................................... 94
16.Q & A............................................................................................................................95
16.1.General................................................................................................................... 95
16.2.G.E. – Interview .................................................................................................... 99
17.Glossary...................................................................................................................... 119


1. Testing Fundamentals

1.1.Definition

“The process of exercising software to verify that it satisfies specified requirements and to detect errors.”
…BS7925-1
“Testing is the process of executing a program with the intent of finding errors”
…Glen Myers
Testing identifies faults, whose removal increases the software quality by increasing the
software’s potential reliability. Testing is the measurement of software quality. We measure how
closely we have achieved quality by testing the relevant factors such as correctness, reliability,
usability, maintainability, reusability and testability.

1.2.Objective

· Testing is a process of executing a program with the intent of finding an error.
· A good test is one that has a high probability of finding an as-yet-undiscovered error.
· A successful test is one that uncovers an as-yet-undiscovered error.
· Testing should also aim at suggesting changes or modifications if required, thus adding value to the entire process.
· The objective is to design tests that systematically uncover different classes of errors and to do so with a minimum amount of time and effort.
· Demonstrating that the software application appears to be working as required by the specification.
· Meeting performance requirements.
· Indicating software reliability and software quality, based on the data collected during testing.

1.3.Benefits of Testing

· Increased accountability and control
· Cost reduction
· Time reduction
· Defect reduction
· Increased productivity of the software developers
· Quantitative management of software delivery


2. Quality Assurance, Quality Control, Verification & Validation

2.1.Quality Assurance

“A planned and systematic pattern for all actions necessary to provide adequate confidence that
the item or product conforms to established technical requirements”

2.2.Quality Control

“QC is a process by which product quality is compared with applicable standards, and the action
taken when nonconformance is detected.”

“Quality Control is defined as a set of activities or techniques whose purpose is to ensure that all
quality requirements are being met. In order to achieve this purpose, processes are monitored
and performance problems are solved.”

2.3.Verification

“The process of evaluating a system or component to determine whether the products of the given
development phase satisfy the conditions imposed at the start of that phase.”
… [IEEE]

2.4.Validation

Determination of the correctness of the products of software development with respect to the user
needs and requirements.
… BS7925-1

Difference Table:

Quality Assurance vs Quality Control:
· Quality Assurance is a study of the process followed in project development; Quality Control is a study of the project for its function and specification.

Verification vs Validation:
· Verification is the process of determining whether the output of one phase of development conforms to its previous phase; Validation is the process of determining whether a fully developed system conforms to its SRS document.
· Verification is concerned with phase containment of errors; Validation is concerned with the final product being error-free.


3. SDLC & STLC

3.1.STLC – Software Testing Life Cycle

· Preparation of the Testing Project Plan, which includes the Test Strategy.
· Preparation of Test Scripts, which contain Test Scenarios.
· Preparation of the test bed, i.e. setting up the test environment.
· Executing the Test Scripts (automated as well as manual tests).
· Defect tracking with a bug-tracking tool.
· Preparation of the Test Completion Report and Test Incident Report.
· Preparation of Test Metrics for continuous process improvement.

3.2.Models of SDLC & STLC

There are a number of different models for the software development life cycle. One thing which all models have in common is that at some point in the life cycle, software has to be tested. This section outlines some of the more commonly used software development life cycle models, with particular emphasis on the testing activities in each model.

3.2.1.V-Model

The figure gives a brief overview of testing in the V-Model. Every phase of the STLC in this model corresponds to some activity in the SDLC: Requirement Analysis correspondingly has an acceptance testing activity at the end, the design has Integration Testing (IT) and System Integration Testing (SIT), and so on; the pairing is summarised below.
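A typical pairing of development phases to test levels (the exact phase names vary between organisations; this reconstructs the mapping that a V-Model figure usually illustrates):

· Business requirements ↔ User Acceptance Testing
· System / functional specification ↔ System Testing
· Architectural design ↔ Integration Testing (IT / SIT)
· Detailed design and coding ↔ Unit Testing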


· The V-model is a model in which testing is done in parallel with development. The left side of the V reflects the development activities that provide the input for the corresponding testing activities on the right.

· The V-model is the classic software development model. It encapsulates the steps in the Verification and Validation phases for each step in the SDLC. For each phase, the subsequent phase becomes the verification (QA) phase, and the corresponding testing phase in the other arm of the V becomes the validation (testing) phase.

· In this life cycle, both the development activity and the testing activities start almost at the same time with the same information in their hands. The development team applies "do-procedures" to achieve the goals and the testing team applies "check-procedures" to verify them. It is a parallel process that finally arrives at the product with almost no bugs or errors.

· The V-model includes testing from the unit level to the business level. After coding is complete, the tester tests the code against the design-phase documents to verify that all the modules have been integrated, then verifies that the system conforms to the requirements, and finally validates the business scenarios with the customer through alpha and beta testing, arriving at a complete, stable product.

· The V-model maps the development cycle stages to testing cycles, but it fails to address how to start all these test levels in parallel to development. Testing in parallel is an activity that gives the tester domain knowledge and enables more value-added, high-quality testing with greater efficiency. It also reduces time, since the test plans, test cases and test strategy are prepared during the development stage itself.

3.2.2.W-Model

From the point of view of testing, all of the models presented previously are deficient in various ways. The test activities first start after the implementation:
· The connection between the various test stages and the basis for the test is not clear
· The tight link between test, debug and change tasks during the test phase is not clear

In the following, the W-model is presented. This is based on the general V-model, and the disadvantages previously mentioned are removed.


3.2.3.Waterfall Model

One of the first models for software development is the so-called waterfall model, by B. W. Boehm. The individual phases, i.e. the activities, that were defined in it are to be found in nearly all models proposed since. In this model each of the activities in software development must be completed before the next phase begins; a return in the development process was only possible to the immediately preceding phase.

In the waterfall model, testing directly follows the implementation. The model suggested that testing activities could only be started after the implementation; preparatory tasks for the testing were not made clear. A further disadvantage is that testing, as the last activity before release, can relatively easily be shortened or omitted altogether, which in practice is unfortunately all too common. In this model, the expense of removing the faults and defects found is only recognizable through a return to the implementation phase.

3.2.4.Extreme Programming Model


3.2.5.Spiral Model

In the spiral model a cyclical and prototyping view of software development was taken. Tests were explicitly mentioned (risk analysis, validation of requirements and of the development), and the test phase was divided into stages. The test activities included module, integration and acceptance tests. However, in this model testing still follows the coding, the exception being that the test plan should be constructed after the design of the system. The spiral model also identifies no activities associated with the removal of defects.

4. Testing Standards

Testing of software is defined very differently by different people and different corporations. There are process standards bodies, like ISO, SPICE and IEEE, that attempt to impose a process on whatever type of development project you undertake (be it hardware, software, embedded systems, etc.), and some of that will, by proxy, speak to testing. However, these standards are there to guide the process rather than the testing itself. So, for example, IEEE will give you templates for such things as test case specifications and test plans, which may help you out.

On the other hand, those IEEE templates tell you nothing about actually testing the product itself; they basically just show you how to document that you are testing the product. The same thing pretty much applies to ISO. ISO is the standard for international projects and yet it, like IEEE, does not really force or even advocate a particular "testing standard". There are also other process- and project-oriented concepts, such as the Capability Maturity Model (CMM).

Some of the organization that define testing standards are

· BS – British Standards
· ISO – International Organization for Standardization
· CMM – Capability Maturity Model
· SPICE – Software Process Improvement and Capability Determination
· NIST – National Institute of Standards and Technology
· DoD – Department of Defense

4.1.SW – CMM:

SEI – Software Engineering Institute, Carnegie Mellon University.
CMM – Capability Maturity Model.

Software Process

A software process can be defined as a set of activities, methods, practices, and transformations that people use to develop and maintain software and the associated products.

Software Process Capability

Software Process Capability describes the range of expected results that can be achieved by following a software process. The software process capability of an organization provides one means of predicting the most likely outcomes to be expected from the next software project the organization undertakes.

Software Process Maturity

Software Process Maturity is the extent to which a specific process is explicitly defined, managed, measured, controlled, and effective. Maturity implies a potential growth in capability and indicates both the richness of an organization’s software process and the consistency with which it is applied in projects throughout the organization.

The five levels of SW- CMM


Level 1: Initial
Level 2: Repeatable
Level 3: Defined
Level 4: Managed
Level 5: Optimizing


4.2.SW – TMM

SW-TMM is a testing process improvement tool that can be used either in conjunction with the
SW-CMM or as a stand-alone tool.

4.2.1.Levels of SW –TMM

4.2.1.1.Level 1: Initial
· A chaotic process
· Testing is not distinguished from debugging and is ill-defined
· Tests are developed ad hoc after coding is complete
· The organization usually lacks a trained professional testing staff and testing tools
· The objective of testing is to show that the system and software work

4.2.1.2.Level 2: Phase Definition

· Testing is identified as a separate function from debugging
· Testing becomes a defined phase following coding
· The process is standardized to the point where basic testing techniques and methods are in place
· The objective of testing is to show that the system and software meet specifications

4.2.1.3.Level 3: Integration
· Testing is integrated into the entire life cycle
· A formal testing organization is established
· Formal technical training in testing is established
· The testing process is controlled and monitored
· The organization begins to consider using automated test tools
· The objective of testing is based on system requirements
· Major milestone reached at this level: management recognizes testing as a professional activity

4.2.1.4.Level 4: Management and Measurement


· Testing is a measured and quantified process
· Development products are now tested for quality attributes such as Reliability,
Usability and Maintainability.
· Test cases are collected and recorded in a test database for reuse and
regression testing
· Defects found during testing are now logged, given a severity level, and assigned
a priority for correction

4.2.1.5.Level 5: Optimization / Defect Prevention and Quality Control

· Testing is institutionalized within the organization

· Testing process is well defined and managed
· Testing costs and effectiveness are monitored
· Automated tools are a primary part of the testing process
· There is an established procedure for selecting and evaluating testing tools

4.2.2.Need to use SW-TMM

· Easy to understand and use
· Provides a methodology to baseline the current test process maturity
· Designed to guide organizations in selecting process improvement strategies and in identifying critical issues to test process maturity
· Provides a road map for continuous test process improvement
· Provides a method for measuring progress
· Allows organizations to perform their own assessment

Organizations that are using SW-CMM

· SW-TMM fulfills the design objective of being an excellent companion to SW-CMM
· SW-TMM is just another assessment tool and is easily incorporated into the software process assessment

Organizations that are not using SW-CMM

· Provides an unbiased assessment of the current testing process
· Provides a road map for incremental improvements
· Saves testing cost as the testing process moves up the maturity levels

4.2.3.SW-TMM Assessment Process

· Prepare for the assessment
  o Choose team leader and members
  o Choose evaluation tools (e.g. questionnaire)
  o Training and briefing
· Conduct the assessment
· Document the findings
· Analyze the findings
· Develop the action plan
· Write the final report
· Implement the improvements

It is best to implement the improvements either in a pilot project or in phases, and to track progress and achievements prior to expanding organization-wide. A limited application is also good, in that it is easier to fine-tune the new process prior to expanded implementation.

4.2.4.SW-TMM Summary

· Baseline the current testing process level of maturity
· Identify areas that can be improved
· Identify testing processes that can be adopted organization-wide
· Provide a road map for implementing the improvements
· Provide a method for measuring the improvement results
· Provide a companion tool to be used in conjunction with the SW-CMM

4.3.ISO : International Organisation for Standardisation

· Q9001 – 2000 – Quality Management System : Requirements
· Q9000 – 2000 – Quality Management System : Fundamentals and Vocabulary
· Q9004 – 2000 – Quality Management System : Guidelines for performance improvements

4.4.ANSI / IEEE Standards

ANSI – American National Standards Institute

IEEE – Institute of Electrical and Electronics Engineers (founded in 1884)

IEEE has an entire set of standards devoted to software. Testers should be familiar with all the standards mentioned in IEEE.

1. 610.12-1990 IEEE Standard Glossary of Software Engineering Terminology
2. 730-1998 IEEE Standard for Software Quality Assurance Plans
3. 828-1998 IEEE Standard for Software Configuration Management Plans
4. 829-1998 IEEE Standard for Software Test Documentation
5. 830-1998 IEEE Recommended Practice for Software Requirements Specifications
6. 1008-1987 (R1993) IEEE Standard for Software Unit Testing
7. 1012-1998 IEEE Standard for Software Verification and Validation
8. 1012a-1998 IEEE Standard for Software Verification and Validation – Supplement to 1012-1998 Content
9. 1016-1998 IEEE Recommended Practice for Software Design Descriptions
10. 1028-1997 IEEE Standard for Software Reviews
11. 1044-1993 IEEE Standard Classification for Software Anomalies
12. 1045-1992 IEEE Standard for Software Productivity Metrics
13. 1058-1998 IEEE Standard for Software Project Management Plans
14. 1058.1-1987 IEEE Standard for Software Project Management Plans
15. 1061-1998 IEEE Standard for a Software Quality Metrics Methodology

4.5.BCS - SIGIST

A meeting of the Specialist Interest Group on Software Testing was held in January 1989 (this group was later to affiliate with the British Computer Society). This meeting agreed that existing testing standards are generally good standards within the scope which they cover, but that they describe the importance of good test case selection without being specific about how to choose and develop test cases.

The SIG formed a subgroup to develop a standard which addresses the quality of testing performed. Draft 1.2 was completed by November 1990 and was given a semi-public release for comment. A few members of the subgroup trialled this draft of the standard within their own organisations. Draft 1.3 was circulated in July 1992 (it contained only the main clauses) to about 20 reviewers outside of the subgroup. Much of the feedback from this review suggested that the approach to the standard needed reconsideration.


5. Testing Techniques

5.1.Static Testing Techniques

“Analysis of a program carried out without executing the program.”
… BS 7925-1

5.1.1.Review - Definition

Review is a process or meeting during which a work product or set of work products, is presented
to project personnel, managers, users, customers, or other interested parties for comment or
approval.
[IEEE]

5.1.2.Types of Reviews

There are three general classes of reviews:

· Informal / peer reviews
· Semiformal / walk-throughs
· Formal / inspections

5.1.2.1.Walkthrough

“A review of requirements, designs or code characterized by the author of the material under
review guiding the progression of the review. “
[BS 7925-1]

A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required.

Walkthroughs are led by the author of the document and are educational in nature. Communication is therefore predominantly one-way. Typically they entail dry runs of designs, code and scenarios/test cases.

5.1.2.2.Inspection

A group review quality improvement process for written material. It consists of two aspects;
product (document itself) improvement and process improvement (of both document production
and inspection).
[BS 7925-1]

An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements specification or a test plan, and the purpose is to find problems and see what's missing, not to fix anything. Attendees should prepare for this type of meeting by reading through the document; most problems will be found during this preparation. The result of the inspection meeting should be a written report. Thorough preparation for inspections is difficult, painstaking work, but it is one of the most cost-effective methods of ensuring quality.


An inspection is led by a trained moderator (not the author), has defined roles, and includes metrics and a formal process based on rules and checklists with entry and exit criteria.

5.1.2.3.Informal Review

· Unplanned and undocumented
· Useful, cheap and widely used
· In contrast with walkthroughs, communication is very much two-way in nature

5.1.2.4.Technical Review

Technical reviews are also known as peer reviews, as it is vital that participants are drawn from the 'peer group', rather than including managers.
· Documented
· Defined fault detection process
· Includes peers and technical experts
· No management participant

Comparison of review types

· Walkthrough – Primary purpose: education. Led by: author. Participants: peers. Degree of formality: presentational.
· Inspection – Primary purpose: finding faults and inspection process improvement. Led by: moderator. Participants: reader, recorder, author, inspector. Degree of formality: formal defined process.
· Informal review – Primary purpose: find problems quickly and cheaply. Led by: not defined. Participants: not defined. Degree of formality: largely unplanned and undocumented.
· Technical review – Primary purpose: finding faults. Led by: chairperson. Participants: peers, technical experts. Degree of formality: formal fault detection process.

5.1.3.Activities performed during review

Activities in a review: planning, overview meeting, review meeting and follow-up.
Deliverables of a review: product changes, source document changes and improvements.
Factors causing reviews to fail: lack of training, documentation and management support.

Review of the Requirements / Planning and Preparing Acceptance Test


At the beginning of the project the test activities must start. These first activities are:
· Fixing of test strategy and test concept
  o risk analysis
  o determine criticality
  o expense of testing
  o test intensity
· Draw up the test plan
· Organize the test team
· Training of the test team, if necessary
· Establish monitoring and reporting


· Provide required hardware resources (PC, database, …)
· Provide required software resources (software version, test tools, …)

These activities lay the foundations for a manageable and high-quality test process: a test strategy is determined after a risk evaluation, a cost estimate and test plan are developed, and progress monitoring and reporting are established. During the development process all plans must be updated and completed, and all decisions must be checked for validity.

In a mature development process reviews and inspections are carried out through the whole
process. The review of the requirement document answers questions like: Are all customers’
requirements fulfilled? Are the requirements complete and consistent? And so on. It is a look back
to fix problems before going on in development. But just as important is a look forward. Ask
questions like: Are the requirements testable? Are they testable with defensible expenditure? If
the answer is no, then there will be problems to implement these requirements. If you have no
idea how to test some requirements then it is likely that you have no idea how to implement these
requirements. At this stage of the development process all the knowledge for the acceptance
tests is available and to hand. So this is the best place for doing all the planning and preparing for
acceptance testing.

For example, one can:

· Establish priorities of the tests depending on criticality
· Specify (functional and non-functional) test cases
· Specify and, if possible, provide the required infrastructure

At this stage all of the acceptance test preparation is finished and can be achieved.

5.1.4.Review of the Specification / Planning and Preparing System Test

In the review meeting of the specification documents, ask questions like: Is the specification testable? Is it testable with defensible expenditure? Only specifications of this kind can realistically be implemented and used for the next steps in the development process. If the answers to these questions are no, the specifications must be reworked. Here all the knowledge for the system tests is available and to hand. Tasks in planning and preparing for system testing include:

· Establishing priorities of the tests depending on criticality
· Specifying (functional / non-functional) system test cases
· Defining and establishing the required infrastructure

As with the acceptance test preparation, all of the system test preparation is finished at this early
development stage.

Review of the Architectural Design / Detailed Design – Planning and Preparing Integration/Unit Tests

During the review of the architectural design one can look forward and ask questions like: What about the testability of the design? Are the components and interfaces testable? Are they testable with defensible expenditure? If the components are too expensive to test, a re-work of the architectural design has to be done before going further in the development process. At this stage all the knowledge for integration testing is available, and all preparation, such as specifying control flow and data flow integration test cases, can be achieved. The corresponding activities, from the review of the detailed design to the preparation of unit tests, can be carried out in the same way at the unit level.


5.1.5.Roles and Responsibilities

In order to conduct an effective review, everyone has a role to play. More specifically, there are
certain roles that must be played, and reviewers cannot switch roles easily. The basic roles in a
review are:
· The moderator
· The recorder
· The presenter
· Reviewers

Moderator:
The moderator makes sure that the review follows its agenda and stays focused on the topic at
hand. The moderator ensures that side-discussions do not derail the review, and that all reviewers
participate equally.

Recorder:
The recorder is an often overlooked, but essential part of the review team. Keeping track of what
was discussed and documenting actions to be taken is a full-time task. Assigning this task to one
of the reviewers essentially keeps them out of the discussion. Worse yet, failing to document what
was decided will likely lead to the issue coming up again in the future. Make sure to have a
recorder and make sure that this is the only role the person plays.

Presenter:

The presenter is often the author of the artifact under review. The presenter explains the artifact
and any background information needed to understand it (although if the artifact was not self-
explanatory, it probably needs some work). It’s important that reviews not become “trials” - the
focus should be on the artifact, not on the presenter. It is the moderator’s role to make sure that
participants (including the presenter) keep this in mind. The presenter is there to kick-off the
discussion, to answer questions and to offer clarification.

Reviewer:

Reviewers raise issues. It’s important to keep focused on this, and not get drawn into side
discussions of how to address the issue. Focus on results, not the means.


5.2.Dynamic Testing Techniques

“The process of evaluating a system or component based upon its behaviour during execution.”
… [IEEE]

5.2.1.Black Box Testing:

“Test case selection that is based on an analysis of the specification of the component without
reference to its internal workings.”
…BS7925-1

Testing based on an analysis of the specification of a piece of software without reference to its
internal workings. The goal is to test how well the component conforms to the published
requirements for the component

It attempts to find:

· Incorrect or missing functions
· Interface errors
· Errors in data structures or external database access
· Performance errors
· Initialization and termination errors

Black-box test design treats the system as a "black-box", so it does not explicitly use knowledge
of the internal structure. Black box testing is based solely on the knowledge of the system
requirements. Black-box test design is usually described as focusing on testing functional
requirements. In comparison, White-box testing allows one to peek inside the "box", and it
focuses specifically on using internal knowledge of the software to guide the selection of test data.

Black box testing focuses on testing the function of the program or application against its
specifications. Specifically, this technique determines whether combinations of inputs and
operations produce expected results.

Test Case design Techniques under Black Box Testing:

· Equivalence class partitioning
· Boundary value analysis
· Comparison testing
· Orthogonal array testing
· Decision table based testing
· Cause-effect graphing


5.2.1.1.Equivalence Class Partitioning

Equivalence class: A portion of the component's input or output domains for which the
component's behaviour is assumed to be the same from the component's specification.
…BS7925-1

Equivalence partition testing: A test case design technique for a component in which test cases
are designed to execute representatives from equivalence classes.
…BS7925-1

Determination of equivalence classes

· Examine the input data.
· A few general guidelines for determining the equivalence classes can be given:
o If the input data to the program is specified by a range of values (e.g. numbers between 1 and 5000), one valid and two invalid equivalence classes are defined.
o If the input is an enumerated set of values (e.g. {a,b,c}), one equivalence class for valid input values and another equivalence class for invalid input values should be defined.

Example

· A program reads an input value in the range of 1 and 5000 and computes the square root of the input number.
· There are three equivalence classes: the set of integers below 1, the set of integers in the range of 1 and 5000, and the set of integers larger than 5000.
· The test suite must include representatives from each of the three equivalence classes.
· A possible test suite can be: {-5, 500, 6000}.
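
These partitions translate directly into executable checks. A minimal sketch in C (the function sqrt_in_range() and its convention of returning -1.0 for invalid input are assumed here purely for illustration):

#include <assert.h>
#include <math.h>

/* Hypothetical unit under test: square root of n for 1 <= n <= 5000,
   returning -1.0 for input outside the valid range (assumed convention). */
double sqrt_in_range(int n) {
    if (n < 1 || n > 5000)
        return -1.0;
    return sqrt((double) n);
}

int main(void) {
    assert(sqrt_in_range(-5) == -1.0);          /* invalid class: below 1    */
    assert(sqrt_in_range(500) == sqrt(500.0));  /* valid class: 1 to 5000    */
    assert(sqrt_in_range(6000) == -1.0);        /* invalid class: above 5000 */
    return 0;
}

One representative per equivalence class is enough; any other value from the same class would, by assumption, exercise the same behaviour.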

5.2.1.2.Boundary Value Analysis

Boundary value: An input value or output value which is on the boundary between equivalence
classes, or an incremental distance either side of the boundary.
…BS7925-1

Boundary value analysis: A test case design technique for a component in which test cases are
designed which include representatives of boundary values.
…BS7925-1
Example

· For a function that computes the square root of an integer in the range of 1 and 5000:
· Test cases must include the values: {0, 1, 5000, and 5001}.
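
Continuing the hypothetical sqrt_in_range() sketch from the previous section, the boundary values become four further assertions in its main():

assert(sqrt_in_range(0) == -1.0);             /* just below the lower boundary */
assert(sqrt_in_range(1) == sqrt(1.0));        /* on the lower boundary         */
assert(sqrt_in_range(5000) == sqrt(5000.0));  /* on the upper boundary         */
assert(sqrt_in_range(5001) == -1.0);          /* just above the upper boundary */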


5.2.1.3.Cause and Effect Graphs

“A graphical representation of inputs or stimuli (causes) with their associated outputs (effects),
which can be used to design test cases”
…BS7925-1

Cause-effect graphing attempts to provide a concise representation of logical combinations and
corresponding actions. A Cause-and-Effect Diagram is a tool that helps identify, sort, and display
possible causes of a specific problem or quality characteristic. It graphically illustrates the
relationship between a given outcome and all the factors that influence the outcome.

The technique proceeds as follows:

· Causes (input conditions) and effects (actions) are listed for a module and an identifier is assigned to each.
· A cause-effect graph is developed.
· The graph is converted to a decision table.
· Decision table rules are converted to test cases.
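
A small worked illustration of this flow, with assumed causes and effects for a login module: causes C1 = user id is valid and C2 = password is valid; effects E1 = access granted and E2 = error message shown. The resulting decision table:

Rule            1    2    3
C1              T    T    F
C2              T    F    -
E1 (access)     X
E2 (error)           X    X

Each rule (column) then becomes one test case; rule 2, for example, is "valid user id, invalid password, expect the error message".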

The C&E diagram is also known as the Fishbone/Ishikawa diagram because it was drawn to
resemble the skeleton of a fish, with the main causal categories drawn as "bones" attached to the
spine of the fish, as shown below

Example C&E diagram for a Server crash issue (diagram not reproduced).


Advantages

· Helps determine root causes
· Encourages group participation
· Indicates possible causes of variation
· Increases process knowledge
· Identifies areas for collecting data

5.2.1.4.Comparison Testing

· In some applications, reliability is critical.
· Redundant hardware and software may be used.
· For redundant software, separate teams develop independent versions of the software.
· Each version is tested with the same test data to ensure that all provide identical output.
· All versions are run in parallel with a real-time comparison of results.
· Even if only one version will run in the final system, for some critical applications independent versions can be developed and used for comparison testing or back-to-back testing.
· When the outputs of the versions differ, each is investigated to determine if there is a defect.
· The method does not catch errors in the specification: if every version implements the same wrong specification, all will agree on the wrong output.
· Exercise on Live Application


5.2.2.White-Box Testing:

“Test case selection that is based on an analysis of the internal structure of the component.”

…BS7925-1
Testing based on an analysis of the internal workings and structure of a piece of software. Also
known as Structural Testing / Glass Box Testing / Clear Box Testing. Tests are based on coverage
of code statements, branches, paths and conditions.

• Aims to establish that the code works as designed
• Examines the internal structure and implementation of the program
• Targets specific paths through the program
• Needs accurate knowledge of the design, implementation and code

Test Case design techniques under White Box Testing:

· Statement coverage
· Branch coverage
· Condition coverage
· Path coverage
· Data flow-based testing
· Mutation testing

5.2.2.1.Statement Coverage:

“A test case design technique for a component in which test cases are designed to execute
statements.”
… BS7925-1

Design test cases so that every statement in a program is executed at least once. Unless a
statement is executed, we have no way of knowing whether an error exists in that statement.

Example:

Euclid's GCD computation algorithm:


int f1(int x, int y) {
    while (x != y) {
        if (x > y)
            x = x - y;     /* reduce the larger operand */
        else
            y = y - x;
    }
    return x;              /* here x == y == GCD */
}

By choosing the test set {(x=3, y=3), (x=4, y=3), (x=3, y=4)} all statements are executed at least
once.


5.2.2.2.Branch Coverage:

Branch : A conditional transfer of control from any statement to any other statement in a
component, or an unconditional transfer of control from any statement to any other statement in
the component except the next statement, or when a component has more than one entry point, a
transfer of control to an entry point of the component.

Branch Testing: A test case design technique for a component in which test cases are designed
to execute branch outcomes.
… BS7925-1

Branch testing guarantees statement coverage

Example
Test cases for branch coverage can be: {(x=3, y=3), (x=4, y=3), (x=3, y=4)}
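
This set works because (x=3, y=3) makes the while condition false immediately, (x=4, y=3) makes the while condition true and the if condition true, and (x=3, y=4) makes the while condition true and the if condition false, so both outcomes of every branch are exercised.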

5.2.2.3.Condition Coverage:

Condition: “A Boolean expression containing no Boolean operators. For instance, A<B is a
condition but A and B is not.”
… BS7925-1

Test cases are designed such that each component of a composite conditional expression is given both true and false values.

Example

Consider the conditional expression ((c1.and.c2).or.c3): each of c1, c2 and c3 is exercised at least once, i.e. given both true and false values.

· Condition testing is stronger than branch testing.
· Branch testing is stronger than statement coverage testing.
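
As a worked illustration, two test cases already suffice for condition coverage of ((c1.and.c2).or.c3): (c1=true, c2=true, c3=true) and (c1=false, c2=false, c3=false). Note that such a minimal set says nothing about combinations of condition values; exercising all eight combinations is the stronger multiple-condition coverage.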


5.2.2.4.Path Coverage:

Path: A sequence of executable statements of a component, from an entry point to an exit point.

Path testing: A test case design technique in which test cases are designed to execute paths of
a component.
… BS7925-1
A testing mechanism proposed by McCabe.

The aim is to derive a logical complexity measure of a procedural design and use this as a guide
for defining a basis set of execution paths. Test cases which exercise the basis set will execute
every statement at least once.

Flow Graph Notation

Notation for representing control flow. (Separate notations exist for the sequence, if, while, until and case constructs; diagrams not reproduced.)

On a flow graph:
· Arrows called edges represent flow of control
· Circles called nodes represent one or more actions
· Areas bounded by edges and nodes are called regions
· A predicate node is a node containing a condition

Any procedural design can be translated into a flow graph. Note that compound Boolean
expressions at tests generate at least two predicate nodes and additional arcs.

Cyclomatic Complexity:

The Cyclomatic complexity gives a quantitative measure of the logical complexity.

Introduced by Thomas McCabe in 1976, it measures the number of linearly-independent


paths through a program module. This measure provides a single ordinal number that can
be compared to the complexity of other programs. Cyclomatic complexity is often
referred to simply as program complexity, or as McCabe's complexity.

This value gives the number of independent paths in the Basis set, and an upper bound for
the number of tests to ensure that each statement is executed at least once. An
independent path is any path through a program that introduces at least one new set of
processing statements or a new condition (i.e., a new edge)


Cyclomatic complexity (CC) = E - N + 2P

Where
E = the number of edges of the graph
N = the number of nodes of the graph
P = the number of connected components (so 2P = 2 for a single flow graph)

The example flow graph (nodes 1, 2, 3, 4, 5, 6, 7a, 7b and 8; diagram not reproduced) has a
cyclomatic complexity of 4.

Independent paths:
1. 1, 8
2. 1, 2, 3, 7b, 1, 8
3. 1, 2, 4, 5, 7a, 7b, 1, 8
4. 1, 2, 4, 6, 7a, 7b, 1, 8

Cyclomatic complexity provides upper bound for number of tests required to guarantee coverage
of all program statements. As one of the more widely-accepted software metrics, it is intended to
be independent of language and language format.
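
As a worked computation, take Euclid's GCD function from section 5.2.2.1. Its flow graph has five nodes (the while decision, the if decision, the two assignment statements and the return) and six edges, so CC = E - N + 2P = 6 - 5 + 2 = 3. This matches the shortcut of counting predicate nodes plus one (2 + 1 = 3) and agrees with the three test cases used earlier to cover its branches.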

Deriving Test Cases

1. Using the design or code, draw the corresponding flow graph.
2. Determine the cyclomatic complexity of the flow graph.
3. Determine a basis set of independent paths.
4. Prepare test cases that will force execution of each path in the basis set.

Note: Some paths may only be able to be executed as part of another test.


5.2.2.5.Data Flow-Based Testing:

“Testing in which test cases are designed based on variable usage within the code.”
…BS7925-1
Selects test paths of a program according to the locations of definitions and uses of different
variables in the program.

For a statement numbered S:

DEF(S) = {X | statement S contains a definition of X}
USES(S) = {X | statement S contains a use of X}

Example 1: a=b; DEF(1) = {a}, USES(1) = {b}.
Example 2: a=a+b; DEF(2) = {a}, USES(2) = {a,b}.

A variable X is said to be live at statement S1 if X is defined at a statement S and there exists a
path from S to S1 not containing any definition of X.

DU Chain Example

1 X(){
2   a=5;         /* Defines variable a */
3   while (C1) {
4     if (C2)
5       b=a*a;   /* Uses variable a */
6     a=a-1;     /* Defines variable a */
7   }
8   print(a); }  /* Uses variable a */

A definition-use chain (DU chain) is a triple [X,S,S1], where S and S1 are statement numbers,
X is in DEF(S), X is in USES(S1), and the definition of X in statement S is live at statement S1.

Every DU chain in a program should be covered at least once. This criterion is very useful for
selecting test paths of a program containing nested if and loop statements.
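
For the DU chain example above, the definition of a at statement 2 and the redefinition at statement 6 each reach the uses at statements 5 and 8, giving chains such as [a,2,5], [a,2,8], [a,6,5] and [a,6,8]; the chain [a,2,8] is covered by a test in which the loop body never executes.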

1 X(){
2   B1;             /* Defines variable a */
3   while (C1) {
4     if (C2)
5       if (C4) B4; /* Uses variable a */
6       else B5;
7     else if (C3) B2;
8     else B3; }
9   B6 }

[a,1,5] is a DU chain.

Assume:
DEF(X) = {B1, B2, B3, B4, B5}
USED(X) = {B2, B3, B4, B5, B6}
There are 25 DU chains.
However only 5 paths are needed to cover these chains.


5.2.2.6.Mutation Testing:

The software is first tested using an initial testing method based on the white-box strategies
already discussed. After the initial testing is complete, mutation testing is taken up. The idea
behind mutation testing is to make a few arbitrary small changes to a program at a time; each
changed version of the program is called a mutated program, and the change is called a mutant.

A mutated program is tested against the full test suite of the program. If there is at least one test
case in the test suite for which a mutated program gives an incorrect result, then the mutant is
said to be dead. If a mutant remains alive even after all test cases have been exhausted, the test
suite is enhanced to kill the mutant.

The process of generation and killing of mutants can be automated by predefining a set of
primitive changes that can be applied to the program.

The primitive changes can be:

· Altering an arithmetic operator,
· Changing the value of a constant,
· Changing a data type, etc.

Major disadvantages of mutation testing:

· It is computationally very expensive,
· A large number of possible mutants can be generated.
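
As a small illustration, reusing the GCD function from section 5.2.2.1 with one primitive change (a relational operator altered):

/* Original: */
int f1(int x, int y) { while (x != y) { if (x > y) x = x - y; else y = y - x; } return x; }

/* Mutant: the loop condition x != y is changed to x > y */
int f1_mutant(int x, int y) { while (x > y) { if (x > y) x = x - y; else y = y - x; } return x; }

The test cases (x=3, y=3) and (x=4, y=3) produce the same result from both versions and so leave the mutant alive; (x=3, y=4) returns 1 from the original but 3 from the mutant, and therefore kills it.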

5.2.3.Grey Box Testing

· Grey box testing is a newer term, which evolved due to the different architectural usage of systems. It is a combination of both black box and white box testing: the tester should have knowledge of both the internals and externals of the function.
· The tester should have good knowledge of white box testing and complete knowledge of black box testing.
· Grey box testing is especially important for Web and Internet applications, because the Internet is built around loosely integrated components that connect via relatively well-defined interfaces.


6. Difference Tables

6.1.Quality Vs Testing

Quality: “Quality is giving more cushion for the user to use the system with all its expected characteristics.” It is usually described as a journey towards excellence.

Testing: Testing is an activity done to achieve the expected quality.

6.2.Testing Vs Debugging

Testing: Testing is done to find bugs.

Debugging: Debugging is the art of fixing bugs.

6.3.Quality Assurance Vs Quality Control

Quality Assurance:

· Study of the process followed in project development.
· QA is a planned and systematic set of activities necessary to provide adequate confidence that requirements are properly established and products or services conform to specified requirements.
· It is an activity that establishes and evaluates the processes that produce the products, preventing the introduction of issues or defects. It helps establish processes, sets up measurement programs to evaluate processes, and identifies weaknesses in processes and improves them.
· QA improves the processes that are applied to all products that will ever be produced by a process.
· It is performed during development on key artifacts, through walkthroughs, reviews and inspections, mentor feedback, training, checklists and standards.
· QA is the determination of correctness of the final software product by a development project with respect to the user needs and requirements, and is the responsibility of the entire team.

Quality Control:

· Study of the project for its function and specification.
· QC is a process by which product quality is compared with applicable standards, and the action taken when nonconformance is detected.
· It is an activity which verifies whether the product meets pre-defined standards.
· QC improves the development of a specific product or service by identifying the defects, reporting them, and correcting the defects.
· It is performed after a work product is produced, against established criteria, ensuring that the product integrates correctly into the environment.
· QC is a demonstration of the consistency, completeness, and correctness of the software at each stage and between each stage of the development life cycle, and is the responsibility of the tester.


6.4.Verification & Validation

Verification: The process of determining whether the output of one phase of development conforms to its previous phase. Verification is concerned with phase containment of errors.

Validation: The process of determining whether a fully developed system conforms to its SRS document. Validation is concerned with the final product being error free.

6.5.Black Box Testing & White Box Testing

Black Box / Functional:

· Test case selection that is based on an analysis of the specification of the component without reference to its internal workings.
· It focuses on global issues of workflows, configuration, performance, and so forth.
· It attempts to find errors in the external behavior of the code in the following categories: incorrect or missing functionality; interface errors; errors in data structures used by interfaces; behavior or performance errors; and initialization and termination errors.
· It involves insightful test planning, careful design, and meticulous result checking.
· A skilled manual tester knows how to follow a trail of bugs, and also applies on-the-spot judgment to observed results that an automated tool can’t.

White Box / Structural:

· Test case selection that is based on an analysis of the internal structure of the component.
· It is based on how the system is built.
· It is applied to individual components and interfaces, being particularly effective at discovering localized errors in control and data flows.
· It involves the creation of custom test data, and such test data can be reused for other kinds of tests.
· Whoever does the structural testing needs to understand fundamental test design techniques to do a good job.

6.6.IST & UAT

Particulars            IST                        UAT
Baseline document      Functional Specification   Business Requirement
Data                   Simulated                  Live Data
Environment            Controlled                 Simulated Live
Orientation            Component                  Business
Tester composition     Testing Firm               Testing Firm / Users
Purpose                Verification               Validation


6.7.SIT & IST

SIT: SIT can be done while the system is still in the process of being integrated.

IST: IST needs an integrated system of various independently functional units; it checks the workability of the system after integration and compares it with its behaviour before integration.

6.8. Alpha Testing & Beta Testing

Component                  Alpha testing              Beta testing
Test data                  Simulated                  Live
Test Environment           Controlled                 Uncontrolled
To Achieve                 Functionality              User needs
Tested by                  Only testers               Testers and End-Users
Supporting Document Used   Functional Specification   Customer Requirement Specification

6.9.Test Bed and Test Environment

Test Bed: holds only the testing documents which support testing, including test data, data guidelines, etc.

Test Environment: includes all supportive elements, namely hardware, software, tools, browsers, servers, etc.

6.10.Re-testing and Regression Testing

Re-testing: to check a particular bug and its dependencies after it is said to be fixed.

Regression Testing: to check the effect of added or new functionality on the existing system.


7. Levels of Testing

7.1.Unit Testing

“The testing of individual software components.”
… BS7925-1

· Individual testing of separate units – methods and classes.
· Write many short tests (in code) that span the extents of the requirements for the module you wish to test.
· Write tests before you write the code. You are done coding once your code can pass all the tests.

7.1.1.Benefits of Unit Testing

· Assurance of working components before integration
· Tests are repeatable – every time you change something you can rerun your suite of tests to verify that the unit still works.
· Tests can be designed to ensure that the code fulfills the requirements.
· All debugging is separated from the code.

7.1.2.Pre-requisites

Before component testing may begin the component test strategy (2.1.1) and project component
test plan (2.1.2) shall be specified.

Component test strategy

The component test strategy shall specify the techniques to be employed in the design of test
cases and the rationale for their choice. Selection of techniques shall be according to clause 3. If
techniques not described explicitly in this clause are used they shall comply with the 'Other
Testing Techniques' clause (3.13).

The component test strategy shall specify criteria for test completion and the rationale for their
choice. These test completion criteria should be test coverage levels whose measurement shall
be achieved by using the test measurement techniques defined in clause 4. If measures not
described explicitly in this clause are used they shall comply with the 'Other Test Measurement
Techniques' clause (4.13).

The component test strategy shall document the degree of independence required of personnel
designing test cases from the design process, such as:
a) the test cases are designed by the person(s) who writes the component under test;
b) the test cases are designed by another person(s);
c) the test cases are designed by a person(s) from a different section;
d) the test cases are designed by a person(s) from a different organisation;
e) the test cases are not chosen by a person.

The component test strategy shall document whether the component testing is carried out using
isolation, bottom-up or top-down approaches, or some mixture of these.

The component test strategy shall document the environment in which component tests will be
executed. This shall include a description of the hardware and software environment in which all
component tests will be run.


The component test strategy shall document the test process that shall be used for component
testing.

The test process documentation shall define the testing activities to be performed and the inputs
and outputs of each activity.

(The clauses above are drawn from the Standard for Software Component Testing, Working Draft 3.4, 27-Apr-01, © British Computer Society, SIGIST, 2001.)

For any given test case, the test process documentation shall require that the following activities
occur in the following sequence:
a) Component Test Planning;
b) Component Test Specification;
c) Component Test Execution;
d) Component Test Recording;
e) Checking for Component Test Completion.

The generic test process described in clause 2.1.1.8 is as follows. Component Test Planning shall
begin the test process and Checking for Component Test Completion shall end it; these activities
are carried out for the whole component. Component Test Specification, Component Test
Execution, and Component Test Recording may however, on any one iteration, be carried out for
a subset of the test cases associated with a component. Later activities for one test case may
occur before earlier activities for another. Whenever an error is corrected by making a change or
changes to test materials or the component under test, the affected activities shall be repeated.


7.2.Integration Testing

“Testing performed to expose faults in the interfaces and in the interaction between integrated
components”
… BS7925-1

Testing of combined parts of an application to determine whether they function together correctly.
The 'parts' can be code modules, individual applications, client and server applications on a
network, etc. This type of testing is especially relevant to client/server and distributed systems.

Objective:

The typical objectives of software integration testing are to:

· Cause failures involving the interactions of the integrated software components when running on a single platform.
· Report these failures to the software development team so that the underlying defects can be identified and fixed.
· Help the software development team to stabilize the software so that it can be successfully distributed prior to system testing.
· Minimize the number of low-level defects that will prevent effective system and launch testing.

Entry criteria:

· The integration team is adequately staffed and trained in software integration testing.
· The integration environment is ready.
· The first two software components have:
o Passed unit testing.
o Been ported to the integration environment.
o Been integrated.
· Documented Evidence that component has successfully completed unit test.
· Adequate program or component documentation is available
· Verification that the correct version of the unit has been turned over for integration.

Exit criteria:

· A test suite of test cases exists for each interface between software components.
· All software integration test suites successfully execute (i.e., the tests completely execute
and the actual test results match the expected test results).
· Successful execution of the integration test plan
· No open severity 1 or 2 defects
· Component stability

Guidelines:

· The iterative and incremental development cycle implies that software integration testing
is regularly performed in an iterative and incremental manner.
· Software integration testing must be automated if adequate regression testing is to occur.
· Software integration testing can elicit failures produced by defects that are difficult to
detect during system or launch testing once the system has been completely integrated.


7.2.1.Incremental Integration Testing

“Integration testing where system components are integrated into the system one at a time until
the entire system is integrated”
… BS7925-1

Continuous testing of an application as new functionality is added; requires that various aspects of
an application's functionality be independent enough to work separately before all parts of the
program are completed, or that test drivers be developed as needed; done by programmers or by
testers. Integration testing where system components are integrated into the system one at a time
until the entire system is integrated.

7.2.1.1.Top Down Integration

“An approach to integration testing where the component at the top of the component hierarchy is
tested first, with lower level components being simulated by stubs. Tested components are then
used to test lower level components. The process is repeated until the lowest level components
have been tested.”
… BS7925-1

· Modules integrated by moving down the program design hierarchy.
· Can use depth-first or breadth-first top-down integration.

(Diagram: the top-down testing sequence, with the Level 1 module tested first, Level 2 modules integrated beneath it, and stubs standing in for not-yet-integrated Level 2 and Level 3 modules.)

Steps:

· Main control module used as the test driver, with stubs for all subordinate modules.
· Replace stubs either depth first or breadth first
· Replace stubs one at a time.
· Test after each module integrated
· Use regression testing (conducting all or some of the previous tests) to ensure new errors
are not introduced.
· Verifies major control and decision points early in design process.


7.2.1.2.Bottom up Integration

“An approach to integration testing where the lowest level components are tested first, then used
to facilitate the testing of higher level components. The process is repeated until the component at
the top of the hierarchy is tested.”
…BS7925-1

· Begin construction and testing with atomic modules (lowest level modules).
· Use driver program to test.

(Diagram: the bottom-up testing sequence, with test drivers exercising clusters of Level N modules first, then drivers for Level N–1 modules as clusters are combined and integration moves up the hierarchy.)

Steps:

· Low level modules combined in clusters (builds) that perform specific software sub-
functions.
· Driver program developed to test. Cluster is tested.
· Driver programs removed and clusters combined, moving upwards in program structure.

Major Features
· Bottom-up: Allows early testing aimed at proving the feasibility and practicality of particular modules; modules can be integrated in various clusters as desired; major emphasis is on module functionality and performance.
· Top-down: The control program is tested first; modules are integrated one at a time; major emphasis is on interface testing.

Advantages
· Bottom-up: No test stubs are needed; it is easier to adjust manpower needs; errors in critical modules are found early.
· Top-down: No test drivers are needed; the control program plus a few modules forms a basic early prototype; interface errors are discovered early; modular features aid debugging.

Disadvantages
· Bottom-up: Test drivers are needed; many modules must be integrated before a working program is available; interface errors are discovered late.
· Top-down: Test stubs are needed; the extended early phases dictate a slow manpower buildup; errors in critical modules at low levels are found late.

Comments
· Bottom-up: At any given point, more code has been written and tested than with top-down testing. Some people feel that bottom-up is a more intuitive test philosophy.
· Top-down: An early working program raises morale and helps convince management progress is being made. It is hard to maintain a pure top-down strategy in practice.

7.2.1.3.Stub and Drivers

7.2.1.3.1. Stubs:

Stubs are program units that are stand-ins for other (more complex) program units that are
directly referenced by the unit being tested.

Stubs are usually expected to provide the following:

An interface that is identical to the interface that will be provided by the actual program unit, and
the minimum acceptable behavior expected of the actual program unit. (This can be as simple as
a return statement)

7.2.1.3.2. Drivers:

Drivers are programs or tools that allow a tester to exercise and examine, in a controlled manner,
the unit of software being tested.

A driver is usually expected to provide the following:

A means of defining, declaring, or otherwise creating, any variables, constants, or other items
needed in the testing of the unit, and a means of monitoring the states of these items, any input
and output mechanisms needed in the testing of the unit
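
A minimal sketch of both roles in C, assuming a hypothetical unit compute_interest() whose rate-lookup collaborator get_rate() is not yet available:

#include <stdio.h>

/* Stub: stands in for the real rate-lookup unit. It presents the identical
   interface and the minimum acceptable behaviour (a canned return value). */
double get_rate(int account_type) {
    (void) account_type;          /* the stub ignores its input */
    return 0.25;
}

/* Unit under test: directly references get_rate(). */
double compute_interest(double balance, int account_type) {
    return balance * get_rate(account_type);
}

/* Driver: creates the test data, exercises the unit and monitors the result. */
int main(void) {
    double result = compute_interest(1000.0, 1);
    printf("interest = %.2f (expected 250.00)\n", result);
    return (result == 250.0) ? 0 : 1;
}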

Sandwich Testing: Combines bottom-up and top-down testing using testing layer.


7.2.2.Non-Incremental Testing

7.2.2.1.Big Bang Integration

“Integration testing where no incremental testing takes place prior to all the system's components
being combined to form the system.”
… BS7925-1

7.2.2.2.Validation Testing

Validation testing aims to demonstrate that the software functions in a manner that can be
reasonably expected by the customer.

Tests conformance of the software to the Software Requirements Specification. This should
contain a section “Validation criteria” which is used to develop the validation tests.

Validation test criteria

· A set of black box tests to demonstrate conformance with requirements.
· To check that: all functional requirements are satisfied, all performance requirements are achieved, documentation is correct and 'human-engineered', and other requirements are met (e.g., compatibility, error recovery, maintainability).
· When validation tests fail it may be too late to correct the error prior to scheduled delivery, so a method of resolving deficiencies needs to be negotiated with the customer.

7.2.2.3.Configuration review
An audit to ensure that all elements of the software configuration are properly developed and
catalogued, and have the necessary detail to support maintenance.

7.3.System Testing

“System testing is the process of testing an integrated system to verify that it meets specified
requirements".
… BS7925-1
It is further sub-divided into:

· Functional system testing
· Non-functional system testing

System test Entrance Criteria:

· Successful execution of the Integration test cases


· No open severity 1 or 2 defects
· 75-80% of total system functionality and 90% of major functionality delivered
· System stability for 48-72 hours to start test


System Test Exit Criteria:

· Successful execution of the system test cases ,and documentation that shows
coverage of requirements and high-risk system components
· System meets pre-defined quality goals
· 100% of total system functionality delivered

7.3.1.Functional Testing

7.3.1.1.Requirement based Testing

“Designing tests based on objectives derived from requirements for the software component (e.g.,
tests that exercise specific functions or probe the non-functional constraints such as performance
or security)”
… BS7925-1

Requirements testing must verify that the system can perform its function correctly and
that the correctness can be sustained over a continuous period of time. Unless the system
can function correctly over an extended period of time management will not be able to
rely upon the system. The system can be tested for correctness throughout the lifecycle,
but it is difficult to test the reliability until the program becomes operational.

Objectives:

Successfully implementing user requirements is only one aspect of requirements testing.


The responsible user is normally only one of many groups having an interest in the
application system. The objectives that need to be addressed in requirements testing are:

· User requirements are implemented


· Correctness is maintained over extended processing periods.
· Application processing complies with the organization’s policies and procedures.

When to use Requirements Testing:

Every application should be requirements tested. The process should begin in the
requirements phase, and continue through every phase of the life cycle into operations
and maintenance. It is not a question as to whether requirements must be tested but,
rather, the extent and methods used in requirements testing.


7.3.2.Non-Functional Testing

“Testing of those requirements that do not relate to functionality, i.e. performance, usability, etc.”
…BS7925-1

Non-Functional testing types:

Configuration, Compatibility, Conversion, Disaster Recovery, Interoperability, Installability, Memory Management, Maintainability, Portability, Performance, Procedure, Reliability, Recovery, Stress, Security and Usability testing.

7.3.2.1.Recovery testing

“Testing aimed at verifying the system's ability to recover from varying degrees of failure.”

… BS7925-1

Recovery is the ability to restart operations after the integrity of the application has been
lost. The process normally involves reverting to a point where the integrity of the system
is known, and then reprocessing transactions up until the point of failure. The importance
of recovery will vary from application to application.

Objectives:

Recovery testing is used to ensure that operations can be continued after a disaster. It verifies
not only the recovery process, but also the effectiveness of the component parts of that process.
Specific objectives of recovery testing include:

· Adequate backup data is preserved
· Backup data is stored in a secure location
· Recovery procedures are documented
· Recovery personnel have been assigned and trained
· Recovery tools have been developed and are available

When to use Recovery Testing:

Recovery testing should be performed whenever the user of the application states that the
continuity of operation of the application is essential to the proper functioning of the user area.
The user should estimate the potential loss associated with inability to recover operations over
various time spans. The amount of the potential loss should both determine the amount of
resource to be put into disaster planning as well as recovery testing.


7.3.2.2.Security testing

“Testing whether the system meets its specified security objectives.”


… BS7925-1

Security is a protection system that is needed both to secure confidential information and, for
competitive purposes, to assure third parties that their data will be protected. Protecting the
confidentiality of the information is designed to protect the resources of the organization. Security
testing is designed to evaluate the adequacy of the protective procedures and countermeasures.

Objectives:

Security defects do not become as obvious as other types of defects. Therefore, the objectives of
security testing are to identify defects that are very difficult to identify. Even failures in the security
system operation may not be detected, resulting in a loss or compromise of information without
the knowledge of that loss. The security testing objectives include:

· Determining that adequate attention has been devoted to identifying security risks
· Determining that a realistic definition and enforcement of access to the system has been implemented
· Determining that sufficient expertise exists to perform adequate security testing
· Conducting reasonable tests to ensure that the implemented security measures function properly

When to Use security Testing:

Security testing should be used when the information and/or assets protected by the application
system are of significant value to the organization. The testing should be performed both prior to
the system going into an operational status and after the system is placed into an operational
status. The extent of testing should depend on the security risks, and the individual assigned to
conduct the test should be selected based on the estimated sophistication that might be used to
penetrate security.

7.3.2.3.Stress testing

“Testing conducted to evaluate a system or component at or beyond the limits of its specified
requirements.”
… BS7925-1

Stress testing is designed to test the software under abnormal situations. It attempts to find the
limits at which the system will fail through an abnormal quantity or frequency of inputs. For
example:

· Higher rates of interrupts.


· Data rates an order of magnitude above 'normal'.
· Test cases that require maximum memory or other resources.
· Test cases that cause 'thrashing' in a virtual operating system.
· Test cases that cause excessive 'hunting' for data on disk systems.


7.3.2.4.Performance testing

“Testing conducted to evaluate the compliance of a system or component with specified


performance requirements.”
… IEEE

Performance testing is designed to test the run-time performance of software within the context of
an integrated system. It is not until all system elements are fully integrated and certified as free of
defects that the true performance of a system can be ascertained.

Performance tests are often coupled with stress testing and often require both hardware and
software instrumentation; that is, it is necessary to measure resource utilization in an exacting
fashion. External instrumentation can monitor execution intervals and log events. By instrumenting
the system, the tester can uncover situations that lead to degradation and possible system failure.

7.3.3.Alpha and Beta testing

“Alpha testing: Simulated or actual operational testing at an in-house site not otherwise involved
with the software developers.”
… BS7925-1

“Beta testing: Operational testing at a site not otherwise involved with the software developers.”

… BS7925-1
This is testing of an operational nature once the software seems stable. It should be
conducted by people who represent the software vendor's market, and who will use the
product in the same way as the final version once it is released. The benefit of this type of
acceptance testing is that it will bring out operational issues from potential customers
prepared to comment on the software before it is officially released.
Alpha testing is conducted at the developer's site by a customer. The customer uses the
software with the developer 'looking over the shoulder' and recording errors and usage
problems. Alpha testing is conducted in a controlled environment.
Beta testing is conducted at one or more customer sites by end users. It is 'live' testing in an
environment not controlled by the developer. The customer records and reports difficulties and
errors at regular intervals.


7.4.User Acceptance Testing

“Acceptance testing: Formal testing conducted to enable a user, customer, or other authorized
entity to determine whether to accept a system or component”
… BS7925-1

User Acceptance Testing (UAT) is performed by Users or on behalf of the users to ensure that the
Software functions in accordance with the Business Requirement Document. UAT focuses on the
following aspects:

· All functional requirements are satisfied


· All performance requirements are achieved
· Other requirements like transportability, compatibility, error recovery etc. are satisfied.
· Acceptance criteria specified by the user is met.

7.4.1.Entry Criteria

· SIT must be completed.


· Availability of stable Test Environment with the latest version of the Application.
· Test Cases prepared by the testing team to be reviewed and signed-off by the Project
coordinator (AGM-Male).
· All User IDs requested by the testing team to be created and made available to the testing
team one week prior to start of testing.

7.4.2.Exit Criteria

· All test scenarios/conditions will be executed, and reasons will be provided for untested conditions arising out of the following situations:
o Non-availability of the functionality
o Deferred to a future release
· All defects reported are in the ‘Closed’ or ‘Deferred’ status. The client team should sign off the ‘Deferred’ defects.


7.5.Regression Testing and Re-testing

“Retesting of a previously tested program following modification to ensure that faults have not
been introduced or uncovered as a result of the changes made.”
… BS7925-1

“Regression Testing is the process of testing the changes to computer programs to make sure
that the older programs still work with the new changes.”

“When making improvements on software, retesting previously tested functions to make sure
adding new features has not introduced new problems.”

Regression testing is an expensive but necessary activity performed on modified software to
provide confidence that changes are correct and do not adversely affect other system
components. Four things can happen when a developer attempts to fix a bug. Three of these
things are bad, and one is good:

                      New Bug    No New Bug
Successful change     Bad        Good
Unsuccessful change   Bad        Bad

Because of the high probability that one of the bad outcomes will result from a change to the
system, it is necessary to do regression testing. A regression test selection technique chooses,
from an existing test set, the tests that are deemed necessary to validate modified software.

There are three main groups of test selection approaches in use:

· Minimization approaches seek to satisfy structural coverage criteria by identifying a minimal set of tests that must be rerun.
· Coverage approaches are also based on coverage criteria, but do not require minimization of the test set. Instead, they seek to select all tests that exercise changed or affected program components.
· Safe approaches instead attempt to select every test that will cause the modified program to produce different output than the original program.

7.5.1.Factors favour Automation of Regression Testing


· Ensure consistency
· Speed up testing to accelerate releases
· Allow testing to happen more frequently
· Reduce costs of testing by reducing manual labor
· Improve the reliability of testing
· Define the testing process and reduce dependence on the few who know it

7.5.2.Tools used in Regression testing


· WinRunner from Mercury
· e-Tester from Empirix
· WebFT from RadView
· SilkTest from Segue
· Rational Robot from Rational
· QARun from Compuware


8. Types of Testing

8.1.Compliance Testing

Involves test cases designed to verify that an application meets specific criteria, such as
processing four-digit year dates, properly handling special data boundaries and other business
requirements.

8.2.Intersystem Testing / Interface Testing

“Integration testing where the interfaces between system components are tested”
… BS7925-1

Intersystem testing is designed to verify that the interconnections between applications function
correctly.

Applications are frequently interconnected to other systems. The interconnection may be data
coming into the system from another application, or data leaving for another application,
frequently in multiple cycles.

Intersystem testing involves the operation of multiple systems in the test. The basic need for an
intersystem test arises whenever there is a change in parameters between application systems,
or where multiple systems are integrated in cycles.

8.3.Parallel Testing

· The process of comparing test results of processing production data concurrently in both
the old and new systems.

· Process in which both the old and new modules run at the same time so that performance
and outcomes can be compared and corrected prior to deployment; commonly done with
modules like Payroll.

· Testing a new or an alternate data processing system with the same source data that is
used in another system. The other system is considered as the standard of comparison.

8.4.Database Testing

The database component is a critical piece of any data-enabled application. Today’s intricate mix
of client-server and Web-enabled database applications is extremely difficult to test productively.
Testing at the data access layer is the point at which your application communicates with the
database. Tests at this level are vital to improve not only your overall test strategy, but also your
product’s quality.

Database testing includes the process of validation of database stored procedures, database
triggers, database APIs, backup, recovery, security and database conversion.


8.5.Manual support Testing

Manual support testing involves all functions performed by people in preparing data for, and
using data from, the automated system. The objectives of manual support testing are to:

· Verify that the manual-support procedures are documented and complete
· Determine that manual-support responsibilities have been assigned
· Determine that manual-support people are adequately trained

Manual support testing involves first the evaluation of the adequacy of the process and second
the execution of the process. The method of testing may differ, but the objective remains the
same.

8.6.Ad-hoc Testing

“Testing carried out using no recognised test case design technique.”


… BS7925-1

Testing without a formal test plan or outside of a test plan. With some projects this type of testing
is carried out as an adjunct to formal testing. If carried out by a skilled tester, it can often find
problems that are not caught in regular testing. Sometimes, if testing occurs very late in the
development cycle, this will be the only kind of testing that can be performed. Sometimes ad hoc
testing is referred to as exploratory testing.

8.7.Configuration Testing

Testing to determine how well the product works with a broad range of hardware/peripheral
equipment configurations as well as on different operating systems and software.

8.8.Pilot Testing

Testing that involves the users just before actual release to ensure that users become familiar
with the release contents and ultimately accept it. Often is considered a Move-to-Production
activity for ERP releases or a beta test for commercial products. Typically involves many users, is
conducted over a short period of time and is tightly controlled.

8.9.Automated Testing

Software testing that utilizes a variety of tools to automate the testing process, reducing the need
for manual testing. Automated testing still requires a skilled quality assurance professional with
knowledge of the automation tool and the software being tested to set up the tests.


8.10.Load Testing

Load Testing involves stress testing applications under real-world conditions to predict system
behavior and performance and to identify and isolate problems. Load testing applications can
emulate the workload of hundreds or even thousands of users, so that you can predict how an
application will work under different user loads and determine the maximum number of concurrent
users accessing the site at the same time.

8.11.Stress and Volume Testing

“Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of
its specified requirements.”
…. BS7925-1

“Volume Testing: Testing where the system is subjected to large volumes of data. “
…. BS7925-1

Testing with the intent of determining how well a product performs when a load is placed on the
system resources that nears and then exceeds capacity.

Volume Testing, as its name implies, is testing that purposely subjects a system (both hardware
and software) to a series of tests where the volume of data being processed is the subject of the
test. Such systems can be transactions processing systems capturing real time sales or could be
database updates and or data retrieval.

8.12.Usability Testing

“Testing the ease with which users can learn and use a product.”
…. BS7925-1

All aspects of user interfaces are tested:


· Display screens
· messages
· report formats
· navigation and selection problems

8.13.Environmental Testing

These tests check the system’s ability to perform at the installation site.

Requirements might include tolerance for


· heat
· humidity
· chemical presence
· portability
· electrical or magnetic fields
· Disruption of power, etc.


9. Roles & Responsibilities

9.1.Test Associate

Reporting To:

Team Lead of a project

Responsibilities:

· Design and develop test conditions and cases with associated test data based upon
requirements
· Design test scripts
· Executes the test ware (Conditions, Cases, Test scripts etc.) with the test data generated
· Reviews test ware, record defects, retest and close defects
· Preparation of reports on Test progress

9.2.Test Engineer

Reporting To:

Team Lead of a project

Responsibilities:

· Design and develop test conditions and cases with associated test data based upon
requirements
· Design test scripts
· Executes the test ware (Conditions, Cases, Test scripts etc.) with the test data generated
· Reviews test ware, record defects, retest and close defects
· Preparation of reports on Test progress

9.3.Senior Test Engineer

Reporting To:

Team Lead of a project

Responsibilities:

· Responsible for collection of requirements from the users and evaluating the same and
send out for team discussion
· Preparation of the High level design document incorporating the feedback received on the
high level design document and initiate on the low level design document
· Assist in the preparation of test strategy document drawing up the test plan
· Preparation of business scenarios, supervision of test cases preparation based on the
business scenarios
· Maintaining the run details of the test execution, Review of test condition/cases, test
scripts
· Defect Management
· Preparation of test deliverable documents and defect metrics analysis report


9.4.Test Lead

Reporting To:

Test Manager

Responsibilities:

· Technical leadership of the test project including test approach and tools to be used
· Preparation of test strategy
· Ensure entrance criteria prior to test start-off
· Ensure exit criteria prior to completion sign-off
· Test planning including automation decisions
· Review of design documents (test cases, conditions, scripts)
· Preparation of test scenarios and configuration management and quality plan
· Manage test cycles
· Assist in recruitment
· Supervise test team
· Resolve team queries/problems
· Report and follow up test system outages/problems
· Client interface
· Project progress reporting
· Defect Management
· Staying current on latest test approaches and tools, and transferring this knowledge to
test team
· Ensure test project documentation

9.5.Test Manager

Reporting To:

Management

Responsibilities:

· Liaison for interdepartmental interactions: Representative of the testing team


· Client interaction
· Recruiting, staff supervision, and staff training.
· Test budgeting and scheduling, including test-effort estimations.
· Test planning including development of testing goals and strategy.
· Test tool selection and introduction.
· Coordinating pre and post test meetings.
· Test program oversight and progress tracking.
· Use of metrics to support continual test process improvement.
· Test process definition, training and continual improvement.
· Test environment and test product configuration management.
· Nomination of training
· Cohesive integration of test and development activities.
· Mail Training Process for training needs, if required
· Review of the proposal


10. Test Preparation & Design Process

10.1.Baseline Documents

Construction of an application and testing are done using certain documents. These documents
are written in sequence, each of it derived from the previous document.

10.1.1.Business Requirement

It describes the user’s needs for the application. It is produced over a period of time, going
through various levels of requirements. It should also portray functionalities that are technically
feasible within the stipulated time frames for delivery of the application.

As this contains user-perspective requirements, the User Acceptance Test is based on this document.

10.1.2.Functional Specification

“The document that describes in detail the characteristics of the product with regard to its
intended capability.”
… BS7925-1

The Functional Specification document describes the functional needs, design of the flow and
user maintained parameters. It is primarily derived from Business requirement document, which
specifies the client's business needs. The proposed application should adhere to the
specifications specified in the document. This is used henceforth to develop further documents for
software construction, validation and verification of the software.

10.1.3.Design Specification

The Design Specification document is prepared based on the functional specification. It contains
the system architecture, table structures and program specifications. This is ideally prepared and
used by the construction team. The test team should also have a detailed understanding of the
design specification in order to understand the system architecture.

10.1.4.System Specification

The System Specification document is a combination of the Functional Specification and the
Design Specification. It is used in the case of a small application or an enhancement to an
existing application.

Case Study on each document and reverse presentation.

10.2.Traceability

10.2.1.BR and FS

The requirements specified by the users in the business requirement document may not be
exactly translated into a functional specification. Therefore, a trace on specifications between


functional specification and business requirements is done on a one-to-one basis. This helps in
finding the gaps between the documents. These gaps are then closed by the author of the FS, or
deferred after discussions.

Testers should understand these gaps and use them as an addendum to the FS, after getting this
signed off by the author of the FS. The final form of the FS may vary from the original, as deferring
or taking in a gap may have a ripple effect on the application. Sometimes, these ripple effects may
not be reflected in the FS. Addenda may sometimes affect the entire system and the test case
development.

10.2.2.FS and Test conditions

Test conditions built by the tester are traced with the FS to ensure full coverage of the baseline
document. If gaps between the same are obtained, tester must then build conditions for the gaps.
In this process, testers must keep in mind the rules specified in Test condition writing.

10.3.Gap Analysis

This is the terminology used on finding the difference between "what it should be" and "what it is".
As explained, it is done on the Business requirement to FS and FS to test conditions.
Mathematically, it becomes evident that Business requirements that are user’s needs are tested,
as Business requirement and Test conditions are matched.

Simplifying the above,

A=Business requirement
B=Functional Specification
C=Test conditions
A=B, B=C, Therefore A=C

Another way of looking at this process is to eliminate as many mismatches at every stage of the
process, there by giving the customer an application, which will satisfy their needs.

In the case of UAT, there is a direct translation of specification from the Business Requirement to
Test conditions leaving lesser amount of understandability loss.

10.4.Choosing Testing Techniques

· The testing technique varies based on the project and the risks involved in the project.
· It is determined by the criticality and risks involved with the Application under Test (AUT).
· The technique used for testing will be chosen based on the organizational need of the end user and on the critical risk factors or test factors that impact the system.
· The technique adopted will also depend on the phase of testing.
· The two factors that determine the test technique are:
o Test factors: the risks that need to be addressed in testing.
o Test phases: the phase of the systems development life cycle in which testing will occur.
· It also depends on the time and money spent on testing.


10.5.Error Guessing

“A test case design technique where the experience of the tester is used to postulate what faults
might occur, and to design tests specifically to expose them. “

… BS7925-1

10.6.Error Seeding

“The process of intentionally adding known faults to those already in a computer program for the
purpose of monitoring the rate of detection and removal, and estimating the number of faults
remaining in the program.”
… [IEEE]

10.7.Test Plan

This is a summary of the ANSI/IEEE Standard 829-1983. It describes a test plan as:

“A document describing the scope, approach, resources, and schedule of intended testing
activities. It identifies test items, the features to be tested, the testing tasks, who will do each task,
and any risks requiring contingency planning.”
… (ANSI/IEEE Standard 829-1983)

This standard specifies the following test plan outline:

10.7.1.Test Plan Identifier

§ A unique identifier

10.7.2.Introduction

§ Summary of the items and features to be tested


§ Need for and history of each item (optional)
§ References to related documents such as project authorization, project plan, QA plan,
configuration management plan, relevant policies, relevant standards
§ References to lower level test plans

10.7.3.Test Items

§ Test items and their version


§ Characteristics of their transmittal media
§ References to related documents such as requirements specification, design
specification, users guide, operations guide, installation guide
§ References to bug reports related to test items
§ Items which are specifically not going to be tested (optional)


10.7.4.Features to be Tested

§ All software features and combinations of features to be tested


§ References to test-design specifications associated with each feature and
combination of features

10.7.5.Features Not to Be Tested


§ All features and significant combinations of features which will not be tested
§ The reasons these features won’t be tested

10.7.6.Approach

§ Overall approach to testing


§ For each major group of features or combination of features, specify the approach
§ Specify major activities, techniques, and tools which are to be used to test the groups
§ Specify a minimum degree of comprehensiveness required
§ Identify which techniques will be used to judge comprehensiveness
§ Specify any additional completion criteria
§ Specify techniques which are to be used to trace requirements
§ Identify significant constraints on testing, such as test-item availability, testing-resource availability, and deadlines

10.7.7.Item Pass/Fail Criteria

§ Specify the criteria to be used to determine whether each test item has passed or
failed testing

10.7.8.Suspension Criteria and Resumption Requirements

§ Specify criteria to be used to suspend the testing activity


§ Specify testing activities which must be redone when testing is resumed

10.7.9.Test Deliverables

§ Identify the deliverable documents: test plan, test design specifications, test case
specifications, test procedure specifications, test item transmittal reports, test logs,
test incident reports, test summary reports
§ Identify test input and output data
§ Identify test tools (optional)


10.7.10.Testing Tasks

§ Identify tasks necessary to prepare for and perform testing


§ Identify all task interdependencies
§ Identify any special skills required

10.7.11.Environmental Needs

§ Specify the level of security required


§ Identify special test tools needed
§ Specify necessary and desired properties of the test environment: physical
characteristics of the facilities including hardware, communications and system
software, the mode of usage (i.e., stand-alone), and any other software or supplies
needed
§ Identify any other testing needs
§ Identify the source for all needs which are not currently available

10.7.12.Responsibilities

§ Identify groups responsible for managing, designing, preparing, executing,


witnessing, checking and resolving
§ Identify groups responsible for providing the test items identified in the Test Items
section
§ Identify groups responsible for providing the environmental needs identified in the
Environmental Needs section

10.7.13.Staffing and Training Needs

§ Specify staffing needs by skill level


§ Identify training options for providing necessary skills

10.7.14.Schedule

§ Specify test milestones


§ Specify all item transmittal events
§ Estimate time required to do each testing task
§ Schedule all testing tasks and test milestones
§ For each testing resource, specify its periods of use


10.7.15.Risks and Contingencies

§ Identify the high-risk assumptions of the test plan


§ Specify contingency plans for each

10.7.16.Approvals

§ Specify the names and titles of all persons who must approve the plan
§ Provide space for signatures and dates

10.8.High Level Test Conditions / Scenario

It represents the possible values that can be attributed to a particular specification.

Determining the conditions is important for:


· Deciding the architecture of testing approach
· Evolving design of the test scripts
· Ensuring coverage
· Understanding the maximum conditions for a specification

At this point the tester will have a fair understanding of the application and his module. The
functionality can be broken into
· Field level rules
· Module level rules
· Business rules
· Integration rules

10.8.1.Processing logic

It may not be possible to segment the specifications into the above categories in all applications. It
is left to the test team to decide on the application segmentation. For the segments identified by
the test team, the possible condition types that can be built are

· Positive condition - the value given for the test is chosen to comply with the condition.
· Negative condition - the value given for the test is chosen not to comply with the condition.
· Boundary condition - the value given for the test is chosen to assess the extreme values of the condition.
· User perspective condition - the value given for the test is chosen to reflect the practical usage of the condition.
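
To make the four condition types concrete, here is a minimal Python sketch for a hypothetical rule, "deposit tenor must be between 1 and 60 months"; the rule and values are illustrative only.

    # One test value per condition polarity for the hypothetical tenor rule.
    def tenor_is_valid(months: int) -> bool:
        return 1 <= months <= 60

    conditions = {
        "positive": 24,          # complies with the rule
        "negative": 0,           # violates the rule (below the range)
        "boundary_low": 1,       # extreme valid value
        "boundary_high": 60,     # extreme valid value
        "user_perspective": 12,  # a tenor customers commonly choose
    }
    for kind, months in conditions.items():
        print(f"{kind:16} tenor={months:3} -> accepted: {tenor_is_valid(months)}")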

10.8.2.Data Definition


In order to test the conditions and values, the application should be populated with data.

There are two ways of populating the data into tables of the application.

· Intelligent: Data is tailor-made for every condition and value, with reference to its condition. Such data will aid in triggering specific actions by the application. By constructing such intelligent data, a few data records will suffice for the testing process.

Example:

Business rule: if the interest to be paid is more than 8% and the tenor of the deposit exceeds one month, then the system should give a warning.

To populate the interest-to-be-paid field of a deposit, we can give 9.5478 and make the tenor two months for a particular deposit. This will trigger the warning in the application.

· Unintelligent: Data is populated in mass, corresponding to the table structures. Its values are chosen at random, without reference to the conditions derived. This type of population can be used for testing the performance of the application and its behavior with random data. It will be difficult for the tester to identify his requirements from the mass data.

Example:

Using the above example, finding a suitable record with interest exceeding 8% and a tenor of more than one month is difficult.

Having understood the difference between intelligent and unintelligent data, and having by this point a good idea of the application, the tester should be able to design intelligent data for his test conditions.

An application may have its own hierarchy of interconnected data structures.
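
A minimal sketch of the intelligent-data example above, with the warning rule expressed as a predicate; the field names are hypothetical.

    # The intelligent record is tailor-made to trigger the warning rule
    # "interest > 8% and tenor > 1 month"; a random record may never hit it.
    def needs_warning(interest_rate: float, tenor_months: int) -> bool:
        return interest_rate > 8.0 and tenor_months > 1

    intelligent_record = {"interest_rate": 9.5478, "tenor_months": 2}
    assert needs_warning(**intelligent_record)   # must raise the warning

    random_record = {"interest_rate": 4.25, "tenor_months": 1}
    print(needs_warning(**random_record))        # False - rule never triggered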

10.8.3.Feeds Analysis

Most applications are fed with inputs at periodic intervals, like end of day or every hour etc. Some
applications may be stand alone i.e., all processes will happen within its database and no external
inputs of processed data are required.

In the case of applications having feeds received from other machines, the feeds are sent in a pre-defined format. At the application end, these feeds are processed by local programs and populated into the respective tables.

It is therefore, essential for testers to understand the data mapping between the feeds and the
database tables of the application. Usually, a document is published in this regard.

The high-level data designed previously should be converted into the feed formats in order to populate the application database.

· Data Sheet format (ISO template)


· Exercise with the live application
· Test Case


10.9.Test Case

“A set of inputs, execution preconditions, and expected outcomes developed for a particular
objective, such as to exercise a particular program path or to verify compliance with a specific
requirement.”
… BS7925-1

Test cases are written based on the test conditions. A test case is the phrased form of a test condition, readable and understandable by all. The language used in the expected results should not be ambiguous; the results expressed should be clear and allow only one interpretation. It is advisable to use the term "should" in the expected results.

There are three headings under which a test case is written. Namely,

· Description: Here the details of the test on a specification or a condition are written.

· Data and Pre-requirements: Here either the data for the test or the specification is
mentioned. Pre-requirements for the test to be executed should also be clearly
mentioned.

· Expected results: The expected result on the execution of the instruction in the
description is mentioned. In general, it should reflect in detail the result of the test
execution.

While writing a test case, to make the test cases explicit, the tester should include the following:

· References to the rules and specifications under test, in words, with minimal technical jargon.
· Checks on data shown by the application should refer to the table names where possible.
· Names of the fields and screens should also be explicit.
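
Putting the three headings together, the sketch below shows one way a test case could be recorded; the deposit-screen example and field names are hypothetical.

    # A test case holds a description, data/pre-requirements and an expected
    # result phrased with "should" so only one interpretation is possible.
    from dataclasses import dataclass

    @dataclass
    class TestCase:
        description: str       # the instruction to execute
        data_prereq: str       # data and pre-requirements
        expected_result: str   # clear, single-interpretation outcome

    tc = TestCase(
        description="Enter the deposit amount and click Save on the New Deposit screen",
        data_prereq="Amount = 5000; customer CUST-001 already created",
        expected_result="The deposit should be saved and a confirmation message "
                        "with the deposit reference number should be displayed",
    )
    print(tc.description)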

10.9.1.Expected Results

The outcome of executing an instruction would have a single or multiple impacts on the
application. The resultant behavior of the application after execution is the expected result.

10.9.1.1.Single Expected Result

It has a single impact on the instruction executed.

Example:
Test Case Description: Click on the hyperlink "New deposit" at the top left hand corner of the main
menu screen.
Expected result: New time deposit screen should be displayed.

10.9.1.2.Multiple Expected Result

It has multiple impacts on executing the instructions.

Example:
Test Case Description: Click on the hyperlink "New deposit" at the top left hand corner of the main
menu screen.


Expected result: New time deposit screen should be displayed & Customer contact date should be
pre-filled with the system date.

10.9.2.Pre-requirements

Test cases cannot always be executed with the application in its normal state. Below is a list of possible pre-requirements that could be attached to a test case:

· Enable or Disable external interfaces


o Example: Reuters, Foreign exchange rate information service organization server
to be connected to the application.

· Time at which the test case is to be executed


o Example: Test to be executed after 2.30 p.m. in order to trigger a warning.

· Dates that are to be maintained (pre-dated or post-dated) in the database before testing, as it is sometimes not possible to predict the dates of testing and populate certain date fields that are to trigger certain actions in the application.
o Example: Maturity date of a deposit should be the date of test. So, it is difficult to
give the value of the maturity date while data designing or preparing test cases.

· Deletion of certain records to trigger an action by the application


o Example: A document availability indicator field to be made null, so as to trigger a
warning from the application.

· Change values if required to trigger an action by the application


o Example: Change the value of the interest for a deposit so as to trigger a warning
by the application.

10.9.3.Data definition

Data for executing the test cases should be clearly defined in the test cases. The test cases should indicate the values that will be entered into the fields and also the default values of the fields.

Example:
Description: Enter Client's name
Data: John Smith
(OR)
Description: Check the default value of the interest for the deposit
Data: $ 400

Where calculations are involved, the test case should indicate the calculated value in its expected results.

Example:
Description: Check the default value of the interest for the deposit
Data: $ 400
This value ($400) should be calculated using the formula specified well in advance, during data design.
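
For instance, a minimal sketch of pre-computing an expected value at data-design time, assuming a hypothetical simple-interest formula:

    # Pre-compute the expected interest during data design, then compare the
    # value shown by the application against it during execution.
    def simple_interest(principal: float, rate: float, years: float) -> float:
        return principal * rate * years / 100

    expected_interest = simple_interest(principal=10000, rate=8.0, years=0.5)
    print(f"Expected default interest: ${expected_interest:.2f}")   # $400.00

    actual_from_application = 400.00        # value read off the screen
    assert abs(actual_from_application - expected_interest) < 0.01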


11. Test Execution Process

The preparation to test the application is now over. The test team should next plan the execution
of the test on the application.

11.1.Pre- Requirements

11.1.1.Version Identification Values

The application would contain several program files in order for it to function. The version of these files and a unique checksum number for each file are a must for change management.

These numbers will be generated for every program file on transfer from the development
machine to the test environment. The number attributed to each program file is unique and if any
change is made to the program file between the time it is transferred to the test environment and
the time when it is transferred back to the development for correction, it can be detected by using
these numbers. These identification methods vary from one client to another.

These values have to be obtained from the development team by the test team. This helps in
identifying unauthorized transfers or usage of application files by both parties involved.

The responsibilities of acquiring, comparing and tracking these values before and after each software transfer lie with the test team.
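
A minimal sketch of such version identification, using a standard-library SHA-256 checksum; the directory and file names are illustrative, and real projects may use a different identification scheme.

    # Record a checksum per program file at transfer time; any later change
    # between environments shows up as a checksum mismatch.
    import hashlib
    from pathlib import Path

    def file_checksum(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    baseline = {p.name: file_checksum(p) for p in Path("build").glob("*.exe")}

    # Later, before accepting a re-transferred build:
    for p in Path("incoming").glob("*.exe"):
        if baseline.get(p.name) != file_checksum(p):
            print(f"Unexpected change detected in {p.name}")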

11.1.2.Interfaces for the application

In some applications, external interfaces may have to be connected or disconnected. In both cases the development team should certify that the application will function in an integrated fashion. Actual navigation to and from an interface may not be covered in black box testing.

11.1.3.Unit testing sign off

· To begin an integrated test on the application, the development team should have completed tests on the software at unit level.

· Unit testing focuses verification effort on the smallest unit of software design. Using the design specification as a guide, important control paths and field validations are tested.

· Clients and the development team must sign off this stage, and hand over the signed-off application with the defect report to the testing team.

Test Plan – Internal

Test Execution Sequence

Test cases can either be executed in a random format or in a sequential fashion. Some applications have concepts that would require sequencing of the test cases before actual execution. The details of the execution are documented in the test plan.

Sequencing can also be done on the modules of the application, as one module would populate or
formulate information required for another.


11.1.4.Test Case Allocation

The test team should decide on the resources that will execute the test cases. Ideally, the tester who designed the test cases for a module executes the test. In some cases, due to time or resource constraints, additional test cases might have to be executed by other members of the team. Responsibilities should be clearly documented in the test plan.

Test cases are allocated among the team and across the different passes. It may not be possible to execute all test cases in the first pass. Some of the reasons for this could be:
· Functionality may sometimes be introduced at a later stage and the application may not support it, or the test team may not be ready with the preparation
· External interfaces to the application may not be ready
· The client might choose to deliver some part of the application for testing, with the rest delivered during other passes

Targets for completion of Phases

Time frames for the passes have to be decided and committed to the clients well in advance of the start of testing. Some of the factors considered for doing so are:
· Number of cases/scripts: Depending on the number of test scripts and the resources available, completion dates are prepared.
· Complexity of testing: In some cases the number of test cases may be small, but the complexity of the tests may be a factor. The testing may involve time-consuming calculations or responses from external interfaces, etc.
· Number of errors: This is done very exceptionally. Pre-IST testing is done to check the health of the application soon after the preparations are done. The number of errors reported there should be taken as a benchmark.

In this section, we will see how test execution is performed.

11.2.Stages of Testing:

11.2.1.Comprehensive Testing - Round I


All the test scripts developed for testing are executed. In some cases the application may not have certain module(s) ready for test; these will be covered comprehensively in the next pass. The testing here should cover not only all the test cases but also the business cycles as defined in the application.

11.2.2.Discrepancy Testing - Round II


All the test cases that resulted in a defect during the comprehensive pass are executed. In other words, all defects that have been fixed are retested. Function points that may be affected by the defect should also be taken up for testing. This type of testing is called regression testing. Test cases for defects that are not yet fixed are executed only after the defects are fixed.

11.2.3.Sanity Testing - Round III


This is the final round in the test process. It is done either at the client's site or at Maveric, depending on the strategy adopted, in order to check whether the system is sane enough for the next stage, i.e. UAT or production as the case may be, under an isolated environment. Ideally, the defects fixed in the previous phases are checked, and free-form testing is done to ensure the integrity of the system.


12. Defect Management

12.1.Defect – Definition

“Error: A human action that produces an incorrect result. “


… [IEEE]

“Fault: A manifestation of an error in software. A fault, if encountered may cause a failure. “


… BS7925-1

“Failure: Deviation of the software from its expected delivery or service. “


… BS7925-1
“A deviation from expectation that is to be tracked and resolved is termed as a defect.”

An evaluation of the defects discovered during testing provides the best indication of software quality. Quality is the indication of how well the system meets the requirements. In this context, defects are identified as any failure to meet the system requirements.

Error:
“An undesirable deviation from requirements.”

Any problem, or cause of many problems, which stops the system from performing its functionality is referred to as an error.

Bug:
Any missing functionality, or any action performed by the system which it is not supposed to perform, is a bug.

“An error found BEFORE the application goes into production.”

Any of the following may be the reason for the birth of a bug:

1. Wrong functionality
2. Missing functionality
3. Extra or unwanted functionality

Defect:
A defect is a variance from the desired attribute of a system or application.

“An error found AFTER the application goes into production.”

Defects are commonly categorized into two types:

1. Deviation from product specification
2. Variance from customer/user expectation

Failure:
The absence of an expected response to a request; any expected action that does not happen can be referred to as a failure.

Fault:
This term is generally used in hardware terminology. A fault is a problem which causes the system not to perform its task or objective.


12.2.Types of Defects

Defects detected by the tester are classified into categories by their nature. The classifications are as follows:
· Showstopper: A defect which may be very critical in terms of affecting the schedule, or which stops the user from using the system further.
· Major: A defect where functionality or data is affected significantly, but which does not cause a show-stopping condition or a block in the test process cycles.
· Minor: A defect which is isolated or does not stop the user from proceeding, but causes inconvenience. Cosmetic errors also fall in this category.

12.3.Defect Reporting

Defects or bugs detected in the application by the tester must be duly reported through an automated tool. The particulars to be filled in by a tester are:
· Defect Id: Number associated with a particular defect, and henceforth referred by its ID
· Date of execution: The date on which the test case which resulted in a defect was
executed
· Defect Category: These are explained in the next section, ideally decided by the test
leader
· Severity: As explained, it can be Major, Minor and Show-stopper
· Module ID: Module in which the defect occurred
· Status: Raised, Authorized, Deferred, Fixed, Re-raised, and Closed.
· Defect description: Description as to how the defect was found, the exact steps that
should be taken to simulate the defect, other notes and attachments if any.
· Test Case Reference No: The number of the test case and script in combination which
resulted in the defect.
· Owner: The name of the tester who executed the test cases
· Test case description: The instructions in the test cases for the step in which the error
occurred
· Expected Result: The expected result after the execution of the instructions in the test
case descriptions
· Attachments: The screen shot showing the defect should be captured and attached
· Responsibility: Identified team member of the development team for fixing the defect.
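
As an illustration, the particulars above could be captured in a record like the sketch below; the values are invented, and a real tool (ClearQuest, TestDirector, Defect Tracker) would enforce the status workflow.

    # Defect record carrying the particulars listed above (illustrative only).
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class Defect:
        defect_id: str
        executed_on: date
        category: str
        severity: str           # "Show-stopper" | "Major" | "Minor"
        module_id: str
        status: str             # Raised -> Authorized -> Fixed -> Closed ...
        description: str
        test_case_ref: str
        owner: str
        expected_result: str
        attachments: list = field(default_factory=list)
        responsibility: str = ""

    d = Defect("DEF-0042", date(2005, 8, 10), "Functional", "Major",
               "Deposits", "Raised", "Warning not shown for interest > 8%",
               "TC-DEP-017", "tester1",
               "Warning message should be displayed")
    print(d.defect_id, d.severity, d.status)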

12.4.Tools Used

Tools that are used to track and report defects are,

12.4.1.ClearQuest (CQ)

It belongs to the Rational Test Suite and is an effective tool for defect management. CQ functions on a native Access database and maintains a common database of defects. With CQ the entire defect process can be customized; for example, the process can be designed in such a manner that a defect, once raised, must be authorized and then fixed before it can attain the retesting status. Such a systematic defect flow process can be established and its history maintained. Graphs and reports can be customized, and metrics can be derived from the maintained defect repository.


12.4.2.TestDirector (TD):

TestDirector is an automated test management tool developed by Mercury Interactive. It helps to organize and manage all phases of the software testing process, including planning, creating tests, executing tests, and tracking defects. TestDirector enables us to manage user access to a project by creating a list of authorized users and assigning each user a password and a user group, so that precise control can be exercised over the kinds of additions and modifications a user can make to the project. Apart from manual test execution, the WinRunner automated test scripts of the project can also be executed directly from TestDirector: TestDirector activates WinRunner, runs the tests, and displays the results. Apart from the above, it is used:
· To report defects detected in the software.
· As a sophisticated system for tracking software defects.
· To monitor defects closely from initial detection until resolution.
· To analyze our Testing Process by means of various graphs and reports.

12.4.3.Defect Tracker

Defect Tracker is a tool developed by Maveric Systems Ltd., an independent software testing company in Chennai, for defect management. The testing team uses this tool to manage, track and report defects effectively.

12.5.Defects Meetings

Meetings are conducted at the end of every day between the test team and the development team to discuss test execution and defects. Here, defect categorization is done.

Before meeting the development team, the test team should hold internal discussions with the test lead on the defects reported. This process ensures that all defects are accurate and authentic to the best knowledge of the test team.

12.6.Defects Publishing

Defects that are authorized are published through a mutually accepted medium, such as an intranet site or email. The reports that are published are:
· Daily status report
· Summarized defect report for the individual domain/product, if any
· Final defect report

Test Down Times:

During the execution of the test, schedules prepared earlier may slip due to certain factors. Time lost due to these should be duly recorded by the test team.

· Server problems
o The test team may come across problems with the server on which the application is deployed. Possible causes for the problems are:
o The main server may have problems with the number of instances on it, slowing down the system
o The network to the main server, or the internal network, may go down


· Software compatibility between the application and any middleware may cause concerns, delaying the test start
· New versions of databases or middleware may not be fully compatible with the applications
· Improper installation of system applications may cause delays
· Interfaces with applications may not be compatible with the existing hardware setup
· Problems on the testing side / development side
· Delays can also come from the test or development teams
· Data designed may not be sufficient or compatible with the application (missing some parameters of the data)
· Maintenance of the parameters may not be sufficient for the application to function
· The version transferred for testing may not be the right one

12.7.Defect Life Cycle


13. Test Closure Process

13.1.Sign Off
Sign-off Criteria: In order to acknowledge the completion of the test process and certify the application, the following have to be completed:
· All passes have been completed
· All test cases have been executed
· All defects raised during test execution have either been closed or deferred

13.2.Authorities
The following personnel have the authority to sign off the test execution process
· Client: The owners of the application under test
· Project manager: Maveric Personnel who managed the project
· Project Lead: Maveric Personnel who managed the test process

13.3.Deliverables
The following are the deliverables to the Clients
· Test Strategy
· High Level Test Conditions or Scenarios and Test Conditions document
· Consolidated defect report
· Weekly Status report
· Traceability Matrix
· Test Acceptance/Summary Report.

13.4.Metrics

13.4.1.Defect Metrics
Analysis on the defect report is done for management and client information. These are
categorized as

13.4.2.Defect age:
Defect age is the time duration from the point of introduction of a defect to the point of its closure. This gives a fair idea of the defect set to be included in the smoke test during regression.

13.4.3.Defect Analysis:
The analysis of the defects can be done based on the severity, occurrence and category of the defects. As an example, defect density is a metric which gives the ratio of defects in a specific module to the total defects in the application. Further analysis and derivation of metrics can be done based on the various components of defect management.
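
A minimal sketch of these two defect metrics, with illustrative figures:

    # Defect age = closure date - introduction date; defect density here is
    # the ratio of a module's defects to the total defects in the application.
    from datetime import date

    introduced, closed = date(2005, 7, 1), date(2005, 7, 9)
    defect_age_days = (closed - introduced).days     # 8 days

    module_defects, total_defects = 12, 60
    defect_density = module_defects / total_defects  # 0.2

    print(f"Defect age: {defect_age_days} days, module density: {defect_density:.0%}")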


13.4.4.Test Management Metrics


Analysis on the test management is done for management and client information. These are
categorized as
· Schedule: Schedule variance is a metric determined by the ratio of the planned duration to the
actual duration of the project.
· Effort: Effort variance is a metric determined by the ratio of the planned effort to the actual
effort exercised for the project.
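
Both variances are simple ratios of planned to actual; a short sketch with illustrative figures:

    # Schedule and effort variance per the definitions above (invented figures).
    planned_duration, actual_duration = 30, 36   # days
    planned_effort, actual_effort = 120, 150     # person-days

    schedule_variance = planned_duration / actual_duration  # ~0.83
    effort_variance = planned_effort / actual_effort        # 0.80

    print(f"Schedule variance: {schedule_variance:.2f}, "
          f"effort variance: {effort_variance:.2f}")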

13.4.5.Debriefs With Test Team


Completion of a project enriches the knowledge of the team members. Both polarities of that knowledge, positive and negative, should be shared with the management and peer groups.


14. Testing Activities & Deliverables

The process of testing involves a sequence of phases, with several activities in each phase. Each phase of testing has various documents to be maintained; these track the progress of the testing activities and help in future reference.

The testing deliverables of the different phases are significant for monitoring the testing process and for process improvement. They play a significant role in:

· Identifying and prioritizing improvement areas
· Analyzing the results for variability and for current strengths and weaknesses, and indicating improvement areas
· Listing improvement areas
· Analyzing effectiveness measurements
· Exercise on test strategy preparation using the Maveric template
· Identification of test phases, test activities and dependencies

14.1.Test Initiation Phase

The various activities happening during Test Initiation phase are:

· Function Point Analysis


· Risk Analysis
· Effort Estimation
· Proposal preparation and submission.
· Discussion with Client
· Contract sign-off

14.2.Test Planning Phase

The various activities happening during Test Planning phase are:

· Team formation and Task allocation


· Application understanding
· Preparation of Clarification document
· Internal Presentation
· Client Presentation
· Assess and Prioritize risk
· Preparation of Test Schedule (Effort estimation)
· Preparation of Test strategy document.

14.3.Test Design Phase

The various activities happening during Test design phase are:

· Environment set up for Testing by the IT department


· Preparation of Test condition, Test script and Test data
· Preparation of Traceability matrix
· Preparation of Daily status and Weekly status report


· Approval of Design documents by the Client

14.4.Test Execution & Defect Management Phase

The various activities happening during Test Execution and defect Management are:
· Environment Check-up
· Test data population
· Execution and Defect management
· Comprehensive (Round 1)
· Discrepancy (Round 2)
· Sanity (Round3)
· Preparation of Daily status and Weekly status report
· Defect Analysis
· Preparation of Consolidated Defect report.

14.5.Test Closure Phase

The various activities happening during Test Closure are:

· Final Testing Checklist


· Preparation of Final Test Summary Report
· Test Deliverables
· Project De-brief
· Project Analysis Report


15. Maveric Systems Limited

15.1.Overview

Maveric Systems is an independent, specialist software testing service provider. We


significantly enhance the functionality, usability, and performance of technology solutions
deployed by our clients. We bring in a fresh perspective and rigor to testing engagements by
virtue of our dedicated focus.

Complementing our core service offering in testing is a strong capability in strategic technical
writing and risk management (verification) services.

Our forte lies in banking, financial services and insurance verticals. We have also delivered
projects in telecom, CRM, and healthcare domains. Domain expertise in chosen areas
enables us to understand client requirements quickly and completely to offer a value-added
testing service.

A core group of anchor clients have propelled us to become a key independent software
testing firm in India within a short span of three years.

· Maveric Systems is an independent software testing company


· Delivery hubs in India and UK
· 185 Professionals on Projects across Chennai, Hyderabad, Bangalore, Mumbai, Delhi,
London, Dallas, Chicago, Melbourne, Singapore and Tokyo
· Primary Focus - Banking, Financial Services and Insurance
· New Focus Areas - Telecom and Manufacturing sectors

15.2.Leadership Team

Exceptional management bandwidth is a unique strength that has fuelled Maveric's


aggressive growth. Our five founding Directors of Maveric took to entrepreneurship while
they were at the prime of their careers in business and technology consulting.

A Maveric spirit and collective vision is nurturing our unique culture, and driving us to
relentlessly deliver value to our clients.
Ranga Reddy, CEO
Mahesh VN, Director
Venkatesh P, Director
Subramanian NN, Director
Kannan Sethuraman, Principal
P K Bandyopadhyay, Manager - Testing
Rosario Regis, Manager - Testing
Hari Narayana, Manager - Testing
AP Narayanan, Manager - Technical Writing
Sajan CK, Manager - Fulfillment

15.3.Quality Policy

“We will significantly enhance the functionality, usability, and performance of IT


solutions deployed by our clients. “


15.4.Testing Process / Methodology

[Testing process flow diagram; legend: Input, Output, Key Signoff]


15.4.1.Test Initiation Phase

Input
· Signed proposal

Procedure
· Arrange internal kick-off meeting among the team members, Test Manager, Lead
- Commercial and Lead - Operations
· Distribute the baseline documents based on the individual roles defined.
· Prepare a micro level schedule indicating the roles allocated for all team members
with timelines in MPP
· Fill in the Project Details form and the Top Level Project Checklist

Output
· Minutes of meeting
· MPP to be attached to Test Strategy (Test Planning process)
· Project Details form
· Top Level Project Checklist to Test Delivery Management


15.4.2.Test Planning Phase


Input
· Baseline documents
· MPP

Procedure
· Team members understand the functionality from baseline documents
· Raise and obtain resolution of clarifications
· Internal presentation and reverse presentation to the client
· TL defines test environment and request the client for the same
· TL identifies risks associated and monitors the project level risks throughout the
project. The risks at the project delivery level are maintained by the Lead - Ops
· TL prepares Test Strategy, review is by Lead - Commercial, Lead - Ops and Test
Manager
· AM revises commercials if there is a marked difference between the Test Strategy and the Proposal
· TL prepares Configuration Management and Quality Plan, Review is by Lead -
Ops and Test Manager

Output
· Clarification document
· Test Environment Request to client
· Risk Analysis document to Test Delivery Management


15.4.3.Test Design Phase

Input
· Test Strategy

Procedure
· Team members prepare the Test Conditions, Test Cases and Test Script
· TL prepares the Test Scenarios (free form)
· Review of the above by the Lead - Ops, Test Manager
· Client review records are also maintained. Lead - Ops is responsible for sign-off
· Team members prepare Traceability matrix if agreed in Test Strategy and updated during
Test Execution with defect ids
· Team members prepare Data Guidelines whenever required
· TL sends daily status reports to clients and the Test Delivery Management team. TL
sends the weekly status reports to clients, Test Manager, delivery management team and
the Account Manager
· TL escalates any changes in baseline documents to Delivery Management team.

Output
· Test Condition/Test Case document, Test Script, Test Scenarios (free form)
· Traceability matrix to the client
· Daily and Weekly status reports to client, Test Delivery Management and Account
Management


15.4.4.Execution and Defect Management Phase

15.4.4.1.Test Execution Process

15.4.4.2.Defect Management Process


Input
· Test Conditions/ Test Cases document
· Test Scenarios document
· Traceability matrix

Procedure
· Validate test environment and deal with any issues
· Execute first rounds of testing
· Update the Test Condition/Test case document (and the Test Scripts, if prepared)
with actual result and status of test
· Log in the Defect Report, consolidate all defects and send to client, Test Manager and
delivery management team
· Conduct defect review meetings with the client (as specified in the Test Strategy)
· Consolidate the Test Conditions/Test Cases to be executed in the subsequent
round in a separate document. No review or sign-off required
· Carry out the test in the new version of the application
· Changes to baseline or scope of work escalated to Lead - Ops
· Complete rounds/stages of testing as agreed in the Test Strategy
· Send daily and weekly status reports to clients and the Test Delivery Management
team
· Escalate any changes in baseline documents to Delivery Management team.

Output
· Defect Report
· Daily and Weekly status reports to the client, Test Delivery Management and
Account Management


15.4.5.Test Closure Phase

Input
· Consolidated Defect Report

Procedure
· Team Lead in consultation with Lead - Ops decides about closure of a project
(both complete and incomplete)
· In case of incomplete testing, decisions whether to close and release deliverables
are taken by delivery management team
· The team prepares the quantitative measurements
· TL prepares the Final Testing Checklist and Lead - Ops approves the same
· TL prepares the Final Test Report; Lead - Ops and the Test Manager review the same. Internal review records and client review records are also stored.
· Test Manager, Lead - Ops, Lead - Comm., Account Manager, TL and team members
carry out the de-brief meeting. Effort variances, % compliance to schedule are
documented in Project De-brief form. If required, inputs are given to Quality Department
for process improvements

Output
· Final Testing Checklist and Final Test Report to the client
· Project De-brief to Quality Department


15.5.Test Deliverables Template

15.5.1.Project Details Form

Name of Project:
Client:
Location:

Contact at Client Location
Name:
Designation:
Email:
Phone:
Mobile:

Project In-charge (Testing)
Name:
Designation:
Email:
Phone:
Mobile:

Project In-charge (Development)
Name:
Designation:
Email:
Phone:
Mobile:

Domain of the Project:

Test Summary
Duration of Testing: From ........ To ........

Level of Testing:
White Box Testing / Black Box Testing
Manual Testing / Automation Testing

Type of Testing:
Functional Testing / Regression Testing / Performance Testing

Automation Tools, if any:
Defect tracking / Project management tools used, if any:


Application Summary
Application Overview
OS with version
Database Backend
Middleware
Front-end
Software languages used

Module Summary

Module Name Description

Testers Summary


15.5.2.Minutes of Meeting

Meeting Topic:                     Meeting Date and Time:
Host of the Meeting:               Venue of Meeting:

Participants:
Absentees:
Previous Meeting Follow-up:

Minutes of the Meeting (Detailed; attach additional sheets if required):

Corrective and Preventive Actions with Target Date:

Prepared By:                       Approved By:
Date:                              Date:


15.5.3.Top Level Project Checklist


15.5.4.Test Strategy Document

15.5.5.Configuration Management and Quality Plan


15.5.6.Test Environment Request

Project Code:
Project Name:
Application Version No:
Type of Testing:
Prepared By:
Date:
Approved By:
Date:

Description Details Version


Client side
Hardware Details like RAM, Hard Disk Capacity etc.
Operating System
Client software to be installed
Front end language, tools
Browser support
Internet connection
Automation Tools to be installed
Utility software like Toad, MS Project, etc.
Server side – Middle Tier
Hardware Platform.
No. of CPU
RAM
Hard Disk capacity
Operating System
Software
Number of Servers
Location of Server
Server side – Database
Hardware Platform.
No. of CPU
RAM
Hard Disk capacity
Operating System
Software
CPU
RAM
Hard Disk capacity
Number of Database Servers
Location of Database Servers


15.5.7.Risk Analysis Document

15.5.8.Clarification Document

15.5.9.Test condition / Test Case Document

15.5.10.Test Script Document


15.5.11.Traceability Matrix

15.5.12.Daily Status Report

Project Code:
Project Name:
Phase of the Testing Life Cycle:
Application Version No:
Round:
Report Date:

Highlights of the Day:


A1. Design Phase

Module | No. of Test Conditions Designed | No. of Test Cases Designed | Remarks

WinRunner Scripting Progress

Sl. No | SRs/Transaction | Status

A2. Execution Phase

Module | Total no. of Conds in the module | No. Executed during the day (Planned / Actual) | Total executed till date (Planned / Actual) | Remarks

B. Defect Distribution


(Columns: Show Stopper | Critical | Major | Minor | Total)

Defects Raised Today (A)
Till Yesterday (B)
Total (A + B)
Defects Closed (C)
Balance Open (A + B - C)
Fixed but to be re-tested
Rejected
C. Environment Non-availability

From Time | To Time | Time Lost | Error encountered & Root cause (if identified) | Man-hours Lost

Total

D. Other Issues / Concerns

Description of the Issue / Concern | Action Proposed | Proposed Target Date | Responsibility | Remarks

E. General Remarks:

Prepared By:
Date:
Approved By:
Date:


15.5.13.Weekly Status Report

Project Code:
Project Name:
Phase of the Testing Life Cycle:
Application Version No:
Report Date:
Report for Week:

A. Life Cycle/Process

Planned: Start Date | End Date | Man-months
Revised: Start Date | End Date
Actual: Start Date | Man-months utilized till date | Projected man-months till closure | Projected End Date
Reasons:

B. Highlights of the Week

B1. Design Phase

Module | No. of Test Conditions Designed | No. of Test Cases Designed | Remarks

WinRunner Scripting Progress

Sl. No | SRs/Transaction | Status

B2. Execution Phase

Module | Total no. of conditions in the module | No. of conditions executed during the week (Planned / Actual) | Total executed till date (Planned / Actual) | Remarks

C. Defect Distribution

(Columns: Show Stopper | Critical | Major | Minor | Total)

Open Defects
Break-up of Open defects:
Pending Clarifications
Fixed but to be re-tested
Re-raised
Being Fixed
Rejected

D. Environment Non-availability
Total man-hours lost during the week:


E. Other Issues / Concerns


Description of the Issue / Concern | Action Proposed | Proposed Target Date | Responsibility | Remarks

F. General Remarks

Prepared By:
Date:
Approved By:
Date:

15.5.14.Defect Report

Round 2 and Round 3 follow the same format as Round 1.


15.5.15.Final Test Checklist

Project Code:
Project Name:
Prepared By:
Date:
Approved By:
Date:

Check | Status (Y/N) | Remarks

Have all modules been tested?
Have all conditions been tested?
Have all rounds been completed?
Are all deliverables named according to the naming convention?
Are all increases in scope and timelines tracked in the Top Level Project Checklist, Test Strategy and design documents?
Have all agreed-upon changes been carried out (change in scope, client review comments, etc.)?
Have all defects been re-tested?
Have all defects been given closed or deferred status?
Are all deliverables ready for delivery?
Are all deliverables taken from the Configuration Management tool?
Are all soft copy deliverables checked to be virus-free?

Comments and Observations:

Final inspection result: Approved/ Rejected


15.5.16.Final Test Report

15.5.17.Project De-brief Form

Project Code:
Project Name:
Prepared By:
Date:
Approved By:
Date:

Overview of the Application:

Key Challenges faced during Design or Execution:

Lessons learnt:

Suggested Corrective Actions:


16. Q&A

16.1.General

Q1: Why does software have bugs?


Ans:
· Miscommunication or no communication - failure to understand the application requirements.
· Software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development.
· Programming errors - programmers "can" make mistakes.
· Changing requirements - a redesign, rescheduling of engineers, effects on other projects, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of keeping track of changes may result in errors.
· Time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.
· Poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented, which results in bugs.
· Software development tools - various tools often introduce their own bugs or are poorly documented, resulting in added bugs.

Q2: What does "finding a bug" consist of?


Ans:
Finding a bug consists of number of steps that are performed:
· Searching for and locating a bug
· Analyzing the exact circumstances under which the bug occurs
· Documenting the bug found
· Reporting the bug and if necessary, the error is simulated
· Testing the fixed code to verify that the bug is really fixed

Q3: What will happen about bugs that are already known?
Ans:
When a program is sent for testing (or a website given) a list of any known bugs should
accompany the program. If a bug is found, then the list will be checked to ensure that it is not
a duplicate. Any bugs not found on the list will be assumed to be new.
Q4: What's the big deal about 'requirements'?
Ans:
Requirements are the details describing an application's externally perceived functionality and properties. Requirements should be clear, documented, complete, reasonably detailed, cohesive, attainable, and testable. A non-testable requirement would be, for example, 'user-friendly' (too subjective). Without such documentation, there will be no clear-cut way to determine if a software application is performing correctly.


Q5: What can be done if requirements are changing continuously?


Ans:
It's helpful if the application's initial design allows for some adaptability, so that changes made later do not require redoing the application from scratch. To make changes easier for the developers, the code should be well commented and well documented. Use rapid prototyping whenever possible to help customers feel sure of their requirements and to minimize changes. Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. Design some flexibility into test cases (this is not easily done; the best bet might be to minimize the detail in the test cases, or set up only higher-level generic-type test plans).
Q6: When to stop testing?
Ans:
This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be performed. Common factors in deciding when to stop testing are:
· Deadlines achieved (release deadlines, testing deadlines, etc.)
· Test cases completed with certain percentage passed
· Test budget depleted
· Coverage of code/functionality/requirements reaches a specified point
· Defect rate falls below a certain level
· Beta or Alpha testing period ends
Q7: How does a client/server environment affect testing?
Ans:
Client/server applications can be quite complex due to the multiple dependencies among clients,
data communications, hardware, and servers. Thus testing requirements can be extensive. When
time is limited (as it usually is) the focus should be on integration and system testing. Additionally,
load/stress/performance testing may be useful in determining client/server application limitations
and capabilities.
Q8: Does it matter how much the software has been tested already?
Ans:
No. It is up to the tester to decide how much to test the software. An initial assessment of the software is made, and it is classified into one of three possible stability levels:
· Low stability (bugs are expected to be easy to find, indicating that the program has
not been tested or has only been very lightly tested)
· Normal stability (normal level of bugs, indicating a normal amount of programmer
testing)
· High stability (bugs are expected to be difficult to find, indicating already well tested)

Q9: How is testing affected by object-oriented designs?


Ans:
Well-engineered object-oriented design can make it easier to trace from code to internal design, to functional design, to requirements. While there will be little effect on black box testing (where an understanding of the internal design of the application is unnecessary), white-box testing can be oriented to the application's objects. If the application was well designed, this can simplify test design.
Q10: Will automated testing tools make testing easier?
Ans:
A tool set that allows controlled access to all test assets promotes better communication between all the team members, and will ultimately break down the walls that have traditionally existed between various groups.

Automated testing tools are only one part of a unique solution to achieving customer success. The complete solution is based on providing the user with the principles, tools, and services needed to efficiently develop software.
Q11: Why outsource testing?
Ans:
· Skill and expertise - developing and maintaining a team that has the expertise to thoroughly test complex and large applications is expensive and effort-intensive; testing a software application now involves a variety of skills.
· Focus - Using a dedicated and expert test team frees the development team to focus
on sharpening their core skills in design and development, in their domain areas.
· Independent assessment - Independent test team looks afresh at each test project
while bringing with them the experience of earlier test assignments, for different
clients, on multiple platforms and across different domain areas.
· Save time - Testing can go in parallel with the software development life cycle to
minimize the time needed to develop the software.
· Reduce Cost - Outsourcing testing offers the flexibility of having a large test team,
only when needed. This reduces the carrying costs and at the same time reduces the
ramp up time and costs associated with hiring and training temporary personnel.
Q12: What steps are needed to develop and run software tests?
Ans:
The following are some of the steps needed to develop and run software tests:
· Obtain requirements, functional design, and internal design specifications and other
necessary documents
· Obtain budget and schedule requirements
· Determine project-related personnel and their responsibilities, reporting requirements,
required standards and processes (such as release processes, change processes,
etc.)
· Identify application's higher-risk aspects, set priorities, and determine scope and
limitations of tests
· Determine test approaches and methods - unit, integration, functional, system, load,
usability tests, etc.
· Determine test environment requirements (hardware, software, communications, etc.)
· Determine test-ware requirements (record/playback tools, coverage analyzers, test
tracking, problem/bug tracking, etc.)
· Determine test input data requirements
· Identify tasks, those responsible for tasks, and labor requirements


· Set schedule estimates, timelines, milestones


· Determine input equivalence classes, boundary value analyses, error classes
· Prepare test plan document and have needed reviews/approvals
· Write test cases
· Have needed reviews/inspections/approvals of test cases
· Prepare test environment and test ware, obtain needed user manuals/reference
documents/configuration guides/installation guides, set up test tracking processes,
set up logging and archiving processes, set up or obtain test input data
· Obtain and install software releases
· Perform tests
· Evaluate and report results
· Track problems/bugs and fixes
· Retest as needed
· Maintain and update test plans, test cases, test environment, and test ware through the life cycle
Q13: What is a Test Strategy and Test Plan?
Ans:
A test strategy is a statement of the overall approach to testing, identifying what levels of testing are to be applied and the methods, techniques and tools to be used. A test strategy should ideally be organization-wide, applicable to all of the organization's software developments.
Developing a test strategy which efficiently meets the needs of an organization is critical to the success of software development within the organization. The application of a test strategy to a software development project should be detailed in the project's software quality plan.
The next stage of test design, which is the first stage within a software development project, is the
development of a test plan. A test plan states what the items to be tested are, at what level they
will be tested, what sequence they are to be tested in, how the test strategy will be applied to the
testing of each item, and describes the test environment.
A test plan may be project wide, or may in fact be a hierarchy of plans relating to the various levels
of specification and testing:
· An Acceptance Test Plan, describing the plan for acceptance testing of the software.
This would usually be published as a separate document, but might be published with
the system test plan as a single document.
· A System Test Plan, describing the plan for system integration and testing. This
would also usually be published as a separate document, but might be published with
the acceptance test plan.
· A Software Integration Test Plan, describing the plan for integration of tested software components. This may form part of the Architectural Design Specification.
· Unit Test Plan(s), describing the plans for testing of individual units of software.
These may form part of the Detailed Design Specifications.
The objective of each test plan is to provide a plan for verification, by testing the software, that the
software produced fulfils the requirements or design statements of the appropriate software
specification. In the case of acceptance testing and system testing, this means the Requirements
Specification.


16.2.G.E. – Interview

1. What is Software Testing?

“The process of exercising or evaluating a system or system component by manual or automated


means to verify that it satisfies specified requirements or to identify differences between expected
and actual results..."

2. What is the Purpose of Testing?

· To uncover hidden error.


· To achieve the maximum usability of the system
· To demonstrate expected performance of the system.

3. What types of testing do testers perform?

Black-box testing and white-box testing are the basic types of testing testers perform. Apart from these, they also perform a lot of tests such as:
· Ad-Hoc testing
· Cookie testing
· CET ( Customer Experience test)
· Client-Server Test
· Configuration Tests
· Compatibility testing
· Conformance Testing
· Depth Test
· Error Test
· Event-Driven
· Full Test
· Negative Test
· Parallel Testing
· Performance Testing
· Recovery testing
· Sanity Test
· Security Testing
· Smoke testing
· Web Testing

4. What is the Outcome of Testing?

A stable application, performing its task as expected.

5. What is the need for testing?

The primary need is to ensure that the requirements are satisfied by the functionality, and to answer two questions:
A. Is the system doing what it is supposed to do?
B. Is the system not doing what it is not supposed to do?

6. What are the entry criteria for Functionality and Performance testing?

Functional testing:
Functional Specification / BRS (CRS) / User Manual, and an integrated application that is stable for testing.
Performance Testing:


The same baseline documents mentioned above, plus a stable, healthy application that can
support rigorous performance testing.

7. What is test metrics?

After the actual testing, an evaluation is done on the testing to extract information
about the application's health using the outputs of testing.
A software metric is any type of measurement which relates to a software system,
process or related documentation.
E.g.: size of code and the count of bugs found in it,
number of bugs reported per day,
number of conditions/cases tested per day.
It can be
· Test Efficiency
· Total number of tests executed
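
For illustration, a minimal sketch in Python of how such metrics might be computed; all figures are made up.

# A minimal test metrics sketch; the numbers are illustrative only.
bugs_found = 18
tests_executed = 240
days_of_testing = 6

print("bugs reported per day:", bugs_found / days_of_testing)
print("tests executed per day:", tests_executed / days_of_testing)
# One common form of test efficiency: defects found per 100 tests executed.
print("test efficiency:", 100 * bugs_found / tests_executed)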

8. Why do you go for White box testing, when Black box testing is available?

Certifying the commercial (business) aspects and the functional (technical) aspects of an
application is the objective of black-box testing. Loops, structures, arrays, conditions, files,
etc. are very low-level constructs, but they are the foundation of any application, so
white-box testing takes these things up directly and tests them.

9. What are the entry criteria for Automation testing?

The application should be stable.
A clear design and flow of the application is needed.

10. When to start and Stop Testing?

If we follow the ‘Waterfall’ model, then testing can be started only after coding. If the ‘V’ model
is followed, then testing can be started at the design phase itself.
Regardless of the model, the following criteria should be considered.
To start:
When the test environment is supportive enough for testing.
When the application study gives enough confidence.
To stop:
After full coverage of the scope of testing.
After gaining enough confidence in the health of the application.

11. What is Quality

“Fitness for use”
“A journey towards excellence”
12. What is Baseline document, Can you say any two?

A baseline document is one from which the tester builds an understanding of the
application before actual testing starts. Examples:
Functional Specification
Business Requirement Document

13. What is verification?

A tester uses verification methods to ensure the system complies with an organization's
standards and processes, relying on reviews or other non-executable methods (covering
software, hardware, documentation and personnel).
“Are we building the product right?”


14. What is validation?

Validation physically ensures that the system operates according to plan by executing the
system functions through a series of tests that can be observed or evaluated.
“Are we building the right product?”

15. What is quality assurance?

A planned and systematic pattern for all actions necessary to provide adequate
confidence that the item or product conforms to established technical requirements

16. What is quality control?

Quality Control is defined as a set of activities or techniques whose purpose is to ensure


that all quality requirements are being met. In order to achieve this purpose, processes
are monitored and performance problems are solved.

17. What are SDLC and TDLC?

SDLC and TDLC (Software Development Life Cycle and Testing Development Life Cycle)
describe the flow and process that clearly picture how software development and testing,
respectively, should be done.
TDLC is an informal concept and is also referred to as TLC.

18. What are the Qualities of a Tester?

· Should be perfectionist
· Should be tactful and diplomatic
· Should be innovative and creative
· Should be relentless
· Should possess negative thinking with good judgment skills
· Should possess the attitude to break the system

19.What are the various levels of testing?


· Unit Testing
· Integration testing
· System Testing
· User Acceptance Testing

20. Tell names of some testing type which you learnt or experienced?
Mentioning five or six types related to the company's profile is good in an interview:
· Ad - Hoc testing
· Cookie Testing
· CET (Customer Experience Test)
· Client-Server Test
· Configuration Tests
· Compatibility testing
· Conformance Testing
· Depth Test
· Error Test
· Event-Driven
· Full Test
· Negative Test
· Parallel Testing


· Performance Testing
· Recovery testing
· Sanity Test
· Security Testing
· Smoke testing
· Web Testing

21. What exactly is Heuristic checklist approach for unit testing?

It is a method in which the most appropriate of several solutions found by alternative
methods is selected at successive stages of testing. The checklist prepared to guide this
process is called a heuristic checklist.

22. After completing testing, what would you deliver to the client?

Test deliverables, namely:
· Test plan
· Test Data
· Test design Documents (Condition/Cases)
· Defect Reports
· Test Closure Documents
· Test Metrics

23. What is a Test Bed?

The elements that support the testing activity before actual testing starts, such as test data
and data guidelines, are collectively called the test bed.

24. What is a Data Guideline?

Data guidelines are used to specify the data required to populate the test bed and
prepare test scripts. They include all data parameters that are required to test the conditions
derived from the requirement / specification.
The documents which support preparing test data are called data guidelines.

25. Why do you go for Test Bed?

When a test condition is executed, its actual result should be compared to the expected
result. Test data is needed for this, and the test bed is where the test data is made ready.

26. What is Severity and Priority and who will decide what?

Severity:
How much the bug found is supposed to affect the system's function/performance.
Usually divided into Emergency, High, Medium, and Low. Severity is rated by the tester.
Priority:
Which bug should be solved first for the benefit of the system's health. It normally runs
from Emergency as first priority down to Low as last priority. Priority is mostly decided by the developer.

27. Can Automation testing replace manual testing? If it so, how?

Automated testing can never replace manual Testing.


These tools follow the GIGO (garbage in, garbage out) principle and lack creativity and
innovative thinking.
But automation:
1. Speeds up the process and follows a clear process which can be reviewed easily.
2. Is better suited for regression testing of a manually tested application and for
performance testing.

28. What is a test case?

A test case gives values / qualifiers to the attributes that the test condition can have.
Test cases, typically, are dependent on data / standards.
A test case is the end state of a test condition, i.e., it cannot be decomposed or
broken down further. Test case design techniques for black-box testing (a worked sketch follows the list):

· Decision table
· Equivalence Partitioning Method
· Boundary Value Analysis
· Cause Effect Graphing
· State Transition Testing
· Syntax Testing
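
For illustration, a minimal boundary value analysis sketch in Python. The field under test is hypothetical: an age field assumed to accept values 18 to 60 inclusive; the test values are the boundaries plus one increment either side.

# A minimal boundary value analysis sketch (the field limits are hypothetical).
def accepts_age(age):
    """Unit under test: valid ages are 18 to 60 inclusive (assumed rule)."""
    return 18 <= age <= 60

LOW, HIGH = 18, 60
# Boundary values: each boundary plus an incremental distance either side.
cases = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]
expected = [False, True, True, True, True, False]

for value, want in zip(cases, expected):
    got = accepts_age(value)
    print(f"age={value}: got={got}, expected={want}")
    assert got == want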

29. What is a test condition?

A test condition is derived from a requirement or specification. It includes all possible
combinations and validations that can be attributed to that requirement/specification.

30. What is the test script?

A Test Script contains the Navigation Steps, Instructions, Data and Expected Results
required to execute the test case(s).
Any test script should show how to drive or navigate through the application, even for a new
user.

31. What is the test data?

The values which are given at expected places (fields) in a system to verify its functionality,
made ready in a piece of document, are called test data.

32. What is an Inconsistent bug?

A bug which does not occur in a definable pattern and cannot be reliably reproduced, even
when a fixed process is followed. It may or may not occur when tested with the same scenario.

33. What is the difference between Re-testing and Regression testing?

Retesting: checking a particular bug and its dependencies after it is said to be fixed.
Regression testing: checking the effect of added or new functionality on the existing
system.

34. What are the different types of testing techniques?

· White box
· Black box
· Gray Box

35. What are the different types of test case techniques?


Test case design techniques for black-box testing (an equivalence partitioning sketch follows the list):


· Decision table
· Equivalence Partitioning Method
· Boundary Value Analysis
· Cause Effect Graphing
· State Transition Testing
· Syntax Testing
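
To illustrate equivalence partitioning, a minimal Python sketch using the same hypothetical age rule (18 to 60 valid): one representative value is drawn from each equivalence class instead of testing every value.

# A minimal equivalence partitioning sketch (the partitions are hypothetical).
def accepts_age(age):
    """Unit under test: valid ages are 18 to 60 inclusive (assumed rule)."""
    return 18 <= age <= 60

# One representative per equivalence class of the input domain.
partitions = {
    "below valid range (< 18)": (10, False),
    "valid range (18-60)":      (35, True),
    "above valid range (> 60)": (75, False),
}

for name, (value, want) in partitions.items():
    got = accepts_age(value)
    print(f"{name}: age={value}, got={got}, expected={want}")
    assert got == want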

36. What are the risks involved in testing?

· Resource Risk (A. Human Resource B. Hardware resource C. Software resource)


· Technical risk
· Commercial Risk

37. Differentiate Test bed and Test Environment?

The test bed holds only the testing documents which support testing, including test data,
data guidelines, etc.
The test environment includes all supportive elements, namely hardware, software, tools,
browsers, servers, etc.

38. What is the difference between defect, error, bug, failure, fault?

Error:
“An undesirable deviation from requirements.”
Any problem, or cause of many problems, which stops the system from performing its
functionality is referred to as an error.
Bug:
Any missing functionality, or any action performed by the system which it is not
supposed to perform, is a bug.
“An error found BEFORE the application goes into production.”
Any of the following may be the reason for the birth of a bug:
1. Wrong functionality
2. Missing functionality
3. Extra or unwanted functionality
Defect:
A defect is a variance from the desired attribute of a system or application.
“An error found AFTER the application goes into production.”
Defects are commonly categorized into two types:
1. Deviation from product specification
2. Variance from customer/user expectation.
Failure:
Any expected action that fails to happen is referred to as a failure; in other words, the
absence of the expected response to a request.
Fault:
Generally used in hardware terminology: a problem which causes the system not
to perform its task or objective.

39. What is the difference between quality and testing?

“Quality gives more comfort for the user to use the system with all its expected
characteristics.” It is usually described as a journey towards excellence.
Testing is an activity done to achieve quality.


40. What is the difference between White & Black Box Testing?

White box: Structural tests verify the structure of the software itself and require complete
access to the object's source code. This is known as ‘white box’ testing because you see
into the internal workings of the code.
Black Box: Functional tests examine the observable behavior of software as evidenced by
its outputs without reference to internal functions. Hence ‘black box’ testing. If the
program consistently provides the desired features with acceptable performance, then
specific source code features are irrelevant. It's a pragmatic and down-to-earth
assessment of software.

41. What is the difference between Quality Assurance and Quality Control?

QA: study of the process followed in project development.
QC: study of the project itself, for its function and specification.

42. What is the difference between Testing and debugging?

Testing is done to find bugs.
Debugging is the art of fixing bugs.
Both are done to achieve quality.

43. What is the difference between bug and defect?

Bug:
Any missing functionality, or any action performed by the system which it is not
supposed to perform, is a bug.
“An error found BEFORE the application goes into production.”
Any of the following may be the reason for the birth of a bug:
1. Wrong functionality
2. Missing functionality
3. Extra or unwanted functionality
Defect:
A defect is a variance from the desired attribute of a system or application.
“An error found AFTER the application goes into production.”
Defects are commonly categorized into two types:
1. Deviation from product specification
2. Variance from customer/user expectation

44. What is the difference between verification and validation?

Verification:
The process of determining whether or not the products of a given phase of the software
development cycle meet the implementation steps and can be traced to the incoming
objectives established during the previous phase. The techniques for verification are
inspection and reviewing.
In other words, verification asks:
“Are we building the product right?”
A tester uses verification methods to ensure the system complies with an organization's
standards and processes, relying on reviews or other non-executable methods (covering
software, hardware, documentation and personnel).
Validation:
The process of evaluating software at the end of the software development process to
ensure compliance with the software requirements. The techniques for validation are testing,
inspection and reviewing.
In other words, validation asks:
“Are we building the right product?”


Validation physically ensures that the system operates according to plan by executing the
system functions through a series of tests that can be observed or evaluated.

45. What is the difference between functional spec? And Business requirement Specification?

A functional specification is more technical: it holds the properties of a field and its
functionality dependencies, e.g. size, type of data (numeric or alphabetic), etc.
A business requirement specification is more business-oriented: it throws light on the
needs or requirements of the business.

46. What is the difference between unit testing and integration testing?

(Diagram: coding and debugging of an individual unit leads to unit testing; unit-tested
modules then go on to integration testing.)

Unit Testing: testing of a single unit of code, module or program. It is usually done by the
developer of the unit. It validates that the software performs as designed. The deliverable of
unit testing is a software unit ready for testing with other system components.
Integration Testing: testing of related programs, modules or units of code. It validates that
multiple parts of the system perform as per specification.
The deliverable of integration testing is parts of the system ready for testing with other
portions of the system.
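
As a small illustration, a unit test sketch in Python using the standard unittest module; the function under test and its rule are hypothetical.

# A minimal unit test sketch (the unit under test is hypothetical).
import unittest

def apply_discount(price, pct):
    """Unit under test: reduce price by pct percent (assumed rule)."""
    if not 0 <= pct <= 100:
        raise ValueError("pct must be between 0 and 100")
    return price - price * pct / 100

class ApplyDiscountTest(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.0, 0), 99.0)

    def test_invalid_percentage_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()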

47. What is the difference between Volume & Load?

Testing Type   Data                                         User
Load           Constant                                     Increased till saturation point is reached
Volume         Increased till saturation point is reached   Constant

48. What is difference between Volume & Stress?

Volume testing is increasing the volume of data to the maximum withstand capacity of the
system.
Stress testing is the combination of both volume and load: not only the volume but also the
number of users is increased. The objective is to check up to which extent the system can
bear the increasing load and volume.


49. What is the difference between Stress & Load Testing?

Stress testing is the combination of both volume and load: not only the volume but also the
number of users is increased, the objective being to check up to which extent the system
can bear the increasing load and volume.
Load testing is increasing the number of users to the maximum withstand capacity of the system.

50. What is the difference between Client Server & Web Based Testing?

Client-server testing needs a client-server environment, that is, one system to request and
another to respond to its requests.
Web-based testing normally deals with WWW (web) sites, and is done to check their
stability and functionality when they go online.

51. What is the difference between Integration & System Testing?

Integration testing verifies the interfaces and the interaction between combined units or
modules, while system testing tests the completely integrated system as a whole to verify
that it meets the specified requirements.

52. What is the Difference between Code Walkthrough & Code Review?

Both are almost the same, except for one issue:
A walkthrough need not be done by people inside the team or by those who have more
knowledge about the system.
A review is highly recommended to be done by people of a higher level in the team, or by
those who have good knowledge about the application.

53. What is the difference between walkthrough and inspection?

Walkthrough:
In a walkthrough session, the material being examined is presented by the author (reviewee)
and evaluated by a team of reviewers.
A walkthrough is generally carried out without any plan or preparation. The aim of this
review is to enhance the process carried out in the production environment.
Inspections:
Design and code inspection was first described by Fagan.
There are three separate inspections performed:
· Following design, but prior to implementation.
· Following implementation, but prior to unit testing.
· Finally, inspecting unit testing itself; this was not considered to be cost-effective in
discovering errors.

54. What is the Difference between SIT & IST?

· SIT can be done while the system is still in the process of being integrated.
· IST needs an integrated system of various unit-level pieces of independent functionality;
it checks their workability after integration and compares it with the behaviour before integration.

55. What is the Difference between static and dynamic?

· Static testing: performed without executing the application; done based on structures,
algorithms, logic, etc.
· Dynamic testing: performed on a system that responds to specific requests; above all,
this testing cannot be done without executing the application.

56. What is the difference between alpha testing and beta testing?


Component             Alpha testing              Beta testing
Test data             Simulated                  Live
Test environment      Controlled                 Uncontrolled
To achieve            Functionality              User needs
Tested by             Only testers               Testers and end users
Supporting document   Functional Specification   Customer Requirement Specification

57. What are the Minimum requirements to start testing?

· Baseline Documents.
· Stable application.
· Enough hardware and software support (e.g. browsers, servers, and tools)
· Optimum maintenance of resources

58. What is Smoke Testing & when it will be done?

A quick-and-dirty test that the major functions of a piece of software work without
bothering with finer details. Originated in the hardware testing practice of turning on a new
piece of hardware for the first time and considering it a success if it does not catch on fire.

59. What is Ad hoc testing? When it can be done?

Ad hoc testing is appropriate, and very often indicated, when the tester wants to become
familiar with the product, or in an environment where the technical/testing materials are not
100% complete. It is largely based on a general understanding of software product
functionality and testing, and on normal human common sense.
It can be performed even when baseline documents are not available.

60. What is cookie testing?

A cookie is a text file normally written by web applications to store your login id,
password validation and details about your session. Cookies get stored on the client
machine. Cookie testing mainly verifies whether cookies are being written correctly.
Importance of cookie testing:
· To evaluate the performance of a web application
· To assure the health of a WWW application where many cookies are involved
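
A minimal cookie-check sketch in Python; the login URL is hypothetical, and the attribute checks shown are illustrative of what a cookie test verifies.

# A minimal cookie testing sketch (the URL and expectations are hypothetical).
import http.cookiejar
import urllib.request

jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
opener.open("https://example.com/login")  # assumed application login page

for cookie in jar:
    # Verify the application wrote the cookie with the expected attributes.
    print(cookie.name, cookie.domain, cookie.expires, cookie.secure)
    assert cookie.secure, f"{cookie.name} should only be sent over HTTPS"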

61. What is security testing?

· To test how the system can defend itself from external attacks.
· To test how much it can withstand attempts to break it or stop it from performing its
assigned task.
Many critical software applications and services have integrated security measures
against malicious attacks. The purpose of security testing of these systems includes
identifying and removing software flaws that may potentially lead to security violations,
and validating the effectiveness of security measures. Simulated security attacks can be
performed to find vulnerabilities.

62. What is database testing?

To demonstrate the back-end response to front-end requests.


Database testing verifies how the back end, which stores the data and retrieves it back,
supports the front end when needed.
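
A minimal back-end check sketch in Python; the SQLite file, table and stored values are all hypothetical.

# A minimal database testing sketch (file, table and values are hypothetical).
import sqlite3

conn = sqlite3.connect("app.db")  # assumed application database
row = conn.execute(
    "SELECT name, status FROM customer WHERE id = ?", (42,)
).fetchone()
conn.close()

# Compare what the front end displayed with what the back end stored.
assert row == ("Alice", "ACTIVE"), f"back end returned unexpected data: {row}"
print("front-end entry matches the stored record")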

63. What is the relationship between Quality & Testing?

Quality is a journey towards excellence; Testing is the way of achieving quality.

64. How do you determine, what to be tested?

The scope of testing should be created based on the requirements or needs given by the
end user or client; based on these, what is to be tested is decided.

65. How do you go about testing a project?

· System study
· Understanding the application
· Test environment setup

66. What is the Initial Stage of testing?

Testing starts right from understanding the application and clarifying its ambiguities, and
continues through test initiation, which encloses the test process, test data and data
guideline preparation, and test design, which is finally executed.

67. What is Web Based Application Testing?

Web-based testing normally deals with WWW (web) sites, and is done to check their
stability and functionality when they go online.

68. What is Client Server Application Testing?

Client-server testing needs a client-server environment, that is, one system to request and
another to respond to its requests.

69. What is the use of Functional Specification?

The functional specification is a baseline document prepared from a technical perspective;
it says how the system should behave in the ideal scenario, covering everything from
syntax to functionality and dependencies.
E.g., for password and user id fields:
each should accept <n> number of characters of <type> of data, get input from
<x> and give output to <y>.

70. Why do we prepare test condition, test cases, test script (Before Starting Testing)?

These are the test design documents used to execute the actual testing, without which
execution of testing is impossible. Finally, this execution is going to find the bugs to be
fixed, so we have to prepare these documents.

71. Is it not waste of time in preparing the test condition, test case & Test Script?

No document prepared in any process is a waste of time. Least of all the test design
documents, which play a vital role in test execution; they can never be called a waste of
time, as proper testing cannot be done without them.

72. How do you go about testing of Web Application?


When approaching web application testing, the first attack on the application should be on
its performance behaviour, as that is very important for a web application, and then on the
transfer of data between the web server, front-end server, security server and back-end server.

73. How do you go about testing of Client Server Application?

To approach a client-server environment, we can track the data transfer back and forth,
check the compatibility, verify the individual behaviour of client and server, and then
compare them working together.

74. What is meant by Static Testing?

The structure of a program, program logic, condition coverage, code coverage, etc. can be
tested. It is analysis of a program carried out without executing the program.

75. Can the static testing be done for both Web & Client Server Application?

Yes, it can be done regardless of the type of application, but it depends on the
application's individual structure and behaviour.

76. In the Static Testing, what all can be tested?

· Functions,
· Conditions
· Loops
· Arrays
· Structures

77. Can test condition, test case & test script help you in performing the static testing?

Static testing is done based on functions, conditions, loops, arrays and structures,
so these documents are hardly needed; static testing can be done without them.

78. What does dynamic testing mean?

Testing a dynamic application, i.e. a system that responds to user requests, by
executing it is called dynamic testing.

79. Is the dynamic testing a functional testing?

Yes. Regardless of static or dynamic, if the application's functionalities are attacked with
the aim of verifying that the needs are met, then it comes under functional testing.

80. Is the Static testing a functional testing?

Yes. Regardless of static or dynamic, if the application's functionalities are attacked with
the aim of verifying that the needs are met, then it comes under functional testing.

81. What is the functional testing you perform?

I have done conformance testing, regression testing, workability testing, function
validation and field-level validation testing.

82. What is meant by Alpha Testing?


Alpha testing is testing of a product or system at the developer's site by the customer.

83. What kind of Document you need for going for a Functional testing?

The functional specification is the ultimate document; it expresses all the functionalities of
the application. Other documents, like the user manual and BRS, are also needed for
functional testing.
A gap analysis document will add value in understanding the expected and existing system.

84. What is meant by Beta Testing?

Beta testing is user acceptance testing done with the objective of meeting all user needs;
usually users or testers will be involved in performing it.
E.g.: a completed product is given to customers for trial as a beta version, and feedback
from users and important suggestions which will add quality are acted on before release.

85. At what stage the unit testing has to be done?

Unit testing can be done after completing the coding of individual functionalities.
Say, for example, an application has 5 functionalities that work together: if they have been
developed individually, then unit testing can be carried out on each before their intended
integration.
Who can perform the unit testing?
Both developers and testers can perform unit-level testing.

86. When will the Verification & Validation be done?


How verification and validation are used across the software development phases:

Software development phase: Requirements gathering
· Verification: verify completeness of requirements; verify vendor capability, if applicable.
· Validation: not usable at this phase.

Software development phase: Project planning
· Verification: verify completeness of the project test plan.
· Validation: not usable at this phase.

Software development phase: Project implementation
· Verification: verify correctness and completeness of interim deliverables; verify the
contingency plan.
· Validation: validate correctness of changes; validate regression; validate that changes
meet user acceptance criteria; validate the supplier's software process; validate
software interfaces.

86A. What is the testing that a tester performs at the end of Unit Testing?

Integration testing will be performed after unit testing to ensure that unit tested modules
get integrated correctly.

87. What are the things, you prefer & Prepare before starting Testing?

Study the application; understand the application's expected functionalities;
prepare the test plan, the ambiguity/clarification document and the test design documents.

88. What is Integration Testing?

Integration testing exercises several units that have been combined to form a module,
subsystem, or system. Integration testing focuses on the interfaces between units, to
make sure the units work together. The nature of this phase is certainly 'white box', as we
must have certain knowledge of the units to recognize if we have been successful in
fusing them together in the module.

89. What is Incremental Integration Testing?

Incremental integration testing is an approach where we integrate the modules top to
bottom, or on an incrementing scale of intensity.

90. What is meant by System Testing?

The system test phase begins once modules are integrated enough to perform tests in a
whole system environment. System testing can occur in parallel with integration test,
especially with the top-down method.

91. What is meant by SIT?

System Integration Testing is done after the completion of unit-level testing; the application
is integrated together after assuring the individual units' functionality.

92. When do you go for Integration Testing?

When every separate unit in unit-level testing is assured to perform well, the application is
recommended for integration; after these units get integrated, integration testing can be
performed on the application.


93. Can the System testing be done at any stage?

No. The system as a whole can be tested only if all modules are integrated and all
modules work correctly.
System testing should be done after unit and integration testing and before UAT
(user acceptance testing).

94. What are stubs & drivers?

Driver programs provide emerging low-level modules with simulated inputs and the
necessary resources to function. Drivers are important for bottom-up testing, where you
have a complete low-level module, but nothing to test it with.
Stubs simulate sub-programs or modules while testing higher-level routines.
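
As a small illustration, a sketch in Python of a driver exercising a unit whose lower-level dependency is replaced by a stub; all names and values are hypothetical.

# A minimal stub-and-driver sketch (all names are hypothetical).

def tax_rate_stub(country):
    """Stub: stands in for a real tax service that is not yet built."""
    return 0.20  # canned answer, enough to exercise the unit under test

def compute_invoice(amount, tax_lookup):
    """Unit under test: add tax obtained from a lower-level module."""
    return amount * (1 + tax_lookup("GB"))

def driver():
    """Driver: feeds the unit simulated inputs and checks its outputs."""
    result = compute_invoice(100.0, tax_rate_stub)
    assert abs(result - 120.0) < 1e-9, result
    print("unit produced the expected invoice total:", result)

driver()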

95. What is the Concept of Up-Down & Down-Up in Testing in integration testing?

There are two approaches to testing an application: if the functionality sequence is mapped
and tracked from top to bottom, it is called the top-down method; when that is done for
integration testing, it is the top-down model of integration testing, and vice versa for the
bottom-up model.

96. What is the final Stage of Integration Testing?

All the individual units are integrated together and perform a task as a system, or part of
the system, as they are expected to do.

97. Where in the SDLC, the Testing Starts?

It depends upon the software model we follow. If we use the Waterfall model, then
testing comes into the picture only after coding is done. If we follow the V model, then testing
can be started at the design phase itself: UAT test cases can be written from the URS/BRS
and system test cases can be written from the SRS.

98. What is the Outcome of Integration Testing?

At the completion of integration testing, all the unit-level functionalities or sub-modules are
integrated together, and finally the whole should work as a system, as expected.

99. What is meant by GUI Testing?

Testing the front-end user interfaces of applications which use GUI support systems and
standards such as MS Windows.

100. What is meant by Back-End Testing?

Back-end testing is also called database testing: checking whether database elements are
accessed by the front end whenever required, as desired.

101. What are the features, you take care in Prototype testing?

Prototype testing is carrying out testing with the same method repeatedly to understand
the system's behaviour; full coverage of functionality should be taken care of, with the
same process followed throughout the prototype testing.

102. What is Mutation testing & when can it be done?


Mutation testing is a powerful fault-based testing technique for unit level testing. Since it is
a fault-based testing technique, it is aimed at testing and uncovering some specific kinds
of faults, namely simple syntactic changes to a program. Mutation testing is based on two
assumptions: the competent programmer hypothesis and the coupling effect. The
competent programmer hypothesis assumes that competent programmers tend to write
nearly "correct" programs. The coupling effect stated that a set of test data that can
uncover all simple faults in a program is also capable of detecting more complex faults.
Mutation testing injects faults into code to determine optimal test inputs
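
A minimal sketch of the idea in Python: one simple syntactic change is injected as a mutant, and the test data is adequate only if some test distinguishes the mutant from the original (kills it). The unit, the mutant and the test values are all hypothetical.

# A minimal mutation testing sketch (the unit and tests are hypothetical).

def discount(price, pct):
    """Original unit: reduce price by pct percent."""
    return price - price * pct / 100

def discount_mutant(price, pct):
    """Mutant: the '-' operator has been changed to '+'."""
    return price + price * pct / 100

tests = [(100.0, 10), (200.0, 0)]
# The mutant is 'killed' if any test produces a different result.
killed = any(discount(p, d) != discount_mutant(p, d) for p, d in tests)
print("mutant killed" if killed else "test data too weak to kill this mutant")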

103. What is Compatibility Testing?

Testing to ensure compatibility of an application with different browsers, Operating


Systems, and hardware platforms. Compatibility testing can be performed manually or
can be driven by an automated functional or regression test suite.

104. What is Usability Testing?

Usability testing is a core skill because it is the principal means of finding out whether a
system meets its intended purpose. All other skills that we deploy or cultivate aim to make
usability (and, ultimately, use) successful.
It is the process of testing the effectiveness, efficiency, and satisfaction with which
specified users can achieve specified goals in the application. Synonymous with "ease
of use".

105. What is the Importance of testing?

Software testing is oriented to detecting defects, often equated to finding bugs. Testing is
mainly done to make things go wrong: to determine whether things happen when they
shouldn't, or don't happen when they should. Testing can demonstrate that the product
performs each intended function, but it can only show the presence of defects, never
prove the product free from them.

106. What is meant by regression Testing?

Regression testing is an expensive but necessary activity performed on modified software
to provide confidence that changes are correct and do not adversely affect other system
components. Four things can happen when a developer attempts to fix a bug; three of
these things are bad, and one is good (see the table below):

Bug fix               New Bug   No New Bug
Successful change     Bad       Good
Unsuccessful change   Bad       Bad
Because of the high probability that one of the bad outcomes will result from a change to
the system, it is necessary to do regression testing.

107. When we prefer Regression & what are the stages where we go for Regression Testing?

We prefer regression testing to provide confidence that changes are correct and have not
affected the flow or functionality of an application in which modifications were made or
bugs were fixed.


Stages where we go for regression testing:
· Minimization approaches seek to satisfy structural coverage criteria by identifying
a minimal set of tests that must be re-run to establish whether the application
works fine.
· Coverage approaches are also based on coverage criteria, but do not require
minimization of the test set. Instead, they seek to select all tests that exercise
changed or affected program components.
· Safe approaches instead attempt to select every test that will cause the modified
program to produce different output than the original program.

108. What is performance testing?

An important phase of the system test is the often-called load, volume or performance test;
stress tests try to determine the failure point of a system under extreme pressure. Stress
tests are most useful when systems are being scaled up to larger environments or being
implemented for the first time. Web sites, like any other large-scale system that requires
multiple accesses and processing, contain vulnerable nodes that should be tested before
deployment. Unfortunately, most stress testing can only simulate loads on various points
of the system and cannot truly stress the entire network as the users would experience it.
Fortunately, once stress and load factors have been successfully overcome, it is only
necessary to stress test again if major changes take place.
A drawback of performance testing is that it can easily confirm that the system can handle
heavy loads, but cannot so easily determine whether the system is producing the correct
information. In other words, processing incorrect transactions at high speed can cause
much more damage and liability than simply stopping or slowing the processing of correct
transactions.
Performance testing can be applied to understand your application or WWW site's
scalability, or to benchmark the performance in an environment of third party products
such as servers and middleware for potential purchase. This sort of testing is particularly
useful to identify performance bottlenecks in high use applications. Performance testing
generally involves an automated test suite as this allows easy simulation of a variety of
normal, peak, and exceptional load conditions.
The following three types highly influence the performance of an application:
load testing, volume testing and stress testing.
Stress testing is the combination of both load and volume.
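
A minimal load-test sketch in Python; the local endpoint is hypothetical. It steps up the number of concurrent users and reports the average response time at each step.

# A minimal load testing sketch (the endpoint is hypothetical).
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/health"  # assumed application endpoint

def one_request(_):
    start = time.perf_counter()
    urllib.request.urlopen(URL).read()
    return time.perf_counter() - start

for users in (1, 5, 10, 25):  # step up the number of concurrent users
    with ThreadPoolExecutor(max_workers=users) as pool:
        timings = list(pool.map(one_request, range(users)))
    print(f"{users:3d} users -> average response {sum(timings)/len(timings):.3f}s")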

109. What is the Performance testing; those can be done Manually & Automatically?

This sort of testing is particularly useful for identifying performance bottlenecks in high-use
applications. Performance testing generally involves an automated test suite, as this
allows easy simulation of a variety of normal, peak, and exceptional load conditions.
Manually: load, stress and volume testing can be done manually on a small scale.
Automated: load, stress and volume testing are usually done with automated tools and
skills.

110. What is Volume, Stress & Load Testing?

Volume testing: testing the application under increasing volumes of data, keeping the
number of users constant, thereby finding the response time and the system's withstanding
capability, varying the data volume till the saturation point is reached.
Load testing: testing the application with a constant volume of data, varying the number
of users, thereby finding the response time and the system's withstanding capability,
varying the users till the saturation point is reached.
Stress testing: testing the application varying both the data volume and the number of
users simultaneously, thereby finding the response time and the system's withstanding
capability till the saturation point is reached.


Testing Type   Data                                         User
Load           Constant                                     Increased till saturation point is reached
Volume         Increased till saturation point is reached   Constant
Stress         Increased till saturation point is reached   Increased till saturation point is reached

111. What is a Bug?

Bug: “An error found BEFORE the application goes into production.”

112. What is a Defect?

Defect: “An error found AFTER the application goes into production.”

113. What is the defect Life Cycle?

· The test team finds and reports the defect (defect status: Open).
· The test lead authorizes the bugs found (status: Open).
· The development team reviews the defect (status: Open).
· The defect can be authorized or unauthorized by the development team (status:
Open for authorized defects, Rejected for unauthorized defects).
· The development team fixes the defect: authorized bugs get fixed or deferred
(status: Fixed for fixed bugs, Deferred for deferred bugs).
· The fixed bugs are retested by the test team: based on the closure of the bug, the
status is set to Closed; if the defect still remains, it is re-raised, and any new bugs
found are sent to the development team with status Open.
The above cycle flows on continuously until all the bugs get fixed in the
application.
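
The status flow above can be pictured as a small state machine; here is a sketch in Python, with illustrative status names and transitions.

# A minimal defect life cycle sketch (status names are illustrative).
ALLOWED = {
    "Open":      {"Fixed", "Deferred", "Rejected"},
    "Fixed":     {"Closed", "Re-raised"},
    "Re-raised": {"Fixed", "Deferred"},
    "Deferred":  {"Fixed"},
}

def move(status, new_status):
    """Advance a defect to a new status, rejecting illegal transitions."""
    if new_status not in ALLOWED.get(status, set()):
        raise ValueError(f"illegal transition: {status} -> {new_status}")
    return new_status

status = "Open"                  # test team reports the defect
status = move(status, "Fixed")   # development team fixes it
status = move(status, "Closed")  # test team retests and closes it
print("final status:", status)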

114. What is the Priority in fixing the Bugs?

Priority: a value given to the bugs by both testers and developers (but mostly the
development team takes care of this). It mainly concerns what importance the developer
should give each bug, i.e. the critical bugs should be solved first, and then the major bugs
can be taken care of.

115. Explain the Severity you rate for the bugs found?


· Emergency
· High (Very High & high)
· Medium
· Low (Very Low & Low)

Testers rate severity; it is based on the defect found in the application. Severity can
be rated as Critical, Major or Minor, mostly according to the nature of the defect
found in the application.

E.g.: the user is not able to proceed, or the system crashes, so that the tester is not able
to proceed with further testing (such bugs are rated Critical).

E.g.: the user adds a record, then tries to view the same record, and the details displayed
in the fields are not the same values the user provided (such bugs are rated Major).

E.g.: mostly the field-level validation (FLV) bugs and some functional bugs (related to
value display etc.) are rated Minor.

116. Difference between UAT & IST?

UAT:
1. Done using the BRD
2. Done with live data
3. Testing is done in user style
4. Testing is done at the client's place
5. Testing is done by the real users or some third-party testers

IST:
1. Done using the FS
2. Done with simulated data
3. Testing is done in a controlled way
4. Testing is done offsite
5. Testing is done in the testers' company

117. What is meant by UAT?

Traditionally, this is where the users ‘get their first crack’ at the software. Unfortunately, by
this time, it's usually too late. If the users have not seen prototypes, been involved with the
design, and understood the evolution of the system, they are inevitably going to be
unhappy with the result. If you can perform every test as user acceptance tests, you have
a much better chance of a successful project
User acceptance testing is done to verify the following:
· User-specified requirements have been satisfied
· Functionality is as per the supporting documents
· Expected performance has been achieved
· The end user is comfortable using the application.

118. What all are the requirements needed for UAT?

· The Business Requirement Document is needed for the testers to perform UAT.


· The application should be stable (i.e. all the modules should have been tested at least
once after integrating the modules).

119. What are the docs required for Performance Testing?

A benchmark is the basic document required for performance testing. The document
details the response time, transaction time, data transfer time and virtual memory within
which the application should work.

120. What is risk analysis?

Risk analysis is a series of steps that helps the software or testing team to understand and
manage uncertainty. It is a process of evaluating risks, threats, controls and vulnerabilities.

Threat: something capable of exploiting a vulnerability in the security of a computer
system or application.

Vulnerability: a design, implementation or operations flaw that may be exploited by a
threat.

Control: anything that tends to cause the reduction of risk.

121. How to do risk management?

Risk management is done by identifying the risks involved in the project and finding
mitigation for the risks found. Risk mitigation is a solution for the risk identified.

122. What are test closure documents?

· Test Conditions
· Test Case
· Test Plan
· Test Strategy
· Traceability Matrix
· Defect Reports
· Test Closure Document
· Test Data
(The above-mentioned deliverables are based on the deliverables accepted by the
testing team and mentioned in the test strategy.)

123. What is Traceability matrix?

Traceability Matrix:
Throughout the testing life cycle of the project, a traceability matrix is maintained to
ensure that the verification and validation of the testing is complete.
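
An illustrative fragment of such a matrix (all IDs are hypothetical):

Req ID   Test Condition   Test Case(s)       Defect(s)   Status
BR-001   TC-01            TC-01-A, TC-01-B   D-017       Closed
BR-002   TC-02            TC-02-A            -           Passed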

124. What ways can be followed for defect management?

· Reporting the Bugs through the Defect Report (Excel Template)


· Any in-house tool inbuilt in the company may also be used.
· Commonly available tools like TEST DIRECTOR can also be employed


17. Glossary
Testing

“The process of exercising software to verify that it satisfies specified requirements and
to detect errors “

Quality Assurance

“A planned and systematic pattern for all actions necessary to provide adequate
confidence that the item or product conforms to established technical requirements”

Quality Control

“QC is a process by which product quality is compared with applicable standards, and the
action taken when nonconformance is detected.”

Verification

“The process of evaluating a system or component to determine whether the products of
the given development phase satisfy the conditions imposed at the start of that phase.”

Validation

Determination of the correctness of the products of software development with respect to
the user needs and requirements.

Static Testing Techniques

“Analysis of a program carried out without executing the program.”

Review - Definition

Review is a process or meeting during which a work product, or set of work products, is
presented to project personnel, managers, users, customers, or other interested parties for
comment or approval.

Walkthrough

“A review of requirements, designs or code characterized by the author of the material
under review guiding the progression of the review.”

Inspection

A group review quality improvement process for written material. It consists of two
aspects: product (document itself) improvement and process improvement (of both
document production and inspection).


Dynamic Testing Techniques

“The process of evaluating a system or component based upon its behaviour during
execution. “

Black Box Testing:

“Test case selection that is based on an analysis of the specification of the component
without reference to its internal workings.”

Equivalence partition testing:

Equivalence partition testing: A test case design technique for a component in which test
cases are designed to execute representatives from equivalence classes.

Equivalence class: A portion of the component's input or output domains for which the
component's behaviour is assumed to be the same from the component's specification.

Boundary Value Analysis

Boundary value: An input value or output value which is on the boundary between
equivalence classes, or an incremental distance either side of the boundary.

Boundary value analysis: A test case design technique for a component in which test
cases are designed which include representatives of boundary values.

Cause and Effect Graphs

“A graphical representation of inputs or stimuli (causes) with their associated outputs
(effects), which can be used to design test cases.”

White-Box Testing:

“Test case selection that is based on an analysis of the internal structure of the
component.”

Statement Coverage:

“A test case design technique for a component in which test cases are designed to execute
statements.”

Branch Testing:

Branch Testing: A test case design technique for a component in which test cases are
designed to execute branch outcomes.


Branch : A conditional transfer of control from any statement to any other statement in a
component, or an unconditional transfer of control from any statement to any other
statement in the component except the next statement, or when a component has more
than one entry point, a transfer of control to an entry point of the component.
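
To illustrate the difference between statement and branch coverage, a small Python sketch; the function under test is hypothetical.

# A minimal statement vs. branch coverage sketch (the unit is hypothetical).
def grade(score):
    result = "fail"
    if score >= 50:
        result = "pass"
    return result

# grade(60) alone executes every statement (full statement coverage),
# yet it never exercises the False outcome of the 'if'.
# Adding grade(40) exercises both branch outcomes (full branch coverage).
print(grade(60), grade(40))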

Path Testing

Path: A sequence of executable statements of a component, from an entry point to an exit
point.

Path testing: A test case design technique in which test cases are designed to execute
paths of a component.

Data Flow-Based Testing:

“Testing in which test cases are designed based on variable usage within the code.”

Unit Testing

“The testing of individual software components.”

Integration Testing

“Testing performed to expose faults in the interfaces and in the interaction between
integrated components”

Incremental Integration Testing

“Integration testing where system components are integrated into the system one at a time
until the entire system is integrated”

Top Down Integration

“An approach to integration testing where the component at the top of the component
hierarchy is tested first, with lower-level components being simulated by stubs. Tested
components are then used to test lower-level components. The process is repeated until
the lowest-level components have been tested.”

Bottom up Integration

“An approach to integration testing where the lowest level components are tested first,
then used to facilitate the testing of higher level components. The process is repeated
until the component at the top of the hierarchy is tested.”


Stubs:

Stubs are program units that are stand-ins for other (more complex) program units
that are directly referenced by the unit being tested.

Drivers:

Drivers are programs or tools that allow a tester to exercise/examine, in a controlled
manner, the unit of software being tested.

Big Bang Integration

“Integration testing where no incremental testing takes place prior to all the system's
components being combined to form the system.”

Validation Testing

Validation testing aims to demonstrate that the software functions in a manner that can be
reasonably expected by the customer.

Configuration review

An audit to ensure that all elements of the software configuration are properly developed
and catalogued, and have the necessary detail to support maintenance.

System Testing

“System testing is the process of testing an integrated system to verify that it meets
specified requirements".

Requirement based Testing

“Designing tests based on objectives derived from requirements for the software
component (e.g., tests that exercise specific functions or probe the non-functional
constraints such as performance or security)”

Business-Process based Testing

Designing tests based on descriptions and/or knowledge of the business processes.

Non-Functional Testing

“Testing of those requirements that do not relate to functionality, i.e. performance,
usability, etc.”

Recovery testing

“Testing aimed at verifying the system's ability to recover from varying degrees of
failure.”


Security testing

“Testing whether the system meets its specified security objectives.”

Stress testing

“Testing conducted to evaluate a system or component at or beyond the limits of its
specified requirements.”

Performance testing

“Testing conducted to evaluate the compliance of a system or component with specified
performance requirements.”

Alpha and Beta testing

“Alpha testing: Simulated or actual operational testing at an in-house site not otherwise
involved with the software developers.”

“Beta testing: Operational testing at a site not otherwise involved with the software
developers.”

User Acceptance Testing

“Acceptance testing: Formal testing conducted to enable a user, customer, or other
authorized entity to determine whether to accept a system or component.”

Regression Testing and Re-testing

“Retesting of a previously tested program following modification to ensure that faults
have not been introduced or uncovered as a result of the changes made.”

Ad-hoc Testing

“Testing carried out using no recognised test case design technique.”

Load Testing

Load Testing involves stress testing applications under real-world conditions to predict
system behavior and performance and to identify and isolate problems. Load testing
applications can emulate the workload of hundreds or even thousands of users, so that
you can predict how an application will work under different user loads and determine the
maximum number of concurrent users accessing the site at the same time.


Stress and Volume Testing

“Stress Testing: Testing conducted to evaluate a system or component at or beyond the
limits of its specified requirements.”

“Volume Testing: Testing where the system is subjected to large volumes of data. “

Usability Testing

“Testing the ease with which users can learn and use a product.”

Environmental Testing

These tests check the system’s ability to perform at the installation site.

Business Requirement

It describes user’s needs for the application.

Functional Specification

“The document that describes in detail the characteristics of the product with regard to its
intended capability.”

Design Specification

The Design Specification document is prepared based on the functional specification. It
contains the system architecture, table structures and program specifications.

System Specification

The System Specification document is a combination of the functional specification and
the design specification.

Error Guessing

“A test case design technique where the experience of the tester is used to postulate what
faults might occur, and to design tests specifically to expose them. “

Error Seeding

“The process of intentionally adding known faults to those already in a computer program
for the purpose of monitoring the rate of detection and removal, and estimating the
number of faults remaining in the program.”


Test Plan

A record of the test planning process detailing the degree of tester independence, the test
environment, the test case design techniques and test measurement techniques to be used,
and the rationale for their choice. - BS

“A document describing the scope, approach, resources, and schedule of intended testing
activities. It identifies test items, the features to be tested, the testing tasks, who will do
each task, and any risks requiring contingency planning.” - IEEE

Test Case

“A set of inputs, execution preconditions, and expected outcomes developed for a
particular objective, such as to exercise a particular program path or to verify compliance
with a specific requirement.”

Comprehensive Testing - Round I

All the test scripts developed for testing are executed. In some cases the application may
not have certain module(s) ready for test; these will be covered comprehensively in the
next pass. The testing here should cover not only all the test cases but also the business
cycles as defined in the application.

Discrepancy Testing - Round II

All the test cases that resulted in a defect during the comprehensive pass are
executed. In other words, all defects that have been fixed are retested. Function
points that may be affected by a defect should also be taken up for testing. This type of
testing is called regression testing. Defects that are not yet fixed are executed only
after they are fixed.

Sanity Testing - Round III

This is the final round in the test process. It is done either at the client's site or at Maveric,
depending on the strategy adopted. It is done in order to check whether the system is sane
enough for the next stage, i.e. UAT or production as the case may be, under an isolated
environment. Ideally, the defects fixed in the previous phases are checked, and further
testing is done to ensure integrity.


Defect – Definition

“Error: A human action that produces an incorrect result.”

“Fault: A manifestation of an error in software. A fault, if encountered, may cause a
failure.”

“Failure: Deviation of the software from its expected delivery or service.”

“A deviation from expectation that is to be tracked and resolved is termed a defect.”

Defects Classification

Showstopper

A defect which may be very critical in terms of affecting the schedule, or may be a
showstopper, that is, it stops the user from using the system further.

Major

A defect where functionality or data is affected significantly, but which does not cause a
show-stopping condition or a block in the test process cycles.

Minor

A defect which is isolated or does not stop the user from proceeding, but causes
inconvenience. Cosmetic errors also feature in this category.

Severity:

How much the bug found is supposed to affect the system's function/performance.
Usually divided into Emergency, High, Medium, and Low.

Priority:

Which bug should be solved first for the benefit of the system's health. Normally it runs
from Emergency as first priority down to Low as last priority.


Test Bed

The elements that support the testing activity before actual testing starts, such as test data
and data guidelines, are collectively called the test bed.

Data Guidelines

Data guidelines are used to specify the data required to populate the test bed and prepare
test scripts. They include all data parameters that are required to test the conditions derived
from the requirement / specification. The documents which support preparing test data
are called data guidelines.

Test script

A Test Script contains the Navigation Steps, Instructions, Data and Expected Results
required to execute the test case(s).

Any test script should show how to drive or navigate through the application, even for a
new user.

Test data

The values which are given at expected places (fields) in a system to verify its functionality,
made ready in a piece of document, are called test data.

Test environment

A description of the hardware and software environment in which the tests will be run,
and any other software with which the software under test interacts when under test
including stubs and test drivers.

Traceability Matrix

Throughout the testing life cycle of the project, a traceability matrix is maintained
to ensure that the verification and validation of the testing is complete.
