
TESTING

Software:
A set of executable programs in a computer is called 'Software'.

Software project vs. software product:

If a s/w is developed based on a specific customer's requirements, then that s/w is called a 'project'.

If a s/w is developed based on overall market requirements, then that s/w is called a 'product'.

Note:

 The company is the owner of a product.

 The customer is the owner of a project.

Software Development Life Cycle (SDLC):

To develop a project or product, companies follow a process called the 'Software development life cycle'.
Old SDLC models:
A). Waterfall Model:
This model is followed in companies to develop a s/w project (or) product when the requirements are clear.

S/w bidding (a proposal to develop new s/w) -> Kick-off meeting (CEO) -> Requirement gathering (BA) -> Analysis & detailed plan (SA) -> Design (TA) -> Coding (Programmers) -> Testing (Programmers) -> Release (Release team) -> Maintenance (CCB)

The above waterfall model is also called the 'Linear sequential model'.

B). Prototype model:
When customer requirements are not clear, organizations can follow the prototype model to get clear requirements & develop the original s/w.

S/w bidding (a proposal to develop new s/w) -> Kick-off meeting (CEO) -> Requirement gathering (BA) -> Analysis & detailed plan (SA) -> Design (TA) -> Coding (Programmers) -> Testing (Programmers) -> Release (Release team) -> Maintenance (CCB)

C). Incremental model:
When the customer requirements are huge, the company can develop that s/w installment by installment.

(Figure: each installment L#1, L#2, L#3 passes through S/w bidding, Requirement gathering, Analysis, Design, Coding, Testing, Release & Maintenance, delivered one after another over time.)

D). Spiral model:
When customer requirements are enhancing regularly, companies can follow the spiral model to develop the complete s/w version by version.

(Figure: a spiral through Requirement gathering -> Analysis -> Design -> Coding -> Testing -> Release -> Maintenance -> Need for enhancements, repeated for each version.)

E). RAD Model (Rapid Application Development):

When customer requirements are similar to old projects/products, organizations can follow this model to develop the new s/w by reusing the coding of old projects.

Note: In the above old SDLC models, testing comes only after the coding stage, and the testing stage is also conducted by the same programmers. Due to this reason, many s/ws did not satisfy customer-site people.

In advanced SDLC models, companies conduct multiple stages of testing and recruit separate testers for the s/w testing stage.

Advanced SDLC models:

V-Model (William Evans Perry):

V stands for Verification & Validation.

Verification          Validation
BRS                   Acceptance testing
SRS                   S/w testing
HLD                   Integration testing
LLD                   Unit testing
          Coding

 BRS is prepared & reviewed by the same BA.

 SRS is prepared & reviewed by the same SA.

 HLD & LLDs are prepared and reviewed by the same TA.

 Programs are written and unit tested by the same programmers.

 Program interconnection and integration testing are done by the same programmers.

 Software testing is conducted by a separate testing team.

 S/w acceptance is done by customer-site people.

 S/w is released by the release team (a few programmers and a few testers).

 S/w maintenance is handled by the CCB - change control board (freshers with knowledge of development & testing).

Note 1: From the above V-model, each development stage has a testing stage, but a separate testing team is recruited only for the s/w testing stage. So, this model assures quality and decreases cost.

NOTE: Software testing can be stated as the process of validating and verifying that a computer program/application/product:

* meets the requirements that guided its design and development,

* works as expected,

* can be implemented with the same characteristics, and satisfies the needs of the stakeholders.

NOTE: The combination of verification and validation is called testing.

NOTE: The combination of static testing and dynamic testing is called testing.

NOTE: The combination of functional and non-functional testing of a system or any object is also called s/w testing or system testing or project testing.

NOTE: Testing is the process of executing a program with the intention of finding errors.

AGILE MODEL: (Agile Corporation)

In the Agile model, customer-site representatives are included along with the developers and testers. These customer-site representatives are called 'Stakeholders', and they have strong knowledge of development and testing.

 BRS prepared by BA -> BRS reviewed by the same BA -> BRS approved by stakeholder

 SRS prepared by SA -> SRS reviewed by the same SA -> SRS approved by stakeholder

 HLD & LLDs prepared by TA -> HLD & LLDs reviewed by the same TA -> HLD & LLDs approved by stakeholder

 Programs & their integration by programmers -> unit testing by the same programmers -> program integration approved by stakeholder

 S/w testing by testing team -> s/w testing approved by stakeholder

 S/w acceptance by customers

 S/w release by release team -> release approved by stakeholders

 Maintenance by CCB -> maintenance approved by stakeholders


From the above process of the Agile model, stakeholders are involved in every stage to approve deliverables. In this Agile model, the terminology is different, as shown below:

Iteration (-1)  (s/w bidding, PIN)

Iteration (0)  (BRS, SRS)

Construction iterations  (HLD, LLD, coding & testing)

Production  (release & maintenance)

Retirement  (s/w outdated & need for a new version)

Note:

1. The Agile model is not as costly as the V-model, because the stakeholder is taken care of by the customer. But unlike the V-model, the Agile model takes more time, because approval is an extra step in every stage.

Testing stages or phases in SDLC:

In general, large-scale companies follow the Fish model/Agile model to develop quality s/w, whereas small & medium-scale organizations follow the V-model. The testing stages below are common to all SDLC models like Fish, Agile and V-model.

1. Document Testing:
In this stage the business analyst reviews the BRS. In general, the BRS document holds the customer requirements, so this document is also called the CRS (Customer Requirement Specification) / User Requirements Specification.

When the BRS has been reviewed by the BA, the corresponding SA prepares the SRS, which holds the s/w requirements with respect to the BRS.
Example:
BRS (BA): Addition of two numbers.
SRS (SA):
  FRS: 2 inputs, '+' operation, 1 output.
  NFRS: runs on Windows/Linux/Unix, easy to use, 0.005 sec to finish the addition.

From the above example, the BRS specifies 'what to develop' and the SRS specifies 'how to develop'. The corresponding SA reviews the SRS for completeness and correctness.

When the SRS document is baselined, the TA starts design at high level and low level. The high-level design defines the overall architecture of the complete s/w; due to this reason, the HLD is also called 'Architectural design / System design'.

(Figure: HLD example - a root Login module leading to Mailing and Chatting modules and then Logout.)

Low-level design defines the internal architecture of a specific module. So, one s/w design consists of one HLD and multiple LLDs.

(Figure: LLD example for the Login module - the user enters user id & password, the values are checked against the DB; if invalid, an error message is shown, and if valid, the next window opens.)

The same TA reviews the HLD & LLDs for completeness and correctness.

Case study on document testing:

Document      Prepared by   Reviewed by   Testing techniques
BRS           BA            Same BA       Walkthrough, Inspection, Peer review
SRS           SA            Same SA       Walkthrough, Inspection, Peer review
HLD & LLDs    TA            Same TA       Walkthrough, Inspection, Peer review

In the above table, people follow 3 techniques to verify documents:

'Walkthrough' is a technique of studying a document from first to last for completeness and correctness.

'Inspection' means searching a document for a specific factor, to check that factor's completeness and correctness in the document.

'Peer review' means the comparison of two similar documents point to point.

2. Unit testing:
When the design documents are baselined, programmers start coding by using development technologies like Java, .NET, SAP, PHP, PeopleSoft, etc.

Programmers write programs with respect to the design documents and then test those programs for completeness & correctness. This type of program testing is called 'Unit Testing'.

Case study on unit testing:

Component     Prepared by    Reviewed by        Testing techniques
Programs      Programmers    Same programmers   White box testing techniques

From the above table, programmers test programs by using white box testing techniques in the unit testing stage. There are 4 techniques in the white box testing techniques list.
a) Basic paths coverage:

Programmers use this technique to validate whether the program is running or not, without any syntax & runtime errors.

(Figure: flow graph of a program with nested if/else conditions and their true/false branches.)

To apply this technique on a program in unit testing, the programmer follows the below process:

Step 1: Write a program w.r.t. the design documents.

Step 2: Draw a flow graph for that program.

Step 3: Count the number of independent paths in that graph, called the 'Cyclomatic complexity'.

Step 4: Run the program more than one time to cover all paths of program execution.

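Eg: A minimal sketch (not from the original notes) of basic path coverage on a hypothetical Python function: it has two decisions, so its cyclomatic complexity is 2 + 1 = 3, and three runs cover all the independent paths.

# Hypothetical function with two decision points; cyclomatic complexity = 3.
def classify(n):
    if n < 0:            # decision 1
        return "negative"
    elif n == 0:         # decision 2
        return "zero"
    else:
        return "positive"

# One run per independent path; any syntax/runtime error here means
# basic path coverage has failed for that path.
for value in (-5, 0, 7):
    print(value, "->", classify(value))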
b) Control structure coverage:

When the program is running without any errors, the corresponding programmer uses this technique to test the program's correctness in terms of inputs & outputs. This technique is also called 'Debugging'.

Note: In the old days, testing meant debugging only. But now debugging is just one technique in the much larger testing topic.
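Eg: A small sketch (illustration only) of control structure coverage on a hypothetical factorial function: after the program runs without errors, its outputs are compared against expected values for chosen inputs.

# Hypothetical program under test: the loop is the control structure being checked.
def factorial(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# Expected outputs for chosen inputs; any mismatch points to a wrong
# control structure (for example, wrong loop bounds).
for n, expected in [(0, 1), (1, 1), (5, 120)]:
    actual = factorial(n)
    print(n, actual, "OK" if actual == expected else "MISMATCH")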

c) Program technique coverage:

When the program is working correctly, the programmer calculates the execution speed of that program.

Ex: swapping two numbers a = 10 and b = 20, using a temporary variable c.

Here, we can use 2 logics:

Logic 1 (with temp variable):      Logic 2 (without temp variable):
c = a                              a = a + b
a = b                              b = a - b
b = c                              a = a - b

In the above example, the first logic runs fast but needs extra memory for the 'c' variable.

The second logic does not take extra memory but takes more time because of the addition and subtraction operations.
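Eg: A small sketch (illustration only) that measures the two swap logics above with Python's timeit module, to show how a programmer can compare execution speed in program technique coverage.

import timeit

def swap_with_temp(a, b):
    c = a          # extra memory for the temporary variable 'c'
    a = b
    b = c
    return a, b

def swap_with_arithmetic(a, b):
    a = a + b      # no extra variable, but three arithmetic operations
    b = a - b
    a = a - b
    return a, b

print("temp variable :", timeit.timeit(lambda: swap_with_temp(10, 20), number=100000))
print("arithmetic    :", timeit.timeit(lambda: swap_with_arithmetic(10, 20), number=100000))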

d) Mutation coverage (changing):

When a program is running correctly & fast, the programmer tests the completeness of that program's testing. Due to this reason, the previous techniques are called 'program testing techniques', whereas this technique is testing the program's testing.

(Figure: a program is covered by basic path coverage, control structure coverage and program technique coverage; mutation coverage sits on top of these.)

In this technique, the programmer follows the below process: run the same set of tests (e.g. 3 tests) on the original program and on intentionally changed copies of it. If the tests still pass after a change, the testing is incomplete; when a change makes the tests fail, the testing is complete for that change.

Performing intentional changes in a tested program for retesting is called 'Mutation testing'.
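Eg: A minimal mutation-coverage sketch, assuming a made-up discount function: the same three tests are run on the original program and on an intentionally changed (mutated) copy; because the mutant also passes, the testing is shown to be incomplete.

def price_with_discount(amount):
    if amount > 1000:              # original condition
        return amount * 0.9
    return amount

def price_with_discount_mutant(amount):
    if amount >= 1000:             # intentional change: '>' mutated to '>='
        return amount * 0.9
    return amount

tests = [(500, 500), (2000, 1800.0), (100, 100)]   # 3 tests

for program in (price_with_discount, price_with_discount_mutant):
    passed = all(program(inp) == expected for inp, expected in tests)
    print(program.__name__, "passed" if passed else "failed")

# Both versions pass, so this test set is incomplete: a test with
# amount == 1000 is needed to catch (kill) the mutant.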

3. Integration Testing:
When the writing of related programs and their unit testing are completed, the corresponding programmers start interconnecting those programs to build a s/w.

(Figure: Program 1 and Program 2 each undergo unit testing; their connection undergoes integration testing.)

From the above diagram, the connection of two programs is also a program. So, white box testing techniques are applicable to the integration testing stage also. Here, white box testing techniques are also called 'clear box testing or glass box testing or open box testing techniques'.

While integrating tested programs, programmers follow the below approaches:

a) Top-down approach

b) Bottom-up approach

c) Hybrid approach

d) System approach

a. Top-down approach:

(Figure: the Main module is integrated with Sub1; a stub takes the place of Sub2, which is under construction.)

In this approach, programmers integrate the main module and some of the sub-modules, because the remaining sub-modules are still under construction.

In the above diagram, a stub is a temporary program used instead of an under-construction sub-program.

b. Bottom-up approach:
We use this approach to integrate sub-modules when the main module is under construction and cannot be involved.

Here, programmers use a temporary program instead of the under-construction main module, called a 'Driver'.

(Figure: the Main module is under construction; a driver calls Sub1 and Sub2 for integration & integration testing.)

c. Hybrid approach:
It is a combination of the top-down and bottom-up approaches.

(Figure: a driver stands in for the under-construction main module while a stub stands in for an under-construction sub-module; the available modules in between are integrated and integration tested.)

The above approach is also called the 'Sandwich approach'.

Note:

1. In the above 3 approaches, drivers and stubs are temporary programs; here the driver is also known as the 'calling program' and the stub is also known as the 'called program'.

d. System approach:
In this approach, programmers start integration only after completion of 100% of the coding. In this approach, programmers do not need to use drivers and stubs. This approach is also called the 'Big bang approach'.

4. Software testing:
Nowadays, organizations recruit a separate testing team only for the s/w testing stage, because s/w testing is the 'bottleneck' stage in s/w development and a separate testing team tests the s/w with fresh eyes.

This s/w testing stage is classified into 2 sub-stages:

a. Functional Testing

b. Non-Functional Testing

a. Functional Testing:
In general, the testing team's job starts with functional testing, to validate the corresponding s/w w.r.t. the customer requirements. Due to this reason, functional testing is also called 'Requirements testing'. During functional testing, the testing team conducts the below sub-tests on the SUT (software under test):

i) Behavioral testing

ii) Input domain testing

iii) Error handling testing

iv) Manipulation testing

v) Database testing

vi) Data volume testing

vii) Inter-connection testing

i) Behavioral testing:
In general, functional testing starts with GUI testing. So, this testing is called 'Behavioral testing or Control flow testing or GUI testing'.

ii) Input domain testing:

When the s/w is working and every screen object is behaving correctly, the testing team concentrates on every screen's input objects, to test whether those objects are taking valid inputs or not. This input domain testing is also called 'Positive testing'.

iii) Error handling testing:

During this test, the testing team operates the SUT screen objects by giving invalid inputs and checks the resulting error messages/prompt messages. This error handling testing is also called 'Negative testing or Destructive testing'.

iv) Manipulation testing:
During this test, the testing team validates the correctness of the output or outcome in every screen of the SUT. This testing is also called 'Output (or) Outcome testing'.

v) Database testing:

When you perform an operation on the SUT, the SUT screens interact with the internal database to store or change data for future use. During this test, the testing team checks that front-end operations are reflected correctly in the database.

Note: The SUT front-end screens are already integrated with the database by developers. So, a s/w means a combination of screens and a DB.

A screen of the SUT is called the 'Front end' and the DB of the SUT is called the 'Back end'.
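Eg: A small database-testing sketch using Python's built-in sqlite3 module (the table and values are made up): a front-end operation is simulated with an INSERT, and the tester's SQL command checks the back end to confirm the data was actually stored.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE policies (holder TEXT, amount INTEGER)")

# Simulated front-end operation: the screen saves a new policy.
conn.execute("INSERT INTO policies VALUES (?, ?)", ("Ravi", 2500))
conn.commit()

# Back-end check written by the tester (SQL command).
row = conn.execute("SELECT amount FROM policies WHERE holder = ?", ("Ravi",)).fetchone()
print("stored correctly" if row == (2500,) else "DB mismatch")
conn.close()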
vi) Data volume testing:
During this test, the testing team tests the SUT database capacity by inserting sample data. This testing is also called 'Data capacity testing or Memory capacity testing'.

Note: The SUT database capacity depends on the technology used by developers to develop that database.

In general, a MySQL/Access database can support 2 GB, a SQL Server database can support 8 GB and an Oracle database can support 10 GB. If the customer needs to maintain huge data, then organizations provide a data warehouse (DWH) instead of a database.

vii) Inter-connection testing:

(Figure: the SUT's front end and DB connect to another s/w's front end and DB to share resources.)

Sometimes our SUT connects to other s/w to share that other s/w's DB. This resource-sharing testing is also called 'Integration testing or End-to-End testing or Web-service testing or SOA (Service-Oriented Architecture) testing'.

Note: The above is applicable to an SUT which is able to connect to other s/w to share a DB.
2. Non-Functional Testing:
After completion of functional testing, the corresponding testing team concentrates on non-functional testing, to validate the s/w with respect to customer expectations like usability, portability, performance, security, etc.

Due to the above definition of NFT, NFT is also called expectations testing / characteristics testing. So, quality s/w means s/w that meets both customer requirements & customer expectations. While conducting functional testing, the testing team will accept incomplete s/w also, but for NFT the testing team allows only complete s/w. Due to this reason, NFT is also called system testing.

Functional testing -> Requirements testing
Non-functional testing -> Expectations/characteristics testing -> System testing
(Both together make up s/w testing.)

NFT is classified into the below sub-tests:

a) Usability testing:
In general, NFT starts with usability testing. During this test, the testing team observes the below factors on every screen of the software under test (SUT):

1. Ease of use

2. Look & feel (user friendliness)

3. Short navigations

b) Compatibility testing:
This testing is also called portability testing. During this test, the testing team runs the SUT on various customer-expected platforms like different operating systems, different browsers and other system s/ws.

c) H/W configuration testing:

This testing is also called h/w compatibility testing. During this test, the testing team runs the SUT by using different customer-expected h/w devices like printers, scanners, network connections and h/w configurations (RAM, processor, hard disk drive), etc.

d) Performance testing:
Performance means speed in processing. To calculate the performance of an SUT, the testing team conducts the below sub-tests.

The execution of the SUT under the customer-expected configuration & customer-expected load (number of concurrent users), to estimate speed in processing, is called load testing.

NOTE: Performance testing finds the following things:

 Speed in processing (estimation).
 Peak load.
 Server crashing point.
 Memory leakage.

(Figure: a number of concurrent users place load on the server running the SUT.)

The execution of the SUT in the customer-expected configuration & more than the customer-expected load, by incrementing the load gradually to find the peak load, is called 'stress testing'. This stress testing is also called 'load capacity testing'.

The execution of the SUT in the customer-expected configuration & a huge load applied suddenly, to identify the server crashing point, is called 'spike testing'.

The execution of the SUT under the customer-expected configuration & customer-expected reasonable load, continuously or repetitively, to identify memory leakages, is called 'endurance testing or durability testing or memory leakage testing or soak testing'.
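Eg: A toy load-testing sketch (illustration only; real performance testing is normally done with tools like LoadRunner): a chosen number of concurrent users run the same transaction in threads and the elapsed time is measured; raising the user count turns the same harness into a stress test.

import time
from concurrent.futures import ThreadPoolExecutor

def one_user_transaction():
    time.sleep(0.05)      # stands in for one request/response round trip
    return True

def run_load(concurrent_users):
    start = time.time()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        list(pool.map(lambda _: one_user_transaction(), range(concurrent_users)))
    print(concurrent_users, "users finished in", round(time.time() - start, 2), "seconds")

run_load(10)    # load test: customer-expected load
run_load(50)    # stress test: more than the expected load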

e) Security testing:
This testing is also called penetration testing. During this test, the testing team concentrates on the below sub-tests.

Checking whether the SUT allows valid users, with respect to their permissions, to access the functionalities in that SUT or not is called 'access control testing'.

The execution of the SUT by using encryption & decryption is called encryption/decryption testing.

(Figure: the client's original request is encrypted into cipher text over the internet and decrypted at the server; a hacker (tester) tries to read it in between.)

Note: Of the above security testing topics, encryption & decryption testing is difficult to conduct, because testers need hacking knowledge. Due to this reason, most organizations recruit separate bond-based testers for that topic (an E-trust team).

f) Multilanguity testing:
Some s/ws take inputs and provide outputs in Unicode instead of ASCII (American Standard Code for Information Interchange). While testing this type of s/w, the testing team conducts multilanguity testing in any one of two ways:

 Localization: the tester operates the SUT directly in each supported language (English, Spanish, Hindi, French, etc.).

 Globalization/Internationalization: the tester operates the SUT in English, Spanish, French, Arabic, etc. through a language conversion s/w.
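Eg: A tiny sketch of the Unicode vs ASCII point above: a non-English input that a multilanguity test would use is representable in Unicode (UTF-8) but not in ASCII.

text_hindi = "नमस्ते"                     # non-English input for a multilanguity test

print(text_hindi.encode("utf-8"))          # Unicode-based s/w can handle this
try:
    text_hindi.encode("ascii")             # ASCII-only s/w cannot
except UnicodeEncodeError as error:
    print("ASCII cannot represent this input:", error)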

g) Parallel testing:
When the SUT is a product, the testing team compares the SUT with previous versions of the same s/w or with competitive s/ws in the market, to identify weaknesses & strengths before release.

Note 1: Of the above 7 NFTs, the first 4 tests are mandatory. Security testing needs hacking knowledge in the team, multilanguity testing is needed only for SUTs developed in Unicode, and parallel testing is applicable for products only.

Note 2: While conducting FT & NFT on the s/w, the testing team itself will be tested by test management, to confirm whether the testing team is testing the s/w with respect to company standards or not. This management check on the testing team is called 'compliance testing' or 'standards testing'.
V) Acceptance testing:
After completion of s/w testing, company management concentrates on acceptance testing, to collect feedback on that s/w. There are two techniques in this stage: alpha testing & beta testing.

Alpha testing                                     Beta testing
1. Suitable for projects.                         1. Suitable for products.
2. Invite real customer-site people to our        2. Release a trial version of the s/w to the
   company and collect feedback from them            market with the help of some websites, to
   on the s/w.                                        collect public opinion.
3. Developers & testers interact with the real    3. Developers and testers read the feedback
   customer-site people while getting feedback        mails coming from the public.
   from them.

Note: The above alpha & beta tests in acceptance testing are also called yellow box testing techniques.

VI) Release testing:

When the feedback is good, our company concentrates on the s/w release. If the s/w is a project, then it is released to the corresponding customer; if the s/w is a product, then it is released to licensed customers. Company management selects a few developers & a few testers, and these selected developers & testers go to the customer site along with selected hardware people, install the s/w & observe the below factors:

 Complete installation.
 Overall functionality.
 Input device support (e.g., keyboard, mouse, joystick, etc.).
 Output device handling (e.g., monitor, printers, internet connection, scanner, etc.).
 Secondary storage device support (e.g., ROM, external hard disk, pen drive).
 OS support.
 Co-existence with other s/w at the customer site.
VII) Testing during maintenance:
After the s/w release process is completed, the release team provides training to the end users at the customer site, and then the release team people come back to the company and wait for the next project or product.

While utilizing the s/w, customer-site people encounter failures or develop a need for enhancements. To handle those failures and enhancements, company management recruits some trained freshers called the CCB (Change Control Board). This CCB team does the below tasks.

SCR hierarchy:

A s/w change request (SCR) is either an Enhancement or a Failure (latent bug / missing bug).

 For an Enhancement: conduct impact analysis to identify the coding areas to enhance, then perform & test the enhancement (enhancive maintenance).

 For a Failure: conduct root-cause analysis to identify the buggy area of coding, fix the bug & test the bug fix, and improve the skills of developers & testers for upcoming projects/products (corrective maintenance).

During corrective maintenance, management calculates the efficiency of the development and testing teams in the previous project/product, to improve skills for upcoming projects/products.

Bug removal efficiency of the development and testing teams = A / (A + B)

A -> number of bugs fixed before release.

B -> number of failures faced by customer-site people.

Case study (Testing phases in SDLC):

Testing stage/phase           Purpose                               Conducted by            Testing techniques
1. Document testing           Reviewing BRS, SRS, HLD and LLDs      Authors of documents    Walkthrough, inspection, peer review
2. Unit testing               Validating programs                   Programmers             White box testing techniques
3. Integration testing        Validating connection of programs     Programmers             White box techniques
4. S/w testing                Validating s/w w.r.t. customer        Testing team            Black box techniques
                              requirements & expectations
5. Acceptance testing         Collecting feedback from real         Real customers or       Yellow box techniques
                              customers & model customers           model customers
6. Release testing            Validating the s/w release process    Release team or         Green box techniques
                                                                    onsite team
7. Testing during             Validating s/w changes                CCB                     Grey box techniques
   maintenance                                                                              (white + black box)
*2* Software Testing Life Cycle (STLC):
From the previous lessons, most organizations maintain a separate testing team for the s/w testing stage. This testing team follows a process called the software testing life cycle:

Test initiation -> Test planning -> Test design -> Test execution -> Test reporting (defects reported; the modified s/w comes back for re-execution) -> Test closure

(Ex: In my company the s/w testing process starts with test initiation. In that, my project manager decides which types of tests are suitable for our current project. After that we plan the testing using those tests, and then we design the test cases; these are executed after the design is completed. In this execution we find defects, and those are sent to the developers. The developers then correct those defects and provide the modified software to the testers.)

From the above mapping between SDLC & STLC, developers and testers sign in / roll on to the current project/product after completion of SRS preparation and review by the SA.

Note: When the acceptance stage is completed, developers and testers sign off / roll off from the current project/product, but a few developers and testers go onsite for the s/w release.

I. Software Test Initiation:

The STLC process starts with the initiation stage. In this stage the project manager or test manager selects a strategy to be followed for the current project/product testing.

Here, strategy means an approach or methodology. There are 3 types of testing strategies in organizations:

i. Exhaustive testing

ii. Optimal testing

iii. Ad-hoc testing

From the s/w testing principles:

 Exhaustive testing is impossible, and not needed either.

 Ad-hoc testing is illegal, because it does not test the complete s/w.

Due to these two reasons, organizations' testing teams go for 'Optimal testing'.

The optimal testing strategy document is prepared by the PM/TM in a standard format called the 'IEEE-829' format.

a) IEEE-829 formatted optimal test strategy:

The project manager (PM) follows the IEEE-829 format for the optimal test strategy to be used for the current project/product testing.

1. Scope & Objective: The importance of testing in the current project/product.

2. Business Issues: Budget allocation for testing.

Ex: 100% (product/project) = 64% (development & maintenance) + 36% (testing)

3. Test Responsibility Matrix (TRM):
List of testing topics to be conducted in the current project/product.

Ex:

S/w testing topic       Sub-topic                        Y/N   Comments
1. Functional testing   i.   GUI testing                 Yes   No comments
                        ii.  I/p domain testing          Yes   ''
                        iii. Error handling testing      Yes   ''
                        iv.  Manipulation testing        Yes   ''
                        v.   DB testing                  Yes   ''
                        vi.  Data volume testing         Yes   ''
                        vii. Inter-connection testing    No    Current product/project has no
                             (SOA)                             requirement to connect to other s/ws
2. Non-functional       i.   Usability testing           Yes   No comments
   testing              ii.  Compatibility testing       Yes   No comments
                        iii. H/w configuration testing   No    Testing required, but no resources
                        iv.  Performance testing         Yes   No comments
                        v.   Security testing            No    Required, but testers have no
                                                                hacking knowledge
                        vi.  Multi-languity testing      No    No requirement of multi-languity
                                                                in the current project
                        vii. Parallel testing            No    Current s/w is project-based,
                                                                not product-based

Note: In general, the PM skips some testing topics in s/w testing due to no requirement or lack of skills/resources.

4. Roles & Responsibilities:

Names of the different jobs in the testing team and the responsibilities of each job.

Role               Responsibilities
1. Test Lead       Preparation of test plans; review of test cases; involvement in defect tracking; conducting daily/weekly/monthly status meetings.
2. Sr. Tester (QA) Preparing test cases; involvement in defect reporting; coordinating with Jr. Testers.
3. Jr. Tester      Executing test cases on the SUT; detecting defects; preparing defect reports.

5. Communication & Status Reporting:

The required communication channels between every two jobs in the testing team.

* The Jr. Tester reports status to the Sr. Tester daily.

* The Sr. Tester reports status to the Test Lead twice a week.

* The Test Lead reports status to the PM once a week.


6. Test Automation and Testing Tools:
The need for automation in the current project testing & the availability of testing tools in our company.

Eg:

Testing topic                           Manual/Automation   Tool to be used
1. Functional testing                   Automation          QTP
   (GUI, i/p domain, error handling,
   manipulation, DB, data volume,
   inter-connection testing)
2. Performance testing                  Automation          LR (LoadRunner)
3. Remaining non-functional testing     Manual              --------

7. Defect Reporting & Tracking:

The required negotiation between developers & testers during defect reporting & tracking.

Eg: (Figure: defects reported by a tester go through the test lead and team lead, under the PM, to reach the developer.)

8. Configuration Management:
To store all development & testing documents for the future, the PM provides a folder structure.

(Figure: a server holds the project/product configuration repository (CR) with a development base, a soft base and a test base; developers work in the development environment and testers in the test environment.)

9) Testing Measurements & Metrics:

To estimate the speed of the testing team's work, the PM defines some measurements and metrics.

Eg:

 How many test case documents are prepared by a tester per day.

 How many test case documents are executed on the SUT by a tester per day.

 How many defects are detected by a tester per day.

10) Risks & Assumptions:

The PM identifies the list of challenges that will come up in testing and specifies solutions to overcome them.

11) Training Needs:
The PM analyzes the need for training for the testers before starting testing.

II. Software Test Planning:

After receiving the test strategy from the PM, the corresponding test lead starts the test planning process.

A) Testing Team Formation:

Test planning starts with testing team formation by the test lead. In this stage the test lead depends on the below factors:

1. Project/product size (number of functional points).

2. Available test duration.

3. Availability of testers on the bench.

4. Available testing tools.
Case study on testing team formation:

Type of project/product                             Developer : Tester   Reasonable testing duration
Client-server & web-based projects (banking,        3 : 1                4 to 6 months
insurance, healthcare, e-commerce, financial
services)
Networking, telecom, h/w device s/ws                1 : 1                7 to 9 months
(embedded systems)
Artificial intelligence s/w (aeronautical,          1 : 7                12 to 15 months
robotics, satellite, typical games)
B) Identify Tactical Risks:
After completion of team formation, the corresponding test lead tries to identify team risks like those described below:
1. Lack of skills of the selected testers.

2. Lack of time (for testing).

3. Lack of resources.

4. Lack of documentation.

5. Delays in delivery.

6. Lack of development team seriousness.

7. Lack of communication.

C) Prepare the test plan in IEEE-829 format:

After completion of team formation & risk analysis, the corresponding test lead starts test plan preparation in IEEE-829 format by using MS Word.

The test plan format is shown below:

Test strategy: Attach the strategy to the plan document; here, the strategy is provided by the PM.

Test environment: Required h/w & s/w for the testers during the current project/product testing.

Entry criteria (to start test execution and detecting defects):

. Test cases prepared and reviewed.

. SUT received from developers.

. Test environment established.

Suspension criteria (to interrupt testing):

. Test environment abandoned.

. Critical defect in the SUT.

. Too many minor defects pending (quality gap).

Exit criteria (to stop test execution):

. All modules tested.

. Time exceeded.
. All major bugs fixed.

Staff and training needs: List of the selected testers, and a list of training topics if needed.

Responsibilities: Work allocation (work breakdown structure) for the above listed testers, module-wise and testing-topic-wise.

Schedule: Dates and times.

Risks and assumptions: List of previously analyzed risks and solutions to overcome them.

Note: From the above IEEE-829 test plan document format, the test plan consists of 4 components: 'what to test?', 'how to test?', 'who will test?' and 'when to test?'.

D) Review the test plan:

After completion of test plan document preparation, the corresponding test lead submits the test plan document to the PM for approval and then forwards that document to the selected testers to follow.

III) Software Test Design:

Case study:

S/w testing document/deliverable       STLC stage               Prepared by     Reviewed by
1. Test strategy                       S/w test initiation      PM              PM
2. Test plan                           S/w test planning        Test lead       PM
3. Test scenarios                      S/w test design          Test engineer   TL
4. Test cases                          S/w test design          Test engineer   TL
5. Test data                           S/w test design          Test engineer   TL
6. Test log                            S/w test execution       Test engineer   TL
7. Test/defect report                  S/w test execution       Tester          TL
8. Review report                       Every stage of STLC      Test lead       PM
   (daily/weekly/monthly)
9. Requirements traceability matrix    Test closure             Test lead       PM

From the above case study, test engineers prepare test scenarios, test cases, SQL commands and test data in the test design stage, & prepare the test log & defect report in the test execution stage.

During test design, the selected test engineers follow the below steps in the s/w test design stage.

A) Understand project/product requirements:

In general, a tester's job starts with test design. In this test design stage, every test engineer tries to understand the requirements of the current project/product in the below ways:

 Study the SRS & get clarity on the SRS documents.

 Clarify doubts with the SA.

 Take training on the SRS from the SA.

 Take training on the SRS from an SME (subject matter expert).

In the above ways, every tester tries to understand all the requirements in the SRS, but is responsible for testing only the selected modules.

B) Writing test scenarios & test cases:

After completion of the complete SRS study, the corresponding tester starts the preparation of test scenarios & test cases for the responsible modules' testing.

A test scenario means an activity to be tested in a module; test cases are the conditions needed to test that activity/scenario. So, one test scenario consists of multiple test cases. Here, every scenario's conditions are classified into positive & negative (invalid). Testing a s/w module with positive conditions is called 'positive testing'. Testing a s/w module with negative conditions is called 'negative testing or destructive testing'.

To prepare test scenarios (high-level test design) & test cases (low-level test design) for the responsible modules, the corresponding test engineers follow the below 4 methods:

i) Functional specs based test design

ii) Use cases based test design

iii) Screens based test design

iv) Non-functional specs based test design

i. Functional specs based test design:

To prepare test scenarios & cases for the responsible modules' functional testing, the corresponding test engineers use this method.

(Figure: BRS -> SRS (FRS + NFRS) -> design -> coding -> s/w builds; from the FRS, the tester derives test scenarios and test cases along with SQL commands for functional testing.)

Case study 1 (functional specs for an insurance project):
i) Functional spec (1):
From the insurance company requirements, agents can log in by entering a user id & password. The user-id field takes alphanumerics in lowercase, from 4 to 16 positions of data. The password field takes alphabets in lowercase, from 4 to 8 characters long. After filling the user id & password, the agent clicks the 'OK' button to log in. Sometimes the agent clicks the 'Cancel' button to close the login window.

Prepare test scenarios & test cases for the login module's functional testing in the insurance project.

1) Test scenario: (validate the user-id field)

Test cases:
Here, we use the black box testing techniques BVA, ECP, DT and OA.

a) Boundary value analysis (BVA) - size:

4    min       Valid
5    min + 1   Valid
3    min - 1   Invalid
16   max       Valid
15   max - 1   Valid
17   max + 1   Invalid
b) Equivalence class partitioning (ECP) - type:

Valid                          Invalid
Alphanumerics in lowercase     Alphabets in uppercase
                               Special characters
                               Blank field
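Eg: A sketch of the above BVA and ECP cases as data-driven checks, assuming a hypothetical validate_user_id() function (not part of the original notes) that accepts 4 to 16 lowercase alphanumeric characters.

import re

def validate_user_id(user_id):
    # Hypothetical implementation of the user-id rule from functional spec (1).
    return re.fullmatch(r"[a-z0-9]{4,16}", user_id) is not None

bva_cases = [
    ("abcd", True),       # min = 4
    ("abcde", True),      # min + 1
    ("abc", False),       # min - 1
    ("a" * 16, True),     # max = 16
    ("a" * 15, True),     # max - 1
    ("a" * 17, False),    # max + 1
]
ecp_cases = [
    ("user123", True),    # valid: lowercase alphanumerics
    ("USER123", False),   # invalid: uppercase
    ("user@12", False),   # invalid: special characters
    ("", False),          # invalid: blank field
]

for value, expected in bva_cases + ecp_cases:
    actual = validate_user_id(value)
    print(repr(value), "PASS" if actual == expected else "FAIL")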
2) Test scenario: (validate the password field)

Test cases:

a) BVA (size):

4   min       Valid
5   min + 1   Valid
3   min - 1   Invalid
8   max       Valid
7   max - 1   Valid
9   max + 1   Invalid

b) Equivalence class partitioning (ECP) - type:

Valid                      Invalid
Alphabets in lowercase     Alphabets in uppercase
                           Special characters
                           Blank field
                           Alphanumerics

3) Test scenario: (validate the login operation by clicking the 'OK' button)

Test cases:
Decision table (DT):

User id            Password           Expected output or outcome after clicking 'OK'
Valid              Valid              'Next' window
Valid              Invalid            Error message
Invalid            Valid              Error message
Invalid            Invalid            Error message
Filled with data   Blank field        Error message
Blank field        Filled with data   Error message
Blank field        Blank field        Error message

Remove redundant/repeated cases by using orthogonal arrays (OA).

4) Test scenario: (validate login window closing by clicking 'Cancel')

Test cases: decision table.

User id       Password      Expected output (or) outcome after clicking 'Cancel'
Filled        Filled        Window closed
Filled        Blank field   Window closed
Blank field   Filled        Window closed
Blank field   Blank field   Window closed

2. Functional spec (2):

After a successful login, the insurance agent selects the 'policy creation' option in the options window to get the policy creation window. This policy creation window consists of the below fields:

i) Policy holder name (alphabets)

ii) Period (5 to 20 years)

iii) Amount (1500 to 100000)

After filling the above fields, the agent clicks the 'Create' button to get the policy number as output, and that number has 7 digits. Sometimes, the agent clicks the 'Back' button to go back to the options window.
Test scenario: (validate the policy holder name)

Test cases:

1) BVA (size):

Min       1 char        Valid
Min - 1   Blank field   Invalid
Min + 1   2 chars       Valid
Max       256 chars     Valid
Max + 1   257 chars     Invalid
Max - 1   255 chars     Valid

2) Equivalence class partitioning (ECP):

Valid       Invalid
Alphabets   Numerics
            Special characters
            Blank field
7) Test scenario: (validate the duration/period of the policy)

Test cases:

1. BVA on range (period):

Min       5    Valid
Min - 1   4    Invalid
Min + 1   6    Valid
Max       20   Valid
Max - 1   19   Valid
Max + 1   21   Invalid

2. ECP on type:

Valid      Invalid
Numerics   Alphabets
           Special chars
           Blank field
8) Test scenario: (validate the amount field)

Test cases:

1. BVA on range:

Min       1500     Valid
Min - 1   1499     Invalid
Min + 1   1501     Valid
Max       100000   Valid
Max - 1   99999    Valid
Max + 1   100001   Invalid

2. ECP on type:

Valid      Invalid
Numerics   Alphabets
           Special chars
           Blank field

9) Test scenario: (validate the policy creation operation by clicking the 'Create' button)

Test cases: Decision table (DT) with OA:

Fields            Expected output after clicking the 'Create' button
All valid         Policy number returned
Any one invalid   Error message
Any one blank     Error message

10) Test scenario: (validate the policy number field)

Test cases:

1. BVA (size):

Min = 7       Valid
Max = 7       Valid
Max - 1 = 6   Invalid
Max + 1 = 8   Invalid

2. ECP:

Valid      Invalid
Numerics   Alphabets
           Special chars
           Blank field
           Duplicate

11) Test scenario: (validate the back operation by clicking the 'Back' button)

Test cases: DT:

Fields                              Expected o/p after clicking the 'Back' button
1. All fields filled                Policy creation window closed and focus back to the options window
2. Some fields filled               Policy creation window closed and focus back to the options window
3. All fields blank                 Policy creation window closed and focus back to the options window
4. After getting the policy number  Policy creation window closed and focus back to the options window
5. After getting an error message   Policy creation window closed and focus back to the options window

Using Regular Expressions in Test Design:

Regular expressions are mathematical notations. To save time while writing test cases, manual testers use regular expressions. Regular expressions are universal notations, because all organizations are interested in following these expressions.

Ex:

[0-9] -> one digit

[A-Z] -> one alphabet in uppercase

[a-z] -> one alphabet in lowercase

[A-Za-z] -> one alphabet in either lowercase or uppercase

[A-Za-z0-9] -> one alphanumeric in lowercase/uppercase

[A-Za-z0-9_] (or) [\w] -> one alphanumeric (or) _

[\w@] -> one alphanumeric (or) _ (or) @

[a-z]{4} -> four alphabets in lowercase

[a-z]{4,8} -> 4 to 8 alphabets in lowercase

[a-z]{,8} -> 0 to 8 alphabets in lowercase

[a-z]{,} (or) [a-z]* -> 0 to infinite number of alphabets in lowercase

[a-z]{1,} (or) [a-z]+ -> 1 to infinite number of alphabets in lowercase

[a-z]{,1} (or) [a-z]? -> 0 or 1 alphabet in lowercase

[A-Z][a-z]{4} -> 5 characters, with the first in uppercase and the remaining 4 in lowercase

[A-Z][a-z]{4,10} -> 5 to 11 characters, with the first character in uppercase and the remaining in lowercase

[A-Z][a-z0-9]{,5} -> 1 to 6 positions, with the first in uppercase and 0 to 5 alphanumerics in lowercase

[A-Z][A-Za-z0-9]* -> 1 to infinite positions, with the first in uppercase and the remaining alphanumerics

[\s] -> one blank space

[\.] -> one '.'

([A-Z][a-z]*[\s]?)+[\.] (or) [A-Z]([a-z]*[\s]?){1,}[\.] -> a sentence with multiple words, where every word starts with uppercase and the last word ends with a period
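Eg: A short sketch of trying a few of the above notations with Python's re module (the sample patterns and strings are made up for illustration).

import re

checks = [
    (r"[a-z]{4,8}", "password"),       # 4 to 8 lowercase alphabets -> valid
    (r"[a-z]{4,8}", "abc"),            # too short -> invalid
    (r"[A-Z][a-z]{4,10}", "Vijay"),    # first uppercase, rest lowercase -> valid
    (r"[A-Z][a-z]{4,10}", "vijay"),    # first letter not uppercase -> invalid
    (r"[0-9]{7}", "1234567"),          # exactly 7 digits -> valid
    (r"[0-9]{7}", "12345678"),         # 8 digits -> invalid
]

for pattern, sample in checks:
    ok = re.fullmatch(pattern, sample) is not None
    print(pattern, "vs", repr(sample), "->", "valid" if ok else "invalid")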

Black box testing techniques:

In general, manual testers use black box testing techniques to write optimal test cases for functional testing. Due to this reason, black box testing techniques are also called 'functional testing techniques (or) closed box testing techniques'. The list:

1. BVA (Boundary Value Analysis) is used to define the boundaries of i/ps and o/ps.

2. ECP (Equivalence Class Partitioning) is used to define the type of i/ps and o/ps.

3. DT (Decision Table) is used to define the mapping between i/ps and o/ps.

4. OA (Orthogonal Array) is used to remove unwanted (or) repeated mappings between i/ps and o/ps.

5. STF (State Transition Flow) is used to write scenarios and cases in order w.r.t. the functionalities in the SUT.

6. EG (Error Guessing) is used to guess the errors in the upcoming SUT (based on past experience).

2. Use case based test design:

In general, manual testers read the functional specifications in the SRS to write test scenarios and test cases for the responsible modules' functional testing. But testers may be given use cases instead of functional specs in the SRS, because use cases are more elaborate than functional specifications. In general, test engineers expect use cases while testing projects with complex requirements (or) while testing outsourced projects.

(Figure: BRS -> SRS (FRS + NFRS) -> design -> coding, UT & IT -> s/w builds; use cases derived from the SRS drive the test scenarios and test cases, along with SQL commands, for functional testing.)

Use case 3: (prepared by the SA in IEEE-830 format)


i. Use case id: uc_money_transfer

ii. Use case description: While doing a money transfer, the user specifies the amount & destination A/C number.

iii. Actors: While doing a money transfer, the user fills the below fields:

A. Bank name -> Local (by default), SBI, KVB, HDFC, ICICI

B. Destination A/C number (a regex sketch of these rules appears after this use case):

 7-digit number with the 1st and 2nd digits 0, for HDFC;

 9-digit number that doesn't start with 0 or 1 and doesn't end with 0, for Local;

 8-digit number with the 1st 4 digits in 1 to 5, for SBI;

 9-digit number that doesn't start with and doesn't end with 0 or 1, for ICICI;

 8-digit number with the 1st and 2nd digits between 1 and 9, for KVB.

C. Amount -> 1500 to 100000

D. Precondition -> The user has logged in for the money transfer purpose.

E. Events list:

Event                                     Expected o/p (or) output
Fill the fields in the money transfer     Successful message for valid data;
page and click the 'Transfer' button      unsuccessful message for invalid data.

F. AFD (Activity Flow Diagram):

(Figure: the user logs in, opens money transfer, and the entered data is checked against the DB; valid data gives a successful message and invalid data gives an unsuccessful message.)

g) Prototype (dummy/screenshot):

(Figure: the Money Transfer screen has Bank name (default Local), Account number and Amount fields, with 'Money transfer' and 'Logout' buttons.)

h) Post condition: After completion of the money transfer, the user clicks the 'Logout' button.

i) Alternative events: None.

j) Related use cases: uc_login, uc_logout, uc_ministatement, uc_cheque_deposit.
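Eg: A sketch of how the destination A/C number rules in this use case could be turned into executable checks; the regular expressions below are one possible translation of those rules and are for illustration only.

import re

account_rules = {
    "HDFC":  r"00[0-9]{5}",           # 7 digits, 1st and 2nd digits are 0
    "Local": r"[2-9][0-9]{7}[1-9]",   # 9 digits, not starting with 0/1, not ending with 0
    "SBI":   r"[1-5]{4}[0-9]{4}",     # 8 digits, first 4 digits in 1 to 5
    "ICICI": r"[2-9][0-9]{7}[2-9]",   # 9 digits, not starting/ending with 0 or 1
    "KVB":   r"[1-9]{2}[0-9]{6}",     # 8 digits, 1st and 2nd digits in 1 to 9
}

def is_valid_account(bank, number):
    return re.fullmatch(account_rules[bank], number) is not None

print(is_valid_account("HDFC", "0012345"))     # True: 7 digits starting with 00
print(is_valid_account("Local", "234567891"))  # True: 9 digits, valid start and end
print(is_valid_account("SBI", "55551234"))     # True: first 4 digits in 1 to 5
print(is_valid_account("SBI", "66661234"))     # False: first 4 digits not in 1 to 5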

3. Screens based test design:

Sometimes, manual testers struggle with a lack of documentation. In this situation, manual testers depend on different ways to understand the project and to prepare test scenarios and test cases.

(Figure: the tester understands the requirements through customer discussions, prototypes/screenshots from developers, and by operating the original SUT screens, and from these prepares test scenarios and test cases for functional testing.)

From the above diagram, manual testers try to understand the project requirements by talking with the customer site, by getting prototypes from the developers and by operating the original SUT screens; then the manual tester starts writing test scenarios and test cases for the responsible modules' functional testing.

(Figure: a Ministatement screen with 'From date (mm/dd/yy)' and 'To date (mm/dd/yy)' fields and 'Statement', 'Print' and 'Logout' buttons.)
4. Non-functional specs based test design:
After completing the writing of test scenarios and cases for functional testing, manual testers start writing scenarios and cases for non-functional testing like usability testing, compatibility testing, h/w configuration testing and performance testing.

Here, the manual tester prepares scenarios and cases for a specific testing topic to be applied on the complete SUT. To prepare test scenarios & cases for functional testing at module level, the manual tester followed 3 methods:

1. Functional specs based test design

2. Use cases based test design

3. Screens based test design

But to prepare test scenarios and cases for non-functional testing topics at s/w level, the manual tester can follow only one method:

Non-functional specs based test design.

(Figure: BRS -> SRS (FRS + NFRS) -> design -> coding, UT & IT -> s/w build; the NFRS drives the test scenarios and test cases for non-functional testing.)

Non-functional spec 1 (usability expectations):

From the customer expectations, all module screens/pages of the online banking s/w must be user friendly.

Q. Prepare test scenarios (or) cases for the usability testing to be conducted on the online banking project (OBP).

Test scenarios:

1. Validate the spelling of labels throughout the pages of OBP.

2. Validate the meaning of labels throughout the pages of OBP.

3. Validate the uniformity of label font style throughout the pages of OBP.

4. Validate the uniformity of label font size throughout the pages of OBP.

5. Validate the uniformity of label color contrast throughout the pages of OBP.

6. Validate the init-cap of labels throughout the pages of OBP.

7. Validate the uniformity of spacing between object and label throughout the pages of OBP.

8. Validate the uniformity of spacing between object and object throughout the pages of OBP.

9. Validate the alignment of controls/objects throughout the pages of OBP.

10. Validate date fields w.r.t. the developer-provided formatting symbols throughout the pages of OBP.

11. Validate the location of functionality-related objects throughout the screens of OBP.

12. Validate the borders provided to functionality-related objects throughout the screens of OBP.

13. Validate the icon symbols provided to functionality-related objects throughout the screens of OBP.

14. Validate the mapping between icon symbols and the tool tips that appear, throughout the screens of OBP.

15. Validate keyboard access on all objects throughout the screens of OBP.

16. Validate status bar / progress bar like controls throughout the screens of OBP, to indicate that the s/w is under process.

17. Validate password hiding during entry, throughout the screens of OBP.

18. Validate the meaning of error messages throughout the screens of OBP.

19. Validate the existence of 'OK' and 'Cancel' like buttons throughout the screens of OBP.

20. Validate the help messages provided for the screens of OBP.

Note:

1. The above 20 test scenarios (or) cases are applicable to any s/w's usability testing. Due to this reason, the above usability scenarios (or) cases are called common checkpoints.

2. For more usability test scenarios/cases, refer to 'S/w testing concepts and tools'.

2. Compatibility expectations:

From the customer-site people's expectations, the OBP website must be able to run on Windows XP, Windows Vista, Windows 7 and Windows 8 like client machines, and able to run on Windows 2003 Server, Windows 2008 Server and Linux Red Hat Server like server machines.

Either on a client (or) server computer, users can open the OBP site by using IE, Google Chrome, Mozilla Firefox, Safari, Opera, Netscape Navigator or HotJava.
Q. Prepare test scenarios and cases for compatibility testing on OBP.

Test scenario: Validate the login functionality on the below specified platforms.

Test cases:

Platform component      Version/Type           Yes/No
Client-side OS cases    Windows XP             Yes
                        Windows Vista          Yes
                        Windows 7              Yes
                        Windows 8              Yes
                        Other                  Yes/No
Server-side OS cases    Windows 2003 Server    Yes
                        Windows 2008 Server    Yes
                        Linux Red Hat          Yes
                        Other                  Yes/No
Browser                 IE                     Yes
                        Mozilla Firefox        Yes
                        Google Chrome          Yes
                        Netscape Navigator     Yes
                        HotJava                Yes
                        Safari                 Yes
                        Other                  Yes/No

3. H/W configuration expectations:

From the customer expectations, the OBP s/w must be able to run on various types of networks like bus topology, ring topology and hub topology.

While running the mini statement functionality, the OBP s/w connects to printers like inkjet, dot matrix and laser.

Q. Prepare test scenarios and cases for h/w configuration testing on OBP.

Test scenario: Validate the mini statement functionality with the below h/w configurations.

Test cases: H/W configuration matrix:

H/W configuration   Type/Version   Yes/No
Network             Bus            Yes
                    Ring           Yes
                    Hub            Yes
                    Others         Yes/No

4. Performance expectations:
From the customer expectations, the OBP website will be used by 1000 users at a time.

Q. Prepare test scenarios and cases for performance testing on OBP.

Test scenario: Validate the login functionality by applying the below load levels.

Test cases: performance matrix:

Load level                 Purpose
1000 users                 Finding OBP speed in processing (load testing)
>1000 users                Finding the peak load (stress testing)
>>1000 users (huge)        Finding the server crashing point (spike testing)
1000 users continuously    Finding memory leakage (endurance testing)

IV. Software Test Execution:

a. Formal meeting:
In general, software test execution starts with a formal meeting between developers and testers, in the presence of the PM, test lead, team lead, BA & SA. In this meeting the corresponding people discuss build release, defect reporting and build version control.

In general, developers place the SUT in the 'Soft base' folder on the server computer.

(Figure: the server holds the project/product configuration repository with a development base, soft base and test base; developers place the SUT in the soft base from the development environment, and testers take it into the test environment.)

In general, testers report defects to developers by using MS Outlook / Lotus Notes through the below chain: test engineer -> test lead -> PM and team lead (DTT) -> developer.

In general, developers release modified SUT builds with unique version numbers. These numbers allow testers to distinguish an old build from a new build.

(Figure: developers release build versions 1.0, 2.0, 3.0, ... to the testers, with a defect report (DR) after each; the final build goes to customers as s/w release version 1.0.)

b. Defining test execution levels:

After completion of the formal meeting with developers, the corresponding manual testers conduct a review meeting to decide the test execution levels.

SUT (1.0) -> smoke testing -> real testing -> (DR) -> DTT (accepted) -> bug fixing

SUT (2.0) -> smoke testing -> retesting -> sanity testing -> regression testing

SUT (3.0) -> further real testing -> (DR) -> DTT (accepted) -> developer bug fixing
:
:
SUT (n.0) -> smoke testing -> retesting -> sanity testing -> regression testing

-> final regression testing -> s/w test closure

Note:

1. Smoke testing is common for all build versions.

2. Test execution levels like retesting, sanity testing and regression testing come from the 2nd version to the last version.

3. Final regression testing comes on the last version only.

4. Real testing comes from the 1st version to the last-but-one version.

c. Establishing the test environment:

After completion of the formal meeting with developers and the test execution level identification, the testing team conducts a review with the h/w team (infrastructure team).

The h/w team provides the required h/w and s/w to the testers for SUT testing.

Test design (TS, TCD, TD) + Test environment (H/W, S/W) = Test bed (or) Test harness

d. Smoke testing:
In general, the s/w test execution levels on every s/w build start with smoke testing. In this level, the test lead and all the testers gather in one cabin and do the below tasks:

 Download/launch the SUT on one cabin system.

 Operate the SUT's main module by giving valid data only, to confirm whether the SUT is working or not, without any runtime errors.

If smoke testing fails on the SUT, then the test lead rejects that SUT and waits for a working SUT (or) stable SUT (or) testable SUT.

If smoke testing passes, then the test lead thinks about further testing levels.

(Figure: the testers download the SUT from the soft base folder of the configuration repository on the server into the test environment.)

Note: From the above diagram, smoke testing is a team-level job. Smoke testing is also called 'testability testing' (or) 'build verification testing'.

e. Real testing:
In general, the testers conduct real testing on their responsible modules of the SUT individually.

(Figure: each tester (T1, T2, T3) downloads the SUT from the soft base on the server into the test environment and tests the responsible modules.)

From the above diagram, every test engineer follows the below process to apply test cases on the responsible modules of the SUT and detect defects.

A defect means a mismatch between the test case's expected values and the actual values of the SUT.

During real testing, the corresponding test engineer follows the below process:

 Download/launch the stable SUT.

 Open the previously prepared test cases.

 Operate the SUT and compare the expected values in the test cases with the actual values in the SUT.

 If expected is not equal to actual, then the tester stops testing, prepares a defect report in IEEE-829 format by using MS Excel and forwards it to the DTT using MS Outlook.

 If expected is equal to actual, then go to the next case.

 If all cases passed, then go to the next scenario.

 If all scenarios passed, then go to the next module.

 If all modules passed, then go to the next testing topic.

 If all testing topics are finished, then go to test closure.

f. Defect reporting:
When any test case's expected value is not equal to the actual value of the SUT during real testing, the corresponding manual tester stops real testing and then prepares a defect report in IEEE-829 format, as shown below:

1. Defect id: A unique number/name.

2. Defect description: About the defect.

3. Build version id: The version number of the SUT build in whose testing this defect was detected.

4. Module/Feature: The name of the module in the SUT in whose testing this defect was detected.

5. Test scenario: The name of the failed scenario, in whose execution this defect was detected.

6. Severity: The seriousness of the defect w.r.t. the tester's job.

 High (critical/show stopper): not able to continue further testing without fixing the defect.

 Medium (major): able to continue further testing, but mandatory to fix the defect.

 Low (minor): able to continue further testing, and not mandatory to fix the defect.

7. Priority: The importance of fixing the defect w.r.t. the customer.

8. Reproducible: Yes (the defect appears every time) / No (the defect appears rarely).

 If Yes, attach the test case document.

 If No, attach the test case document and a screenshot.

9. Status: New (defect reported for the first time) / Reopen (defect reported again after incorrect fixing).

10. Test environment: The h/w & s/w used while getting the defect in the SUT.

11. Detected by: Name of the tester.

12. Detected on: Date and time.

13. Assigned to: DTT.

14. Suggested fix (optional): The tester's suggestion to fix the defect.

Note:

Testers use MS Excel to prepare the defect report following the above IEEE-829 format, and testers use MS Outlook to forward that defect report to the DTT.

g. Defect tracking process (accepted/rejected):

After receiving a defect report (DR) from a tester, the corresponding DTT members (PM, test lead, team lead) review that defect as shown below:

 The DR received by the DTT from the tester is walked through by the DTT.

 If it is not accepted, the DR is rejected due to 'lack of information' (or) because it is a duplicate of a previously reported DR.

 If it is accepted, the DTT categorizes the DR:

   - Test case related defect: the DTT assigns the defect back to the tester + test lead + SA.

   - Test data related defect: the DTT assigns the DR back to the tester + SA + test lead.

   - Test environment related defect: the DTT assigns the DR to the h/w team.

   - Coding related defect (bug): the DTT assigns the DR to the developers.

Note: 1. From the above defect tracking process, one defect is classified into any one of 4 types: test case related, test data related, test environment related, or s/w coding related. 2. In general, the DTT rejects a defect due to lack of information (or) because it duplicates another defect.

h. Test case related defect fixing:

The tester runs test cases on the SUT during real testing; a failed case leads to a defect report to the DTT. During tracking, the DTT accepts the DR as a test case related defect and assigns it to the tester + test lead + SA. The tester modifies the test case and re-runs it on the SUT (retesting): if it passes, further testing continues; if it fails, a defect report is raised again.

i. Test data related defect fixing:

The tester runs test data on the SUT during real testing; a failed case leads to a defect report to the DTT. During tracking, the DTT accepts the DR as a test data related defect and assigns it to the tester + SA + test lead. The tester re-runs the test case with the modified test data on the SUT (retesting): if it passes, further testing continues; if it fails, a defect report is raised again.

j. Test environment related defect fixing:

The tester runs test cases in the test environment on the SUT during real testing; a failed case leads to a defect report to the DTT. During tracking, the DTT accepts the DR as a test environment related defect and assigns it to the h/w team. The tester re-runs the test case with the test data in the environment modified by the h/w team (retesting): if it passes, further testing continues; if it fails, a defect report is raised again.
k. Coding related bug fixing:

The tester runs test cases on the SUT during real testing; a failed case leads to a defect report to the DTT. During tracking, the DTT accepts the DR as a coding related defect (bug) and assigns it to the developer + test lead + TA. The test lead conducts 'root-cause analysis' to identify the wrong coding areas of the SUT. The developers modify the wrong coding areas to correct them, and the TA changes the HLD & LLDs if needed. The developers then conduct unit testing on the modified coding areas and integration testing between the modified coding areas & related areas, build the SUT with a new version number & place it in the soft base on the server computer, and send a S/W Release Note (SRN) email to the test lead.

After receiving the SRN mail from the developers, the testers study that mail and identify the modifications in the SUT build. The test lead conducts smoke testing on that new SUT in the presence of the related testers, who are testing the modules related to the modifications.

After completion of smoke testing, the related testers re-execute the previously failed tests; this is called 'retesting'.

If retesting fails, then the defect is reported as reopened.

If retesting passes, then the testers go to sanity testing by executing the previously passed, most related tests, to identify side effects of the modifications. And then the testers go to regression testing by executing all related test cases w.r.t. the modifications.

Case study (Smoke test vs Sanity test):

Smoke test                                       Sanity test
1. To confirm whether the SUT build is           1. To confirm that the SUT build modifications
   working or not in the test environment.          are correct in the most related areas of that
                                                     SUT build.
2. Done by executing fixed test cases related    2. Done by executing test cases related to the
   to the main modules in every SUT build           modification-impacted modules in every
   released by the developers.                       modified SUT build released by the developers.

Case study (Retesting vs Regression testing):

Retesting                                        Regression testing
1. To confirm whether the SUT build is           1. To confirm that the SUT build modifications
   correctly modified or not after bug              are correct in all related areas of that SUT
   fixing by the developers.                         build.
2. Done by executing the previously failed       2. Done by executing test cases related to all
   test cases related to the modified               modification-impacted modules in every
   modules in every SUT build released by           modified SUT build released by the developers.
   the developers.

Note: From the above tables, sanity testing is a subset of regression testing.

Smoke testing is a common testing stage for any SUT build & is done by executing fixed test cases.

l) Bug life cycle:

A coding-related defect in s/w is called a 'bug'. After a new bug is reported to the developers by the testers, the corresponding developers fix that bug by changing the s/w coding. Then the testers confirm the bug fixing, before going to further real testing, by conducting smoke testing, retesting, sanity testing & regression testing.

The combination of smoke testing, retesting, sanity testing & regression testing is called 'confirmation testing'.

In the above story, every status change of a bug forms its 'life cycle':

New/Reopen -> Accepted (opened) / Rejected / Deferred -> Fixed -> Closed (if correctly fixed)

In the above bug life cycle (BLC), deferred means that a bug was accepted for fixing, but the fixing was postponed to a future release. From the above BLC, a bug's final status is either closed or deferred.

New -> Deferred

New -> Rejected -> Closed

New -> Rejected -> Reopened

New -> Accepted -> Fixed -> Closed

New -> Accepted -> Fixed -> Reopened -> ... -> Closed

m) Test cycle:
SUT (1.0) -> smoke testing -> real testing -> (DR) -> DTT (accepted) -> bug fixing

SUT (2.0) -> smoke testing -> retesting -> sanity testing -> regression testing

SUT (3.0) -> further real testing -> (DR) -> DTT (accepted) -> developer bug fixing
:

SUT (n.0) -> smoke testing -> retesting -> sanity testing -> regression testing

-> final regression testing -> s/w test closure

From the above diagram, the time gap between two build releases is called a 'test cycle'. In one cycle, developers can fix one or more bugs.

In general, testers test the SUT on weekdays & report defects; on weekends developers fix those bugs & release the modified SUT. So, one test cycle takes about one week.

Eg: 8 cycles in one project's testing.

V. S/W test closure:
After completion of all cycles of testing, the testing team gathers & conducts a test closure review meeting. In this meeting, the testing team concentrates on the below factors:

Coverage analysis:

. Modules/features/requirements coverage.

. Testing topics coverage (FT + NFT).

Bug density calculation (number of bugs found per module):

Eg:

Module   % of bugs
A        20%
B        20%
C        40%
D        20%

Analysis of deferred bugs: whether the deferred bugs are postponable (or) not.

After completion of the test closure review meeting, the corresponding testers & test lead start re-executing the test cases on the high bug density modules in the final SUT build.

This testing is called 'final regression (or) post-mortem testing (or) confidence testing (or) re-acceptance testing'.

If any test case fails on the final SUT, then the testing team contacts the developers immediately to fix that golden bug / lucky bug as early as possible.

If there is not enough time to fix the golden bug, then the testing team requests the customer-site people to accept a later patch.

VI) Acceptance:

After completion of the s/w test closure & final regression testing, the corresponding PM conducts acceptance testing, with the presence of all developers and testers, to collect feedback from customer-site people in the alpha manner & beta manner.

VII) Sign off:

After completion of acceptance testing, the PM gives roll-off to the developers & testers, including the team leads & test lead, but the PM selects a few developers & testers along with a few h/w people to deploy/release the software to the customer site.

Note:

While rolling off from the current project, the test lead prepares the final test summary report in IEEE-829 format and submits it to the PM.
