
Software Testing Tools


Manual Testing Concepts

Table of Contents
1. Software
2. Software Testing
2.1. Software Testing Definition
2.2. History of Testing
2.3. Significance of Testing
3. Software Quality
3.1. Software Quality Assurance
3.2. Software Quality Control
4. Software Development Life Cycle (SDLC)
4.1 Pre-SDLC
Sign-in:
Kick-off meeting:
PIN (Project Initiation Note)
4.2. Software Development Life Cycle (SDLC)
4.2.1. Initial-Phase/ Requirements phase:
4.2.2. Analysis Phase:
4.2.3. Design Phase:
4.2.4. Coding Phase:
4.2.5. Testing Phase:
4.2.6. Delivery & Maintenance phase
4.3. SDLC Models
4.3.1. Waterfall Model
4.3.2 Incremental Models
4.3.3. Prototype Model:
4.3.4. Spiral Model
4.3.5. V Model
4.3.6. Agile Model
5. Verification and Validation Testing Strategies
5.1 Verification Strategies
5.1.1 Reviews
5.2 Validation Strategies
6. Testing Methodologies:
6.1. Kinds of testing:
Conventional Testing
Unconventional Testing
6.2. Methods of testing
1. White box Testing
2. Black box Testing
3. Gray box Testing
6.3 Levels of Testing
1) Unit level testing
2) Module level testing
3) Integration level testing
i) Top Down Approach (TDA)
STUB
ii) Bottom Up Approach (BUA)
DRIVER
iii) Hybrid Approach
iv) Big Bang Approach
4) System level testing
5) User acceptance testing
6.4. TYPES OF TESTING
1. Sanitary Testing / Build Verification Testing / Build Acceptance Testing.
2. Regression Testing
3. Re – Testing:
4. Alpha - Testing:
Advantages:
5. Beta - Testing:
6. Static Testing:
7. Dynamic Testing:
8. Installation Testing:
9. Compatibility Testing:
10. Monkey Testing:
11. Exploratory Testing:
12. Usability Testing:
13. End – To – End Testing:
14. Port Testing:
15. Reliability Testing (or) Soak Testing:
16. Mutation Testing:
17. Security Testing:
18. Adhoc Testing:
19. Scalability Testing
20. Heuristic Testing
21. Accessibility Testing
22. Performance Testing
23. Load Testing
24. Stress testing
25. Volume Testing
26. Context-driven testing
27. Comparison Testing
28. Globalization testing
7. ENVIRONMENT
1) Standalone Environment (Or) One – Tier Architecture.
2) Client – Server Environment (Or) Two – Tier Architecture
3) Web Environment
4) Distributed Environment
8. Test Design Techniques
8.1.) White box Testing
a) Basis Path Testing
b) Flow Graph Notation
c) Cyclomatic Complexity
d) Graph Matrices
e) Control Structure Testing
f) Loop Testing
8.2.) Black box Testing
a. Graph Based Testing Methods:
b. Error Guessing:
c. Boundary Value Analysis:
d. Equivalence Partitioning:
8.3. Structural System Testing Techniques
8.4 Functional System Testing Techniques
9. Software Testing Life Cycle (STLC)
9.1. TEST PLANNING:
9.2. TEST DESIGN STAGE:
9.2.1. Test Scenario
9.2.2. Test Case
9.2.3. Requirement Traceability Matrix
9.3. TEST EXECUTION PHASE:
9.4. RESULT ANALYSIS:
9.5. BUG TRACKING AND REPORTING:
9.5.1 Difference between Bug, Defect and Error
9.5.2. Reporting Of bugs
9.5.3. DPD (Defect Profile Document)
9.5.4. Final summary report
9.6. TEST CLOSURE:
10. Defect metrics
11. When to stop testing
12. Maturity Levels
a) SEI
b) CMM
c) ISO
d) IEEE
e) ANSI
1. Software

Instructions or computer programs which, when executed, provide the desired function and performance.
Characteristics of software:
● Software is logical, unlike hardware, which is physical (contains chips,
circuit boards, power supplies etc.). Hence its characteristics are entirely
different.
● Software does not “wear out” as hardware components do, from dust,
abuse, temperature and other environmental factors.
● A software component should be built such that it can be reused in
many different programs.

2. Software Testing

2.1. Software Testing Definition

The British Standards Institution, in their standard BS 7925-1, defines testing as:
“The process of exercising software to verify that it satisfies specified
requirements and to detect faults; the measurement of software quality”
Software testing is a process of exercising and evaluating the system
component by manual or automation, to ensure that the system is
satisfying the stated specifications.
Software Testing can also be stated as
· the process of validating and verifying that a software
program/application/product
(i) meet the business and technical requirements that guided its design
and development;
(ii) works as expected; (iii) can be implemented with the same
characteristics.
· It is a process of identifying the defects.
· It is a process of executing program with the intent of finding an error
2.2. History of Testing

The separation of debugging from testing was initially introduced by
Glenford J. Myers in 1979. He illustrated the desire of the software
engineering community to separate fundamental development activities,
such as debugging, from that of verification. In 1988, Dr. Dave Gelperin and
Dr. William C. Hetzel classified the phases and goals of software testing
into the following stages:
· Until 1956 - Debugging oriented, where testing was often associated
to debugging: there was no clear difference between testing and
debugging.
· 1957-1978 - Demonstration oriented, where debugging and testing
were now distinguished - in this period the aim was to show that the
software satisfies the requirements.
· 1979-1982 - Destruction oriented period, where the goal was to find
errors.
· 1983-1987 - Evaluation oriented period, where the intention was to
evaluate the product and measure its quality throughout the software
lifecycle.
· 1988-onward- Prevention oriented period, where tests were to
demonstrate that software satisfies its specification, to detect faults and
to prevent faults.

2.3. Significance of Testing

· To deliver quality software to the client.


· Testing is required to check that the application satisfies the
requirements.
· Testing is required to build a Quality Product.
· Testing will improve the software quality.
· Testing will also reduce the maintenance cost.
· Testing will give confidence for the software development Company
that the software will work satisfactorily in Client environment.
· To maintain reliability in the product.
· To survive in the business.
· To satisfy the client requirements.

3. Software Quality

Conformance to explicitly stated functional and performance
requirements, explicitly documented development standards, and implicit
characteristics that are expected of all professionally developed software.
The degree to which a system, component or process meets
specified requirements and customer or user expectations.
3.1. Software Quality Assurance
Software QA involves the entire software development PROCESS -
monitoring and improving the process, making sure that any agreed-upon
standards and procedures are followed, and ensuring that problems are
found and dealt with. It is oriented to ’prevention’.
QA helps establish processes.
3.2. Software Quality Control
This is a department function which compares the product to the
standards and takes action when non-conformance is detected;
testing is an example.

3.3. Differences between QA & QC


· QA: A planned and systematic set of activities necessary to provide
adequate confidence that requirements are properly established and
products or services conform to specified requirements.
  QC: The process by which product quality is compared with applicable
standards, and the action taken when nonconformance is detected.
· QA: An activity that establishes and evaluates the processes to produce
the products.
  QC: An activity which verifies if the product meets pre-defined
standards.
· QA: Helps establish processes.
  QC: Implements the process.
· QA: Sets up measurement programs to evaluate processes.
  QC: Verifies if specific attribute(s) are in a specific product or service.
· QA: Identifies weaknesses in processes and improves them.
  QC: Identifies defects for the primary purpose of correcting defects.
· QA is the responsibility of the entire team.
  QC is the responsibility of the tester.
· QA: Prevents the introduction of issues or defects.
  QC: Detects, reports and corrects defects.
· QA evaluates whether quality control is working, for the primary
purpose of determining whether there is a weakness in the process.
  QC evaluates if the application is working, for the primary purpose of
determining if there is a flaw / defect in the functionalities.
· QA improves the process that is applied to all products produced by
that process.
  QC improves the development of a specific product or service.
· QA personnel should not perform quality control unless doing so to
validate that quality control is working.
  QC personnel may perform quality assurance tasks if and when
required.

4. Software Development Life Cycle (SDLC)

A set of guidelines followed to have an optimum development.


A framework that describes the activities performed at each stage of a
software development project.

4.1 Pre-SDLC

Sign-in:

It is the process in which a legal agreement is made between the
customer and the development company: the customer agrees to give
the project to the development company, the project is developed within
a specific budget, and the project is delivered to the customer by an
agreed deadline.

Kick-off meeting:

It is the first meeting conducted soon after the project comes into the
development company, in order to discuss and decide the following:
an overview of the project, the nature of the customer, and project team
selection (the project manager, the development team, the quality head
and the quality team).
The participants of this meeting are the HOO (Head of Operations), the
Technical Manager and the Software Quality Manager. Once the project
manager is selected, he will release a PIN (Project Initiation Note).

PIN (Project Initiation Note)

A PIN is an e-mail sent to the CEO of the development company
asking for formal permission to start the project development activities.
Once the PM gets the green signal from the CEO, the project development
activities are geared up (started).

4.2. Software Development Life Cycle (SDLC)

It contains following phases.


1. Initial phase / Requirement phase.
2. Analysis phase.
3. Design phase.
4. Coding phase.
5. Testing phase.
6. Delivery and maintenance phase.

4.2.1. Initial-Phase/ Requirements phase:

Task : Interacting with the customer and gathering the


requirements

Roles : Business Analyst and Engagement Manager (EM).

Process: First the BA takes an appointment with the customer,
collects the requirement template, meets the customer, gathers the
requirements and comes back to the company with the requirements
document.
Then the EM goes through the requirements document, tries to find
additional requirements, gets a prototype (a dummy, similar to the end
product) developed in order to pin down exact details in the case of
unclear requirements or confused customers, and also deals with any
excess cost of the project.

Proof: The BRD (the requirements document) is the proof of completion
of the first phase of the SDLC.

Alternate names of the Requirements Document:


(Various companies and environments use different terminologies, but the
logic is same)
FRS: Functional Requirements Specification.
CRS : Customer/Client Requirement Specification,
URS : User Requirement Specifications,
BRS : Business Requirement Specifications,
BDD : Business Design Document,
BD : Business Document.

4.2.2. Analysis Phase:

Tasks : Feasibility Study,


Tentative planning,
Technology selection

Roles : System Analyst (SA)


Project Manager (PM)
Technical manager (TM)

Process: A detailed study of the requirements, judging their
possibilities and scope, is known as a feasibility study. It is usually done
by the manager-level teams. The next step is tentative planning: a
temporary scheduling of staff to initiate the project and the selection of a
suitable technology to develop the project effectively (the customer’s
choice is given first preference, if it is feasible).
Finally, the hardware, software and human resources
required are listed out in a document to baseline the project.

Proof: The proof document of the analysis phase is the SRS (System
Requirements Specification).

The following is the SRS template of iSpace:


4.2.3. Design Phase:

Tasks: High level Designing (H.L.D)


Low level Designing (L.L.D)

Role: Chief Architect (handle HLD)


Technical lead (involved in LLD)

Process: The Chief Architect divides the whole project into
modules by drawing graphical layouts using the Unified Modeling
Language (UML). The Technical Lead further divides those modules into
sub-modules using UML. Both are responsible for designing the GUI
(the screen where the user interacts with the application, i.e. the
Graphical User Interface) and developing the pseudo code (a dummy
code, usually a set of English instructions, to help the developers in
coding the application).
Proof: TDD (Technical Design Document).

The following are the templates of HLD and LLD

4.2.4. Coding Phase:

Task: Developing the programs

Roles: Developers/programmers

Process: The developers take the support of the technical design
document and follow the coding standards while the actual source
code is developed. Some of the industry-standard coding practices
include indentation, color coding, commenting etc.
The Coding phase is the complete implementation of the Design
phase: the pseudo code is converted into source code. In this phase the
developers develop the actual code and release the application to the
testing team. The application is converted into “.exe” format in the case
of client-server applications, and a packet of code such as WAR files with
a URL (web address) is given for testing in the case of web applications.
Once the test engineers receive this, testing is performed.
The developers must follow the coding standards in
order to ensure that the program is clear and systematic, so that anybody
can enhance it under maintenance of the project in future. Some of the
coding standards are as follows.
1. Four character margins have to be left on the left side.
2. Comments must be placed for each and every specific block of code.
3. Colour coding must be maintained for various types of variables.
4. A single line space must be maintained between two blocks of code
as well as between a comment and a block etc.
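As a hedged illustration, here is a small hypothetical Python function written to follow the standards above: consistent indentation, a comment for every block of code, and a blank line between blocks. The function and its discount rule are invented for this example.

```python
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    # Validate the inputs before any calculation.
    if price < 0 or not 0 <= percent <= 100:
        raise ValueError("price must be >= 0 and percent in 0..100")

    # Compute the discount amount as a fraction of the price.
    discount = price * percent / 100

    # Return the final price, rounded to two decimal places.
    return round(price - discount, 2)
```

A colleague maintaining this code later can follow each block from its comment alone, which is the point of such standards.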

Proof: The proof document of the coding phase is Source Code


Document (SCD).

4.2.5. Testing Phase:

Task : Testing

Roles : Test engineers, Quality Assurance team.

Process:

· First, the Requirements document will be received by the testing


department
· The test engineers will review the requirements in order to
understand them.
· While reviewing, if any doubts arise, the testers will list out
all the unclear requirements in a document named the Requirements
Clarification Note (RCN).
· Then they send the RCN to the author of the requirements document
(i.e., the Business Analyst) in order to get the clarifications.
· Once the clarifications are done and sent to the testing team, they will
take the test case template and write the test cases. (Test cases like
example1 above).
· Once the first build is released, they will execute the test cases.
· While executing the test cases, if at all they find any defects, they will
report it in the Defect Profile Document or DPD.
· Since the DPD is in the common repository, the developers will be
notified of the status of the defect.
· Once the defects are sorted out, the development team releases the
next build for testing and also updates the status of the defects in the DPD.
· The testers will then check for the previous defects, related defects and
new defects, and update the DPD.
Proof: The last two steps are carried out till the product is defect free, so
quality assured product is the proof of the testing phase (and that is why
it is a very important stage of SDLC in modern times).
4.2.6. Delivery & Maintenance phase

Tasks : Delivery
: Post delivery maintenance

Roles : Deployment engineers


Process:

Delivery: The deployment engineers will go to the customer


environment and install the application in the customer's environment &
submit the application along with the appropriate release notes to the
customer.

Maintenance: Once the application is delivered, the customer
starts using it. If any problem occurs while it is in use, that becomes a
new task; based on the severity of the issue, the corresponding roles and
process are formulated. Some customers may expect continuous
maintenance; in that case a team of software engineers takes care of
the application regularly.
In this process usually PM, SQM, DM (Deployment
Manager), Development team and testing team are involved. During this
phase the following documents are produced.
i. Certification document
ii. User guide and help stuff
iii. Deployment document (used for installation)
iv. SDN document (Software Delivery Note): used to let the customer
know special information about the product.

4.3. SDLC Models

Waterfall Model
Incremental model
Prototype Model
Spiral Model
‘V’ Model
Agile Model

4.3.1. Waterfall Model

In a waterfall model, each phase must be completed in its entirety
before the next phase can begin. At the end of each phase, a review
takes place to determine whether the project is on the right path and
whether to continue or discard the project.
Advantages & Disadvantages:
An easy and simple life cycle, useful when the client has fixed
requirements; but for a client who is not sure of the requirements, or for
confused customers, this model does not work.
4.3.2 Incremental Models

The incremental model is an intuitive approach to the waterfall


model. Multiple development cycles take place here, making the life cycle
a “multi-waterfall” cycle. Cycles are divided up into smaller, more easily
managed iterations. Each iteration passes through the requirements,
design, implementation and testing phases. A working version of software
is produced during the first iteration, so you have working software early
on during the software life cycle. Subsequent iterations build on the
initial software produced during the first iteration.
Advantages
a) Generates working software quickly and early during the software life
cycle.
b) More flexible – less costly to change scope and requirements.
c) Easier to test and debug during a smaller iteration.
d) Easier to manage risk because risky pieces are identified and handled
during its iteration.
e) Each iteration is an easily managed milestone.
Disadvantages
a) Each phase of an iteration is rigid and does not overlap with the others.
b) Problems may arise pertaining to system architecture because not all
requirements are gathered up front for the entire software life cycle.

4.3.3. Prototype Model:

For a naïve client who has no proper idea of the outcome he wants, the
developers initially design a prototype model and approach the customer
for evaluation; once the client approves, the actual development is
made.
Advantages
The main advantage of prototype-modeled software is that it is created
using lots of user feedback. For every prototype created, users can give
their honest opinion about the software. If something is unfavorable, it
can be changed. Gradually the program is created with the customer in
mind.
Disadvantages
There is also a great temptation for most developers to create a prototype
and stick to it even though it has flaws. Since prototypes are not yet
complete software programs, there is always the possibility of a design
flaw. When flawed software is implemented, it can mean a loss of
important resources.
4.3.4. Spiral Model

The spiral is a risk-reduction oriented model that breaks a software


project up into mini-projects, each addressing one or more major risks.
After major risks have been addressed, the spiral model terminates as a
waterfall model.

Advantages
a. Dynamic requirement changes are accommodated.
Disadvantages
a. The time taken to deliver the product is more.
b. The spiral model is only intended for large, expensive and complicated
projects.
c. Requires considerable expertise in risk evaluation and reduction.
d. Complex, and relatively difficult to follow strictly.

4.3.5. V Model

The V model of software development involves testing in parallel in all the
phases. During the initial phase, when the requirements are gathered by
the Business Analyst and the development team, in the V model the
testers are also involved: they analyze the requirements and review them.
While the development team is involved in design, the testers write
test cases and prepare the RTM (Requirements Traceability Matrix). For the
high-level design (which consists of an overview of the entire process)
integration test cases are written. For the low-level design (which consists
of a detailed description of all modules) unit test cases are written.
While in the coding phase, testers need to execute the test cases.
In test execution:
i) First, unit testing is performed (white box testing). Generally
developers perform unit testing, which needs technical knowledge.
ii) Then system testing (black box testing) is done, which includes field
validations, GUI, UI, calculations, database, URLs etc.
iii) Then integration testing,
iv) and finally reporting.
Advantages
a) Emphasizes planning for verification and validation of the product in
the early stages of product development.
b) Each deliverable must be testable.
c) Easy to use; errors can be corrected in the phase in which they are found.
Disadvantages
a) Does not easily handle iterations.
b) Does not easily handle dynamic changes in the requirements.
c) It needs lots of money and resources.
4.3.6. Agile Model

The Agile methodology is more people-oriented. It helps us to increase
productivity and reduce risks. There are two popular agile methods:
Extreme Programming (XP) and Scrum.
People believe that there is less documentation in Agile, but
Agile also includes documentation, and it can be used for either small or
large projects. In Agile development, testing is integrated throughout
the life cycle. However, the testers will not have a complete business
requirement, so they have to get the details from the client or through
the developers. The testers will do more Quality Assurance work than
testing.
Agile Methodology- Characteristics
Ø Frequent Delivery
Ø More Iterations
Ø Test frequently
Ø Less defects

The required functionality may not be covered in one iteration of the
release, but it will be covered in multiple iterations. The idea is to have a
defect-free release available at the end of each iteration.

Advantages
a) The team does not have to invest time and effort only to find that, by
the time they deliver the product, the requirement of the customer has
changed.
b) The documentation is crisp and to the point, to save time.

Disadvantages
a) There is a lack of emphasis on necessary designing and documentation.
b) The project can easily get taken off track if the customer representative
is not clear about what final outcome they want.

5. Verification and Validation Testing Strategies

5.1 Verification Strategies

The verification strategies, the persons / teams involved in the testing, and
the deliverable of that phase of testing are briefed below:
Requirements Reviews
Performed by: Users, Developers, Test Engineers.
Explanation: Requirement reviews help in baselining desired
requirements to build a system.
Deliverable: Reviewed and approved statement of requirements.

Design Reviews
Performed by: Designers, Test Engineers.
Explanation: Design reviews help in validating whether the design meets
the requirements and builds an effective system.
Deliverable: System Design Document, Hardware Design Document.

Code Walkthroughs
Performed by: Developers, Subject Specialists, Test Engineers.
Explanation: Code walkthroughs help in analyzing the coding techniques
and whether the code meets the coding standards.
Deliverable: Software ready for initial testing by the developer.

Code Inspections
Performed by: Developers, Subject Specialists, Test Engineers.
Explanation: Formal analysis of the program source code to find defects,
as defined by the system design specification.
Deliverable: Software ready for testing by the testing team.

5.1.1 Reviews

The focus of a review is on a work product (e.g. requirements document,
code etc.). After the work product is developed, the Project Leader calls
for a review. The work product is distributed to the personnel involved
in the review. The main audience for the review should be the Project
Manager, the Project Leader and the producer of the work product.

There are three general classes of reviews:


A) Informal, or Peer Review
B) Semiformal, or Walkthrough
C) Formal, or Inspection

i) Peer Review is generally a one-to-one meeting between the author of a
work product and a peer, initiated as a request for input regarding a
particular artifact or problem. There is no agenda, and results are not
formally reported. These reviews occur on an as-needed basis throughout
each phase of a project.

ii) Walkthroughs: The author of the material being reviewed facilitates
the walkthrough. The participants are led through the material in one of
two formats: the presentation is made without interruptions and
comments are made at the end, or comments are made throughout. In
either case, the issues raised are captured and published in a report
distributed to the participants. Possible solutions for uncovered defects
are not discussed during the review.

iii) Inspections: A knowledgeable individual called a moderator, who is not


a member of the team or the author of the product under review,
facilitates inspections. A recorder who records the defects found and
actions assigned assists the moderator. The meeting is planned in
advance and material is distributed to all the participants and the
participants are expected to attend the meeting well prepared. The issues
raised during the meeting are documented and circulated among the
members present and the management.

5.2 Validation Strategies

The validation strategies, the persons / teams involved in the testing, and
the deliverable of that phase of testing are briefed below:

Unit Testing
Performed by: Developers / Test Engineers.
Explanation: Testing of a single program, module, or unit of code.
Deliverable: Software unit ready for testing with other system
components.

Integration Testing
Performed by: Test Engineers.
Explanation: Testing of integrated programs, modules, or units of code.
Deliverable: Portions of the system ready for testing with other portions
of the system.

System Testing
Performed by: Test Engineers.
Explanation: Testing of the entire computer system. This kind of testing
usually includes functional and structural testing.
Deliverable: Tested computer system, based on what was specified to be
developed.

Production Environment Testing
Performed by: Developers, Test Engineers.
Explanation: Testing of the whole computer system before rolling out to
UAT.
Deliverable: Stable application.

User Acceptance Testing
Performed by: Users.
Explanation: Testing of the computer system to make sure it will work
for the users regardless of what the system requirements indicate.
Deliverable: Tested and accepted system based on the user needs.

Installation Testing
Performed by: Test Engineers.
Explanation: Testing of the computer system during installation at the
user's place.
Deliverable: Successfully installed application.

Beta Testing
Performed by: Users.
Explanation: Testing of the application after installation at the client's
place.
Deliverable: Successfully installed and running application.

6. Testing Methodologies:

6.1. Kinds of testing:

Depending on which part of the SDLC is tested, there are basically two
kinds of testing:
1. Conventional testing
2. Unconventional testing

Conventional Testing

It is a sort of testing in which the test engineer will test the application in
the testing phase of SDLC.

Unconventional Testing

It is a sort of testing in which quality assurance people will check each


and every outcome document right from the initial phase of the SDLC.

6.2. Methods of testing

There are three methods:

Ø White Box Testing
Ø Black Box Testing
Ø Gray Box Testing

1. White box Testing

(Or) Glass box Testing (Or) Clear box Testing


It is a method of testing in which one performs testing on the
structural part of an application. Usually developers or white box testers
perform it.

2. Black box Testing

It is a method of testing in which one will perform testing only on the


functional part of an application without having any structural knowledge.
Usually test engineers perform it.

3. Gray box Testing

It is a method of testing in which one will perform testing on both the


functional part as well as the structural part of an application.
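The difference between the black box and white box views above can be sketched with a small hypothetical Python function; the function, its inputs and its pass mark of 60 are invented for this example.

```python
# Hypothetical function under test.
def grade(score):
    if score >= 60:
        return "pass"
    return "fail"

# Black box view: exercise inputs and check outputs only,
# with no knowledge of the code's structure.
black_box_ok = grade(75) == "pass" and grade(40) == "fail"

# White box view: pick inputs from the code's structure so that
# both branches of the `if` execute, including the boundary
# where the condition flips.
white_box_ok = grade(60) == "pass" and grade(59) == "fail"
```

A gray box tester would combine both views: choosing structural boundary values like 59 and 60 while still judging the function only by its outputs.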

6.3 Levels of Testing

There are five levels of testing in a software environment. They are as
follows:

1) Unit level testing

If one performs testing on a unit, then that level of testing is known as
unit level testing. It is white box testing; usually developers perform it.
Unit: the smallest testable part of an application.
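A minimal sketch of unit level testing using Python's unittest module; the add function and its expected results are hypothetical examples, not part of any real project.

```python
import unittest

# Hypothetical unit: the smallest testable part of an application.
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    # Each test method checks the unit against one expectation.
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)
```

Running `python -m unittest` in the file's directory would execute both test methods and report the results.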

2) Module level testing

If one performs testing on a module, that is known as module level
testing. It is black box testing; usually test engineers perform it.

3) Integration level testing

Once the modules are developed, the developers will develop some
interfaces and integrate the modules with the help of those interfaces;
while integrating, they check whether the interfaces are working fine or
not. It is white box testing, and usually developers or white box testers
perform it.
The developers will integrate the modules in any one of the following
approaches.

i) Top Down Approach (TDA)

In this approach the parent modules are developed first and then
integrated with child modules.
STUB

While integrating the modules in the top down approach, if any
mandatory module is missing, then that module is replaced with a
temporary program known as a STUB.
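A sketch of a stub in top down integration, using a hypothetical billing example: the parent module (total_price) is ready, but the child module that fetches tax rates is missing, so a stub stands in for it. All names and the fixed 10% rate are invented for illustration.

```python
def tax_rate_stub(region):
    # STUB: temporary stand-in for the missing child module;
    # returns a fixed dummy rate regardless of region.
    return 0.10

def total_price(price, region, get_tax_rate=tax_rate_stub):
    # Parent module under integration test; it calls down into
    # the (stubbed) child module through the interface.
    return round(price * (1 + get_tax_rate(region)), 2)
```

When the real tax-rate module is developed, it replaces the stub without changing the parent module's interface.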

ii) Bottom Up Approach (BUA)

In this approach the child modules are developed first and then
integrated into the corresponding parent modules.

DRIVER

While integrating the modules in the bottom up approach, if any
mandatory module is missing, then that module is replaced with a
temporary program known as a DRIVER.
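A sketch of a driver in bottom up integration, mirroring the stub example with a hypothetical discount module: the child (net_price) is ready, but its parent is not, so a driver temporarily invokes the child with test data. All names and data are invented for illustration.

```python
def net_price(price, discount_percent):
    # Child module: already developed and under integration test.
    return round(price * (1 - discount_percent / 100), 2)

def net_price_driver():
    # DRIVER: temporary stand-in for the missing parent module;
    # it feeds test inputs into the child and collects the results.
    cases = [(100, 10), (80, 25)]
    return [net_price(p, d) for p, d in cases]
```

Once the real parent module exists, it calls net_price directly and the driver is discarded.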

iii) Hybrid Approach

This approach is a mixed approach of both Top down and Bottom up


approaches.

iv) Big Bang Approach

Once all the modules are ready, integrating them all at once is known
as the big bang approach.

4) System level testing

Once the application is deployed into the environment, if one performs
testing on the whole system, it is known as system level testing. It is
black box testing and is usually done by the test engineers.
At this level of testing many types of testing are done.
Some of these are:
System Integration Testing
Load Testing
Performance Testing
Stress Testing etc.

5) User acceptance testing

The same system testing done in the presence of the user is known
as user acceptance testing. It is black box testing, usually done by the
test engineers.

6.4. TYPES OF TESTING

1. Build Verification Testing.


2. Regression Testing.
3. Re – Testing.
4. Alpha - Testing.
5. Beta - Testing.
6. Static Testing.
7. Dynamic Testing.
8. Installation Testing.
9. Compatibility Testing.
10. Monkey Testing
11. Exploratory Testing.
12. Usability Testing.
13. End – To – End Testing.
14. Port – Testing.
15. Reliability Testing
16. Mutation Testing.
17. Security Testing.
18. Adhoc Testing.
19. Scalability Testing.
20. Heuristic Testing.
21. Accessibility Testing.
22. Performance Testing
23. Load testing
24. Stress Testing
25. Volume Testing
26. Context Driven Testing
27. Comparison Testing
28. Globalization Testing

1. Sanitary Testing / Build Verification Testing / Build Acceptance
Testing.

It is a type of testing in which one conducts overall testing on the
released build in order to check whether it is proper for further detailed
testing or not.
Some companies even call it Sanitary Testing, and also Smoke Testing.
But some companies say that, just before the release of the build, the
developers conduct the overall testing in order to check whether the
build is proper for detailed testing or not (that is known as Smoke
Testing), and once the build is released, the testers once again conduct
the overall testing in order to check whether the build is proper for
further detailed testing or not (that is known as Sanitary Testing).

2. Regression Testing

It is a type of testing in which one performs testing on already tested
functionality again and again. This is usually done in the following
scenarios (situations).
Scenario 1:
Whenever defects raised by the test engineer are rectified by the
developer and the next build is released to the testing department, the
test engineer will test the defect functionality and its related
functionalities once again.
Scenario 2:
Whenever some new changes are requested by the customer, those new
features are incorporated by the developers and the next build is released
to the testing department, then the test engineers will test the
already-tested functionalities related to the new features once again.
That is also known as regression testing.
Note:
Testing the new features for the first time is new testing, not
regression testing.

3. Re – Testing:

It is a type of testing in which one performs testing on the same
function again and again with multiple sets of data, in order to come to a
conclusion whether the functionality is working fine or not.

4. Alpha - Testing:

It is a type of testing in which one (i.e., our test engineers) performs
user acceptance testing in our company in the presence of the customer.

Advantages:

If any defects are found, there is a chance of rectifying
them immediately.

5. Beta - Testing:
It is a type of testing in which either third-party testers or end users
perform user acceptance testing at the client's place before actual
implementation.

6. Static Testing:

It is a type of testing in which one performs testing on an application
or its related factors without performing any actions.
Ex: GUI testing, document testing, code reviews, etc.

7. Dynamic Testing:

It is a type of testing in which one performs testing on the application
by performing some actions.
Ex: Functional Testing.

8. Installation Testing:

It is a type of testing in which one installs the application into the
environment by following the guidelines given in the deployment
document. If the installation is successful, one concludes that the
guidelines are correct; otherwise, the guidelines are not correct.

9. Compatibility Testing:

It is a type of testing in which one may have to install the application
into a number of environments prepared with different combinations of
environmental components, in order to check whether the application
works in those environments or not. This is usually done for products.

10. Monkey Testing:

It is a type of testing in which one intentionally performs some abnormal
actions on the application in order to check its stability.

11. Exploratory Testing:

It is a type of testing in which, usually, a domain expert performs
testing on the application by exploring its functionality, without prior
knowledge of the requirements.

12. Usability Testing:


It is a type of testing in which one will concentrate on the user
friendliness of the application.

13. End – To – End Testing:

It is a type of testing in which one performs testing on a complete
transaction from one end to another end.

14. Port Testing:

It is a type of testing in which one checks whether the application works
properly after deploying it into the original client's environment.

15. Reliability Testing (or) Soak Testing:

It is a type of testing in which one performs testing on the application
continuously for a long period of time in order to check its stability.

16. Mutation Testing:

It is a type of testing in which one performs testing by making some
changes.
For example, developers make many small changes to the program and
check whether the tests detect them; this is known as mutation testing.

17. Security Testing:

It is a type of testing in which one usually concentrates on
the following areas:
i) Authentication Testing.
ii) Direct URL Testing.
iii) Firewall Leakage Testing.
i) Authentication Testing:
It is a type of testing in which a test engineer enters different
combinations of user names and passwords in order to check whether
only the authorized persons can access the application.
ii) Direct URL Testing:
It is a type of testing in which a test engineer enters the direct
URLs of secured pages and checks whether they can be accessed without
authorization.
iii) Firewall Leakage Testing:
It is a type of testing in which one logs in as one level of user and tries
to access pages authorized only for other levels, in order to check
whether the firewall is working properly or not.
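The authentication check described above can be sketched as a small data-driven test. Everything here is a hypothetical stand-in, not a real login API: `authenticate` and the credential table merely illustrate driving a login routine with valid and invalid combinations.

```python
# Hypothetical sketch of authentication testing: drive a login function
# with valid and invalid credential combinations and check each outcome.
# `authenticate` and VALID_USERS stand in for the application under test.

VALID_USERS = {"alice": "s3cret"}

def authenticate(username, password):
    # Placeholder for the application's real login logic.
    return VALID_USERS.get(username) == password

# Each case: (username, password, expected_access)
cases = [
    ("alice", "s3cret", True),     # valid user, valid password
    ("alice", "wrong", False),     # valid user, invalid password
    ("mallory", "s3cret", False),  # unknown user
    ("", "", False),               # blank credentials
]

results = [(u, authenticate(u, p) == expected) for u, p, expected in cases]
assert all(ok for _, ok in results), results
```

Only authorized combinations should be granted access; every other combination in the table must be refused.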

18. Adhoc Testing:


It is a type of testing in which one performs testing on the application
in his own style, after understanding the requirements clearly.

19 .Scalability Testing

It is a type of testing in which one performs testing on the application
to check whether it can be enhanced, expanded and measured without
design changes or environmental alterations.

20. Heuristic Testing

It is a type of testing in which the test engineer performs testing on
the application based on past experience, to ensure thorough testing
and complete coverage.

21. Accessibility Testing

It is a type of testing in which the test engineer checks whether the
application has an accessibility factor. In other words, he checks whether
the application is able to serve users with disabilities in addition to
regular users.

22. Performance Testing

Performance testing is testing performed, from one perspective, to
determine how fast some aspect of a system performs under a particular
workload. It can also serve to validate and verify other quality
attributes of the system, such as scalability, reliability and
resource usage.

23. Load Testing

Testing an application under heavy loads, such as testing a web site
under a range of loads to determine at what point the system's response
time degrades or fails.

24. Stress testing

Testing the stability and response time of an application by applying
a load greater than the designed load.

25. Volume Testing

Testing the stability and response time of an application by passing
huge volumes of data through the application.

26. Context-driven testing


Testing driven by an understanding of the environment, culture,
and intended use of software. For example, the testing approach for life-
critical medical equipment software would be completely different than
that for a low-cost computer game.

27. Comparison testing

Comparing software weaknesses and strengths to competing
products.

28. Globalization testing

Developing the application in multiple languages is
known as globalization. Testing this kind of application is called
globalization testing. We need to test the application for different
languages, for example English, French, Chinese, etc.
It is of two types:
1. Localization testing.
2. Internationalization testing.
Internationalization testing
It is the testing done to check that the application works across
different international languages.
Ex: Spanish, German, etc.

Localization testing
It is the testing done to check that the application works correctly
in a particular local language and its regional settings.

7. ENVIRONMENT

Environment is a combination of 3 layers:
-Presentation Layer
-Business Layer
-Database Layer
Types of Environment
There are 4 types of environments.
i) Stand alone Environment / One – tier Architecture.
ii) Client – Server Environment / Two – tier Architecture.
iii) Web Environment / Three – tier Architecture.
iv) Distributed Environment / N – tier Architecture.

1) Stand-alone Environment (Or) One-Tier Architecture

This environment contains all the three layers, that is, the Presentation
layer, Business layer and Database layer, in a single tier.

2) Client-Server Environment (Or) Two-Tier Architecture

In this environment there are two tiers: one tier is for the client and the
other tier is for the database server. The presentation layer and business
layer are present in each and every client, and the database is present in
the database server.

3) Web Environment

In this Environment three tiers will be there client resides in one tier,
application server resides in middle tier and database server resides in
the last tier. Every client will have the presentation layer, application
server will have the business layer and database server will have the
database layer.

4) Distributed Environment

It is the same as the web environment, but the business logic is
distributed among multiple application servers in order to distribute the
load.
Web Server: Software that provides web services to the client.
Application Server: A server that holds the business logic.
Ex: Tomcat, WebLogic, WebSphere, etc.

8. Test Design Techniques

8.1.) White box Testing

White box testing strategy deals with the internal logic and structure of
the code. White box testing is also called glass box, structural, open box
or clear box testing. In order to implement white box testing, the tester
has to deal with the code and hence needs knowledge of coding and
logic, i.e., the internal working of the code.

a) Basis Path Testing

Each independent path in the code is taken for testing.

b) Flow Graph Notation

The flow graph depicts logical control flow using a diagrammatic notation.
Each structured construct has a corresponding flow graph symbol.

c) Cyclomatic Complexity

Cyclomatic complexity is a software metric that provides a quantitative
measure of the logical complexity of a program. In the basis path testing
method, the value computed for cyclomatic complexity defines the
number of independent paths in the basis set of a program, and provides
an upper bound for the number of tests that must be conducted
to ensure that all statements have been executed at least once.
An independent path is any path through the program that introduces at
least one new set of processing statements or a new condition.
Computing Cyclomatic Complexity
Cyclomatic complexity has a foundation in graph theory and provides us
with an extremely useful software metric. Complexity is computed in one
of three ways:
1. The number of regions of the flow graph corresponds to the cyclomatic
complexity.
2. Cyclomatic complexity, V(G), for a flow graph G is defined as
V(G) = E - N + 2
where E is the number of flow graph edges and N is the number of flow
graph nodes.
3. Cyclomatic complexity, V(G), for a flow graph G is also defined as
V(G) = P + 1
where P is the number of predicate nodes contained in the flow graph G.
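Both formulas can be checked against a small example flow graph. The graph below is illustrative only (an if/else inside a loop), not drawn from any particular program:

```python
# A minimal sketch computing cyclomatic complexity two ways for a small
# illustrative flow graph, represented as an edge list of node pairs.

edges = [
    (1, 2),  # entry -> loop test
    (2, 3),  # loop test -> if condition
    (3, 4),  # if: true branch
    (3, 5),  # if: false branch
    (4, 2),  # back to loop test
    (5, 2),  # back to loop test
    (2, 6),  # loop exit
]
nodes = {n for edge in edges for n in edge}
predicates = 2  # decision nodes: the loop test (2) and the if (3)

v_by_edges = len(edges) - len(nodes) + 2  # V(G) = E - N + 2
v_by_predicates = predicates + 1          # V(G) = P + 1

print(v_by_edges, v_by_predicates)  # both formulas agree: 3 3
```

With E = 7 edges and N = 6 nodes, V(G) = 7 - 6 + 2 = 3, matching P + 1 = 3; the basis set therefore contains three independent paths.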

d) Graph Matrices

The procedure for deriving the flow graph and even determining a set of
basis paths is amenable to mechanization. To develop a software tool that
assists in basis path testing, a data structure, called a graph matrix can
be quite useful.
A Graph Matrix is a square matrix whose size is equal to the number of
nodes on the flow graph. Each row and column corresponds to an
identified node, and matrix entries correspond to connections between
nodes.
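A minimal sketch of such a graph matrix, built from an illustrative edge list (the node numbers are assumptions for the example):

```python
# A sketch of a graph matrix: a square matrix whose rows and columns
# correspond to flow-graph nodes and whose entries mark connections.

edges = [(1, 2), (2, 3), (3, 4), (3, 5), (4, 2), (5, 2), (2, 6)]
nodes = sorted({n for edge in edges for n in edge})
index = {n: i for i, n in enumerate(nodes)}

matrix = [[0] * len(nodes) for _ in nodes]
for src, dst in edges:
    matrix[index[src]][index[dst]] = 1

# Row sums give each node's out-degree; nodes with out-degree > 1 are
# the predicate (decision) nodes used in the V(G) = P + 1 formula.
predicates = sum(1 for row in matrix if sum(row) > 1)
print(predicates + 1)  # cyclomatic complexity: 3
```

This shows why the graph matrix is amenable to mechanization: deriving predicate nodes, and hence V(G), reduces to simple row arithmetic.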

e) Control Structure Testing

Described below are some of the variations of Control Structure Testing.

Condition Testing

Condition testing is a test case design method that exercises the logical
conditions contained in a program module.

Data Flow Testing

The data flow testing method selects test paths of a program according to
the locations of definitions and uses of variables in the program.

f) Loop Testing

Loop Testing is a white box testing technique that focuses exclusively on
the validity of loop constructs. Four classes of loops can be defined:
simple loops, concatenated loops, nested loops, and unstructured loops.
Simple Loops

The following sets of tests can be applied to simple loops, where ‘n’ is the
maximum number of allowable passes through the loop:
1. Skip the loop entirely.
2. Only one pass through the loop.
3. Two passes through the loop.
4. ‘m’ passes through the loop, where m < n.
5. n-1, n, n+1 passes through the loop.
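The five rules above can be sketched as a small generator of pass counts; the values of n and m in the usage line are illustrative:

```python
# A sketch generating the pass counts prescribed for simple loop
# testing, given the maximum allowable passes n and a typical value m.

def simple_loop_pass_counts(n, m):
    assert 2 < m < n, "m must be a typical value between 2 and n"
    counts = [0, 1, 2, m, n - 1, n, n + 1]  # the five rules above
    return sorted(set(counts))

print(simple_loop_pass_counts(10, 5))  # [0, 1, 2, 5, 9, 10, 11]
```

Note that n+1 passes is deliberately out of range: it probes whether the loop bound is actually enforced.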

Nested Loops

If we extend the test approach for simple loops to nested loops, the
number of possible tests would grow geometrically as the level of nesting
increases.
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the
outer loops at their minimum iteration parameter values. Add other tests
for out-of-range or excluded values.
3. Work outward, conducting tests for the next loop, but keeping all other
outer loops at minimum values and other nested loops to “typical” values.
4. Continue until all loops have been tested.

Concatenated Loops

Concatenated loops can be tested using the approach defined for simple
loops, if each of the loops is independent of the other. However, if two
loops are concatenated and the loop counter for loop 1 is used as the
initial value for loop 2, then the loops are not independent.

Unstructured Loops

Whenever possible, this class of loops should be redesigned to reflect the
use of the structured programming constructs.

8.2.) Black box Testing

Black box testing is not a single type of testing; rather, it is a testing
strategy. It does not need any knowledge of internal design or code. The
types of testing under this strategy are totally focused on testing the
requirements and functionality of the software application under test. It
is also called Opaque Testing, Functional/Behavioural Testing and Closed
Box Testing.
Techniques of Black box Testing:
a. Graph Based Testing Methods:
Every application is built up of some objects. All such objects are
identified and a graph is prepared. From this object graph, each object
relationship is identified and test cases are written accordingly to
discover the errors.
b. Error Guessing:
This is purely based on the previous experience and judgment of the
tester. Error guessing is the art of guessing where errors may be hidden.
There are no specific tools for this technique; the tester writes test cases
that cover the error-prone paths of the application.
c. Boundary Value Analysis:
Boundary Value Analysis (BVA) is a functional testing technique in which
the extreme boundary values are chosen. Boundary values include
maximum, minimum, just inside/outside boundaries, typical values, and
error values.
Look at output boundaries for test cases too.
Test min, min-1, max, max+1 and typical values.
BVA techniques:
1. Number of variables
For n variables, BVA yields 4n + 1 test cases.
2. Kinds of ranges
Generalizing ranges depends on the nature or type of the variables.
Advantages of Boundary Value Analysis:
1. Robustness testing: Boundary Value Analysis plus values that go
beyond the limits.
2. Min-1, Min, Min+1, Nominal, Max-1, Max, Max+1.
3. Forces attention to exception handling.
Limitations of Boundary Value Analysis:
Boundary value testing is efficient only for variables with fixed
boundaries.
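A minimal sketch of how the 4n + 1 cases can be generated for ranged variables; the ranges in the usage line are illustrative, not from any real specification:

```python
# A sketch of boundary value analysis for ranged variables: for each
# variable, test min, min+1, max-1, max while holding the others at a
# nominal value, plus one all-nominal case -- yielding 4n + 1 cases.

def bva_cases(ranges):
    """ranges: list of (min, max) pairs, one per input variable."""
    nominal = [(lo + hi) // 2 for lo, hi in ranges]
    cases = [tuple(nominal)]  # the single all-nominal case
    for i, (lo, hi) in enumerate(ranges):
        for value in (lo, lo + 1, hi - 1, hi):
            case = list(nominal)
            case[i] = value  # vary one variable at a time
            cases.append(tuple(case))
    return cases

cases = bva_cases([(1, 100), (1, 31)])  # two illustrative ranged inputs
print(len(cases))  # 4n + 1 = 9 for n = 2 variables
```

For robustness testing, the inner tuple would simply grow to include lo - 1 and hi + 1, the values that go beyond the limits.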
d. Equivalence Partitioning:
Equivalence partitioning is a black box testing method that divides the
input domain of a program into classes of data from which test cases can
be derived.
How this partitioning is performed while testing:
1. If an input condition specifies a range, one valid and two invalid
classes are defined.
2. If an input condition requires a specific value, one valid and two
invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one
invalid equivalence class are defined.
4. If an input condition is Boolean, one valid and one invalid class are
defined.
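Rule 1 can be illustrated with a small sketch for a ranged input; the age bounds below are assumptions made for the example:

```python
# A sketch of equivalence partitioning for an input that specifies a
# range (rule 1): one valid class inside the range and two invalid
# classes (below and above). The age bounds are illustrative only.

LO, HI = 18, 60  # hypothetical valid age range

def partition(age):
    if age < LO:
        return "invalid: below range"
    if age > HI:
        return "invalid: above range"
    return "valid"

# One representative test value per equivalence class is enough.
print(partition(10))  # invalid: below range
print(partition(35))  # valid
print(partition(70))  # invalid: above range
```

The point of the technique is exactly this reduction: three representative values stand in for every possible age, because all members of a class are expected to behave alike.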
e. Comparison Testing:
In this method, different independent versions of the same software are
compared against each other for testing.
Various testing types that fall under the Black Box Testing strategy are:
functional testing, stress testing, recovery testing, volume testing, User
Acceptance Testing (also known as UAT), system testing, Sanity or
Smoke testing, load testing, Usability testing, Exploratory testing, ad-hoc
testing, alpha testing, beta testing etc.

8.3. Structural System Testing Techniques

The following are the structural system testing techniques.

Technique: Stress
Description: Determine system performance with expected volumes.
Example: Sufficient disk space allocated.

Technique: Execution
Description: System achieves desired level of proficiency.
Example: Transaction turnaround time adequate.

Technique: Recovery
Description: System can be returned to an operational status after a failure.
Example: Evaluate adequacy of backup data.

Technique: Operations
Description: System can be executed in a normal operational status.
Example: Determine systems can run using documentation.

Technique: Compliance
Description: System is developed in accordance with standards and procedures.
Example: Standards followed.

Technique: Security
Description: System is protected in accordance with importance to organization.
Example: Access denied.

8.4 Functional System Testing Techniques

The following are the functional system testing techniques.


Technique: Requirements
Description: System performs as specified.
Example: Prove system requirements.

Technique: Regression
Description: Verifies that anything unchanged still performs correctly.
Example: Unchanged system segments function.

Technique: Error Handling
Description: Errors can be prevented or detected and then corrected.
Example: Errors introduced into the test.

Technique: Manual Support
Description: The people-computer interaction works.
Example: Manual procedures developed.

Technique: Intersystems
Description: Data is correctly passed from system to system.
Example: Intersystem parameters changed.

Technique: Control
Description: Controls reduce system risk to an acceptable level.
Example: File reconciliation procedures work.

Technique: Parallel
Description: Old system and new system are run and the results compared to detect unplanned differences.
Example: Old and new systems reconcile.

9. Software Testing Life Cycle (STLC)

Software testing life cycle identifies what test activities to carry out
and when (what is the best time) to accomplish them. Even though
testing differs between organizations, there is a common testing life
cycle.

1. Test Planning
2. Test development
3. Test execution
4. Result Analysis
5. Bug Tracking
6. Reporting

Software testing has its own life cycle that intersects with every stage of
the SDLC. The basic requirement in the software testing life cycle is to
control and manage software testing: manual, automated and
performance.

9.1. TEST PLANNING:

A test plan is a systematic approach to testing a system such as a
machine or software. The plan typically contains a detailed understanding
of what the eventual workflow will be.
Test Plan Document generally includes the following:
i. Introduction
-Objective:
Purpose of the document is specified over here.
-Reference Documents:
The list of all the documents that are referred to (SRS and Project Plan).
ii. Coverage of Testing
-Inscope (Features to be tested):
The list of all the features that are to be tested, based on the
implicit and explicit requirements from the customer, will
be mentioned here.
-Outscope (Features not to be tested):
The list of all the features that can be skipped from the testing phase
is mentioned here. Generally, out-of-scope features such as incomplete
modules are listed here. If severity is low and time constraints are high,
low-risk features such as GUI or database style sheets are skipped.
Features that are to be incorporated in the future are also kept
out of testing temporarily.
iii. Test Strategy
-Levels of testing: It’s a project level term which describes testing
procedures in an organization. All the levels such as Unit, module,
Integration, system and UAT (User Acceptance test) are mentioned here
which is to be performed.
-Types of testing: All the various types of testing such as compatibility,
regression and etc are mentioned here with module divisions
-Test design techniques: The List of All the techniques that are followed
and maintained in that company will be listed out here in this section.
Some of the most used techniques are Boundary Value Analysis (BVA)
and Equivalence Class Partitioning (ECP).
-Configuration Management: All the documents that are generated during
the testing process needs to be updated simultaneously to keep the
testers and developers aware of the proceedings. Also the naming
conventions and declaring new version numbers for the software builds
based on the amount of change is done by the SCM (Software
Configuration Management team) and the details will be listed here.
-Test Metrics: The list of all the tasks that need to be measured and
maintained will be presented here. Tracing a defect back to the exact
requirement and test case depends on having the right metrics available
at the right time.
-Terminology: Testing specific jargons for the project that will be used
internally in the company will be mentioned here in this section.
-Automation Plan: The list of all the features or modules that are planned
for automation testing will be mentioned here. The application only
undergoes automation testing after being declared STABLE by the manual
testing team.
-List of Automated tools: The list of Automated tools, like QTP, Load
runner, Win runner; etc which will be used in this project will be
mentioned along with license details.
iv. Base Criteria:
-Acceptance Criteria: The standards or metrics that need to be achieved
by the testing team before declaring the product fit will be listed here. In
other words, the criteria for stopping testing before handing over to the
customer are mentioned here in this section.
-Suspension Criteria: In high-risk projects, or huge projects that consist
of several modules, it is necessary to minimize repetitive work to be
efficient. The situations in which testing needs to be suspended or
temporarily halted will be listed here in this section.
v. Test deliverables:
The list of all the documents that are to be prepared during the
testing process will be mentioned here in this section. All the copies
of verification documents after each level are submitted to the customer
along with the user manual and product at the end of the project.
Documents include:
1. Test Plan Document
2. Test Scenarios Document
3. Test case Document
4. RTM
5. Defect Reporting Document
6. Final Summary Report

vi. Test environment:
The environmental components and combinations simulated for
testing the product are kept as close as possible to the actual
environment in which the end user will work on the product. All the
details of the environment to be used for testing the application are
mentioned here in this section.
vii. Resource planning:
Roles to be performed or in other words who has to do what will be
mentioned clearly here.
viii. Scheduling:
The starting dates and ending dates for each and every task and
module will be listed here in this section.
ix. Staffing and Training:
How much staff is to be recruited and what kind of training is to be
provided to accomplish this project successfully will be described here in a
detailed fashion.

x. Risks and Contingencies:


The list of all the potential risks and the corresponding solution
plans will be listed here
Risks: For example, resources may leave the organization, license and
update deadlines may be missed, or the customer may change the
requirements in terms of testing or maintenance in the middle of the
project.
Contingencies: Maintaining bench strength, rechecking the initial
stages of the whole process, and setting importance and priority for
features to be tested versus features to be skipped under time
constraints, to be listed and shared clearly.

xi. Assumptions:
Some features or testing methods have to be done mandatorily even
though, the customer does not mention it in his requirements document.
These assumptions are listed out here in this section.

xii. Approval Information


As this document is published and circulated, the relevant and
required authorities will approve the plan and will update this section with
necessary details like date and department etc.

The following is the test plan template of iSpace

9.2. TEST DESIGN STAGE:

This is the most important stage of the testing life cycle. In this
phase of testing, the testers develop the test scenarios and test
cases based on the requirements of the customer.

9.2.1. Test Scenario

A set of test cases that ensure that the business process flows are
tested from end to end. They may be independent tests or a series of
tests that follow each other, each dependent on the output of the
previous one.
The following is the test scenario template of iSpace

9.2.2. Test Case

Test Development simply involves writing Test cases from test scenarios.
Every Company follows a particular format for test cases called test case
template
Test cases are broadly divided into two types.
· G.U.I Test Cases.
· Functional test cases.
Functional test cases are further divided into two types.
· Positive Test Cases.
· Negative Test Cases.

Test case template typically consists of

Test Case ID: Test Case number


Test case created on: The date the test case was created.
Category: The type of testing done, like functional, GUI, negative
testing, etc.
Test Scenario: A set of test cases that ensure that the business process
flows are tested from end to end. They may be independent tests or a
series of tests that follow each other, each dependent on the output of the
previous one.
Test Data: If you are writing a test case, then you need input data for
the test. The tester may provide this input data at the time of executing
the test cases. The test data may be any kind of input to the application,
in any format, like XML test data, system test data, SQL test data or
stress test data.
Generally testers call it a test bed. In the test bed, all software and
hardware requirements are set using predefined data values. If you
don't have a systematic approach for building test data while writing
and executing test cases, then there are chances of missing some
important test cases.
Execution steps: The steps to be followed to execute the test
case.
Expected Result: The behavior the application should exhibit when the
test case is executed, as per the requirements.
Actual Result: The behavior actually observed when the test case is
executed.
Status: The pass or fail criteria.
Comments: Any comments can be added.
The following is the test case template
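As an illustration only (this is a hypothetical sketch, not the iSpace template), the fields above can be modeled as a simple data structure, with pass/fail status derived by comparing expected and actual results:

```python
# Hypothetical sketch modeling the test case template fields above.
from dataclasses import dataclass

@dataclass
class TestCase:
    test_case_id: str
    created_on: str
    category: str           # functional, GUI, negative, ...
    scenario: str
    test_data: dict
    execution_steps: list
    expected_result: str
    actual_result: str = ""
    status: str = "Not Run"  # becomes Pass or Fail after execution
    comments: str = ""

    def execute(self, observed: str):
        # Record the observed behavior and compare it with the expected
        # result to decide pass or fail.
        self.actual_result = observed
        self.status = "Pass" if observed == self.expected_result else "Fail"

tc = TestCase("TC_001", "2024-01-01", "functional", "Login flow",
              {"user": "alice"}, ["open login page", "submit credentials"],
              expected_result="home page shown")
tc.execute("home page shown")
print(tc.status)  # Pass
```

The `execute` method mirrors the result analysis step described later: expected versus actual comparison is what turns an executed case into a pass or fail verdict.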

9.2.3. Requirement Traceability Matrix

The Test cases ensure that all the functionalities are being covered
in testing but these should be mapped with business requirements to
ensure that all the requirements are met. For this we need to prepare
RTM, to verify 100% Coverage of Requirements.
Requirement Traceability matrix typically consists of

Requirement ID: Maps to the requirement document.
Requirement Description: Brief description of the requirement.
Reference Document: The document from which the requirement is
fetched, like BRD, SRS, etc.
Module Name: The module in the project where the requirement is
under consideration
Test Case ID: The Test case Id from the test case template

Requirement Changes: Any changes made in the requirements have to
be noted here.
The following is the RTM Template of i Space:
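The coverage check behind the RTM can be sketched as a mapping from requirement IDs to the test case IDs that cover them; all IDs below are hypothetical:

```python
# A sketch of requirement traceability: map requirement IDs to the test
# case IDs covering them, then flag any requirement with no coverage.
# The requirement and test case IDs are hypothetical.

rtm = {
    "REQ-001": ["TC_001", "TC_002"],
    "REQ-002": ["TC_003"],
    "REQ-003": [],  # not yet covered by any test case
}

uncovered = [req for req, tcs in rtm.items() if not tcs]
coverage = 100 * (len(rtm) - len(uncovered)) / len(rtm)
print(f"coverage: {coverage:.0f}%, uncovered: {uncovered}")
```

Verifying 100% coverage then amounts to asserting that `uncovered` is empty before sign-off.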

9.3. TEST EXECUTION PHASE:

In this phase test engineers will execute the test cases and log the
defects. During the test execution phase the test engineer will do the
following.
a) He will perform the action that is described in the description column.
b) He will observe the actual behavior of the application.
c) He will document the observed value under the actual value column.
The following is the Test Execution Template of iSpace

9.4. RESULT ANALYSIS:


After the successful execution of test cases, the tester will compare the
expected values with the actual values and declare the result as pass or
fail.

9.5. BUG TRACKING AND REPORTING:

It consists of the entire list of defects found while testing the
application. A defects meeting is held, attended by QA, the developers
and the PM, where the defects are discussed. If they think it is a valid
defect, then the QA lead assigns it to the development lead, who in turn
assigns it to the developer to fix the defect.

9.5.1 Difference between Bug, Defect and Error

Error: Any mismatch found while compiling the code is called an error. A
human action which produces an incorrect result; in simple words, a
mistake in programming or coding is an error.
Defect: A flaw in a component or system that can cause the component
or system to fail to perform its required function. Gaps in functionality
or any mismatches in functionality are called defects.
Bug: A defect accepted by the developers is a bug.
Fault: A fault is similar to a defect.
Failure: Deviation of the component or system from its expected
delivery, service or result.
Issue: Same as a defect.
Suggestion: Points not covered in the BRD, but which can improve
quality, can be suggested by the testers to the developers.

9.5.2. Reporting Of bugs

a) Classical Bug Reporting Process:

The test engineers (TE1, TE2, TE3) report the defects by mail to the
Test Lead, who forwards them through the Project Lead to the
developers (Dev1, Dev2, Dev3).
Drawbacks: 1. Time consuming.
2. Redundancy.
3. No security.

b) Common Repository Oriented Bug Reporting Process:

The test engineers (TE1, TE2, TE3) log the defects into a common
repository, from which the Test Lead, Project Lead and developers
(Dev1, Dev2, Dev3) pick them up.
Drawbacks: 1. Time consuming.
2. Redundancy.

c) Bug Tracking Tool Oriented Bug Reporting Process:

The test engineers (TE1, TE2, TE3) log the defects into a bug tracking
tool (BTT), through which the Test Lead, Project Lead and developers
(Dev1, Dev2, Dev3) are notified.

This is a very important stage: update the DPD (Defect Profile
Document) and let the developers know of the defects.

9.5.3. DPD (Defect Profile Document)

Defect ID: Serial number of the defect.
Defect Description: A brief description of the defect.
Status: Status of the defect -
New/Open/Assign/Deferred/Rejected/Reopened/Closed.
a) New: When the bug is posted for the first time, its state will be
“NEW”. This means the bug is not yet approved.
b) Open: After a tester has posted a bug, the tester's lead approves that
the bug is genuine and changes the state to “OPEN”.
c) Assign: Once the lead changes the state to “OPEN”, he assigns the bug
to the corresponding developer or developer team.
d) Deferred: A bug changed to the deferred state is expected to be fixed
in the next release, due to various factors.
e) Rejected: If the developer feels the bug is not genuine, he rejects the
bug. The state of the bug is changed to “REJECTED”.
f) Reopened: If the bug still exists even after it is fixed by the
developer, the tester changes the status to “REOPENED”.
g) Closed: Once the bug is fixed, it is tested by the tester. If the tester
feels that the bug no longer exists in the software, he changes the status
of the bug to “CLOSED”.
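One plausible reading of the status workflow above is a transition table; the allowed transitions below are an assumption drawn from the descriptions, not a standard all tools follow:

```python
# A sketch of the defect status workflow as a transition table; moving
# a bug through a transition not in the table raises an error. The
# allowed transitions are an assumption based on the descriptions above.

TRANSITIONS = {
    "NEW": {"OPEN", "REJECTED"},
    "OPEN": {"ASSIGN"},
    "ASSIGN": {"CLOSED", "REOPENED", "DEFERRED"},
    "REOPENED": {"ASSIGN"},
    "DEFERRED": {"ASSIGN"},
}

def advance(current, nxt):
    if nxt not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {nxt}")
    return nxt

# Walk a bug through a normal happy path: posted, approved, assigned,
# fixed and verified.
state = "NEW"
for nxt in ("OPEN", "ASSIGN", "CLOSED"):
    state = advance(state, nxt)
print(state)  # CLOSED
```

Encoding the workflow this way is how bug tracking tools prevent, for example, a bug jumping straight from NEW to CLOSED without ever being approved or assigned.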
Severity: The impact of the bug on the application. The different
severities are Show Stopper, Critical, Major, Minor and Cosmetic.
Severity is decided by the tester.
a) Show stopper: Stops development and/or testing work.
b) Critical: Crashes, loss of data, severe memory leak.
c) Major: Major loss of function.
d) Minor: Minor loss of function, or other problem where an easy
workaround is present.
e) Cosmetic: Cosmetic problem like misspelled words, misaligned text, or
the background color of a specific field.
Priority: The priority of the bug as per the client's requirements.
The different priorities are High, Medium and Low.

a) High: The bug must be fixed immediately; the product cannot ship
with this bug.
b) Medium: These are important problems that should be fixed as soon
as possible, within the time available.
c) Low: It is not important that these bugs be addressed immediately.
Fix these bugs after all other bugs have been fixed.

Test case ID: The test case number as written in the test case
document.
Assigned To: The person (developer) to whom the bug is assigned for
fixing.
Assigned By: The testing person who found the bug, or who assigns the
bug to the developer.
Date of testing: The date on which testing is done.
Remarks: Any remarks can be provided.
The following is the Defect profile document template of iSpace:
Once all the bugs are fixed and complete regression testing is done,
the final summary report is made.

9.5.2. Final summary report

9.6. TEST CLOSURE:

It is the last phase of the STLC process. All test case execution should
be completed, and there should be no open high-severity or high-priority
bugs. Once UAT has completed successfully, the final test summary report
is generated and the project is signed off.
· Client Acceptance
· Submission of product delivery records and client sign-off.

10. Defect metrics

Software testing defect metrics measure and quantify aspects of software
related to its development resources and/or development process. They
typically quantify factors such as schedule, work effort, product size,
project status, and quality performance. Such metrics are used to
estimate how much additional work is needed to improve the software
quality before delivering it to the end user.
Defect metrics are also used to analyze the major causes of defects and
to identify the phase in which most defects are introduced.

Defect Density:
Defect density is measured by adding the number of defects reported by
Software Quality Assurance to the number of defects reported by peer
reviews, and dividing the sum by the actual size of the product (measured
in KLOC, SLOC, or function points).
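As a minimal sketch, the formula above can be written as a small function
(the function and parameter names here are illustrative, not taken from
any particular tool):

```python
def defect_density(qa_defects, peer_defects, size_kloc):
    """(QA-reported defects + peer-reported defects) / product size.

    size_kloc is the product size in KLOC; SLOC or function points can
    be substituted, as long as the same unit is used consistently.
    """
    return (qa_defects + peer_defects) / size_kloc

# e.g. 30 QA defects plus 10 peer-review defects in a 20 KLOC product
# gives a density of 2.0 defects per KLOC.
```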

Test effectiveness:
There are several approaches to analyzing test effectiveness; one of them
is t/(t+UAT), where "t" is the total number of defects reported during
testing and "UAT" is the total number of defects reported during user
acceptance testing.

Defect Removal Efficiency:


It is the total number of defects removed divided by the total number of
defects injected, multiplied by 100, measured across the various stages
of the SDLC.
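Expressed as a small function (names are illustrative):

```python
def defect_removal_efficiency(defects_removed, defects_injected):
    """(defects removed / defects injected) * 100, across SDLC stages."""
    return defects_removed / defects_injected * 100

# e.g. removing 90 of 100 injected defects gives a DRE of 90.0%.
```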
Effort Variance:
Effort Variance can be calculated as
{(Actual Efforts-Estimated Efforts) / Estimated Efforts} *100.

Schedule Variance:
Just like the formula above, it is calculated as:
{(Actual Duration - Estimated Duration)/Estimated Duration} *100
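Effort variance and schedule variance share the same form, so a single
sketch covers both (names are illustrative):

```python
def variance_pct(actual, estimated):
    """{(Actual - Estimated) / Estimated} * 100.

    Pass efforts (e.g. person-hours) for effort variance, or
    durations (e.g. days) for schedule variance."""
    return (actual - estimated) / estimated * 100

# e.g. 120 actual person-hours against an estimate of 100
# gives an effort variance of +20.0%.
```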

Schedule Slippage:
When a task has been delayed from its original baseline schedule, the
amount of time by which it has been delayed is defined as schedule
slippage. It is calculated as:
(Actual End date - Estimated End date) / (Planned End Date – Planned
Start Date) * 100
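A sketch of the slippage formula using calendar dates (names are
illustrative):

```python
from datetime import date

def schedule_slippage(actual_end, estimated_end, planned_start, planned_end):
    """(Actual End - Estimated End) / (Planned End - Planned Start) * 100."""
    delay_days = (actual_end - estimated_end).days
    planned_days = (planned_end - planned_start).days
    return delay_days / planned_days * 100

# e.g. a 10-day task that finishes 10 days late has slipped by 100%.
```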

Rework Effort Ratio:


(Actual review effort spent in that particular phase / Total actual efforts
spent in that phase) * 100
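In code form (names are illustrative):

```python
def rework_effort_ratio(review_effort, total_effort):
    """(Actual review effort in a phase / total actual effort
    in that phase) * 100."""
    return review_effort / total_effort * 100

# e.g. 20 hours of review out of 80 total hours -> 25.0%.
```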

Requirements Stability Index:


{1 - (Total number of changes /number of initial requirements)}

Requirements Creep:
(Total Number of requirements added/Number of initial requirements) *
100
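The two requirements metrics above, sketched as functions (names are
illustrative):

```python
def requirements_stability_index(total_changes, initial_requirements):
    """1 - (total number of changes / number of initial requirements)."""
    return 1 - total_changes / initial_requirements

def requirements_creep(requirements_added, initial_requirements):
    """(requirements added / initial requirements) * 100."""
    return requirements_added / initial_requirements * 100

# e.g. 25 changes against 100 initial requirements gives an RSI of 0.75;
# 5 added requirements against 100 initial ones gives 5.0% creep.
```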

Weighted Defect Density:


(5*Count of fatal defects)+(3*Count of Major defects)+(1*Count of minor
defects)
where the values 5, 3, and 1 correspond to the severities of the defects.
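The weighted sum above, as a function (names are illustrative):

```python
def weighted_defect_density(fatal, major, minor):
    """5*fatal + 3*major + 1*minor, using the severity weights above."""
    return 5 * fatal + 3 * major + 1 * minor

# e.g. 2 fatal, 3 major, and 4 minor defects -> 10 + 9 + 4 = 23.
```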

Finally, the purpose of metrics is not just to measure software
performance but also to understand the progress of the application toward
organizational goals. Below are some parameters for determining metrics
of various software packages:
• Duration
• Complexity
• Technology Constraints
• Previous Experience in Same Technology
• Business Domain
• Clarity of the scope of the project
One interesting and beneficial approach is the GQM
(Goal-Question-Metric) technique. This technique consists of three
stages: a goal, a set of questions, and a set of corresponding metrics.
It is therefore a hierarchical structure that specifies the purpose of
the measurement, the object to be measured, the issues to be measured,
and the viewpoint from which the measures are taken.

11. When to Stop Testing

This can be difficult to determine. Many modern software applications are
so complex, and run in such an interdependent environment, that complete
testing can never be done. Common factors in deciding when to stop
are...
* Deadlines, e.g. release deadlines, testing deadlines;
* Test cases completed with certain percentage passed;
* Test budget has been depleted;
* Coverage of code, functionality, or requirements reaches a specified
point;
* Bug rate falls below a certain level; or
* Beta or alpha testing period ends.

12. Maturity Levels

a) SEI
'Software Engineering Institute' at Carnegie-Mellon University;
initiated by the U.S. Defense Department to help improve software
development processes.
b) CMM
'Capability Maturity Model', now called the CMMI ('Capability
Maturity Model Integration'), developed by the SEI. It's a model of 5
levels of process 'maturity' that determine effectiveness in delivering
quality software. It is geared to large organizations such as large U.S.
Defense Department contractors. However, many of the QA processes
involved are appropriate to any organization, and if reasonably applied
can be helpful. Organizations can receive CMMI ratings by undergoing
assessments by qualified auditors.
Level 1 - characterized by chaos, periodic panics, and heroic efforts
required by individuals to successfully complete projects. Few if any
processes in place; successes may not be repeatable.
Level 2 - software project tracking, requirements management,
realistic planning, and configuration management processes are in place;
successful practices can be repeated.
Level 3 - standard software development and maintenance
processes are integrated throughout an organization; a Software
Engineering Process Group is in place to oversee software processes,
and training programs are used to ensure understanding and
compliance.
Level 4 - metrics are used to track productivity, processes, and
products. Project performance is predictable, and quality is consistently
high.
Level 5 - the focus is on continuous process improvement. The
impact of new processes and technologies can be predicted and
effectively implemented when required.
c) ISO
'International Organization for Standardization' - The ISO
9001:2008 standard (which provides some clarifications of the previous
standard 9001:2000) concerns quality systems that are assessed by
outside auditors, and it applies to many kinds of production and
manufacturing organizations, not just software. It covers documentation,
design, development, production, testing, installation, servicing, and
other processes. The full set of standards consists of: (a) Q9001-2008 -
Quality Management Systems: Requirements; (b) Q9000-2000 - Quality
Management Systems: Fundamentals and Vocabulary; (c)Q9004-2000 -
Quality Management Systems: Guidelines for Performance Improvements.
To be ISO 9001 certified, a third-party auditor assesses an organization,
and certification is typically good for about 3 years, after which a
complete reassessment is required. Note that ISO certification does not
necessarily indicate quality products - it indicates only that documented
processes are followed.
ISO 9126 is a standard for the evaluation of software quality and
defines six high level quality characteristics that can be used in software
evaluation. It includes functionality, reliability, usability, efficiency,
maintainability, and portability.
d) IEEE
'Institute of Electrical and Electronics Engineers' - among other
things, creates standards such as 'IEEE Standard for Software Test
Documentation' (IEEE/ANSI Standard 829), 'IEEE Standard for Software
Unit Testing' (IEEE/ANSI Standard 1008), 'IEEE Standard for Software
Quality Assurance Plans' (IEEE/ANSI Standard 730), and others.

e) ANSI

'American National Standards Institute', the primary industrial
standards body in the U.S.; publishes some software-related standards in
conjunction with the IEEE and ASQ (American Society for Quality).
Other software development/IT management process assessment
methods besides CMMI and ISO 9000 include SPICE, Trillium, TickIT,
Bootstrap, ITIL, MOF, and CobiT.

Posted by Sprit Technologies at 11:38

