
What is Software Testing:

Definition:

According to the ANSI/IEEE 1059 standard, software testing is a process of analyzing a software item to detect the differences between existing and required conditions (i.e., defects) and to evaluate the features of the software item.

Software Testing Types:

Manual Testing: Manual testing is the process of testing the software manually to find the defects. Testers should have the perspective of an end user and ensure all the features are working as mentioned in the requirement document. In this process, testers execute the test cases and generate the reports manually without using any automation tools.

Automation Testing: Automation testing is the process of testing the software using automation tools to find the defects. In this process, testers execute the test scripts and generate the test results automatically by using automation tools. Some of the famous automation testing tools for functional testing are QTP/UFT and Selenium.
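To make the contrast concrete, here is a minimal sketch of an automated check written with Selenium WebDriver in Python. The URL and element IDs are illustrative assumptions, not a real application.

```python
# Minimal, illustrative Selenium WebDriver check.
# The URL and element locators below are assumptions for demonstration.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # requires Chrome; recent Selenium manages the driver
try:
    driver.get("https://example.com/login")                      # hypothetical page
    driver.find_element(By.ID, "username").send_keys("user1")    # assumed element IDs
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()
    # The script, not a human, verifies the expected outcome.
    assert "Dashboard" in driver.title, "Login did not reach the dashboard"
finally:
    driver.quit()
```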

Testing Methods:

Static Testing

Dynamic Testing

Testing Approaches:

1. White Box Testing

2. Black Box Testing

3. Grey Box Testing

Testing Levels:

1. Unit Testing

2. Integration Testing

3. System Testing
4. Acceptance Testing

Types of Black Box Testing:

1. Functionality Testing

2. Non-functionality Testing

Software Development Life Cycle

Software Development Life Cycle (SDLC) aims to produce a high-quality system that meets or exceeds customer expectations, works effectively and efficiently in the current and planned information technology infrastructure, and is inexpensive to maintain and cost-effective to enhance.

Detailed Explanation:

SDLC is the process followed in software projects. Each phase of SDLC produces deliverables required by the next phase in the life cycle. Requirements are translated into design, code is produced according to the design, and testing is done on the developed product based on the requirements. Deployment takes place once testing is completed. SDLC aims to produce a high-quality system that meets or exceeds customer expectations, works effectively and efficiently in the current and planned information technology infrastructure, and is inexpensive to maintain and cost-effective to enhance.

SDLC - Software Development Life Cycle


A typical Software Development Life Cycle (SDLC) consists of the following
phases:

Requirement Phase:

Requirement gathering and analysis is the most important phase in the software development life cycle. The Business Analyst collects the requirements from the customer/client as per the client's business needs and documents them in a Business Requirement Specification (the document name varies by organization; some examples are Customer Requirement Specification (CRS) and Business Specification (BS)), and provides the same to the development team.

Analysis Phase:

Once requirement gathering and analysis is done, the next step is to define and document the product requirements and get them approved by the customer. This is done through the SRS (Software Requirement Specification) document. The SRS contains all the product requirements to be designed and developed during the project life cycle. Key people involved in this phase are the Project Manager, Business Analyst, and senior members of the team. The outcome of this phase is the Software Requirement Specification.
Design Phase:

It has two steps:

HLD – High Level Design – It gives the architecture of the software product to be developed and is done by architects and senior developers.

LLD – Low Level Design – It is done by senior developers. It describes how each and every feature in the product should work and how every component should work. Here, only the design is produced, not the code.

The outcome of this phase is the High Level Design document and the Low Level Design document, which work as inputs to the next phase.

Development Phase:

Developers of all levels (senior, junior, fresher) are involved in this phase. This is the phase where we start building the software and writing the code for the product. The outcome of this phase is the Source Code Document (SCD) and the developed product.

Testing Phase:

When the software is ready, it is sent to the testing department, where the test team tests it thoroughly for different defects. They test the software either manually or using automated testing tools, depending on the process defined in the STLC (Software Testing Life Cycle), and ensure that each and every component of the software works fine. Once QA makes sure that the software is error-free, it goes to the next stage, which is Implementation. The outcome of this phase is the quality product and the testing artifacts.

Deployment & Maintenance Phase:

After successful testing, the product is delivered/deployed to the customer for their use. Deployment is done by deployment/implementation engineers. Once customers start using the developed system, the actual problems come up and need to be solved from time to time. Fixing the issues found by the customer comes under the maintenance phase. 100% testing is not possible because the way testers test the product differs from the way customers use it. Maintenance should be done as per the SLA (Service Level Agreement).

Types of Software Development Life Cycle Models:

Some of the SDLC Models are as follows:

Waterfall Model

Spiral

V Model

Prototype

Agile

Other related models are Rapid Application Development, Rational Unified Process, Hybrid Model, etc.

Software Testing Life Cycle:

Software Testing Life Cycle (STLC) identifies what test activities to carry out and when to accomplish those test activities. Even though testing differs between organizations, there is a testing life cycle.

The different phases of the Software Testing Life Cycle are Requirement Analysis, Test Planning, Test Design, Test Environment Setup, Test Execution, and Test Closure.

Every phase of the STLC has definite Entry and Exit Criteria.

Requirement Analysis:

The entry criterion for this phase is the BRS (Business Requirement Specification) document. During this phase, the test team studies and analyzes the requirements from a testing perspective. This phase helps to identify whether the requirements are testable or not. If any requirement is not testable, the test team can communicate with the various stakeholders (client, Business Analyst, technical leads, system architects, etc.) during this phase so that a mitigation strategy can be planned.

Entry Criteria: BRS (Business Requirement Specification)

Deliverables: List of all testable requirements, Automation feasibility report (if applicable)

Test Planning:
Test planning is the first step of the testing process. In this phase, the Test Manager/Test Lead typically determines the effort and cost estimates for the entire project. Preparation of the Test Plan is done based on the requirement analysis. Activities like resource planning, determining roles and responsibilities, tool selection (if automation), training requirements, etc., are carried out in this phase. The deliverables of this phase are the Test Plan and Effort Estimation documents.

Entry Criteria: Requirements Documents

Deliverables: Test Strategy, Test Plan, and Test Effort estimation document.

Test Design:

The test team starts the test case development activity in this phase. The test team prepares test cases, test scripts (if automation), and test data. Once the test cases are ready, they are reviewed by peer members or the team lead. The test team also prepares the Requirement Traceability Matrix (RTM). The RTM traces the requirements to the test cases that are needed to verify whether the requirements are fulfilled. The deliverables of this phase are Test Cases, Test Scripts, Test Data, and the Requirements Traceability Matrix.

Entry Criteria: Requirements Documents (updated version for unclear or missing requirements)

Deliverables: Test cases, Test Scripts (if automation), Test data.
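As a rough illustration, the RTM mentioned above can be pictured as a mapping from requirement IDs to the test cases that verify them. The IDs below are made up for the example.

```python
# A toy Requirement Traceability Matrix (RTM): requirement IDs mapped
# to the test cases that verify them. All IDs are hypothetical.
rtm = {
    "REQ-001": ["TC-001", "TC-002"],   # e.g., a login requirement
    "REQ-002": ["TC-003"],             # e.g., a password-reset requirement
    "REQ-003": [],                     # not yet covered by any test case
}

# Any requirement with no linked test case is a coverage gap.
uncovered = [req for req, cases in rtm.items() if not cases]
print("Requirements without test coverage:", uncovered)
```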

Test Environment Setup:

This phase can be started in parallel with the Test Design phase. Test environment setup is done based on the hardware and software requirements list. In some cases, the test team may not be involved in this phase; the development team or the customer provides the test environment. Meanwhile, the test team should prepare smoke test cases to check the readiness of the given test environment.

Entry Criteria: Test Plan, Smoke Test cases, Test Data

Deliverables: Test Environment, Smoke Test Results.

Test Execution:

The test team starts executing the test cases based on the test plan. If a test case result is Pass/Fail, the same should be updated in the test cases. A defect report should be prepared for failed test cases and reported to the development team through a bug tracking tool (e.g., Quality Center) for fixing the defects. Retesting is performed once a defect is fixed. (See the Bug Life Cycle section below.)

Entry Criteria: Test Plan document, Test cases, Test data, Test Environment.

Deliverables: Test case execution report, Defect report, RTM

Test Closure:

The final stage where we prepare Test Closure Report, Test Metrics.

The testing team will be called for a meeting to evaluate cycle completion criteria based on test coverage, quality, time, cost, software, and business objectives. The test team analyzes the test artifacts (such as test cases, defect reports, etc.) to identify strategies that have to be implemented in the future, which will help to remove process bottlenecks in upcoming projects. Test metrics and the Test Closure report are prepared based on the above criteria.

Entry Criteria: Test Case Execution report (make sure there are no high
severity defects opened), Defect report

Deliverables: Test Closure report, Test metrics

Bug Life Cycle or Defect Life Cycle:

Bug life cycle is also known as defect life cycle. In the software development process, the bug has a life cycle, and it should go through that life cycle to be closed. The bug life cycle varies depending on the tools used (QC, JIRA, etc.) and the process followed in the organization.

What is a Software Bug?

A software bug can be defined as abnormal behavior of the software. The bug life cycle starts when a defect is found and ends when the defect is closed, after ensuring it is not reproducible.
The different states of a bug in the bug life cycle are as follows:

New: When a tester finds a new defect, they should provide a proper defect document to the development team so the defect can be reproduced and fixed. In this state, the status of the defect posted by the tester is “New”.

Assigned: Defects with the status New are approved (if valid) and assigned to the development team by the Test Lead/Project Lead/Project Manager. Once the defect is assigned, the status of the bug changes to “Assigned”.

Open: The development team starts analyzing and working on the defect fix.

Fixed: When a developer makes the necessary code change and verifies the change, the status of the bug is changed to “Fixed” and the bug is passed to the testing team.

Test: If the status is “Test”, it means the defect has been fixed and is ready to be retested.

Verified: The tester re-tests the bug after it is fixed by the developer. If no bug is detected in the software, the fix is confirmed and the status assigned is “Verified”.

Closed: After verifying the fix, if the bug no longer exists, the status of the bug is set to “Closed”.

Reopen: If the defect remains the same after the retest, the tester posts the defect using a defect retesting document and changes the status to “Reopen”. The bug then goes through the life cycle again.

Duplicate: If the defect is repeated twice or corresponds to the same concept as another bug, the status is changed to “Duplicate” by the development team.

Deferred: In some cases, Project Manager/Lead may set the bug status as
deferred.

If the bug is found near the end of a release and is minor or not important to fix immediately

If the bug is not related to the current build

If it is expected to be fixed in the next release

If the customer is thinking of changing the requirement

In such cases the status is changed to “Deferred” and the bug is fixed in the next release.

Rejected: If the system is working according to specifications and the bug is just due to some misinterpretation (such as referring to old requirements or extra features), then the team lead or developers can mark such bugs as “Rejected”.

Some other statuses are:


Cannot be fixed: The technology does not support a fix, the issue lies at the root of the product, or the cost of fixing the bug is too high.

Not Reproducible: Platform mismatch, improper defect document, data mismatch, build mismatch, or inconsistent defects.

Need more information: If a developer is unable to reproduce the bug as per the steps provided by the tester, the developer can change the status to “Need more information”. In this case, the tester needs to add detailed reproduction steps and assign the bug back to the development team for a fix. This won't happen if the tester writes a good defect document.

This is all about the Bug Life Cycle / Defect Life Cycle. Some companies use these bug IDs in the RTM to map defects to test cases.

Waterfall Model:
Waterfall Model is a traditional model. It is also known as the Sequential Design Process, often used in SDLC, in which progress is seen as flowing steadily downwards like a waterfall through different phases such as Requirement Gathering, Feasibility Study/Analysis, Design, Coding, Testing, Installation, and Maintenance. Each phase begins only once the goals of the previous phase are completed. This methodology is preferred in projects where quality is more important than schedule or cost. It is best suited for short-term projects where the requirements will not change (e.g., a calculator or attendance management system).

Advantages:

Requirements do not change, nor do design and code, so we get a stable product.

This model is simple to implement. Requirements are finalized early in the life cycle, so there is no chaos in later phases.

The resources required to implement this model are minimal compared to other methodologies.

Every phase has specific deliverables. It gives the project manager and clients high visibility into the progress of the project.

Disadvantages:

Backtracking is not possible, i.e., we cannot go back and change requirements once the design stage is reached.

A change in requirements leads to changes in design and code, which results in defects in the project due to overlapping of phases.

The customer may not be satisfied if the changes they require are not incorporated into the product.

The end result of the waterfall model may not be a flexible product.

In terms of testing, testers are involved only in the testing phase. Requirements are not tested in the requirement phase, and a bug in the requirements identified during the testing phase cannot be fixed at that stage; it carries through to the end and leads to a lot of rework.

It is not suitable for long-term projects where requirements may change from time to time.

The waterfall model can be used only when the requirements are very well known and fixed.

Final words: Testing is not just finding bugs. As per the Waterfall Model, testers are involved only almost at the end of the SDLC. Ages ago, the mantra of testing was just finding bugs in the software. Things have changed a lot now. There are other SDLC models in use; I will cover them in upcoming posts in detail, with their advantages and disadvantages. It is up to your team to choose the SDLC model depending on the project you are working on.

Spiral Model:

The Spiral Model was first described by Barry W. Boehm (American software engineer) in 1986.

The Spiral model works in an iterative nature. It is a combination of the prototype development process and the linear development process (waterfall model). This model places more emphasis on risk analysis. Mostly, this model is adopted for large and complicated projects where risk is high. Every iteration starts with planning and ends with product evaluation by the client.

Let's take the example of a product development team (like Microsoft). They know that there will be high risk and that they will face lots of difficulties in the journey of developing and releasing the product, and they also know that they will release the next version of the product while the current version is in use. They prefer the Spiral Model to develop the product in an iterative nature. They can release one version of the product to the end user and start developing the next version, which includes new enhancements and improvements over the previous version (based on the issues faced by users of the previous version). For example, Microsoft released Windows 8, improved it based on user feedback, and released the next version (Windows 8.1).

The Spiral Model undergoes 4 phases:

Planning Phase – Requirement Gathering, Cost Estimation, Resource Allocation

Risk Analysis Phase – Strengths and weaknesses of the project

Design Phase – Coding, Internal Testing, and Deployment

Evaluation Phase – Client Evaluation (Client-side Testing) to get the feedback

Advantages:

It allows requirement changes

Suitable for large and complicated projects

It allows better risk analysis

Cost effective due to good risk management

Disadvantages:

Not suitable for small projects

Success of the project depends on the risk analysis phase

More experienced resources have to be hired, especially for risk analysis

Applications:

Microsoft Office

V Model:

V-model is also known as the Verification and Validation (V&V) model. In this model, each phase of the SDLC must be completed before the next phase starts. It follows a sequential design process just like the waterfall model.

So why do we use the V Model if it is the same as the Waterfall Model? Let me mention why we need this Verification and Validation Model.

It overcomes the disadvantages of the waterfall model. In the waterfall model, we have seen that testers are involved in the project only at the last phase of the development process.

In the V model, the test team is involved in the early phases of the SDLC. Testing starts in the early stages of product development, which avoids the downward flow of defects and in turn reduces a lot of rework. Both teams (test and development) work in parallel: the test team works on activities like preparing the test strategy, test plan, and test cases/scripts while the development team works on the SRS, design, and coding.

Once the requirements are received, both the development and test teams start their activities.

Deliverables are produced in parallel in this model. While the developers work on the SRS (System Requirement Specification), the testers work on the BRS (Business Requirement Specification) and prepare the ATP (Acceptance Test Plan) and ATC (Acceptance Test Cases), and so on.

Testers will be ready with all the required artifacts (such as the Test Plan and Test Cases) by the time the developers release the finished product. This saves a lot of time.

Let's see how the development team and the test team are involved in each phase of the SDLC in the V Model.

1. Once client sends BRS, both the teams (test and development) start their
activities. The developers translate the BRS to SRS. The test team involves in
reviewing the BRS to find the missing or wrong requirements and writes
acceptance test plan and acceptance test cases.

2. In the next stage, the development team sends the SRS to the testing team for review, and the developers start building the HLD (High Level Design document) of the product. The test team reviews the SRS against the BRS and writes the system test plan and test cases.

3. In the next stage, the development team starts building the LLD (Low
Level Design) of the product. The test team involves in reviewing the HLD
(High Level Design) and writes Integration test plan and integration test
cases.

4. In the next stage, the development team starts with the coding of the
product. The test team involves in reviewing the LLD and writes functional
test plan and functional test cases.

5. In the next stage, the development team releases the build to the test team once unit testing is done. The test team carries out functional testing, integration testing, system testing, and acceptance testing on the released build step by step.

Advantages:

Testing starts in the early stages of product development, which avoids the downward flow of defects and helps to find defects early.

The test team will be ready with the test cases by the time the developers release the software, which in turn saves a lot of time.

Testing is involved in every stage of product development, which gives a quality product.

Total investment is less due to little or no rework.

Disadvantages:

Initial investment is higher because the test team is involved right from the early stages.

Whenever there is a change in requirements, the same procedure continues, which leads to more documentation work.

Applications:

Long-term projects, complex applications, and situations where the customer expects a very high-quality product within a stipulated time frame, because every stage is tested and developers and testers work in parallel.

Automation Testing Vs Manual Testing


In this post, we learn what automation testing and manual testing are, the different types of testing, and the difference between automation testing and manual testing.

Automation Testing:

Automation testing is the process of testing the software using an automation tool to find the defects. In this process, executing the test scripts and generating the results are performed automatically by automation tools.

Some of the popular automation testing tools are:

HP QTP(Quick Test Professional)/UFT(Unified Functional Testing)

Selenium

LoadRunner

IBM Rational Functional Tester

SilkTest

TestComplete

WinRunner

WATIR

Manual Testing:

Manual testing is the process of testing the software manually to find the defects. Testers should have the perspective of an end user and ensure all the features are working as mentioned in the requirement document. In this process, testers execute the test cases and generate the reports manually without using any automation tools.

Types of Manual Testing:

Black Box Testing

White Box Testing

Unit Testing

System Testing

Integration Testing

Acceptance Testing

Black Box Testing: Black Box Testing is a software testing method in which
testers evaluate the functionality of the software under test without looking
at the internal code structure. This can be applied to every level of software
testing such as Unit, Integration, System and Acceptance Testing.

White Box Testing: White Box Testing is also called Glass Box, Clear Box, or Structural Testing. It is based on the application's internal code structure. In white-box testing, an internal perspective of the system, as well as programming skills, are used to design test cases. This testing is usually done at the unit level.

Unit Testing: Unit Testing is also called Module Testing or Component Testing. It is done to check whether an individual unit or module of the source code is working properly. It is done by the developers in the developer's environment.

System Testing: Testing the fully integrated application to evaluate the system's compliance with its specified requirements is called System Testing, also known as End to End testing. It verifies the completed system to ensure that the application works as intended.

Integration Testing: Integration Testing is the process of testing the interface between two software units. Integration testing is done in three ways: Big Bang approach, Top-Down approach, and Bottom-Up approach.

Acceptance Testing: It is also known as pre-production testing. It is done by the end users along with the testers to validate the functionality of the application. It is formal testing conducted to determine whether the application is developed as per the requirements. It allows the customer to accept or reject the application. Types of acceptance testing are Alpha, Beta, and Gamma.

The Ultimate list of Types of Testing:

Let’s see different Types of Software Testing one by one.


1. Functional testing: In simple words, what the system actually does is functional testing. It verifies that each function of the software application behaves as specified in the requirement document: testing all the functionalities by providing appropriate input to check whether the actual output matches the expected output. It falls within the scope of black box testing, and the testers need not be concerned with the source code of the application.

2. Non-functional testing: In simple words, how well the system performs is non-functional testing. Non-functional testing refers to various aspects of the software such as performance, load, stress, scalability, security, compatibility, etc. The main focus is to improve the user experience, e.g., how fast the system responds to a request.

3. Manual testing: Manual testing is the process of testing the software manually to find the defects. The tester should have the perspective of an end user and ensure all the features are working as mentioned in the requirement document. In this process, testers execute the test cases and generate the reports manually without using any automation tools.

4. Automated testing: Automation testing is the process of testing the software using automation tools to find the defects. In this process, executing the test scripts and generating the results are performed automatically by automation tools. Some of the most popular tools for automation testing are HP QTP/UFT, Selenium WebDriver, etc.

5. Black box testing: Black Box Testing is a software testing method in which
testers evaluate the functionality of the software under test without looking
at the internal code structure. This can be applied to every level of software
testing such as Unit, Integration, System and Acceptance Testing.

6. Glass box testing – Refer white box testing

7. White box testing: White Box Testing is also called Glass Box, Clear Box, or Structural Testing. It is based on the application's internal code structure. In white-box testing, an internal perspective of the system, as well as programming skills, are used to design test cases. This testing is usually done at the unit level.

8. Specification based testing: Refer black-box testing.


9. Structure based testing: Refer white-box testing.

10. Gray box testing: Grey box is the combination of both White Box and
Black Box Testing. The tester who works on this type of testing needs to have
access to design documents. This helps to create better test cases in this
process.

11. Unit testing: Unit Testing is also called Module Testing or Component Testing. It is done to check whether an individual unit or module of the source code is working properly. It is done by the developers in the developer's environment.

12. Component testing: Refer Unit Testing

13. Module testing: Refer Unit Testing

14. Integration testing: Integration Testing is the process of testing the interface between two software units. Integration testing is done by multiple approaches such as the Big Bang approach, Top-Down approach, Bottom-Up approach, and Hybrid Integration approach.

15. System testing: Testing the fully integrated application to evaluate the system's compliance with its specified requirements is called System Testing, also known as End to End testing. It verifies the completed system to ensure that the application works as intended.

16. Acceptance testing: It is also known as pre-production testing. It is done by the end users along with the testers to validate the functionality of the application. It is formal testing conducted to determine whether the application is developed as per the requirements. It allows the customer to accept or reject the application. Types of acceptance testing are Alpha, Beta, and Gamma.

17. Big bang Integration Testing: Combining all the modules at once and verifying the functionality after completion of individual module testing. Top-down and bottom-up integration are carried out using dummy modules known as Stubs and Drivers. These Stubs and Drivers stand in for missing components to simulate data communication between modules.

18. Top-down Integration Testing: Testing takes place from top to bottom.
High-level modules are tested first and then low-level modules and finally
integrating the low-level modules to a high level to ensure the system is
working as intended. Stubs are used as a temporary module if a module is
not ready for integration testing.

19. Bottom-up Integration Testing: It is the reciprocal of the top-down approach. Testing takes place from bottom to top: the lowest-level modules are tested first, then the high-level modules, and finally the high-level modules are integrated with the low-level ones to ensure the system works as intended. Drivers are used as temporary modules for integration testing.

20. Hybrid Integration Testing: Hybrid integration testing is the combination of both top-down and bottom-up integration testing.

21. Alpha testing: Alpha testing is done by the in-house developers (who developed the software) and testers. Sometimes alpha testing is done by the client or an outsourcing team in the presence of developers or testers.

22. Beta testing: Beta testing is done by a limited number of end users before delivery. Usually, it is done at the client's place.

23. Gamma Testing: Gamma testing is done when the software is ready for release with specified requirements. It is done at the client's place, directly, skipping all the in-house testing activities.

24. Equivalence partitioning testing: Equivalence Partitioning is also known as Equivalence Class Partitioning. In equivalence partitioning, inputs to the software or system are divided into groups that are expected to exhibit similar behavior, so they are likely to be processed in the same way. Hence, one input from each group is selected to design the test cases.
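A minimal sketch of the idea in Python, assuming a made-up rule that an age field accepts values 18 to 60; one representative value is tested per partition.

```python
# Equivalence partitioning sketch for an assumed rule: age must be 18..60.
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 60

# One representative per partition; any other value in the same
# partition is expected to behave identically.
partitions = {
    "invalid_below": 10,   # any value < 18
    "valid":         35,   # any value in 18..60
    "invalid_above": 70,   # any value > 60
}

for name, value in partitions.items():
    expected = (name == "valid")
    assert is_valid_age(value) == expected, f"{name} failed for {value}"
print("All partition representatives behaved as expected")
```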

25. Boundary value analysis testing: Boundary value analysis (BVA) is based on testing the boundary values of valid and invalid partitions. The behavior at the edge of each equivalence partition is more likely to be incorrect than the behavior within the partition, so boundaries are an area where testing is likely to yield defects. Every partition has its maximum and minimum values, and these are the boundary values of the partition. A boundary value for a valid partition is a valid boundary value; similarly, a boundary value for an invalid partition is an invalid boundary value.
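Continuing the same assumed 18-to-60 age rule from the previous sketch, a boundary value analysis tests at and around each edge of the valid partition.

```python
# Boundary value analysis sketch for the assumed 18..60 age rule.
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 60

# Values at and adjacent to each boundary, with expected results.
boundary_cases = {
    17: False,  # just below the lower boundary (invalid)
    18: True,   # lower boundary (valid)
    19: True,   # just above the lower boundary
    59: True,   # just below the upper boundary
    60: True,   # upper boundary (valid)
    61: False,  # just above the upper boundary (invalid)
}

for value, expected in boundary_cases.items():
    assert is_valid_age(value) == expected, f"Boundary check failed at {value}"
print("All boundary values behaved as expected")
```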

26. Decision tables testing: The Decision Table technique is also known as the Cause-Effect Table. This test technique is appropriate for functionalities which have logical relationships between inputs (if-else logic). In the decision table technique, we deal with combinations of inputs. To identify the test cases with a decision table, we consider conditions and actions: we take conditions as inputs and actions as outputs.
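A small sketch of a decision table in code, using an assumed login rule where access is granted only when both conditions hold.

```python
# Decision table sketch for an assumed login rule: access is granted
# only when both the email and the password are correct.
# Conditions are the inputs; the action (grant/deny) is the output.
decision_table = [
    # (email_ok, password_ok) -> access_granted
    ((True,  True),  True),
    ((True,  False), False),
    ((False, True),  False),
    ((False, False), False),
]

def grant_access(email_ok: bool, password_ok: bool) -> bool:
    return email_ok and password_ok

for (email_ok, password_ok), expected in decision_table:
    assert grant_access(email_ok, password_ok) == expected
print("All decision-table rules verified")
```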

27. Cause-effect graph testing – Refer Decision Table Testing

28. State transition testing: Using state transition testing, we pick test cases
from an application where we need to test different system transitions. We
can apply this when an application gives a different output for the same
input, depending on what has happened in the earlier state.
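A toy sketch of state transition testing, assuming an ATM-style rule where three wrong PINs lock the account, so the same input produces a different output depending on the prior state.

```python
# State transition sketch: an assumed rule where three consecutive
# wrong PIN entries lock the account.
class PinChecker:
    def __init__(self, pin="1234", max_attempts=3):
        self.pin = pin
        self.attempts = 0
        self.max_attempts = max_attempts
        self.locked = False

    def enter(self, pin):
        if self.locked:
            return "locked"
        if pin == self.pin:
            self.attempts = 0
            return "granted"
        self.attempts += 1
        if self.attempts >= self.max_attempts:
            self.locked = True
            return "locked"
        return "retry"

atm = PinChecker()
assert atm.enter("0000") == "retry"    # state: 1 failed attempt
assert atm.enter("0000") == "retry"    # state: 2 failed attempts
assert atm.enter("0000") == "locked"   # transition to the locked state
assert atm.enter("1234") == "locked"   # correct PIN now rejected: state matters
print("State transitions behaved as expected")
```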

29. Exhaustive Testing: Testing all the functionalities using all valid and
invalid inputs and preconditions is known as Exhaustive testing.

30. Early Testing: Defects detected in early phases of SDLC are less
expensive to fix. So conducting early testing reduces the cost of fixing
defects.

31. Use case testing: Use case testing is carried out with the help of a use case document. It is done to identify test scenarios for end-to-end testing.

32. Scenario testing: Scenario testing is a software testing technique based on a scenario. It involves converting business requirements into test scenarios for better understanding and to achieve end-to-end testing. A well-designed scenario should be motivating, credible, complex, and easy to evaluate.

33. Documentation testing: Documentation testing is done to validate the documented artifacts such as requirements, the test plan, the traceability matrix, and test cases.

34. Statement coverage testing: Statement coverage testing is a white box testing technique which validates that each and every statement in the code is executed at least once.

35. Decision coverage testing/branch coverage testing: Decision coverage or branch coverage testing is a white box testing technique which validates that every possible branch is executed at least once.

36. Path testing: Path coverage testing is a white box testing technique which
is to validate that all the paths of the program are executed at least once.
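To illustrate the difference between statement coverage (item 34) and branch coverage (item 35) on a toy function: a single test can execute every statement yet still miss the implicit else branch.

```python
# Coverage sketch on a made-up function with one branch.
def apply_discount(price: float, is_member: bool) -> float:
    if is_member:
        price = price * 0.9   # 10% member discount
    return price

# Statement coverage: this single test executes every line above.
assert abs(apply_discount(100.0, True) - 90.0) < 1e-9

# Branch coverage additionally requires the False branch of the "if",
# where the discount line is skipped.
assert apply_discount(100.0, False) == 100.0
print("Both branches exercised")
```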

37. Mutation testing: Mutation testing is a type of white box testing which is
to change (mutate) certain statements in the source code and verify if the
tests are able to find the errors.

38. Loop testing: Loop testing is a white box testing technique which is to
validate the different kind of loops such as simple loops, nested loops,
concatenated loops and unstructured loops.

39. Performance testing: This type of testing determines or validates the speed, scalability, and/or stability characteristics of the system or application under test. Performance is concerned with achieving response times, throughput, and resource-utilization levels that meet the performance objectives for the project or product.
40. Load testing: It is to verify that the system/application can handle the
expected number of transactions and to verify the system/application
behavior under both normal and peak load conditions.

41. Stress testing: It is to verify the behavior of the system once the load
increases more than its design expectations.

42. Soak testing: Running a system at high load for a prolonged period of
time to identify the performance problems is called Soak Testing.

43. Endurance testing: Refer Soak testing

44. Stability testing: Refer Soak testing

45. Scalability Testing: Scalability testing is a type of non-functional testing. It is to determine how the application under test scales with increasing workload.

46. Volume testing: It is to verify that the system/application can handle a large amount of data.

47. Robustness testing: Robustness testing is a type of testing that is performed to validate the robustness of the application.

48. Vulnerability testing: Vulnerability testing is the process of identifying the vulnerabilities or weaknesses in the application.

49. Adhoc testing: Ad-hoc testing is quite the opposite of formal testing. It is an informal testing type in which testers randomly test the application without following any documents or test design techniques. This testing is primarily performed when the testers' knowledge of the application under test is very high.
50. Exploratory testing: Usually, this process will be carried out by domain
experts. They perform testing just by exploring the functionalities of the
application without having the knowledge of the requirements.

51. Retesting: To check whether the defects found and posted in an earlier build are fixed in the current build. Say Build 1.0 was released, and the test team found and posted some defects (Defect IDs 1.0.1, 1.0.2). When Build 1.1 is released, testing defects 1.0.1 and 1.0.2 in this build is retesting.

52. Regression testing: Repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the changes in the software being tested or in other related or unrelated software components.

53. Smoke testing: Smoke Testing is done to make sure the build we received from the development team is testable. It is also called the “Day 0” check and is done at the “build level”. It saves the test team from testing the whole application when the key features don't work or the key bugs have not been fixed yet.

54. Sanity testing: Sanity Testing is done during the release phase to check the main functionalities of the application without going deeper. It is also called a subset of regression testing and is done at the “release level”. At times, due to release time constraints, rigorous regression testing can't be done on the build; sanity testing covers that part by checking the main functionalities.

55. Dynamic testing: Dynamic testing involves the execution of code. It validates the output against the expected outcome.

56. Static testing: Static testing involves reviewing the documents to identify defects in the early stages of the SDLC.

57. Monkey testing: Performing abnormal actions on the application deliberately in order to verify the stability of the application.
58. Gorilla testing: Gorilla testing is done by testers, sometimes developers
also join hands with testers. It involves testing a system repeatedly to test
the robustness of the system.

59. Usability testing: To verify whether the application is user-friendly and can be comfortably used by an end user. The main focus in this testing is to check whether the end user can understand and operate the application easily. An application should be self-explanatory and must not require training to operate it.

60. Accessibility testing: Accessibility testing is a subset of usability testing. It aims to discover how easily people with disabilities (such as visual impairments, physical impairments, hearing impairments, cognitive impairments, or learning impairments) can use a system.

61. Compatibility testing: It is to deploy and check whether the application works as expected in different combinations of environmental components.

62. Configuration testing: Configuration testing is the process of testing an application with each of the supported hardware and software configurations to find out whether the application can work without any issues.

63. Localization testing: Localization is a process of adapting globalized software for a specific region or language by adding locale-specific components.

64. Globalization testing: Globalization is a process of designing a software application so that it can be adapted to various languages and regions without any changes.

65. Internationalization testing – Refer Globalization testing

66. Positive Testing: It is to determine what the system is supposed to do. It helps to check whether the application justifies the requirements.

67. Negative testing: It is to determine what the system is not supposed to do. It helps to find defects in the software.

68. Security testing: Security testing is a process to determine whether the system protects data and maintains functionality as intended.

69. Penetration testing: Penetration testing is also known as pen testing. It is a type of security testing performed to evaluate the security of the system.

70. Database testing: Database testing is done to validate that the data shown in the UI matches the data stored in the database. It involves checking the schema, tables, triggers, etc., of the database.

71. Bucket Testing: Bucket testing is a method to compare two versions of an application against each other to determine which one performs better.

72. A/B testing: Refer Bucket Testing…

73. Split testing– Refer bucket testing…

74. Reliability Testing: Testing the application continuously for a long period of time in order to verify its stability.

75. Interface Testing: Interface testing is performed to evaluate whether two intended modules pass data and communicate correctly with one another.

76. Concurrency testing: Concurrency testing means accessing the application at the same time with multiple users to ensure the stability of the system. It is mainly used to identify deadlock issues.
77. Fuzz testing: Fuzz testing is used to identify coding errors and security loopholes in an application by inputting massive amounts of random data in an attempt to make the system crash and reveal whether anything breaks.
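A crude, illustrative fuzzer in Python; the parser under test is a made-up example, and anything other than the expected rejection is treated as a finding.

```python
# Toy fuzzer: feed random strings to a hypothetical parser and report
# any crash that is not the expected, controlled rejection.
import random
import string

def parse_record(text: str) -> dict:
    # Hypothetical function under test: expects "key=value".
    key, value = text.split("=", 1)  # raises ValueError if no "="
    return {key: value}

random.seed(42)  # reproducible fuzz run
for _ in range(1000):
    junk = "".join(random.choices(string.printable, k=random.randint(0, 20)))
    try:
        parse_record(junk)
    except ValueError:
        pass  # expected rejection of malformed input
    except Exception as exc:  # anything else is a finding worth reporting
        print(f"Crash on input {junk!r}: {exc!r}")
```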

78. GUI Testing: Graphical User Interface Testing is to test the interface between the application and the end user. Testers mainly check that the appearance of elements such as fonts and colors conforms to the design specifications.

79. API testing: API stands for Application Programming Interface. API testing is a type of software testing that involves testing APIs using tools like SoapUI and Postman.
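API tests can also be scripted. Here is a minimal sketch using Python's requests library against a public demo endpoint; the URL and expected fields are assumptions for illustration.

```python
# Minimal scripted API check (assumed public demo endpoint).
import requests

response = requests.get("https://jsonplaceholder.typicode.com/todos/1")

# Verify the status code and a couple of expected fields in the payload.
assert response.status_code == 200, "Unexpected HTTP status"
body = response.json()
assert body.get("id") == 1, "Unexpected response payload"
print("API responded as expected:", body)
```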

80. Agile testing: Agile testing is a type of testing that follows the principles of the agile software development methodology. In agile testing, testing is conducted throughout the life cycle of the continuously evolving project instead of being confined to a particular phase.

81. End to end testing – Refer system testing…

82. Recovery testing: Recovery testing is performed in order to determine how quickly the system can recover after a system crash or hardware failure. It comes under non-functional testing.

83. Risk based testing: Identifying the modules or functionalities which are most likely to cause failures and then testing those functionalities first.

84. Installation testing: It is to check whether the application is successfully installed and working as expected after installation.

85. Formal Testing: A process where the testers test the application with pre-planned procedures and proper documentation.

86. Pilot testing: Pilot testing is carried out under real-time operating conditions by the company in order to gain the confidence of the client.

87. Backend testing: Refer Database testing…

88. Cross-browser testing: Cross-browser testing is a type of non-functional test which helps us ensure that our website or web application works as expected in various web browsers.

89. Browser compatibility testing: Refer cross-browser testing…

90. Forward compatibility testing: Forward compatibility testing validates that the application under test works as intended with later versions of the software than the current one.

91. Backward compatibility testing: Backward compatibility testing validates that the application under test works as intended with earlier versions of the software than the current one.

92. Downward compatibility testing: Refer Backward compatibility testing…

93. Compliance testing: Compliance testing is a non-functional testing which is done to validate whether the software meets a defined set of standards.

94. Conformance testing: Refer compliance testing…

95. UI testing: In UI testing, testers aim to test both GUI and Command Line
Interfaces (CLIs)
96. Destructive testing: Destructive testing is a testing technique which aims
to validate the robustness of the application by testing continuously until the
application breaks.

97. Dependency testing: Dependency testing is a testing technique which examines the requirements of an application for pre-conditions, initial states, and configuration for the proper functioning of the application.

98. Crowdsourced testing: Crowdsourced testing is carried out by a community of expert quality assurance testers through an online platform.

99. ETL testing: ETL (Extract, Transform and Load) testing involves validating the data movement from source to destination, verifying the data count in both source and destination, verifying data extraction and transformation, and verifying the table relations.

100. Data warehouse testing: Refer ETL testing…

101. Fault injection testing: Fault injection testing is a testing technique in which faults are intentionally introduced into the code in order to improve test coverage.

102. Failover testing: Failover testing is a testing technique that validates a system's ability to allocate extra resources during a server failure and transfer processing to back-up systems.

103. All pair testing: The all-pairs approach is to test the application with all possible discrete combinations of each pair of input parameters.

104. Pairwise Testing: Refer All pair testing

104. Pairwise Testing: Refer All pair testing

What is Performance Testing?


Performance testing and types of performance testing such as Load Testing,
Volume Testing, Stress Testing, Capacity Testing, Soak/Endurance Testing
and Spike Testing come under Non-functional Testing

In the field of Software Testing, Testers mainly concentrate on Black Box and
White Box Testing. Under the Black Box testing, again there are different
types of testing. The major types of testing are Functionality testing and
Non-functional testing. As I mentioned in the first paragraph of this article,
Performance testing and testing types related to performance testing fall
under Non-functional testing.

In current market performance and responsiveness of applications play an


important role. We conduct performance testing to address the bottlenecks of
the system and to fine tune the system by finding the root cause of
performance issues. Performance testing answers to the questions like how
many users the system could handle, How well the system could recover
when the no. of users crossed the maximum users, What is the response
time of the system under normal and peak loads.

We use tools such as HP LoadRunner, Apache JMeter, etc., to measure the


performance of any System or Application Under Test (AUT). Let’s see what
are these terms in detail below.

Types of Performance Testing:

Performance Testing:

Performance testing determines or validates the speed, scalability, and/or stability characteristics of the system or application under test. Performance is concerned with achieving response times, throughput, and resource-utilization levels that meet the performance objectives for the project or product.

Capacity Testing:

Capacity Testing is to determine how many users the system/application can handle successfully before the performance goals become unacceptable. This allows us to avoid potential future problems such as an increased user base or an increased volume of data.
Load Testing:

Load Testing is to verify that the system/application can handle the expected number of transactions and to verify the system/application behavior under both normal and peak load conditions (number of users).
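A very rough load-check sketch using only the Python standard library; real load tests would use a tool like JMeter or LoadRunner, and the URL and user count here are assumptions.

```python
# Crude load sketch: fire N concurrent requests and report timings.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "https://example.com/"   # hypothetical system under test
USERS = 20                     # assumed number of simulated concurrent users

def one_request(_):
    start = time.time()
    with urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.time() - start

with ThreadPoolExecutor(max_workers=USERS) as pool:
    timings = list(pool.map(one_request, range(USERS)))

print(f"avg {sum(timings)/len(timings):.3f}s, max {max(timings):.3f}s")
```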

Volume Testing:

Volume Testing is to verify that the system/application can handle a large amount of data. This testing focuses on the database.

Stress Testing:

Stress Testing is to verify the behavior of the system once the load increases beyond the system's design expectations. This testing addresses which components fail first when we stress the system by applying load beyond the design expectations, so that we can design a more robust system.

Soak Testing:

Soak Testing is also known as Endurance Testing. Running a system at high load for a prolonged period of time to identify performance problems is called Soak Testing. It is to make sure the software can handle the expected load over a long period of time.

Spike Testing:

Spike Testing is to determine the behavior of the system under a sudden increase of load (a large number of users) on the system.

Levels of Testing:

Software testing is a process to evaluate the functionality of a software application, with the intent to find whether the developed software meets the specified requirements and to identify defects, in order to produce a quality, defect-free product.

Let's see what the levels of testing are:

1. Unit Testing

2. Integration Testing

3. System Testing

4. Acceptance Testing

UNIT TESTING:

Unit Testing is done to check whether the individual modules of the source code are working properly, i.e., testing each and every unit of the application separately by the developer in the developer's environment. It is also known as Module Testing or Component Testing.
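A minimal unit test sketch using Python's built-in unittest framework; the function under test is a toy example.

```python
# Unit test sketch: one small function tested in isolation.
import unittest

def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

if __name__ == "__main__":
    unittest.main()
```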

INTEGRATION TESTING:

Integration Testing is the process of testing the connectivity or data transfer between unit-tested modules. It is also known as I&T Testing or String Testing.

It is subdivided into the Top-Down approach, Bottom-Up approach, and Sandwich approach (a combination of Top-Down and Bottom-Up). This process is carried out by using dummy programs called Stubs and Drivers. Stubs and Drivers do not implement the entire programming logic of the software module; they just simulate data communication with the calling module.

Big Bang Integration Testing:

In Big Bang Integration Testing, the individual modules are not integrated until all the modules are ready. Then they are run together to check whether the system performs well. This approach has disadvantages: defects are found at a later stage, and it is difficult to tell whether a defect arose in an interface or in a module.

Top Down Integration Testing

In Top-Down Integration Testing, the high-level modules are integrated and tested first, i.e., testing from the main module to sub-modules. In this type of testing, Stubs are used as temporary modules if a module is not ready for integration testing.

Bottom Up Integration Testing

In Bottom-Up Integration Testing, the low-level modules are integrated and tested first, i.e., testing from sub-modules to the main module. Just like Stubs, Drivers are used here as temporary modules for integration testing.

Stub:

It is called by the Module under Test.

Driver:

It calls the Module to be tested.
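A small sketch showing both in code; the modules involved are hypothetical. The stub is called by the module under test, while the driver calls the module under test.

```python
# Stub and driver sketch with hypothetical modules.

# Module under test: computes an order total using a tax service.
def order_total(amount: float, tax_service) -> float:
    return amount + tax_service.tax_for(amount)

# STUB: called BY the module under test, standing in for an
# unfinished real tax module with canned behavior.
class TaxServiceStub:
    def tax_for(self, amount: float) -> float:
        return 0.10 * amount  # fixed 10% tax for the test

# DRIVER: CALLS the module under test, standing in for a
# not-yet-built checkout module that would normally invoke it.
def driver():
    total = order_total(100.0, TaxServiceStub())
    assert abs(total - 110.0) < 1e-9, f"Unexpected total: {total}"
    print("Integration check passed:", total)

driver()
```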

SYSTEM TESTING (END TO END TESTING):

It is black box testing. Testing the fully integrated application is also called end-to-end scenario testing. It ensures that the software works on all intended target systems, verifies every input in the application to check for desired outputs, and tests the user's experience with the application.

ACCEPTANCE TESTING:

Acceptance testing is performed to obtain customer sign-off so that the software can be delivered and payments received.

Types of Acceptance Testing are Alpha, Beta & Gamma Testing.

Alpha Testing:

Alpha testing is mostly like performing usability testing, and it is done by the in-house developers who developed the software. Sometimes alpha testing is done by the client or outsiders in the presence of developers or testers.

Beta Testing:

Beta testing is done by a limited number of end users before delivery; change requests are fixed when users give feedback or report defects.

Gamma Testing:

Gamma testing is done when the software is ready for release with specified requirements; this testing is done directly, skipping all the in-house testing activities.

Manual Testing Vs Automation Testing in Software Testing:

Manual Testing:

Manual testing is the process of testing the software manually to find the defects. Testers should have the perspective of an end user and ensure all the features are working as mentioned in the requirement document. In this process, testers execute the test cases and generate the reports manually without using any automation tools.

Advantages:

Manual testing can be done on all kinds of applications

It is preferable for short life cycle products

Newly designed test cases should be executed manually

Application must be tested manually before it is automated

It is preferred in projects where the requirements change frequently and for products where the GUI changes constantly

It is cheaper in terms of initial investment compared to Automation testing

It requires less time and expense to begin productive manual testing

It allows the tester to perform ad-hoc testing

The tester does not need knowledge of automation tools

Disadvantages:

Manual testing is time-consuming, mainly while doing regression testing.

It is expensive compared to automation testing in the long run.

Automation Testing:

Automation testing is the process of testing the software using automation tools to find the defects. In this process, executing the test scripts and generating the results are performed automatically by automation tools. Some of the most popular tools for automation testing are HP QTP/UFT, Selenium WebDriver, etc.

Advantages:

Automation testing is faster in execution

It is cheaper compared to manual testing in the long run

Automated testing is more reliable

Automated testing is more powerful and versatile

It is mostly used for regression testing

It does not require human intervention. Test scripts can be run unattended

It helps to increase the test coverage

Disadvantages:

It is recommended only for stable products

Automation testing is expensive initially

Most of the automation tools are expensive

It has some limitations, such as handling captchas, fonts, and colors

Huge maintenance in case of repeated changes in the requirements

Not all tools support all kinds of testing, such as Windows, web, mobile, and performance/load testing

If you find any other points that we overlooked, just put them in the comments. We will include them and keep this “Manual Testing Vs Automation Testing” post updated.

BLACK BOX TESTING:

It is also called Behavioral/Specification-Based/Input-Output Testing.

Black Box Testing is a software testing method in which testers evaluate the functionality of the software under test without looking at the internal code structure. This can be applied to every level of software testing such as Unit, Integration, System, and Acceptance Testing.

Testers create test scenarios/cases based on software requirements and specifications, so it is also known as Specification-Based Testing.

The tester performs testing only on the functional part of an application to make sure the behavior of the software is as expected, so it is also known as Behavioral Testing.

The tester passes input data to check whether the actual output matches the expected output, so it is also known as Input-Output Testing.

Black Box Testing Techniques:

Equivalence Partitioning

Boundary Value Analysis

Decision Table

State Transition

Equivalence Partitioning: Equivalence Partitioning is also known as Equivalence Class Partitioning. In equivalence partitioning, inputs to the software or system are divided into groups that are expected to exhibit similar behavior, so they are likely to be processed in the same way. Hence, one input from each group is selected to design the test cases. (See item 24 above for an example.)

Boundary Value Analysis: Boundary value analysis (BVA) is based on testing the boundary values of valid and invalid partitions. The behavior at the edge of each equivalence partition is more likely to be incorrect than the behavior within the partition, so boundaries are an area where testing is likely to yield defects. (See item 25 above for an example.)

Decision Table: The Decision Table technique is also known as the Cause-Effect Table. This test technique is appropriate for functionalities which have logical relationships between inputs (if-else logic). In the decision table technique, we deal with combinations of inputs: we take conditions as inputs and actions as outputs. (See item 26 above for an example.)

State Transition: Using state transition testing, we pick test cases from an application where we need to test different system transitions. We can apply this when an application gives a different output for the same input, depending on what has happened in the earlier state. (See item 28 above for an example.)

Types of Black Box Testing:

Functionality Testing: In simple words, what the system actually does is functional testing.

Non-functionality Testing: In simple words, how well the system performs is non-functional testing.

WHITE BOX TESTING:

It is also called Glass Box, Clear Box, or Structural Testing.

White Box Testing is based on the application's internal code structure. In white-box testing, an internal perspective of the system, as well as programming skills, are used to design test cases. This testing is usually done at the unit level.

White Box Testing Techniques:

Statement Coverage

Branch Coverage

Path Coverage

What is Smoke Testing in Software Testing:

Smoke Testing is done to make sure the build we received from the development team is testable. It is also called the “Day 0” check and is done at the “build level”.

It saves the test team from testing the whole application when the key features don't work or the key bugs have not been fixed yet.

What is Sanity Testing in Software Testing:

Sanity Testing is done during the release phase to check the main functionalities of the application without going deeper. It is also called a subset of Regression Testing. It is done at the “release level”.

At times, due to release time constraints, rigorous regression testing can’t be done on the build; sanity testing does that part by checking the main functionalities.

Earlier I have posted a detailed post on “Difference between Regression and Retesting”. If you haven’t gone through it, you can browse it by clicking on the link.

Smoke Testing Vs Sanity Testing

Example to showcase the difference between Smoke and Sanity Testing:

For example: In a project, for the first release, the Development team releases the build for testing and the test team tests the build. Testing the build for the very first time to accept or reject it is what we call Smoke Testing. If the test team accepts the build, then that particular build goes for further testing. Imagine the build has 3 modules, namely Login, Admin, Employee. The test team tests the main functionalities of the application without going deeper. This is what we call Sanity Testing.

Some more differences between Smoke Testing and Sanity Testing:

Smoke Testing is done to make sure the build we received from the development team is testable or not.

Sanity Testing is done during the release phase to check the main functionalities of the application without going deeper.

Smoke Testing is performed by both Developers and Testers.

Sanity Testing is performed by Testers alone.

Smoke Testing exercises the entire application from end to end.

Sanity Testing exercises only a particular component of the entire application.

In Smoke Testing, the build may be either stable or unstable.

In Sanity Testing, the build is relatively stable.

What is Regression Testing?

From an interview perspective, you will be asked to define regression testing.

Regression testing definition: Repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the changes in the software being tested or in other related or unrelated software components.

In simple words, we do regression testing by re-executing the tests against the modified application to evaluate whether the modified code breaks anything which was working earlier. Anytime we modify an application, we should do regression testing (we run regression tests).

Regression testing gives confidence to the developers that there is no broken functionality after modifying the production code. It makes sure that there are no unexpected side effects. Hope you have understood what regression testing is. Now let’s see when we do this type of testing.

When do we do Regression Testing?

We do software regression testing whenever the production code is modified. Usually, we execute regression tests in the following cases (examples are given with each):

1. When new functionalities are added to the application.

Example: A website has a login functionality which allows users to log in only with Email. Now the new feature is to provide an option to log in using Facebook as well.

2. When there is a Change Requirement (In organizations, we call it a CR).

Example: The Remember Password option, which was available earlier, should be removed from the login page.

3. When there is a Defect Fix.

Example: Imagine the Login button is not working on a login page and a tester reports a bug stating that the login button is broken. Once the bug is fixed by the developers, testers test it to make sure whether the Login button is working as per the expected result. Simultaneously, testers test other functionalities which are related to the login button.

4. When there is a Performance Issue Fix.

Example: Loading the home page takes 5 seconds. The fix reduces the load time to 2 seconds.

5. When there is an Environment change.

Example: Updating the Database from MySQL to Oracle.

So far we have seen what regression testing is and when we do it. Now let’s see how we do regression testing.

Regression Testing – Manual or Automation?

Regression tests are generally extremely tedious and time-consuming. We do regression testing after every deployment, so it makes life easier to automate the test cases instead of running them manually each and every time. If we have thousands of test cases, it’s better to create automation test scripts for the test cases which we run on every build (i.e., regression testing).

Automated regression testing is a regression testing best practice, and it is the choice of organizations that want to save a lot of time and run nightly builds.
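As a minimal sketch, here is an automated regression suite using Python's built-in unittest module; the login() function and its credentials are hypothetical stand-ins for the production code that changes between builds.

```python
import unittest

def login(username: str, password: str) -> bool:
    """Hypothetical production code under regression test."""
    return username == "rajkumar" and password == "secret"

class LoginRegressionTests(unittest.TestCase):
    def test_valid_credentials(self):
        self.assertTrue(login("rajkumar", "secret"))

    def test_invalid_password(self):
        self.assertFalse(login("rajkumar", "wrong"))

    def test_blank_username(self):
        self.assertFalse(login("", "secret"))

if __name__ == "__main__":
    # Re-running this file on every new build re-executes the whole suite.
    unittest.main()
```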
Regression Testing Tools:

Some of the Regression Testing Tools are:

Selenium

QTP/UFT

SahiPro

Ranorex

TestComplete

Watir

IBM Rational Functional Tester

What is Retesting?

Retesting: Retesting is done to ensure that the defects which were found and posted in the earlier build are fixed in the current build.

Retesting is running the previously failed test cases again on the new
software to verify whether the defects posted earlier are fixed or not.

In simple words, Retesting is testing a specific bug after it was fixed.

Example: Say, Build 1.0 was released. While testing the Build 1.0, Test team
found some defects (example, Defect Id 1.0.1 and Defect Id 1.0.2) and
posted. The test team tests the defects 1.0.1 and 1.0.2 in the Build 1.1 (only
if these two defects are mentioned in the Release Note of the Build 1.1) to
make sure whether the defects are fixed or not.

Process: As per the Bug Life Cycle, once a tester finds a bug, the bug is reported to the Development Team. The status of the bug should be “New”. The Development Team may accept or reject the bug. If the development team accepts the bug, then they fix it and release it in the next release. The status of the bug will be “Ready For QA”. Now the tester verifies the bug to find out whether it is resolved or not. This testing is known as retesting. Retesting is a planned testing. We use the same test cases with the same test data which we used in the earlier build. If the bug is not found, then we change the status of the bug to “Fixed”; else we change the status to “Not Fixed” and send a Defect Retesting Document to the development team.

When Do We Do Re-testing:

1. When there is a particular bug fix specified in the Release Note:

Once the development team releases the new build, the test team has to test the already posted bugs to make sure whether the bugs are fixed or not.

2. When a Bug is rejected:

At times, the development team rejects a few bugs which were raised by the testers and marks the status of the bug as Not Reproducible. In this case, the testers need to retest the same issue to let the developers know that the issue is valid and reproducible.

To avoid this scenario, we need to write a good bug report. Here is a post on
how to write a good bug report.

3. When a Client calls for a retesting:

At times, the Client may request us to do the test again to gain confidence in the quality of the product. In this case, test teams test the product again.

A product should never be released after modification has been done to the code with just retesting of the bug fixes; we need to do Regression Testing too.

What is the difference between Regression And Retesting

REGRESSION TESTING:

Repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the changes in the software being tested or in other related or unrelated software components.

Usually, we do regression testing in the following cases:

New functionalities are added to the application

Change Requirement (In organizations, we call it as CR)

Defect Fixing

Performance Issue Fix

Environment change (E.g., Updating the DB from MySQL to Oracle)

RETESTING:

Retesting is done to ensure that the defects which were found and posted in the earlier build are fixed in the current build.

Say, Build 1.0 was released. Test team found some defects (Defect Id 1.0.1,
1.0.2) and posted.

Build 1.1 was released, now testing the defects 1.0.1 and 1.0.2 in this build
is retesting.

Example to showcase the difference between Regression and Retesting:

Let’s take two scenarios.

Case 1: Login Page – Login button not working (Bug)

Case 2: Login Page – Added “Stay signed in” checkbox (New feature)

In Case 1, the Login button is not working, so the tester reports a bug. Once the bug is fixed, testers test it to make sure whether the Login button is working as per the expected result.
Earlier I have posted a detailed post on “Bug Report Template”. If you haven’t
gone through it, you can browse by clicking here. Also, you could download
the Sample Bug Report Template / Defect Report Template from here.

In Case 2, tester tests the new feature to ensure whether the new feature
(Stay signed in) is working as intended.

Case 1 comes under Retesting. Here the tester retests the bug which was found in the earlier build by using the steps to reproduce which were mentioned in the bug report.

Also in Case 1, the tester tests other functionalities which are related to the login button, which we call Regression Testing.

Case 2 comes under Regression Testing. Here the tester tests the new feature (Stay signed in) and also tests the relevant functionalities. Testing the relevant functionalities while testing the new feature comes under Regression Testing.

Another Example:

Imagine an Application Under Test has three modules, namely Admin, Purchase and Finance. The Finance module depends on the Purchase module. Suppose a tester finds a bug in the Purchase module and posts it. Once the bug is fixed, the tester needs to do Retesting to verify whether the bug related to Purchase is fixed or not, and the tester also needs to do Regression Testing to test the Finance module which depends on the Purchase module.

Some other Differences between Regression and Retesting:

Retesting is done on failed test cases whereas Regression Testing is done on passed test cases.

Retesting makes sure that the original defect has been corrected whereas Regression Testing makes sure that there are no unexpected side effects.

Entry Criteria:
The prerequisites that must be achieved before commencing
the testing process.

Exit Criteria:
The conditions that must be met before testing should be
concluded.

Entry and Exit Criteria:

As mentioned earlier, each and every phase of the STLC has Entry and Exit criteria. Let’s see the phase-wise entry and exit criteria:

Requirement Analysis:
A quality assurance professional has to verify the requirement documents prior to starting the phases like Planning, Design, Environment Setup, Execution, Reporting and Closure. We prepare test artifacts like the Test Strategy, Test Plan and others based on the analysis of the requirement documents.

Test Planning:
The Test Manager/Test Lead prepares the Test Strategy and Test Plan documents, and testers get a chance to be involved in the preparation process. It varies from company to company.

Entry Criteria: Requirements Documents

Exit Criteria: Test Strategy, Test Plan and Test Effort Estimation document.

Test Design:
In the test design phase, testers prepare test scenarios, test cases/test scripts and test data based on the Requirement Documents and Test Plan. Here testers are involved in the process of preparing the test cases, reviewing the test cases of peers, getting the approvals of test cases and storing the test cases in a repository.

Entry Criteria: Requirements Documents, Test Plan

Exit Criteria: Test cases, Test Scripts (if automation), Test data.

Test Environment Setup:

In this phase, in most of the companies, testers won’t be involved in the process of preparing the test environment setup. Most probably the Dev Team or Implementation Team prepares the test environment. It varies from company to company. As a tester, you will be given an installation document to set up the test environment and to verify the readiness of the given test environment.

Entry Criteria: Test Plan, Smoke Test cases, Test Data

Exit Criteria: Test Environment, Smoke Test Results

Test Execution:
In this phase, testers are involved in the execution of test cases, reporting the defects and updating the requirement traceability matrix.

Entry Criteria: Test Plan document, Test cases, Test data, Test
Environment

Exit Criteria: Test case execution report, Defect report, RTM

Test Closure:
Traditionally this is the final phase of testing. The Test Lead is involved in preparing Test Metrics and the Test Closure Report. The Test Lead sends out the test closure report once the testing process is completed.

Entry Criteria: Test case Execution report (make sure there are no high-severity defects open), Defect report

Exit Criteria: Test Closure report, Test metrics

Test Scenario Vs Test Case:

Test Scenario:
A Test Scenario gives the idea of what we have to test. A Test Scenario is like a high-level test case.

Test Scenario answers “What to be tested”

Assume that we need to test the functionality of a login page of the Gmail application. The test scenario for the Gmail login page functionality is as follows:

Test Scenario: Verify the login functionality

Test Case:
Test cases are the set of positive and negative executable
steps of a test scenario which has a set of pre-conditions, test
data, expected result, post-conditions and actual results.

Test Case answers “How to be tested”

Assume that we need to test the functionality of a login page of the Gmail application. The test cases for the above login page functionality are as follows:
Test Case 1: Enter valid User Name and valid Password

Test Case 2: Enter valid User Name and invalid Password

Test Case 3: Enter invalid User Name and valid Password

Test Case 4: Enter invalid User Name and invalid Password

Earlier I have posted a detailed post on Test Case with an explanation. If you have missed it, you could check the post by clicking here.

Manual testing methods are classified as follows:
1. Black Box Testing
2. White Box Testing
3. Grey Box Testing

BLACK BOX TESTING:


Black Box Testing is a software testing method in which testers
evaluate the functionality of the software under test without
looking at the internal code structure. This can be applied to
every level of software testing such as Unit, Integration,
System and Acceptance Testing.

Testers create test scenarios/cases based on software requirements and specifications. So it is AKA Specification Based Testing, Behavioral Testing and Input-Output Testing.

The tester performs testing only on the functional part of an application to make sure the behavior of the software is as expected. So it is AKA Behavioral Based Testing.

The tester passes input data to make sure whether the actual output matches the expected output. So it is AKA Input-Output Testing.

There is no obligation on testers to have knowledge of the source code in this process.

Black Box Testing Techniques:

Equivalence Partitioning
Boundary Value Analysis
Decision Table
State Transition Testing

Types of Black Box Testing:

Functionality Testing: In simple words, what the system actually does is functional testing.
Non-functionality Testing: In simple words, how well the system performs is non-functionality testing.

WHITE BOX TESTING:

White Box Testing is based on an application’s internal code structure. In white-box testing, an internal perspective of the system, as well as programming skills, are used to design test cases. This testing is usually done at the unit level. It is AKA Glass Box, Clear Box, Structural, Open Box and Transparent Box Testing.

White Box Testing Techniques:

Statement Coverage
Branch Coverage
Path Coverage

GREY BOX TESTING:
Grey Box Testing is the combination of both White Box and Black Box Testing. The tester who works on this type of testing needs to have access to design documents. This helps to create better test cases in this process.

What is Verification And Validation In Software Testing

Verification And Validation:

In software testing, verification and validation are the processes to check whether a software system meets the specifications and whether it fulfills its intended purpose or not. Verification and validation is also known as V & V. It may also be referred to as software quality control. It is normally the responsibility of software testers as part of the Software Development Life Cycle.

VERIFICATION: (Static Testing)

Verification is the process of ensuring that we are building the product right, i.e., verifying the requirements which we have and verifying whether we are developing the product accordingly or not.

The activities involved here are Inspections, Reviews and Walkthroughs.

In simple words, Verification is verifying the documents.

As per IEEE-STD-610:
The process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the beginning of that phase.

Am I building the product right? It’s a Low-Level Activity.

Verification is a static method of checking documents and files.

VALIDATION: (Dynamic Testing)

Validation is the process of checking whether we are building the right product, i.e., validating whether the product which we have developed is right or not.

The activity involved in this is testing the software application.

In simple words, Validation is to validate the actual and expected output of the software.

As per IEEE-STD-610:
The process of evaluating software during or at the end of the
development process to determine whether it satisfies
specified requirements [IEEE-STD-610]

Am I building the right product? It’s a High-Level Activity.

Validation is a dynamic process of testing the real product.

This is a brief explanation of Verification And Validation in Software Testing.

The Complete Guide To Writing Test Strategy

The Test Strategy is a high-level document (a static document) and is usually developed by the project manager. It is a document which captures the approach on how we go about testing the product and achieving the goals. It is normally derived from the Business Requirement Specification (BRS). Documents like the Test Plan are prepared keeping this document as a base.

Even though testing differs between organizations, almost all software development organizations follow the Test Strategy document to achieve the goals and to follow best practice.

Usually the test team starts writing the detailed Test Plan and continues with further phases of testing once the test strategy is ready. In the Agile world, some companies do not spend time on test plan preparation due to the minimal time for each release, but they maintain a test strategy document. Maintaining this document for the entire project helps to mitigate unforeseen risks.

This is one of the important documents in the test deliverables. Like other test deliverables, the test team shares this with the stakeholders for a better understanding of the scope of the project, test approaches and other important aspects.

If you are a beginner, you may not get an opportunity to create a test strategy document, but it’s good to know how to create one. It will be helpful when you are handling a QA Team. Once you become a Project Lead or Project Manager, you have to develop the test strategy document. Creating an effective test strategy document is a skill which you must acquire. By writing a test strategy plan you can define the testing approach of your project. The test strategy document should be circulated to all the team members so that every team member will be consistent with the testing approach. Remember there is no rule to maintain all these sections in your Test Strategy document. It varies from company to company. This list gives a fair idea of how to write a good Test Strategy.

Sections of Test Strategy Document:

Following are the sections of the test strategy document:

1. Scope and overview
2. Test Approach
3. Testing tools
4. Industry standards to follow
5. Test deliverables
6. Testing metrics
7. Requirement Traceability Matrix
8. Risk and mitigation
9. Reporting tool
10. Test summary
We have seen what a test strategy document is and what it contains. Let’s discuss each section of the Test Strategy in STLC briefly.

Scope and overview:

In this section, we mention the scope of the testing activities (what to test and why to test) and give an overview of the AUT.

Example: Creating a new application (say, Google Mail) which offers email services. Test the functionality of emails and make sure it gives value to the customer.
Test Approach:
In this section, we usually define the following:

Test levels
Test types
Roles and responsibilities
Environment requirements
Test Levels:
This section lists out the levels of testing that will be
performed during QA Testing. Levels of testing such as unit
testing, integration testing, system testing and user
acceptance testing. Testers are responsible for integration
testing, system testing and user acceptance testing.

Test Types:
This section lists out the testing types that will be performed
during QA Testing.

Roles and responsibilities:

This section describes the roles and responsibilities of the Project Manager, Project Lead and individual testers.

Environment requirements:
This section lists out the hardware and software for the test
environment in order to commence the testing activities.

Testing tools:
This section will describe the testing tools necessary to conduct
the tests
Example: Name of Test Management Tool, Name of Bug
Tracking Tool, Name of Automation Tool

Industry standards to follow:

This section describes the industry standards to be followed to produce a high-quality system that meets or exceeds customer expectations. Usually, the project manager decides the testing models and procedures which need to be followed to achieve the goals of the project.

Test deliverables:
This section lists out the deliverables that need to produce
before, during and at the end of testing.

Read more on Test Deliverables here..

Testing metrics:
This section describes the metrics that should be used in the
project to analyze the project status.

Read more on Test Metrics here..

Requirement Traceability Matrix:

The requirement traceability matrix is used to trace the requirements to the tests that are needed to verify whether the requirements are fulfilled.

Read more on RTM here..

Risk and mitigation:

Identify all the testing risks that will affect the testing process and specify a plan to mitigate the risks.
Reporting tool:
This section outlines how defects and issues will be tracked
using a reporting tool.

Test Summary:
This section lists out what kind of test summary reports will be produced along with the frequency. Test summary reports will be generated on a daily, weekly or monthly basis depending on how critical the project is.

Software Test Plan Template with Detailed Explanation

In this post, we will learn how to write a Software Test Plan Template. Before that, let’s see what a Test Plan is. A test plan document is a document which contains the plan for all the testing activities to be done to deliver a quality product. The Test Plan document is derived from the Product Description, SRS, or Use Case documents for all future activities of the project. It is usually prepared by the Test Lead or Test Manager, and the focus of the document is to describe what to test, what not to test, how to test, when to test and who will do what test. Also, it includes the environment and tools needed, resource allocation, test techniques to be followed, risks and contingency plans. A test plan is a dynamic document and we should always keep it up-to-date. The test plan document guides us on how the testing activity should go on. Success of the testing project completely depends on the Test Plan.

The test plan is one of the documents in the test deliverables. Like other test deliverables, the test plan document is also shared with the stakeholders. The stakeholders get to know the scope, approach, objectives, and schedule of the software testing to be done.

How To Prepare an Effective Test Plan?
Some of the measures are to start preparing the test plan early in the STLC, keep the test plan short and simple to understand, and keep it up-to-date.

Who Prepares the Test Plan?

Usually, the Test Lead prepares the Test Plan and Testers are involved in the process of preparing the test plan document. Once the test plan is well prepared, the testers write test scenarios and test cases based on the test plan document.

Sections of Test Plan Template:

Following are the sections of the test plan document as per the IEEE 829 standard.

1. Test Plan Identifier
2. References
3. Introduction
4. Test Items
5. Features To Be Tested
6. Features Not To Be Tested
7. Approach
8. Pass/Fail Criteria
9. Suspension Criteria
10. Test Deliverables
11. Testing Tasks
12. Environmental Needs
13. Responsibilities
14. Staffing and Training Needs
15. Schedule
16. Risks and Contingencies
17. Approvals
Let’s see each component of the Test Plan Document. We are
going to present the Test Plan Document as per IEEE 829
Standards.

Test Plan Identifier:

The Test Plan Identifier is a unique number to identify the test plan.

Example: ProjectName_0001

References:
This section specifies the list of documents that support the test plan which you are currently creating.

Example: SRS (System Requirement Specification), Use Case Documents, Test Strategy, Project Plan, Project Guidelines etc.

Introduction:
Introduction or summary includes the purpose and scope of
the project

Example: The objective of this document is to test the functionality of the ‘ProjectName’.

Test Items:
A list of test items which will be tested

Example: Testing should be done on both the front end and back end of the application on Windows/Linux environments.
Features To Be Tested:
In this section, we list out all the features that will be tested
within the project.

Example: The features which are to be tested are Login Page, Dashboard, Reports.

Features Not To Be Tested:

In this section, we list out the features which are not included in the project.

Example: The Payment using PayPal feature is about to be removed from the application. There is no need to test this feature.

Approach:
The overall strategy of how testing will be performed. It
contains details such as Methodology, Test types, Test
techniques etc.,

Example: We follow Agile Methodology in this project

Pass/Fail Criteria:
In this section, we specify the criteria that will be used to
determine pass or fail percentage of test items.

Example: All the major functionality of the application should work as intended, the pass percentage of test cases should be more than 95%, and there should not be any critical bugs.

Suspension Criteria:
In this section, we specify when to stop the testing.
Example: If any of the major functionalities are not functional or the system experiences login issues, then testing should be suspended.

Test Deliverables:
The list of documents that need to be delivered at each phase of the testing life cycle; the list of all test artifacts.

Examples: Test Cases, Bug Report

Read more on “Test Deliverables”..

Testing Tasks:
In this section, we specify the list of testing tasks we need to
complete in the current project.

Example: The test environment should be ready prior to the test execution phase. A test summary report needs to be prepared.

Environmental Needs:
List of hardware, software and any other tools that are needed
for a test environment.

Responsibilities:
We specify the roles and responsibilities for each test task.

Example: The test plan should be prepared by the Test Lead. Preparation and execution of tests should be carried out by testers.

Staffing and Training Needs:
Plan training courses to improve the skills of resources in the project to achieve the desired goals.

Schedule:
Complete details on when each task should start, when it should finish and how much time it should take.

Example: Perform test execution – 120 man-hours, Test Reporting – 30 man-hours.

Risks and Contingencies:

In this section, we specify the probable risks and the contingencies to overcome those risks.

Example: Risk – In case of a wrong budget estimation, the cost may overrun. Contingency Plan – Establish the scope before beginning the testing tasks, pay attention to project planning and also track the budget estimates constantly.

Approvals:
Who should sign off and approve the testing project.

Example: The Project Manager should agree on completion of the project and determine the steps to proceed further.
Test Case Template With Explanation | Software Testing Material

Detailed Explanation – Test Case Template

A test case template is a document that comes under the test artifacts; it allows testers to develop the test cases for a particular test scenario in order to verify whether the features of an application are working as intended or not. Test cases are the set of positive and negative executable steps of a test scenario which has a set of pre-conditions, test data, expected result, post-conditions and actual results.

Most of the companies use test case management tools such as Quality Center (HP QC), JIRA etc., and some of the companies still use excel sheets to write test cases.

Assume we need to write test cases for a scenario (Verify the login of a Gmail account).

Here are some test cases:

1. Enter valid User Name and valid Password
2. Enter valid User Name and invalid Password
3. Enter invalid User Name and valid Password
4. Enter invalid User Name and invalid Password

Find the test case template screenshot below:

Let’s discuss the main fields of a test case:

PROJECT NAME: Name of the project the test cases belong to
MODULE NAME: Name of the module the test cases belong to
REFERENCE DOCUMENT: Mention the path of the reference documents (if any, such as Requirement Document, Test Plan, Test Scenarios etc.)
CREATED BY: Name of the Tester who created the test cases
DATE OF CREATION: When the test cases were created
REVIEWED BY: Name of the Tester who reviewed the test cases
DATE OF REVIEW: When the test cases were reviewed
DATE OF REVIEW: When the test cases were reviewed
EXECUTED BY: Name of the Tester who executed the test
case
DATE OF EXECUTION: When the test case was executed
TEST CASE ID: Each test case should be represented by a unique ID. It’s good practice to follow some naming convention for better understanding and differentiation.
TEST SCENARIO: Test Scenario ID or title of the test
scenario.
TEST CASE: Title of the test case
PRE-CONDITION: Conditions which need to be met before executing the test case.
TEST STEPS: Mention all the test steps in detail and in the order in which they should be executed.
TEST DATA: The data which could be used as input for the test cases.
EXPECTED RESULT: The result which we expect once the test case is executed. It might be anything such as the Home Page, a relevant screen, an error message etc.
POST-CONDITION: Conditions which need to be achieved when the test case is successfully executed.
ACTUAL RESULT: The result which the system shows once the test case is executed.
STATUS: If the actual and expected results are the same, mark it as Passed. Else mark it as Failed. If a test fails, it has to go through the bug life cycle to be fixed.
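To tie the fields together, here is a minimal sketch of one test case record as a Python dictionary, filled in with the Gmail login scenario above; the ID naming convention and the concrete values are illustrative assumptions.

```python
# A hypothetical test case record; field names follow the template above.
test_case = {
    "TEST CASE ID": "TC_LOGIN_001",          # assumed naming convention
    "TEST SCENARIO": "Verify the login functionality",
    "TEST CASE": "Enter valid User Name and valid Password",
    "PRE-CONDITION": "User has a registered Gmail account",
    "TEST STEPS": [
        "Open the Gmail login page",
        "Enter a valid User Name",
        "Enter a valid Password",
        "Click the Sign in button",
    ],
    "TEST DATA": {"username": "valid_user", "password": "valid_password"},
    "EXPECTED RESULT": "User is redirected to the inbox",
    "ACTUAL RESULT": "User is redirected to the inbox",
}

# STATUS is derived by comparing the expected and actual results.
test_case["STATUS"] = (
    "Passed"
    if test_case["ACTUAL RESULT"] == test_case["EXPECTED RESULT"]
    else "Failed"
)
print(test_case["TEST CASE ID"], "->", test_case["STATUS"])
```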

Test Scenarios Login Page [How To Write Test Scenarios of a Login Page] | SoftwareTestingMaterial

Test Scenarios Login Page
In any application, logging in is the process of accessing an application by an individual who has valid user credentials. Logging in is usually used to enter a specific page, which trespassers cannot see. In this post, we will see “Test Scenarios Login Page”. Testing of the login page is very important for any application in terms of the security aspect. We will try to cover the most widely used login page scenarios here.

We usually write test cases for the login page of every application we test. Every login page should have the following elements.

1. ‘Email/Phone Number/Username’ Textbox
2. ‘Password’ Textbox
3. Login Button
4. ‘Remember Me’ Checkbox
5. ‘Keep Me Signed In’ Checkbox
6. ‘Forgot Password’ Link
7. ‘Sign up/Create an account’ Link
8. CAPTCHA
Following are the test cases for User Login Page. The list
consists of both Positive and Negative test scenarios login
page.

Test Cases of a Login Page (Test Scenarios Login Page):
1. Verify that cursor is focused on “Username” text box on
the page load (login page)
2. Verify that the login screen contains elements such as
Username, Password, Sign in button, Remember password
check box, Forgot password link, and Create an account
link.
3. Verify that tab functionality is working properly or not
4. Verify that Enter/Tab key works as a substitute for the
Sign in button
5. Verify that all the fields such as Username, Password have a valid placeholder
6. Verify that the labels float upward when the text field is in
focus or filled (In case of floating label)
7. Verify that User is able to Login with Valid Credentials
8. Verify that User is not able to Login with invalid Username
and invalid Password
9. Verify that User is not able to Login with Valid Username
and invalid Password
10. Verify that User is not able to Login with invalid Username
and Valid Password
11. Verify that User is not able to Login with blank Username
or Password
12. Verify that User is not able to Login with inactive credentials
13. Verify that clicking on browser back button after successful
login should not take User to log out mode
14. Verify that clicking on browser back button after successful
logout should not take User to logged in mode
15. Verify that there is a limit on the total number of
unsuccessful login attempts (No. of invalid attempts
should be based on business logic. Based on the business
logic, User will be asked to enter captcha and try again or
user will be blocked)
16. Verify that the password is in encrypted form when
entered
17. Verify the password can be copy-pasted
18. Verify that encrypted characters in “Password” field should
not allow deciphering if copied
19. Verify that User should be able to login with the new
password after changing the password
20. Verify that User should not be able to login with the old
password after changing the password
21. Verify that spaces should not be allowed before any
password characters attempted
22. Verify whether User is still logged in after a series of actions such as sign in, close browser and reopen the application
23. Verify the ways to retrieve the password if the User forgets the password
24. Verify that “Remember password” checkbox is
unselected by default (depends on business logic, it may
be selected or unselected)
25. Verify that “Keep me logged in” checkbox is unselected by
default (depends on business logic, it may be selected or
unselected)
26. Verify that the timeout of the login session (Session
Timeout)
27. Verify that the logout link is redirected to login/home page
28. Verify that User is redirected to appropriate page after
successful login
29. Verify that User is redirected to Forgot password page
when clicking on Forgot Password link
30. Verify that User is redirected to Create an account page
when clicking on Sign up / Create an account link
31. Verify that validation message is displayed in case when
User leaves Username or Password as blank
32. Verify that validation message is displayed in case of
exceeding the character limit of the Username and
Password fields
33. Verify that validation message is displayed in case of
entering special character in the Username and password
fields
34. Verify whether the login form is revealing any security
information by viewing page source
35. Verify whether the login page is vulnerable to SQL injection
36. Verify whether Cross-site scripting (XSS ) vulnerability
work on a login page. XSS vulnerability may be used by
hackers to bypass access controls.
If there is a captcha on the login page (Test Cases
for CAPTCHA):
37. Verify whether there is a client-side validation when User doesn’t enter CAPTCHA
38. Verify that the refresh link of CAPTCHA is generating new
CAPTCHA
39. Verify that the CAPTCHA is case sensitive
40. Verify whether the CAPTCHA has audio support to listen
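As an illustration, here is a minimal Selenium (Python) sketch automating scenario 7 above, “Verify that User is able to Login with Valid Credentials”. The URL, element IDs, credentials and expected title are assumptions made for the sketch, not part of the original scenarios.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")  # hypothetical application URL
    # Element IDs below are assumed; adapt them to the real page.
    driver.find_element(By.ID, "username").send_keys("valid_user")
    driver.find_element(By.ID, "password").send_keys("valid_password")
    driver.find_element(By.ID, "login-button").click()

    # Expected result: user lands on the home/dashboard page.
    assert "Dashboard" in driver.title, "Login with valid credentials failed"
    print("PASS: user logged in with valid credentials")
finally:
    driver.quit()
```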
Must Read: Test Scenarios of a Signup form
Writing test cases for an application takes a little practice. A well-written test case should allow any tester to understand and execute the tests, make the testing process smoother and save a lot of time in the long run. Earlier we have posted a video on How To Write Test Cases. I am concluding this post “Test Scenarios Login Page / Test Scenarios of Login form”.
Test Scenarios Registration Form
Registration form varies based on business requirement. In
this post, we will see the Test Scenarios Registration form. We
will list all the possible test scenarios of a registration form
(Test Scenarios Registration Page/Test Scenarios Signup form).
We usually write test cases for Registration Form/Signup
form/Signup page for every application we test. Here are the
fields usually used in a registration form.

User Name
First Name
Last Name
Password
Confirm Password
Email Id
Phone number
Date of birth
Gender
Location
Terms of use
Submit
Login (If you already have an account)
Test Scenarios of a Registration Form:
1. Verify that the Registration form contains Username, First
Name, Last Name, Password, Confirm Password, Email Id,
Phone number, Date of birth, Gender, Location, Terms of
use, Submit, Login (If you already have an account)
2. Verify that tab functionality is working properly or not
3. Verify that Enter/Tab key works as a substitute for the
Submit button
4. Verify that all the fields such as Username, First Name,
Last Name, Password and other fields have a valid
placeholder
5. Verify that the labels float upward when the text field is in
focus or filled (In case of floating label)
6. Verify that all the required/mandatory fields are marked
with * against the field
7. Verify that clicking on submit button after entering all the
mandatory fields, submits the data to the server
8. Verify that system generates a validation message when
clicking on submit button without filling all the mandatory
fields.
9. Verify that entering blank spaces on mandatory fields lead
to validation error
10. Verify that clicking on submit button by leaving optional
fields, submits the data to the server without any
validation error
11. Verify the case sensitivity of the Username (usually the Username field should not be case sensitive – ‘rajkumar’ & ‘RAJKUMAR’ act the same)
12. Verify that system generates a validation message when
entering existing username
13. Verify that the character limit in all the fields (mainly
username and password) based on business requirement
14. Verify the username validation as per the business requirement (in some applications, the username should not allow numeric and special characters)
15. Verify that the validation of all the fields are as per
business requirement
16. Verify that the date of birth field should not allow the
dates greater than current date (some applications have
age limit of 18 in that case you have to validate whether
the age is greater than or equal to 18 or not)
17. Verify that the validation of email field by entering
incorrect email id
18. Verify that the validation of numeric fields by entering
alphabets and characters
19. Verify that leading and trailing spaces are trimmed after
clicking on submit button
20. Verify that the “terms and conditions” checkbox is
unselected by default (depends on business logic, it may
be selected or unselected)
21. Verify that the validation message is displayed when
clicking on submit button without selecting “terms and
conditions” checkbox
22. Verify that the password is in encrypted form when
entered
23. Verify whether the password and confirm password are
same or not
Writing test cases for an application takes a little practice. A well-written test case should allow any tester to understand and execute the tests, make the testing process smoother and save a lot of time in the long run. Earlier we have posted a video on How To Write Test Cases. I am concluding this post “Test Scenarios Registration form / Test Scenarios of Signup form”.

Bug Report Template – Detailed Explanation

A defect report template or bug report template is one of the test artifacts. It comes into the picture when the test execution phase is started.

Earlier I have posted a detailed post on “Software Testing Life Cycle (STLC)”; if you haven’t gone through it, you can browse “Software Testing Life Cycle (STLC)” here.

The purpose of using a defect report template or bug report template is to convey detailed information (like environment details, steps to reproduce etc.) about the bug to the developers. It allows developers to replicate the bug easily.

See the difference between Error, Bug, Defect and Failure here
Components of Bug Report Template:
Let’s discuss the main fields of a defect report; in the next post, we will learn how to write a good bug report.

Defect ID: Add a Defect ID using a naming convention followed by your team. The Defect ID will be generated automatically in case a defect management tool is used.

Title/Summary: The title should be short and simple. It should contain specific terms related to the actual issue. Be specific while writing the title.

Assume you have found a bug in the registration page while uploading a profile picture in a particular file format (i.e., a JPEG file): the system crashes while uploading the JPEG file.

Note: I use this example throughout this post.

Good: “Uploading a JPEG file (Profile Picture) in the Registration Page crashes the system”.

Bad: “System crashes”.

Reporter Name: Name of the one who found the defect (usually the tester’s name, but sometimes it might be a Developer, Business Analyst, Subject Matter Expert (SME) or Customer).

Defect Reported Date: Mention the date on which you found the bug.

Who Detected: Specify the designation of the one who found the defect. E.g. QA, Developer, Business Analyst, SME, Customer.

How Detected: In this field, you must specify how the defect was detected, such as while doing Testing, a Review or a Walkthrough.

Project Name: Sometimes, we may work on multiple projects simultaneously. So, choose the project name correctly. Specify the name of the project (if it’s a product, specify the product name).

Release/Build Version: On which release this issue occurs. Mention the build version details clearly.

Defect/Enhancement: If the system is not behaving as intended, then you need to specify it as a Defect. If it’s just a request for a new feature, then you must specify it as an Enhancement.

Environment: You must mention the details of the Operating System, Browser and anything else related to the test environment in which you encountered the bug.

Example: Windows 8/Chrome 48.0.2564.103

Priority: Priority defines how soon the bug should be fixed. Usually, the priority of the bug is set by the Managers. Based on the priority, developers could understand how soon it must be fixed and set the order in which a bug should be resolved.

Categories of Priority:

High
Medium
Low
Severity: Severity talks about the impact of the bug on the
customer’s business. Usually, the severity of the bug is set by
the Managers. Sometimes, testers choose the severity of the
bug but in most cases, it will be selected by Managers/Leads.

Categories of Severity:

Blocker
Critical
Major
Minor
Trivial
Status: Specify the status of the bug. If you just found a bug
and about to post it then the status will be “New”. In the
course of bug fixing, the status of the bug will change.

(E.g. New/ Assigned/ Open/ Fixed/ Test/ Verified/ Closed/ Reopen/ Duplicate/ Deferred/ Rejected/ Cannot be fixed/ Not Reproducible/ Need more information)

Description: In the description section, you must briefly explain what you have done before facing the bug.

Steps to reproduce: In this section, you should describe how to reproduce the bug in a step-by-step manner. Easy-to-follow steps give room to the developers to fix the issue without any chaos. These steps should describe the bug well enough to allow developers to understand and act on the bug without consulting the one who wrote the bug report. Start with “opening the application”, include “prerequisites” if any and write up to the step which “causes the bug”.

Good:

i. Open URL “Your URL”
ii. Click on “Registration Page”
iii. Upload a “JPEG” file in the profile photo field
Bad:

Upload a file in the registration page.

URL: Mention the URL of the application (If available)

Expected Result: The output you expect from the application when you perform the action which causes the failure.

Good: A message should display “Profile picture uploaded successfully”.

Bad: System should accept the profile picture.

Earlier I have posted a detailed post on “Test Case Template With Explanation”; if you haven’t gone through it, you can browse “Test Case Template With Explanation” here.

Actual Result: The output the application actually produces when you perform the action which causes the failure.

Good: “Uploading a JPEG file (Profile Picture) in the Registration Page crashes the system”.

Bad: System is not accepting profile picture.

Attachments: Attach the screenshots which you captured when you faced the bug. It helps the developers to see the bug which you faced.

Defect Close Date: The ‘Defect Close Date’ is the date which
needs to be updated once you ensure that the defect is not
reproducible.
Software Test Metrics:
Before starting with what Software Test Metrics are and their types, I would like to start with a famous idea about metrics.

Software test metrics are used to monitor and control the process and the product. They help to drive the project towards our planned goals without deviation.

Metrics answer different questions. It’s important to decide what questions you want answers to.

Software test metrics are classified into two types

1. Process metrics
2. Product metrics

Process Metrics:
Process metrics are software test metrics used in the test preparation and test execution phases of the STLC.

The following are generated during the Test Preparation phase of the STLC:

Test Case Preparation Productivity:

It is used to calculate the number of Test Cases prepared and the effort spent on the preparation of Test Cases.

Formula:

Test Case Preparation Productivity = (No. of Test Cases) / (Effort spent for Test Case Preparation)

E.g.:
No. of Test cases = 240
Effort spent for Test case preparation (in hours) = 10
Test Case Preparation Productivity = 240/10 = 24 test cases/hour

Test Design Coverage:

It helps to measure the percentage of test case coverage against the number of requirements.

Formula:

Test Design Coverage = ((Total number of requirements mapped to test cases) / (Total number of requirements)) * 100

E.g.:
Total number of requirements: 100
Total number of requirements mapped to test cases: 98
Test Design Coverage = (98/100)*100 = 98%

The following are generated during the Test Execution phase of the STLC:

Test Execution Productivity:

It determines the number of Test Cases that can be executed per hour.

Formula:

Test Execution Productivity = (No. of Test cases executed) / (Effort spent for execution of test cases)

E.g.:
No. of Test cases executed = 180
Effort spent for execution of test cases (in hours) = 10
Test Execution Productivity = 180/10 = 18 test cases/hour

Test Execution Coverage:

It is used to measure the number of test cases executed against the number of test cases planned.

Formula:

Test Execution Coverage = (Total no. of test cases executed / Total no. of test cases planned to execute) * 100

E.g.:
Total no. of test cases planned to execute = 240
Total no. of test cases executed = 180
Test Execution Coverage = (180/240)*100 = 75%

Test Cases Passed:

It is used to measure the percentage of test cases passed.

Formula:

Test Cases Passed = (Total no. of test cases passed) / (Total no. of test cases executed) * 100

E.g.:
Test Cases Passed = (80/90)*100 = 88.8% ≈ 89%

Test Cases Failed:
It is used to measure the percentage of test cases failed.

Formula:

Test Cases Failed = (Total no. of test cases failed) / (Total no. of test cases executed) * 100

E.g.:
Test Cases Failed = (10/90)*100 = 11.1% ≈ 11%

Test Cases Blocked:

It is used to measure the percentage of test cases blocked.

Formula:

Test Cases Blocked = (Total no. of test cases blocked) / (Total no. of test cases executed) * 100

E.g.:
Test Cases Blocked = (5/90)*100 = 5.5% ≈ 6%

Product Metrics:
Product metrics are software test metrics used in the defect analysis phase of the STLC.

Error Discovery Rate:

It is used to determine the effectiveness of the test cases.

Formula:

Error Discovery Rate = (Total number of defects found / Total no. of test cases executed) * 100

E.g.:
Total no. of test cases executed = 240
Total number of defects found = 60
Error Discovery Rate = (60/240)*100 = 25%

Defect Fix Rate:

It helps to know the quality of a build in terms of defect fixing.

Formula:

Defect Fix Rate = ((Total no. of Defects reported as fixed - Total no. of defects reopened) / (Total no. of Defects reported as fixed + Total no. of new Bugs due to fix)) * 100

E.g.:
Total no. of defects reported as fixed = 10
Total no. of defects reopened = 2
Total no. of new Bugs due to fix = 1
Defect Fix Rate = ((10 – 2)/(10 + 1))*100 = (8/11)*100 = 72.7% ≈ 73%

Defect Density:
It is defined as the ratio of defects to requirements.

Defect density determines the stability of the application.

Formula:

Defect Density = Total no. of defects identified / Actual Size (requirements)

E.g.:
Total no. of defects identified = 80
Actual Size = 10
Defect Density = 80/10 = 8

Defect Leakage:
It is used to review the efficiency of the testing process before
UAT.

Formula:

Defect Leakage = ((Total no. of defects found in UAT) / (Total no. of defects found before UAT)) * 100

E.g.:
No. of defects found in UAT = 20
No. of defects found before UAT = 120
Defect Leakage = (20/120)*100 = 16.6% ≈ 17%

Defect Removal Efficiency:

It allows us to compare the overall (defects found pre- and post-delivery) defect removal efficiency.

Formula:

Defect Removal Efficiency = ((Total no. of defects found pre-delivery) / ((Total no. of defects found pre-delivery) + (Total no. of defects found post-delivery))) * 100

E.g.:
Total no. of defects found pre-delivery = 80
Total no. of defects found post-delivery = 10
Defect Removal Efficiency = (80 / (80 + 10)) * 100 = (80/90)*100 = 88.8% ≈ 89%
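As a small sketch, the Python snippet below recomputes a few of the metrics above from raw counts using the formulas as given; the numbers are the examples from the text, not real project data.

```python
def percentage(numerator: float, denominator: float) -> float:
    """The (part / whole) * 100 pattern shared by the metrics above."""
    return round((numerator / denominator) * 100, 1)

# Counts taken from the examples in the text.
executed = 90
print("Test Cases Passed  =", percentage(80, executed), "%")   # ~88.9
print("Test Cases Failed  =", percentage(10, executed), "%")   # ~11.1
print("Test Cases Blocked =", percentage(5, executed), "%")    # ~5.6

# Defect Leakage: defects found in UAT vs. defects found before UAT.
print("Defect Leakage =", percentage(20, 120), "%")            # ~16.7

# Defect Removal Efficiency: pre-delivery defects vs. all defects found.
pre_delivery, post_delivery = 80, 10
print("Defect Removal Efficiency =",
      percentage(pre_delivery, pre_delivery + post_delivery), "%")  # ~88.9
```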

Requirements Traceability Matrix (RTM) | SoftwareTestingMaterial

Requirements Traceability Matrix (RTM) is used to trace the requirements to the tests that are needed to verify whether the requirements are fulfilled. The Requirement Traceability Matrix is AKA Traceability Matrix or Cross Reference Matrix.

Like all other test artifacts, the RTM varies between organizations. Most of the organizations use just the Requirement IDs and Test Case IDs in the RTM. It is possible to include some other fields such as Requirement Description, Test Phase, Test Case Result, Document Owner etc. It is necessary to update the RTM whenever there is a change in a requirement.

The following illustration gives you a basic idea about the Requirement Traceability Matrix (RTM).

Assume we have 5 requirements.

Assume the total test cases identified are 10.

Whenever we write new test cases, the same need to be updated in the RTM.

Adding a new test case id TID011 and mapping it to the requirement id BID005.
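A minimal Python sketch of an RTM as a mapping from requirement IDs to test case IDs, mirroring the illustration above; BID005 and TID011 come from the text, while the remaining IDs are assumed for the example. The forward and backward checks preview the traceability types described next.

```python
# Assumed RTM: 5 requirements mapped to 10 test cases.
rtm = {
    "BID001": ["TID001", "TID002"],
    "BID002": ["TID003", "TID004"],
    "BID003": ["TID005", "TID006"],
    "BID004": ["TID007", "TID008"],
    "BID005": ["TID009", "TID010"],
}

# Whenever we write a new test case, the RTM is updated the same way:
rtm["BID005"].append("TID011")

# Forward traceability: every requirement must map to at least one test.
untested = [req for req, tests in rtm.items() if not tests]
print("Requirements without tests:", untested or "none")

# Backward traceability: every test case traces back to some requirement.
all_tests = {tid for tests in rtm.values() for tid in tests}
print("Total test cases traced:", len(all_tests))
```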

Types of Requirements Traceability Matrix (RTM):

Let’s see the different types of Traceability Matrix:

Forward Traceability: Mapping requirements to test cases is called a Forward Traceability Matrix. It is used to ensure whether the project progresses in the desired direction. It makes sure that each requirement is tested thoroughly.

Backward or Reverse Traceability: Mapping test cases to requirements is called a Backward Traceability Matrix. It is used to ensure whether the current product remains on the right track. It makes sure that we are not expanding the scope of the project by adding functionality that is not specified in the requirements.

Bi-directional Traceability (Forward + Backward): Mapping requirements to test cases (forward traceability) and test cases to requirements (backward traceability) is called a Bi-directional Traceability Matrix. It is used to ensure that all the specified requirements have appropriate test cases and vice versa.

Advantages of Requirements Traceability Matrix (RTM):
1. It helps to achieve 100% test coverage
2. It allows us to identify the missing functionality easily
3. It allows us to identify the test cases which need to be updated in case of a change in requirement
4. It is easy to track the overall test execution status

Test Deliverables
Test Deliverables are the test artifacts which are given to the
stakeholders of a software project during the SDLC (Software
Development Life Cycle).
A software project which follows the SDLC undergoes different phases before being delivered to the customer. In this process there will be some deliverables in every phase. Some of the deliverables are provided before the testing phase commences, some are provided during the testing phase and the rest after the testing phase is completed.

The following is the list of test deliverables:
1. Test Strategy
2. Test Plan
3. Effort Estimation Report
4. Test Scenarios
5. Test Cases/Scripts
6. Test Data
7. Requirement Traceability Matrix (RTM)
8. Defect Report/Bug Report
9. Test Execution Report
10. Graphs and Metrics
11. Test summary report
12. Test incident report
13. Test closure report
14. Release Note
15. Installation/configuration guide
16. User guide
17. Test status report
18. Weekly status report (Project manager to client)

Difference between defect, bug, error and failure

Let’s see the difference between defect, bug, error and failure. In general, we use these terms whenever the system/application acts abnormally. Sometimes we call it an error and sometimes a bug and so on. Many of the newbies in the Software Testing industry are confused about using these terms. What is the difference between defect, bug, error and failure is one of the interview questions asked while recruiting a fresher.

Generally, there is a contradiction in the usage of these terminologies. Usually, in the Software Development Life Cycle we use these terms based on the phase.

Note: Both Defect and Bug are issues in an application, but the phase of the SDLC in which they are found makes the overall difference.

What is a defect?
The variation between the actual results and expected results
is known as defect.

If a developer finds an issue and corrects it by himself in the development phase, then it’s called a defect.

What is a bug?
If testers find any mismatch in the application/system in the testing phase, then they call it a Bug.

As I mentioned earlier, there is a contradiction in the usage of Bug and Defect. People widely say the bug is an informal name for the defect.

What is an error?
We can’t compile or run a program due to a coding mistake in the program. If a developer is unable to successfully compile or run a program, then they call it an error.

What is a failure?
Once the product is deployed and customers find any issues, then they call the product a failure product. After release, if an end user finds an issue, then that particular issue is called a failure.

Points to know:

If a Quality Analyst (QA) finds a bug, he has to reproduce it and record it using the bug report template.

How to Write a Good Bug Report!!

In this post, we show you how to write a good bug report. At the end, we will give a link to download a sample defect report template. So, let’s get started.
Have you ever seen a rejected bug with comments such as “it is not reproducible”? Sometimes the Dev Team rejects a few bugs due to a bad bug report.

Bad Bug Report?

Imagine you are using Mozilla Firefox for testing (I am mentioning a sample case here). You found an issue: the login button is not working. You posted the issue with all the steps, except mentioning the name and version of the browser. One of the developers opened that report and tried to reproduce it based on the steps you mentioned in the report. Here, in this case, the developer is using Internet Explorer. The login button is working properly in their environment. So the developer will reject the bug, mentioning in the comments that the bug is not reproducible. You will find the same issue after you retest. Again you will report the same issue and get the same comments from the Dev Team.

You forgot to mention the name and version of the browser in your bug report. If you forget some key information needed to reproduce the bug in the developer’s environment, you will face consequences like this.

It creates a bad impression of you, and depending on the company, action may be taken against you for wasting time and effort.

There is an old saying: “You never get a second chance to


make a first impression.”

Writing good bug report is a skill every tester should have. You
have to give all necessary details to the Dev Team to get your
issue fixed.

Earlier I have posted a detailed post on “Bug Life Cycle”, if you


haven’t gone through it, you can browse “Bug Life Cycle” here
Do you want to get the bug fixed without rejection? So you
have to report it by using a good bug report.

How To Write a Good Defect Report?

Let me first mention the fields needed in a good bug
report:

Defect ID, Reporter Name, Defect Reported Date, Who
Detected, How Detected, Project Name, Release/Build Version,
Defect/Enhancement, Environment, Priority, Severity, Status,
Description, Steps To Reproduce, URL, Expected Result, Actual
Result.

Earlier I posted a detailed article on "Bug Report Template
With Detailed Explanation"; click here to get the detailed
explanation of each field and download a sample bug report.
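To make the field list concrete, here is a minimal sketch of how
such a report could be modeled in code. This is only an
illustration: the BugReport class and its field names are
invented for this example and are not tied to any specific
defect tracking tool's schema.

from dataclasses import dataclass, field
from typing import List

# Illustrative model of the bug report fields listed above.
@dataclass
class BugReport:
    defect_id: str
    reporter_name: str
    reported_date: str          # e.g. "2024-01-15"
    project_name: str
    release_build_version: str
    environment: str            # OS, browser name and version, etc.
    priority: str               # e.g. "High", "Medium", "Low"
    severity: str               # e.g. "Critical", "Major", "Minor"
    status: str                 # e.g. "New", "Open", "Fixed"
    summary: str                # short, meaningful defect summary
    steps_to_reproduce: List[str] = field(default_factory=list)
    expected_result: str = ""
    actual_result: str = ""

report = BugReport(
    defect_id="BUG-101",
    reporter_name="Tester A",
    reported_date="2024-01-15",
    project_name="Sample Web App",
    release_build_version="1.2.0",
    environment="Windows 10, Firefox 121",  # browser info avoids "not reproducible"
    priority="High",
    severity="Critical",
    status="New",
    summary="Login button does not respond on the login page",
    steps_to_reproduce=[
        "Open the login page in Firefox 121",
        "Enter a valid user id and password",
        "Click the Login button",
    ],
    expected_result="User is redirected to the home page",
    actual_result="Nothing happens; user stays on the login page",
)
print(report.summary)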

The first thing we should do before writing a bug report is to
reproduce the bug two to three times.

If you are sure that the bug exists, then ascertain whether the
same bug was already posted by someone else. Use some
keywords related to your bug and search in the Defect
Tracking Tool. If you do not find an existing issue related to
the bug you found, then you can start writing the bug
report.

Hold on!!

Why not also ascertain whether the same issue exists
in the related modules? If you find that the same
issue exists in the related modules, then you can
address those issues in the same bug report. It saves a lot of
time in terms of fixing the issues and avoids writing repeated
bug reports for the same kind of issue.
Start writing the bug report by filling in all the details in the
fields mentioned above and writing detailed steps to
reproduce.

Make a checklist and ensure that you have covered
all the points before reporting a bug:

i. Have I reproduced the bug 2-3 times?
ii. Have I verified in the Defect Tracking Tool (using
keywords) whether someone else already posted the same
issue?
iii. Have I verified the similar issue in the related modules?
iv. Have I written the detailed steps to reproduce the bug?
v. Have I written a proper defect summary?
vi. Have I attached relevant screenshots?
vii. Have I missed any necessary fields in the bug report?

Consolidating all the points on how to write a good bug report:
i. Reproduce the bug 2-3 times.
ii. Use some keywords related to your bug and search in the
Defect Tracking Tool.
iii. Check in similar modules.
iv. Report the problem immediately.
v. Write detailed steps to reproduce the bug.
vi. Write a good defect summary. Watch your language while
writing the bug report; your words should not
offend people. Never use all capital letters when explaining the
issue.
vii. It is advisable to illustrate the issue using proper
screenshots.
viii. Proofread your bug report twice or thrice before posting it.
Software Architecture: One-Tier, Two-Tier, Three-Tier, N-Tier
A "tier" can also be referred to as a "layer".

Three layers are involved in an application, namely the
Presentation Layer, the Business Layer and the Data Layer.
Let's see each layer in detail:

Presentation Layer: It is also known as the Client layer and is
the topmost layer of an application. This is the layer we see
when we use the software. Using this layer we can access the
webpages. The main functionality of this layer is to
communicate with the Application layer. It passes the
information given by the user in terms of keyboard
actions and mouse clicks to the Application layer.
For example, the login page of Gmail, where an end user can
see text boxes and buttons to enter a user id and password and
to click on sign-in.

In simple words, it is used to view the application.

Application Layer: It is also known as the Business Logic Layer
or the logical layer. As per the Gmail login
page example, once the user clicks on the login button, the
Application layer interacts with the Database layer and sends
the required information to the Presentation layer. It controls
an application's functionality by performing detailed processing.
This layer acts as a mediator between the Presentation and the
Database layers. The complete business logic is written in this
layer.

In simple words, it is used to perform operations on the
application.

Data Layer: The data is stored in this layer. The Application
layer communicates with the Database layer to retrieve the
data. It contains the methods that connect to the database and
perform the required actions, e.g., insert, update, delete, etc.

In simple words, it is used to store and retrieve the data.
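To illustrate how the three layers separate responsibilities, here
is a minimal Python sketch of the login example above. The class
names (UserRepository, LoginService) and the in-memory SQLite
schema are invented for this illustration; in a real web
application these layers may even run on different machines.

import sqlite3

# Data layer: connects to the database and performs the required actions.
class UserRepository:
    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS users (user_id TEXT PRIMARY KEY, password TEXT)"
        )

    def find_password(self, user_id: str):
        row = self.conn.execute(
            "SELECT password FROM users WHERE user_id = ?", (user_id,)
        ).fetchone()
        return row[0] if row else None

# Application (business logic) layer: mediates between presentation and data.
class LoginService:
    def __init__(self, repo: UserRepository):
        self.repo = repo

    def login(self, user_id: str, password: str) -> bool:
        stored = self.repo.find_password(user_id)
        return stored is not None and stored == password

# Presentation layer: collects the user's input and shows the result.
def login_page(service: LoginService, user_id: str, password: str) -> str:
    return "Home page" if service.login(user_id, password) else "Invalid credentials"

conn = sqlite3.connect(":memory:")
repo = UserRepository(conn)
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")
service = LoginService(repo)
print(login_page(service, "alice", "secret"))   # Home page
print(login_page(service, "alice", "wrong"))    # Invalid credentials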

Types of Software Architecture:

One-Tier Architecture:
A one-tier application is AKA a standalone application.
One-tier architecture has all the layers, such as the
Presentation, Business and Data Access layers, in a single
software package. Applications which handle all three layers,
such as an MP3 player or MS Office, come under one-tier
applications. The data is stored in the local system or on a
shared drive.

Two-Tier Architecture:
A two-tier application is AKA a client-server application.
The two-tier architecture is divided into two parts:

1. Client Application (Client Tier)
2. Database (Data Tier)

The client system handles both the Presentation and Application
layers, and the server system handles the Database layer. It is
also known as a client-server application. The communication
takes place between the Client and the Server: the client
system sends a request to the server system, and the server
system processes the request and sends the data back to the
client system.

Three-Tier Architecture:
A three-tier application is AKA a web-based application.

The three-tier architecture is divided into three parts:

1. Presentation layer (Client Tier)
2. Application layer (Business Tier)
3. Database layer (Data Tier)

The client system handles the Presentation layer, the
application server handles the Application layer and the server
system handles the Database layer.

Note: Another architecture is the N-Tier application, AKA a
distributed application. It is similar to the three-tier
architecture, but the number of application servers is increased
and represented in individual tiers in order to distribute the
business logic.
Seven Principles of
Software Testing | Software
Testing Material
There are seven principles of Software Testing:

1. Testing shows presence of defects
2. Exhaustive testing is impossible
3. Early testing
4. Defect clustering
5. Pesticide paradox
6. Testing is context dependent
7. Absence of error – fallacy

Seven Principles of Software Testing:

1. Testing Shows Presence of Defects:
Testing shows the presence of defects in the software. The
goal of testing is to make the software fail. Sufficient testing
reduces the presence of defects, but even if testers are unable
to find defects after repeated regression testing, it doesn't
mean that the software is bug-free.

Testing talks about the presence of defects, not about
the absence of defects.

2. Exhaustive Testing is Impossible:

What is Exhaustive Testing?

Testing all the functionalities using all valid and invalid inputs
and preconditions is known as exhaustive testing.

Why is it impossible to achieve Exhaustive Testing?

Assume we have to test an input field which accepts ages
between 18 and 20, so we test the field using 18, 19, 20. If
the same input field accepts the range 18 to 100, then we have
to test it using inputs such as 18, 19, 20, 21, ...., 99, 100.
It's a basic example; you may think that you could achieve it
using an automation tool. Now imagine the same field accepts
some billion values. It's impossible to test all possible values
due to release time constraints.

If we keep on testing all possible test conditions, then the
test execution time and costs will rise. So instead of doing
exhaustive testing, risks and priorities are taken into
consideration while testing and estimating testing
efforts.
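A quick back-of-the-envelope calculation shows how fast the
input space explodes. The field sizes below are invented for
illustration; even at an optimistic 1,000 automated checks per
second, the combinations would take tens of thousands of years
to cover.

import math

# Hypothetical form: each field's count of distinct accepted values.
fields = {
    "age (18-100)": 83,
    "10-digit mobile number": 10**10,
    "country code": 250,
    "name length (6-12 chars)": 7,
}

# Every combination of field values would need its own test case.
total = math.prod(fields.values())
print(f"Combinations to test exhaustively: {total:,}")

# At an (optimistic) 1,000 automated checks per second:
seconds = total / 1_000
print(f"Years of non-stop execution: {seconds / (3600 * 24 * 365):,.0f}")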

3. Early Testing:
Defects detected in the early phases of the SDLC are less
expensive to fix, so conducting early testing reduces the cost
of fixing defects.

Assume two scenarios: in the first, you have identified an
incorrect requirement in the requirement gathering phase; in
the second, you have identified a bug in a fully developed
functionality. It is cheaper to change the incorrect
requirement than to fix the fully developed functionality
which is not working as intended.

4. Defect Clustering:
Defect clustering in software testing means that a small number
of modules or functionalities contain most of the bugs or have
the most operational failures.

As per the Pareto Principle (80-20 rule), 80% of issues come
from 20% of the modules and the remaining 20% of issues from
the remaining 80% of the modules. So we emphasize testing on
the 20% of the modules where we face 80% of the bugs.

5. Pesticide Paradox:
The pesticide paradox in software testing arises from repeating
the same test cases again and again: eventually, the same test
cases will no longer find new bugs. To overcome the pesticide
paradox, it is necessary to review the test cases regularly and
add or update them to find more defects.

6. Testing is Context Dependent:

The testing approach depends on the context of the software we
develop. We test software differently in different contexts. For
example, an online banking application requires a different
testing approach compared to an e-commerce site.

7. Absence of Error – Fallacy:

99% bug-free software may still be unusable if the wrong
requirements were incorporated into the software and the
software does not address the business needs.

The software we build must not only be 99% bug-free but must
also fulfill the business needs; otherwise it becomes unusable
software.

Black Box Test Design Techniques | Software Testing Material
Black Box Test Design Techniques are widely used as a best
practice in the industry. Black box test design techniques are
used to pick test cases in a systematic manner. By using these
techniques we can save a lot of testing time and get good test
coverage.

Following is the list of Black Box Test Design Techniques:
These test design techniques are used to derive the test cases
from the Requirement Specification document and also based
on the tester's expertise:

1. Equivalence Partitioning
2. Boundary Value Analysis
3. Decision Table
4. State Transition
5. Exploratory Testing
6. Error Guessing

Equivalence Partitioning:
It is also known as Equivalence Class Partitioning (ECP).

Using the equivalence partitioning test design technique, we
divide the test conditions into classes (groups). From each
group we test only one condition. The assumption is that all the
conditions in one group work in the same manner: if a condition
from a group works, then all of the conditions from that group
work, and vice versa. It reduces a lot of rework and also gives
good test coverage. We can save a lot of time by reducing the
total number of test cases that must be developed.

For example: a field should accept numeric values. In this
case, we split the test conditions into groups such as "enter a
numeric value", "enter an alphanumeric value", "enter
alphabets", and so on, instead of testing individual numeric
values such as 0, 1, 2, 3, and so on.

Click here to see a detailed post on Equivalence Class
Partitioning.
Boundary Value Analysis:
Using boundary value analysis (BVA), we take the test
conditions as partitions and design the test cases from the
boundary values of the partitions. The boundary between two
partitions is the place where the behavior of the application
varies. The test conditions on either side of a boundary are
called boundary values. Here we have to get both valid
boundaries (from the valid partitions) and invalid boundaries
(from the invalid partitions).

For example: if we want to test a field which accepts only
amounts greater than 10 and less than 20, then we take the
boundaries as 10-1, 10, 10+1, 20-1, 20, 20+1. Instead of
using lots of test data, we just use 9, 10, 11, 19, 20 and 21.

Click here to see a detailed post on Boundary Value Analysis.

Decision Table:
Decision Table is AKA Cause-Effect Table. This test technique is
appropriate for functionalities which have logical relationships
between inputs (if-else logic). In the decision table technique,
we deal with combinations of inputs. To identify the test cases
with a decision table, we consider conditions and actions: we
take conditions as inputs and actions as outputs.

Click here to see a detailed post on Decision Table.

State Transition Testing:

Using state transition testing, we pick test cases from an
application where we need to test different system transitions.
We can apply this when an application gives a different output
for the same input, depending on what has happened in an
earlier state.

Some examples are a vending machine and traffic lights.

A vending machine dispenses products when the proper
combination of coins is deposited.

Traffic lights change sequence when cars are moving or
waiting.

Click here to see a detailed post on State Transition Testing.

Exploratory Testing:
Usually this process is carried out by domain experts. They
perform testing just by exploring the functionalities of the
application, without prior knowledge of the requirements.

While using this technique, testers can explore and learn the
system. High-severity bugs are found very quickly in this type
of testing.

Error Guessing:
Error guessing is one of the testing techniques used to find
bugs in a software application based on the tester's prior
experience. In error guessing we don't follow any specific
rules.

Some examples are:

- Submitting a form without entering values.
- Entering invalid values, such as entering alphabets in a
numeric field.

Equivalence Partitioning Test Case Design Technique
Equivalence Partitioning is also known as Equivalence Class
Partitioning. In equivalence partitioning, inputs to the software
or system are divided into groups that are expected to exhibit
similar behavior, so they are likely to be processed in the same
way. Hence we select one input from each group to design the
test cases.
Each and every condition of a particular partition (group) works
the same as the others. If a condition in a partition is valid,
the other conditions are valid too. If a condition in a partition
is invalid, the other conditions are invalid too.

It helps to reduce the total number of test cases from infinite
to finite. The selected test cases from these groups ensure
coverage of all possible scenarios.

Equivalence partitioning is applicable at all levels of testing.

Example of the Equivalence Partitioning Test Case Design
Technique:
Example 1:

Assume we have to test a field which accepts Age 18 – 56.

Valid Input: 18 – 56

Invalid Input: less than or equal to 17 (<=17), greater than or
equal to 57 (>=57)

Valid Class: 18 – 56 = Pick any one input test data from 18 –
56

Invalid Class 1: <=17 = Pick any one input test data less than
or equal to 17

Invalid Class 2: >=57 = Pick any one input test data greater
than or equal to 57

We have one valid and two invalid conditions here.

Example 2:

Assume we have to test a field which accepts a mobile number
of ten digits.

Valid Input: 10 digits

Invalid Input: 9 digits, 11 digits

Valid Class: Enter a 10-digit mobile number = 9876543210

Invalid Class 1: Enter a mobile number which has less than 10
digits = 987654321

Invalid Class 2: Enter a mobile number which has more than 10
digits = 98765432109
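As a sketch of how these classes can drive automated checks,
assume a simple validate_age function for the Age 18 – 56 field
from Example 1 (the function is invented for this illustration).
We execute one representative value per equivalence class
instead of every possible age.

# Hypothetical validator for the Age 18-56 field from Example 1.
def validate_age(age: int) -> bool:
    return 18 <= age <= 56

# One representative value per equivalence class.
partitions = {
    "valid (18-56)": (30, True),
    "invalid (<=17)": (10, False),
    "invalid (>=57)": (70, False),
}

for name, (value, expected) in partitions.items():
    result = validate_age(value)
    status = "PASS" if result == expected else "FAIL"
    print(f"{status}: class {name!r}, input {value}, got {result}")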
Boundary Value Analysis Test Case Design Technique

Boundary value analysis (BVA) is based on testing the boundary
values of valid and invalid partitions. The behavior at the edge
of each equivalence partition is more likely to be incorrect
than the behavior within the partition, so boundaries are an
area where testing is likely to yield defects.
Every partition has its maximum and minimum values, and these
maximum and minimum values are the boundary values of the
partition.

A boundary value for a valid partition is a valid boundary
value. Similarly, a boundary value for an invalid partition is an
invalid boundary value.

Tests can be designed to cover both valid and invalid boundary
values. When designing test cases, a test for each boundary
value is chosen.

For each boundary, we test +/-1 in the least significant digit
on either side of the boundary.

Boundary value analysis can be applied at all test levels.

Example of the Boundary Value Analysis Test Case Design
Technique:
Assume we have to test a field which accepts Age 18 – 56.
Minimum boundary value is 18

Maximum boundary value is 56

Valid Inputs: 18, 19, 55, 56

Invalid Inputs: 17 and 57

Test case 1: Enter the value 17 (18-1) = Invalid

Test case 2: Enter the value 18 = Valid

Test case 3: Enter the value 19 (18+1) = Valid

Test case 4: Enter the value 55 (56-1) = Valid

Test case 5: Enter the value 56 = Valid

Test case 6: Enter the value 57 (56+1) = Invalid

Example 2:

Assume we have to test a text field (Name) which accepts a
length between 6 and 12 characters.
Minimum boundary value is 6

Maximum boundary value is 12

Valid text lengths are 6, 7, 11, 12

Invalid text lengths are 5 and 13

Test case 1: Text length of 5 (min-1) = Invalid

Test case 2: Text length of exactly 6 (min) = Valid

Test case 3: Text length of 7 (min+1) = Valid

Test case 4: Text length of 11 (max-1) = Valid

Test case 5: Text length of exactly 12 (max) = Valid

Test case 6: Text length of 13 (max+1) = Invalid
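The boundary cases above translate naturally into a table-driven
check. The validate_name_length function below is a stand-in
written for this sketch; the six boundary values (min-1, min,
min+1, max-1, max, max+1) follow Example 2.

# Hypothetical validator for the Name field from Example 2 (6-12 chars).
def validate_name_length(name: str) -> bool:
    return 6 <= len(name) <= 12

MIN, MAX = 6, 12
# Boundary values: min-1, min, min+1, max-1, max, max+1.
boundary_cases = [
    (MIN - 1, False), (MIN, True), (MIN + 1, True),
    (MAX - 1, True), (MAX, True), (MAX + 1, False),
]

for length, expected in boundary_cases:
    result = validate_name_length("a" * length)
    status = "PASS" if result == expected else "FAIL"
    print(f"{status}: length {length}, expected valid={expected}, got {result}")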

Decision Table Test Case Design Technique
Decision Table is AKA Cause-Effect Table. This test technique is
appropriate for functionalities which have logical relationships
between inputs (if-else logic). In the decision table technique,
we deal with combinations of inputs. To identify the test cases
with a decision table, we consider conditions and actions: we
take conditions as inputs and actions as outputs.

Examples of the Decision Table Test Case Design Technique:
Take the example of transferring money online to an account
which is already added and approved.

Here the conditions to transfer money are: account already
approved, OTP (One Time Password) matched, and sufficient
money in the account.

The actions performed are: transfer money, show a message
saying "insufficient amount", and block the transaction in case
of a suspicious transaction.

Here we decide under which conditions each action is performed.

Now let's see the tabular column below.

In the first column I took all the conditions and actions
related to the requirement. All the other columns represent
test cases.
T = True, F = False, X = Not possible

From case 3 and case 4, we can identify that if condition 2
fails, then the system executes Action 3. So we can take either
case 3 or case 4.

So we finally conclude with the tabular column below.

We write 4 test cases for this requirement.

Take another example: login page validation. Allow the user to
login only when both the 'User ID' and 'Password' are entered
correctly.

Here the conditions to allow the user to login are: enter a
valid user name and enter a valid password.

The actions performed are: displaying the home page, and
displaying an error message that the User ID or Password is
wrong.
From case 2 and case 3, we can identify that if either of the
conditions fails, then the system displays an error message
such as "Invalid User Credentials".

So I am eliminating one of the test cases from case 2 and case
3 and concluding with the tabular column below.
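As a sketch of how a decision table collapses into executable
test cases, here is the login example in code. The login
function and the rule table are invented for illustration.

# Decision table for the login example: two conditions, two actions.
# Columns: (valid_user, valid_password) -> expected action.
decision_table = [
    ((True, True), "home page"),        # case 1: both conditions true
    ((True, False), "error message"),   # case 2: password invalid
    ((False, True), "error message"),   # case 3: user id invalid
    ((False, False), "error message"),  # case 4: both invalid
]

# Hypothetical system under test.
def login(valid_user: bool, valid_password: bool) -> str:
    return "home page" if valid_user and valid_password else "error message"

for (valid_user, valid_password), expected in decision_table:
    actual = login(valid_user, valid_password)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: user={valid_user}, password={valid_password} -> {actual}")

Note how cases 2, 3 and 4 all map to the same action; merging
cases with identical outcomes is exactly the reduction described
above.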

State Transition Test Case Design Technique
Using state transition testing, we pick test cases from an
application where we need to test different system transitions.
We can apply this when an application gives a different output
for the same input, depending on what has happened in an
earlier state.
Some examples are a vending machine and traffic lights.

A vending machine dispenses products when the proper
combination of coins is deposited.

Traffic lights change sequence when cars are moving or
waiting.

Example of the State Transition Test Case Design Technique:
Take the example of the login page of an application which
locks the user name after three wrong password attempts.

A finite state system is often shown as a state diagram.
It works like a truth table: first determine the states, input
data and output data.

Entering the correct password on the first, second or third
attempt, the user is redirected to the home page (i.e., state
S4).

Entering an incorrect password on the first attempt, a message
is displayed saying "try again" and the user is redirected to
state S2 for the second attempt.

Entering an incorrect password on the second attempt, a message
is displayed saying "try again" and the user is redirected to
state S3 for the third attempt.

Entering an incorrect password on the third attempt, the user
is redirected to state S5 and a message is displayed saying
"Account locked. Consult Administrator".

Likewise, let's see another example.

Withdrawal of money from an ATM: 'User A' wants to withdraw
30,000 from an ATM. Imagine he can take 10,000 per transaction
and the total balance available in the account is 25,000. In
the first two attempts, he can withdraw money, whereas in the
third attempt the ATM shows the message "Insufficient balance,
contact Bank". It is the same action, but due to the change in
state, he couldn't withdraw the money in the third transaction.
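The login-lock example above can be written down as a transition
table, which makes the test cases easy to enumerate. The sketch
below keeps the state names S1-S5 from the example; the code
structure itself is invented for illustration.

# States from the example: S1-S3 = password attempts 1-3,
# S4 = home page, S5 = account locked.
TRANSITIONS = {
    ("S1", "correct"): ("S4", "Home page"),
    ("S1", "incorrect"): ("S2", "Try again"),
    ("S2", "correct"): ("S4", "Home page"),
    ("S2", "incorrect"): ("S3", "Try again"),
    ("S3", "correct"): ("S4", "Home page"),
    ("S3", "incorrect"): ("S5", "Account locked. Consult Administrator"),
}

def run_login(inputs):
    """Feed a sequence of 'correct'/'incorrect' attempts through the machine."""
    state = "S1"
    for attempt in inputs:
        if state in ("S4", "S5"):  # terminal states: logged in or locked
            break
        state, message = TRANSITIONS[(state, attempt)]
        print(f"-> {state}: {message}")
    return state

# The same input ('incorrect') produces different outcomes
# depending on the current state.
assert run_login(["incorrect", "incorrect", "correct"]) == "S4"
assert run_login(["incorrect", "incorrect", "incorrect"]) == "S5"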
Difference Between Defect Severity And Priority In Software
Testing
In this post, we see the difference between severity and
priority. Severity and priority are the two things we have to
choose once a bug is found. Whenever we find a bug, we select
the bug severity and the bug priority. Usually, testers select
the severity of the bug and the Project Manager or Project Lead
selects the bug priority. Let's see bug severity and bug
priority in detail.

What is Severity?
Bug/defect severity can be defined as the impact of the bug on
the customer's business. It can be Critical, Major or Minor. In
simple words, it is how much effect a particular defect has on
the system.

What are the types of Severity?

Severity can be categorized into three types: Critical, Major,
and Minor.

Let's see how we can segregate a bug into these types:

Critical:

A critical severity issue is an issue where a large piece of
functionality or a major system component is completely broken
and there is no workaround to move further.
For example, due to a bug in one module, we cannot test the
other modules because that blocker bug has blocked them. Bugs
which affect the customer's business are considered critical.

Major:

A major severity issue is an issue where a large piece of
functionality or a major system component is completely broken
but there is a workaround to move further.

Minor:

A minor severity issue is an issue that imposes some loss of
functionality, but for which there is an acceptable and easily
reproducible workaround.

For example, a font family, font size, color or spelling issue.

Trivial:

A trivial severity defect is a defect which is related to an
enhancement of the system.

What is Priority?
Defect priority can be defined as how soon the defect should be
fixed. It gives the order in which defects should be resolved.
Developers decide which defect they should take up next based
on the priority. It can be High, Medium or Low.

Most of the time, the priority status is set based on the
customer requirement.

Must Read: Difference between Defect, Bug, Error, And Failure

What are the types of Priority?
Priority can be categorized into three types: High, Medium, and
Low.

Let's see how we can segregate a bug into these types:

High:

A high priority issue is an issue which has a high impact on the
customer's business, or an issue which affects the system so
severely that the system cannot be used until the issue is
fixed. These kinds of issues must be fixed immediately. In many
cases, from the user's perspective, the priority of an issue is
set to high even though the severity of the issue is minor.

Medium:

Issues which can be released in the next build come under
medium priority. Such issues can be resolved along with other
development activities.

Low:

An issue which has no impact on the customer's business comes
under low priority.
Some important scenarios on severity and priority which are
asked in interviews:

High Priority & High Severity:

A critical issue where a large piece of functionality or a major
system component is completely broken.
For example:
1. The Submit button is not working on a login page and
customers are unable to login to the application.
2. On a bank website, an error message pops up when a customer
clicks on the transfer money button.
3. The application throws a 500 error response when a user
tries to perform some action.
(500 status codes: the server has problems processing the
request; these are mainly server errors, not errors with the
request.)
These kinds of showstoppers come under High Priority and High
Severity. There is no workaround, and the user can't proceed
any further.

Low Priority & High Severity:

An issue which won't affect the customer's business but which
has a big impact in terms of functionality.
For example:
1. A crash in some functionality which is going to be delivered
after a couple of releases.
2. A crash in the application whenever a user enters 4 digits
in the age field, which accepts a maximum of 3 digits.

High Priority & Low Severity:

A minor issue that imposes some loss of functionality, but for
which there is an acceptable and easily reproducible workaround.
Testing can proceed without interruption, but the issue affects
the customer's reputation.

For example:
1. A spelling mistake in the company name on the homepage.
2. Company logo or tagline issues.

It is important to fix such an issue as soon as possible,
although it may not cause a lot of damage.

Low Priority & Low Severity:

A minor issue that imposes some loss of functionality, but for
which there is an acceptable and easily reproducible workaround.
Testing can proceed without interruption.
For example:
1. The FAQ page takes a long time to load.
2. A font family, font size, color or spelling issue in the
application or reports (a spelling mistake in the company name
on the home page does not come under Low Priority and Low
Severity).

These kinds of issues don't bother the customers much.

Some more points:

1. The development team takes up high priority defects first,
rather than those of high severity.
2. Generally, severity is assigned by the Tester / Test Lead and
priority is assigned by the Developer / Team Lead / Project Lead.

Final Words:

The above are just examples. The selection of severity and
priority may vary depending on the project and organization. In
Gmail, composing an email is the main functionality, whereas a
compose-email feature in a banking application (an email option
to send emails internally) is not the main functionality.
Bug Severity And Priority In
Software Testing –
Infographic
PDCA Cycle (Plan Do Check Act) in Software Development Life
Cycle
What is the PDCA Cycle?
The PDCA Cycle is an iterative four-step management method used
in business to focus on the continuous improvement of
processes. The PDCA cycle consists of four steps, namely Plan,
Do, Check, and Act. It is one of the key concepts of quality,
and it is also called the Deming Cycle.

Some of the cases where we use the PDCA Cycle are when
implementing any changes, when a new improvement project
starts, or when defining a repetitive process.

Let's see what we do in each stage of the PDCA Cycle in the
SDLC.

PLAN:
Plan a change (either to solve a problem or to improve some
area) and decide what goal to achieve.

Here we define the objective, strategy and supporting methods
to achieve the goal of our plan.

DO:
Design or revise the business requirement as planned.

Here we implement the plan (in terms of putting the plan into
action) and test its performance.
CHECK:
Evaluate the results to make sure we reached the goals as
planned.

Here we make a checklist to record what went well and what did
not work (lessons learnt).

ACT:
If the changes did not go as planned, continue the cycle to
achieve the goal with a different plan.

Agile Scrum Methodology In Software Development
Agile Scrum Methodology is one of the popular Agile software
development methods. There are some other Agile software
development methods, but the popular one which is widely used
is the Agile Scrum Methodology. The Agile Scrum Methodology is
a combination of both the Incremental and Iterative models for
managing product development.
In Scrum, the project is divided into Sprints.

Sprint: Each Sprint has a specified timeline (2 weeks to 1
month). This timeline is agreed by the Scrum Team during the
Sprint Planning Meeting. Here, User Stories are split into
different modules. The end result of every Sprint should be a
potentially shippable product.
The three important aspects involved in Scrum are Roles,
Artifacts and Meetings:

ROLES IN AGILE SCRUM METHODOLOGY:

Product Owner:

The Product Owner usually represents the Client and acts as a
point of contact on the Client side. The Product Owner
prioritizes the list of Product Backlog Items which the Scrum
Team should finish and release.

Scrum Master:

The Scrum Master acts as a facilitator to the Scrum Development
Team: clarifies queries, shields the team from distractions,
teaches the team how to use Scrum, and also concentrates on
Return on Investment (ROI).

Scrum Development Team:

Developers and QAs, who develop the product. The Scrum
development team decides the effort estimation to complete a
Product Backlog Item.

Scrum Team:

A cross-functional, self-organizing group of dedicated people
(a group of the Product Owner, Business Analyst, Developers and
QAs). The recommended size of a scrum team is 7 plus or minus 2
(i.e., between 5 and 9 members in a team).
ARTIFACTS IN AGILE SCRUM METHODOLOGY:
User Stories:

User Stories are not like traditional requirement documents. In
User Stories, stakeholders mention what features they need and
what they want to achieve.

Product Backlog:
The Product Backlog is a repository where the list of Product
Backlog Items is stored and maintained by the Product Owner.
The Product Backlog Items are prioritized by the Product Owner
as high or low, and the Product Owner can also re-prioritize
the product backlog constantly.

Sprint Backlog:

The group of user stories which the scrum development team
agreed to do during the current sprint (the committed Product
Backlog items).

Product Burn-down Chart:

A graph which shows how many Product Backlog Items (User
Stories) are implemented/not implemented.

Sprint Burn-down Chart:

A graph which shows how many Sprints are implemented/not
implemented by the Scrum Team.
Release Burn-down Chart:

A graph which shows the list of releases still pending, which
the Scrum Team has planned.

Defect Burn-down Chart:

A graph which shows how many defects are identified and fixed.

Note: Burn-down charts provide proof of whether the project is
on track or not.

MEETINGS IN AGILE SCRUM METHODOLOGY:

Sprint Planning Meeting:

The first step of Scrum is the Sprint Planning Meeting, which
the entire Scrum Team attends. Here the Product Owner selects
the Product Backlog Items (User Stories) from the Product
Backlog.
The most important User Stories are at the top of the list and
the least important at the bottom. The Scrum Development Team
decides and provides the effort estimation.

Daily Scrum Meeting: (Daily Stand-up)

Daily Scrum is also known as the Daily Stand-up meeting. Here
each team member reports to the peer team members on what
he/she did yesterday, what he/she is going to do today and what
obstacles are impeding their progress. Reporting is between
peers, not to the Scrum Master or Product Owner. The Daily
Scrum takes approximately 15 minutes.

Sprint Review Meeting:

In the Sprint Review Meeting, the Scrum Development Team
presents a demonstration of a potentially shippable product.
The Product Owner declares which items are completed and which
are not, and adds additional items to the product backlog based
on the stakeholders' feedback.

Sprint Retrospective Meeting:

The Scrum Team meets again after the Sprint Review Meeting and
documents the lessons learnt in the earlier sprint, such as
"What went well" and "What could be improved". It helps the
Scrum Team to avoid mistakes in the next Sprints.

When do we use the Agile Scrum Methodology?

The client is not so clear on requirements
The client expects quick releases
The client doesn't give all the requirements at a time

Conclusion:
In the Agile Scrum Methodology, all the members of a Scrum Team
gather and finalize the Product Backlog Items (User Stories)
for a particular Sprint and commit to a timeline to release the
product. Based on the Daily Scrum meetings, the Scrum
Development Team develops and tests the product and presents it
to the Product Owner in the Sprint Review Meeting. If the
Product Owner accepts all the developed User Stories, then the
Sprint is completed and the Scrum Team takes up the next Sprint
in the same manner.

Principles of Agile Software Development | Software Testing
Material
Agile testing is a software testing practice that follows the
principles of agile software development. It is an iterative
software development methodology where requirements keep
changing as per the customer's needs. Testing is done in
parallel to the development in an iterative model. The test
team receives frequent code changes from the development team
for testing the application.

12 Principles of Agile Software Development:
1. The highest priority is to satisfy the customer through
early and continuous delivery of valuable software
2. Welcome changing requirements, even late in development
3. Deliver working software frequently
4. Business people and developers must work together daily
with transparency throughout the project
5. Build projects around motivated individuals
6. The best form of communication is to do face-to-face
conversation
7. Working software is the primary measure of progress
8. Able to maintain a constant pace
9. Continuous attention to technical excellence
10. Simplicity – the art of maximizing the amount of work not
done – is essential
11. Self-organizing teams
12. At regular intervals, the team reflects on how to become
more effective, then tunes and adjusts its behavior accordingly

Top 20 Agile Testing Interview Questions | Software Testing
Material
Agile Testing Interview Questions 1 – 10:

1. What is Agile Testing?

Agile testing is a software testing practice that follows the
principles of agile software development. It is an iterative
software development methodology where requirements keep
changing as per the customer's needs. Testing is done in
parallel to the development in an iterative model. The test
team receives frequent code changes from the development team
for testing the application.

2. What is the Agile Manifesto?

The Agile Manifesto defines 4 key values:

i. Individuals and interactions over processes and tools
ii. Working software over comprehensive documentation
iii. Customer collaboration over contract negotiation
iv. Responding to change over following a plan

3. What are the principles of Agile Software Development?

1. The highest priority is to satisfy the customer through
early and continuous delivery of valuable software
2. Welcome changing requirements, even late in development
3. Deliver working software frequently
4. Business people and developers must work together daily
with transparency throughout the project
5. Build projects around motivated individuals
6. The best form of communication is to do face-to-face
conversation
7. Working software is the primary measure of progress
8. Able to maintain a constant pace
9. Continuous attention to technical excellence
10. Simplicity – the art of maximizing the amount of work not
done – is essential
11. Self-organizing teams
12. At regular intervals, the team reflects on how to become
more effective, then tunes and adjusts its behavior accordingly
4. What are the main roles in Scrum?

Scrum consists of three main roles:

Product Owner: The Product Owner usually represents the Client
and acts as a point of contact on the Client side. The Product
Owner prioritizes the list of Product Backlog Items which the
Scrum Team should finish and release.

Scrum Master: The Scrum Master acts as a facilitator to the
Scrum Development Team: clarifies queries, shields the team
from distractions, teaches the team how to use Scrum, and also
concentrates on Return on Investment (ROI). Responsible for
managing the sprint.

Scrum Development Team: Developers and QAs, who develop the
product. The Scrum development team decides the effort
estimation to complete a Product Backlog Item.

Scrum Team: A cross-functional, self-organizing group of
dedicated people (a group of the Product Owner, Business
Analyst, Developers and QAs). The recommended size of a scrum
team is 7 plus or minus 2 (i.e., between 5 and 9 members in a
team).

5. What approach do you follow when requirements change
continuously?

In the Agile methodology, changes in requirements are possible.
It's not like other traditional methodologies where the
requirements are locked down in the requirement phase. Every
team member should be ready to handle changes in the project.

The team should work closely with the Product Owner to
understand the scope of the requirement change and negotiate to
keep the requirement changes to a minimum or to adopt those
changes in the next sprint. Based on the requirement changes,
the test team can update the Test Plan and Test Cases to
achieve the deadlines. The team should understand the risk in
the requirement change and prepare a contingency plan. It is a
best practice not to start the automation process until the
requirements are finalized.

6. How is Agile Testing different from other traditional
Software Development Models?

It is one of the common Agile Testing interview questions.

In the Agile methodology, testing is not a phase like in other
traditional models; it is an activity parallel to development.
The time slot for testing is shorter in Agile compared to the
traditional models. The testing team works on small features in
Agile, whereas the test team works on the complete application
after development in the traditional models.

Must Read: Software Development Life Cycle And Types of SDLC
7. When do we use the Agile Scrum Methodology?

i. When the client is not so clear on requirements
ii. When the client expects quick releases
iii. When the client doesn't give all the requirements at a time

8. What is a Sprint?

In Scrum, the project is divided into Sprints. Each Sprint has a
specified timeline (2 weeks to 1 month). This timeline is agreed
by the Scrum Team during the Sprint Planning Meeting. Here,
User Stories are split into different modules. The end result of
every Sprint should be a potentially shippable product.

9. What are the Product Backlog and Sprint Backlog?

Product Backlog: The Product Backlog is a repository where the
list of Product Backlog Items is stored and maintained by the
Product Owner. The Product Backlog Items are prioritized by the
Product Owner as high or low, and the Product Owner can also
re-prioritize the product backlog constantly.

Sprint Backlog: The group of user stories which the scrum
development team agreed to do during the current sprint (the
committed Product Backlog items). It is a subset of the product
backlog.

10. What is the difference between a Burn-up and a Burn-down
chart?

Both burn-up and burn-down charts are graphs used to track the
progress of a project, and they provide proof of whether the
project is on track or not.

A burn-up chart represents how much work has been completed in
a project, whereas a burn-down chart represents the remaining
work left in a project.
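A tiny sketch of the difference, using invented sprint numbers:
from the same daily completion data, the burn-up series
accumulates completed work while the burn-down series tracks
the work remaining.

# Invented sprint data: total scope and work completed per day.
total_points = 40
completed_per_day = [5, 8, 4, 6, 7, 10]

burn_up, burn_down = [], []
done = 0
for points in completed_per_day:
    done += points
    burn_up.append(done)                   # cumulative work completed
    burn_down.append(total_points - done)  # work remaining

print("Burn-up:  ", burn_up)    # [5, 13, 17, 23, 30, 40]
print("Burn-down:", burn_down)  # [35, 27, 23, 17, 10, 0]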

Agile Testing Interview Questions 11 – 20:

11. What are the types of burn-down charts?

There are four popularly used burn-down charts in Agile:

i. Product burn-down chart
ii. Sprint burn-down chart
iii. Release burn-down chart
iv. Defect burn-down chart

12. What is a Product Burn-down Chart?

A graph which shows how many Product Backlog Items (User
Stories) are implemented/not implemented.
13. What is a Sprint Burn-down Chart?

A graph which shows how many Sprints are implemented/not
implemented by the Scrum Team.

14. What is a Release Burn-down Chart?

A graph which shows the list of releases still pending, which
the Scrum Team has planned.

15. What is a Defect Burn-down Chart?

A graph which shows how many defects are identified and fixed.

16. What is a Daily Stand-up Meeting?

The Daily Stand-up Meeting is a daily routine meeting. It brings
everyone up to date on the information and helps the team stay
organized.
Each team member reports to the peers the following:

1. What did you complete yesterday?
2. Any impediments in your way?
3. What do you commit to today?
4. When do you think you will be done with that?
In general, it's not a recorded meeting. Reporting is between
peers, not to the Scrum Master or Product Owner. It is normally
timeboxed to a maximum of 15 minutes. It is AKA the 15-Minute
Stand-up Meeting.

17. What is a Sprint Planning Meeting?

The first step of Scrum is the Sprint Planning Meeting, which
the entire Scrum Team attends. Here the Product Owner selects
the Product Backlog Items (User Stories) from the Product
Backlog.
The most important User Stories are at the top of the list and
the least important at the bottom. The Scrum Development Team
decides and provides the effort estimation.

18. What is a Sprint Review Meeting?

In the Sprint Review Meeting, the Scrum Development Team
presents a demonstration of a potentially shippable product.
The Product Owner declares which items are completed and which
are not, and adds additional items to the product backlog based
on the stakeholders' feedback.

19. What is a Sprint Retrospective Meeting?

The Scrum Team meets again after the Sprint Review Meeting and
documents the lessons learned in the earlier sprint, such as
"What went well" and "What could be improved". It helps the
Scrum Team to avoid mistakes in the next Sprints.

20. What is a Task Board?

A task board is a dashboard which illustrates the progress that
an agile team is making in achieving their sprint goals.

In general, the columns used in a task board are as follows:

i. User Story: the actual business requirement (description)
ii. To Do: all the tasks of the current sprint
iii. In Progress: any task being worked on
iv. To Verify: tasks pending verification
v. Done: tasks which are completed
Top 100 Software Testing Interview Questions & Answers |
Software Testing Material
Software Testing Interview Questions 1 – 25:

1. What is Software Testing?

According to the ANSI/IEEE 1059 standard: a process of
analyzing a software item to detect the differences between
existing and required conditions (i.e., defects) and to evaluate
the features of the software item. Click here for more details.

2. What are Quality Assurance and Quality Control?

Quality Assurance: Quality Assurance involves process-oriented
activities. It ensures the prevention of defects in the process
used to make the software application, so that defects don't
arise while the software application is being developed.

Quality Control: Quality Control involves product-oriented
activities. It executes the program or code to identify the
defects in the software application.

Must Read: Software QA Interview Questions


3. What is Verification in software testing?

Verification is the process of ensuring that we are building
the product right, i.e., verifying the requirements we have and
verifying that we are developing the product accordingly. The
activities involved here are Inspections, Reviews and
Walkthroughs. Click here for more details.
4. What is Validation in software testing?

Validation is the process of checking whether we are building
the right product, i.e., validating that the product we have
developed is right. The activity involved here is testing the
software application. Click here for more details.

5. What is Static Testing?

Static Testing involves reviewing the documents to identify
defects in the early stages of the SDLC.

6. What is Dynamic Testing?

Dynamic Testing involves the execution of code. It validates
the output against the expected outcome.

7. What is White Box Testing?

White Box Testing is also called Glass Box, Clear Box, or
Structural Testing. It is based on the application's internal
code structure. In white-box testing, an internal perspective
of the system, as well as programming skills, are used to
design test cases. This testing is usually done at the unit
level. Click here for more details.

8. What is Black Box Testing?

Black Box Testing is a software testing method in which testers
evaluate the functionality of the software under test without
looking at the internal code structure. This can be applied to
every level of software testing, such as Unit, Integration,
System and Acceptance Testing. Click here for more details.

9. What is Grey Box Testing?

Grey Box Testing is the combination of both White Box and Black
Box Testing. The tester who works on this type of testing needs
to have access to design documents. This helps to create better
test cases in the process.

10. What are Positive and Negative Testing?

Positive Testing: It determines what the system is supposed to
do. It helps to check whether the application justifies the
requirements.

Negative Testing: It determines what the system is not supposed
to do. It helps to find defects in the software.

11. What is a Test Strategy?

A Test Strategy is a high-level document (a static document),
usually developed by the project manager. It is a document
which captures the approach on how we go about testing the
product and achieving the goals. It is normally derived from
the Business Requirement Specification (BRS). Documents like
the Test Plan are prepared by keeping this document as a base.
Click here for more details.

12. What is a Test Plan and what contents are available in a
Test Plan?

A test plan document is a document which contains the plan for
all the testing activities to be done to deliver a quality
product. The Test Plan document is derived from the Product
Description, SRS, or Use Case documents for all future
activities of the project. It is usually prepared by the Test
Lead or Test Manager.

1. Test plan identifier
2. References
3. Introduction
4. Test items (functions)
5. Software risk issues
6. Features to be tested
7. Features not to be tested
8. Approach
9. Items pass/fail criteria
10. Suspension criteria and resolution requirements
11. Test deliverables
12. Remaining test tasks
13. Environmental needs
14. Staff and training needs
15. Responsibility
16. Schedule
17. Plan risks and contingencies
18. Approvals
19. Glossaries
Click here for more details. 

13. What is a Test Suite?

A Test Suite is a collection of test cases which are intended
to test an application.

14. What is a Test Scenario?

A Test Scenario gives the idea of what we have to test. A Test
Scenario is like a high-level test case.

15. What is a Test Case?

Test cases are the set of positive and negative executable
steps of a test scenario, with a set of pre-conditions, test
data, expected results, post-conditions and actual results.
Click here for more details.

16. What is a Test Bed?

An environment configured for testing. A test bed consists of
hardware, software, network configuration, the application
under test, and other related software.

17. What is a Test Environment?

A Test Environment is the combination of hardware and software
on which the test team performs testing.

Example:

- Application Type: Web Application
- OS: Windows
- Web Server: IIS
- Web Page Design: Dot Net
- Client Side Validation: JavaScript
- Server Side Scripting: ASP Dot Net
- Database: MS SQL Server
- Browser: IE/FireFox/Chrome
18. What is Test Data?

Test data is the data that is used by the testers to run the
test cases. While running the test cases, testers need to enter
some input data. To do so, testers prepare test data. It can be
prepared manually and also by using tools.

For example, to test a basic login functionality having user id
and password fields, we need to enter some data in the user id
and password fields. So we need to collect some test data.
19. What is a Test Harness?

A test harness is the collection of software and test data
configured to test a program unit by running it under varying
conditions, which involves comparing the actual output with the
expected output.

20. What is Test Closure?

Test Closure is the note prepared before the test team formally
completes the testing process. This note contains the total no.
of test cases, total no. of test cases executed, total no. of
defects found, total no. of defects fixed, total no. of bugs not
fixed, total no. of bugs rejected, etc.

21. List out Test Deliverables?

1. Test Strategy
2. Test Plan
3. Effort Estimation Report
4. Test Scenarios
5. Test Cases/Scripts
6. Test Data
7. Requirement Traceability Matrix (RTM)
8. Defect Report/Bug Report
9. Test Execution Report
10. Graphs and Metrics
11. Test summary report
12. Test incident report
13. Test closure report
14. Release Note
15. Installation/configuration guide
16. User guide
17. Test status report
18. Weekly status report (Project manager to client)
Click here for more details.

22. What is Unit Testing?

Unit Testing is also called Module Testing or Component
Testing. It is done to check whether an individual unit or
module of the source code is working properly. It is done by
the developers in the developer's environment.

23. What is Integration Testing?

Integration Testing is the process of testing the interface
between two software units. Integration testing is done in
three ways: the Big Bang Approach, the Top-Down Approach, and
the Bottom-Up Approach.

Click here for more details.

24. What is System Testing?

Testing the fully integrated application to evaluate the
system's compliance with its specified requirements is called
System Testing, AKA End-to-End Testing. It verifies the
completed system to ensure that the application works as
intended.

25. What is the Big Bang Approach?

Combining all the modules at once and verifying the
functionality after completion of individual module testing.

The top-down and bottom-up approaches are carried out using
dummy modules known as Stubs and Drivers. These Stubs and
Drivers are used to stand in for missing components to simulate
data communication between modules.
Manual Testing Interview Questions 26 – 50:

26. What is the Top-Down Approach?

Testing takes place from top to bottom. High-level modules are
tested first, then low-level modules, and finally the low-level
modules are integrated into the high level to ensure the system
is working as intended. Stubs are used as temporary modules if
a module is not ready for integration testing.

27. What is the Bottom-Up Approach?

It is the reciprocal of the Top-Down Approach. Testing takes
place from the bottom up. The lowest-level modules are tested
first, then the high-level modules, and finally the high-level
modules are integrated with the low level to ensure the system
is working as intended. Drivers are used as temporary modules
for integration testing.

28. What is End-To-End Testing?

Refer to System Testing.

29. What is Functional Testing?

In simple words, what the system actually does is functional
testing. It verifies that each function of the software
application behaves as specified in the requirement document:
testing all the functionalities by providing appropriate input
to verify whether the actual output matches the expected
output. It falls within the scope of black box testing, and the
testers need not be concerned about the source code of the
application.

30. What is Non-Functional Testing?

In simple words, how well the system performs is non-functional
testing. Non-functional testing refers to various aspects of
the software such as performance, load, stress, scalability,
security, compatibility, etc. The main focus is to improve the
user experience in terms of how fast the system responds to a
request.

31. What is Acceptance Testing?

It is also known as pre-production testing. It is done by the
end users along with the testers to validate the functionality
of the application. It is formal testing conducted to determine
whether the application is developed as per the requirement. It
allows the customer to accept or reject the application. Types
of acceptance testing are Alpha, Beta and Gamma.

32. What is Alpha Testing?

Alpha testing is done by the in-house developers (who developed
the software) and testers. Sometimes alpha testing is done by
the client or an outsourcing team with the presence of
developers or testers.

33. What is Beta Testing?

Beta testing is done by a limited number of end users before
delivery. Usually, it is done at the client's place.

34. What is Gamma Testing?

Gamma testing is done when the software is ready for release
with the specified requirements. It is done at the client's
place, directly, skipping all the in-house testing activities.

35. What is Smoke Testing?

Smoke Testing is done to make sure the build we received from
the development team is testable. It is also called a "Day 0"
check and is done at the "build level". It helps avoid wasting
testing time on the whole application when the key features
don't work or the key bugs have not been fixed yet.

36. What is Sanity Testing?

Sanity Testing is done during the release phase to check the
main functionalities of the application without going deeper.
It is also called a subset of Regression Testing and is done at
the "release level". At times, due to release time constraints,
rigorous regression testing can't be done on the build; sanity
testing does that part by checking the main functionalities.

37. What is Retesting?

Retesting is to ensure that the defects which were found and
posted in an earlier build were fixed in the current build. Say
Build 1.0 was released. The test team found some defects
(Defect Ids 1.0.1, 1.0.2) and posted them. Build 1.1 was
released; now testing the defects 1.0.1 and 1.0.2 in this build
is retesting.

38. What is Regression Testing?

Regression Testing is the repeated testing of an already tested
program, after modification, to discover any defects introduced
or uncovered as a result of the changes in the software being
tested or in other related or unrelated software components.

Usually, we do regression testing in the following cases:

1. New functionalities are added to the application
2. Change requirement (in organizations, we call it a CR)
3. Defect fixing
4. Performance issue fix
5. Environment change (e.g., updating the DB from MySQL
to Oracle)
39. What is GUI Testing?

Graphical User Interface Testing is testing the interface
between the application and the end user.

40. What is Recovery Testing?

Recovery Testing is performed in order to determine how quickly
the system can recover after a system crash or hardware
failure. It comes under the non-functional testing types.

41. What is Globalization Testing?

Globalization is the process of designing a software
application so that it can be adapted to various languages and
regions without any changes.

42. What is Internationalization Testing (I18N Testing)?

Refer to Globalization Testing.

43. What is Localization Testing (L10N Testing)?

Localization is the process of adapting globalized software for
a specific region or language by adding locale-specific
components.

44. What is Installation Testing?

Installation Testing checks whether the application is
successfully installed and working as expected after
installation.

45. What is Formal Testing?

Formal Testing is a process where the testers test the
application with pre-planned procedures and proper
documentation.
46. What is Risk Based Testing?

Identify the modules or functionalities which are most likely


cause failures and then testing those functionalities.

47. What is Compatibility Testing?


It is to deploy and check whether the application is working as
expected in a different combination of environmental
components.

48. What is Exploratory Testing?


Usually, this process will be carried out by domain experts.
They perform testing just by exploring the functionalities of the
application without having the knowledge of the requirements.

49. What is Monkey Testing?

Performing abnormal actions on the application deliberately in order to verify its stability.

50. What is Usability Testing?

To verify whether the application is user-friendly and can be used comfortably by an end user. The main focus of this testing is to check whether the end user can understand and operate the application easily. An application should be self-explanatory and must not require training to operate.

Manual Testing Interview Questions – 51-75:
51. What is Security Testing?

Security testing is a process to determine whether the system protects data and maintains functionality as intended.

52. What is Soak Testing?


Running a system at high load for a prolonged period of time
to identify the performance problems is called Soak Testing.

53. What is Performance Testing?

This type of testing determines or validates the speed, scalability, and/or stability characteristics of the system or application under test. Performance testing is concerned with achieving the response times, throughput, and resource-utilization levels that meet the performance objectives for the project or product.

54. What is Load Testing?

It is to verify that the system/application can handle the expected number of transactions, and to verify the system/application behavior under both normal and peak load conditions.

55. What is Volume Testing?

It is to verify that the system/application can handle a large amount of data.

56. What is Stress Testing?

It is to verify the behavior of the system once the load increases beyond its design expectations.

57. What is Scalability Testing?

Scalability testing is a type of non-functional testing. It determines how the application under test scales with increasing workload.

58. What is Concurrency Testing?


Concurrency testing means accessing the application with multiple users at the same time to ensure the stability of the system. It is mainly used to identify deadlock issues.
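
A toy sketch of the idea in Python, not a real load tool: the shared counter is a hypothetical stand-in for application state, many simulated "users" hit the same operation at once, and an assertion flags lost updates under contention.

```python
# concurrency_sketch.py - many simulated users hitting one operation.
import threading
from concurrent.futures import ThreadPoolExecutor

counter = 0
lock = threading.Lock()

def user_action(_):
    global counter
    # Without this lock, concurrent increments can race and lose updates.
    with lock:
        counter += 1

with ThreadPoolExecutor(max_workers=50) as pool:
    list(pool.map(user_action, range(1000)))

assert counter == 1000, "lost update detected under concurrent access"
```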

59. What is Fuzz Testing?

Fuzz testing is used to identify coding errors and security loopholes in an application by feeding massive amounts of random data into the system in an attempt to make it crash, to see if anything breaks in the application.
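
A minimal sketch of the technique in Python; parse_record is a hypothetical stand-in for the real code under test, and the exception split illustrates "expected rejection" versus "finding":

```python
# fuzz_sketch.py - throw random bytes at the function under test.
import random

def parse_record(data: bytes) -> dict:
    # Toy parser standing in for the real code: expects b"key=value".
    key, _, value = data.partition(b"=")
    return {key.decode(): value.decode()}

random.seed(0)  # reproducible fuzz run
for i in range(10_000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(1, 64)))
    try:
        parse_record(blob)
    except ValueError:
        pass  # documented, expected rejection (includes UnicodeDecodeError)
    except Exception as exc:
        print(f"finding #{i}: input {blob!r} raised {exc!r}")
```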

60. What is Adhoc Testing?

Ad-hoc testing is quite the opposite of formal testing; it is an informal testing type. In ad-hoc testing, testers test the application randomly, without following any documents or test design techniques, and without any test cases or business requirement document. It is primarily performed when the testers' knowledge of the application under test is very high.

61. What is Interface Testing?

Interface testing is performed to evaluate whether two intended modules pass data and communicate correctly with one another.

62. What is Reliability Testing?


Performing testing on the application continuously for a long period of time in order to verify the stability of the application.

63. What is Bucket Testing?

Bucket testing is a method of comparing two versions of an application against each other to determine which one performs better.
64. What is A/B Testing?

Refer Bucket Testing.

65. What is Split Testing?

Refer Bucket Testing.
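
A minimal sketch of the mechanic underlying bucket/A-B/split testing: assigning each user deterministically to one of two variants. The SHA-256-based scheme below is one common choice, used here as an illustrative assumption:

```python
# ab_sketch.py - deterministic 50/50 split of users into two buckets.
import hashlib

def bucket(user_id: str) -> str:
    # Hashing keeps the assignment stable: the same user always lands
    # in the same bucket across visits.
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

print(bucket("user-42"))  # e.g. "A", and always "A" for this user
```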

66. What are the principles of Software Testing?

1. Testing shows the presence of defects
2. Exhaustive testing is impossible
3. Early testing
4. Defect clustering
5. Pesticide Paradox
6. Testing is context dependent
7. Absence-of-errors fallacy

67. What is Exhaustive Testing?

Testing all the functionalities using all valid and invalid inputs
and preconditions is known as Exhaustive testing.

68. What is Early Testing?

Defects detected in early phases of the SDLC are less expensive to fix. So conducting early testing reduces the cost of fixing defects.

69. What is Defect clustering?

Defect clustering in software testing means that a small number of modules or functionalities contain most of the bugs or show the most operational failures.
70. What is Pesticide Paradox?

The Pesticide Paradox in software testing: if the same test cases are repeated again and again, eventually those test cases will no longer find new bugs. To overcome the Pesticide Paradox, it is necessary to review the test cases regularly and to add or update them to find more defects.

71. What is Walk Through?

A walkthrough is an informal meeting conducted to learn, gain understanding, and find defects. The author leads the meeting and clarifies the queries raised by the peers in the meeting.

72. What is Inspection?

Inspection is a formal meeting led by a trained moderator, never by the author. The document under inspection is prepared and checked thoroughly by the reviewers before the meeting. In the inspection meeting, the defects found are logged and shared with the author for appropriate action. Post inspection, a formal follow-up process is used to ensure timely corrective action.

73. Who are all involved in an inspection meeting?

Author, Moderator, Reviewer(s), Scribe/Recorder and Manager.

74. What is a Defect?

The variation between the actual results and the expected results is known as a defect. If a developer finds an issue and corrects it himself during the development phase, it is called a defect.

75. What is a Bug?


If testers find a mismatch in the application/system during the testing phase, they call it a bug.

Software Testing Interview Questions – 76-100:
76. What is an Error?

We can't compile or run a program due to a coding mistake in the program. If a developer is unable to successfully compile or run a program, they call it an error.

77. What is a Failure?

Once the product is deployed, if customers find any issues, the product is said to have failed. After release, if an end user finds an issue, that particular issue is called a failure.

78. What is Bug Severity?

Bug/defect severity can be defined as the impact of the bug on the customer's business. It can be Critical, Major, or Minor. In simple words, it is how much effect a particular defect has on the system.

79. What is Bug Priority?

Defect priority can be defined as how soon the defect should be fixed. It gives the order in which defects should be resolved; developers decide which defect to take up next based on priority. It can be High, Medium, or Low. Most of the time, the priority is set based on the customer's requirement.
80. What are some examples of Bug Severity and Bug Priority?

High Priority & High Severity: The Submit button on the login page is not working and customers are unable to log in to the application.

Low Priority & High Severity: A crash in some functionality that is due to be delivered a couple of releases later.

High Priority & Low Severity: A spelling mistake in the company name on the homepage.

Low Priority & Low Severity: The FAQ page takes a long time to load.

81. What is the difference between a Standalone application, a Client-Server application and a Web application?

Standalone application:

Standalone applications follow a one-tier architecture. The Presentation, Business, and Database layers reside in one system, for a single user.

Client-Server Application:

Client-server applications follow a two-tier architecture. The Presentation and Business layers are in the client system and the Database layer is on another server. They work mostly over an intranet.

Web Application:
Web applications follow a three-tier or n-tier architecture. The Presentation layer is in the client system, the Business layer is in an application server, and the Database layer is in a database server. They work over both intranet and internet.

82. What is Bug Life Cycle?

The bug life cycle is also known as the defect life cycle. In the software development process, a bug has a life cycle that it must go through before being closed. The bug life cycle varies depending on the tools used (QC, JIRA, etc.) and the process followed in the organization.

83. What is Bug Leakage?

A bug that was missed by the testing team while testing, after which the build was released to production. If that bug (missed by the testing team) is then found by the end user or customer, we call it bug leakage.

84. What is Bug Release?

Releasing the software to production with known bugs is called a bug release. These known bugs should be included in the release notes.

85. What is Defect Age?

Defect age can be defined as the time interval between the date of defect detection and the date of defect closure.

Defect Age = Date of defect closure – Date of defect detection

Assume a tester found a bug and reported it on 1 Jan 2016 and it was successfully fixed on 5 Jan 2016. By this formula, the defect age is 4 days.
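
The same arithmetic, expressed with Python's datetime module so the formula above is unambiguous:

```python
# defect_age_sketch.py - the defect-age formula as date arithmetic.
from datetime import date

detected = date(2016, 1, 1)  # date of defect detection
closed = date(2016, 1, 5)    # date of defect closure

age = (closed - detected).days
print(age)  # 4
```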
86. What is Error Seeding?

Error seeding is a process of intentionally adding known errors to a program in order to measure the rate of error detection. It helps estimate the testers' skill at finding bugs and also reveals how well the application behaves when it contains errors.
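
A classical companion calculation (one simple model, offered here as an assumption rather than something the text above prescribes) estimates how many real defects remain from the fraction of seeded errors that were caught:

```python
# seeding_sketch.py - seeded-defect estimate (one simple classical model).
seeded_total = 20   # known errors intentionally planted
seeded_found = 15   # planted errors the testers caught
real_found = 45     # genuine defects found in the same test pass

# If testers caught 75% of seeded errors, assume they also caught
# roughly 75% of the real ones.
estimated_real_total = real_found * seeded_total / seeded_found
print(round(estimated_real_total))               # 60 estimated in total
print(round(estimated_real_total) - real_found)  # ~15 likely still latent
```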

87. What is Showstopper Defect?

A showstopper defect is a defect that won't allow a user to move further in the application. It's almost like a crash.

Assume that the login button is not working. Even though you have a valid username and a valid password, you cannot move further because the login button is not functioning.

88. What is HotFix?

A bug that needs to be handled as a high-priority bug and fixed immediately.

89. What is Boundary Value Analysis?

Boundary value analysis (BVA) is based on testing the boundary values of valid and invalid partitions. The behavior at the edge of each equivalence partition is more likely to be incorrect than the behavior within the partition, so boundaries are an area where testing is likely to yield defects. Every partition has its maximum and minimum values, and these maximum and minimum values are the boundary values of the partition. A boundary value for a valid partition is a valid boundary value; similarly, a boundary value for an invalid partition is an invalid boundary value.
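
A small sketch, assuming a hypothetical input field that accepts ages 18 to 60: the classic approach tests each boundary plus one step on either side of it.

```python
# bva_sketch.py - boundary values for a hypothetical "age 18..60" field.
lower, upper = 18, 60

# Each boundary, plus one value just outside and just inside it.
boundary_values = [lower - 1, lower, lower + 1, upper - 1, upper, upper + 1]
print(boundary_values)  # [17, 18, 19, 59, 60, 61]

def is_valid_age(age: int) -> bool:
    return lower <= age <= upper

for age in boundary_values:
    print(age, is_valid_age(age))  # 17 -> False, 18 -> True, ... 61 -> False
```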

90. What is Equivalence Class Partition?


Equivalence Partitioning is also known as Equivalence Class Partitioning. In equivalence partitioning, inputs to the software or system are divided into groups that are expected to exhibit similar behavior, so they are likely to be processed in the same way. Hence, selecting one input from each group is enough to design the test cases.
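
Continuing the hypothetical age field from the BVA sketch, one representative value per partition is enough to exercise each expected behavior:

```python
# ep_sketch.py - one representative input per equivalence partition.
partitions = {
    "invalid_below": 5,   # any value < 18 should behave the same
    "valid": 30,          # any value in 18..60 should behave the same
    "invalid_above": 70,  # any value > 60 should behave the same
}

def is_valid_age(age: int) -> bool:
    return 18 <= age <= 60

for name, representative in partitions.items():
    print(name, representative, is_valid_age(representative))
```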

91. What is Decision Table testing?

A Decision Table is also known as a Cause-Effect Table. This test technique is appropriate for functionalities that have logical relationships between inputs (if-else logic). In the decision table technique, we deal with combinations of inputs: to identify the test cases, we take conditions as inputs and actions as outputs.
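
A sketch of turning such a table into data-driven checks; the login rules below are the usual textbook example, invented here for illustration rather than taken from any real system:

```python
# decision_table_sketch.py - conditions in, actions out, one rule per row.
# Hypothetical login rules: (valid_user, valid_password) -> action.
rules = {
    (True,  True):  "go to home page",
    (True,  False): "show 'wrong password'",
    (False, True):  "show 'unknown user'",
    (False, False): "show 'unknown user'",
}

def login_action(valid_user: bool, valid_password: bool) -> str:
    return rules[(valid_user, valid_password)]

# Each row of the decision table becomes one test case:
for conditions, expected in rules.items():
    assert login_action(*conditions) == expected
```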

92. What is State Transition?

Using state transition testing, we pick test cases from an application where we need to test different system transitions. We can apply this when an application gives a different output for the same input, depending on what happened in an earlier state.
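
The PIN-attempt lockout is a common textbook illustration of this; the sketch below is a hypothetical model, not any specific application's logic. The same input ("wrong PIN") leads to different states depending on what happened before.

```python
# state_transition_sketch.py - hypothetical ATM-style PIN lockout.
transitions = {
    ("start",    "wrong"):   "attempt2",
    ("attempt2", "wrong"):   "attempt3",
    ("attempt3", "wrong"):   "locked",
    ("start",    "correct"): "granted",
    ("attempt2", "correct"): "granted",
    ("attempt3", "correct"): "granted",
}

def next_state(state: str, pin_result: str) -> str:
    # Unknown (state, input) pairs leave the state unchanged.
    return transitions.get((state, pin_result), state)

# Test case derived from the state model: three wrong PINs must lock.
state = "start"
for _ in range(3):
    state = next_state(state, "wrong")
assert state == "locked"
```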

93. What is an entry criteria?

The prerequisites that must be achieved before commencing the testing process.

94. What is an exit criteria?

The conditions that must be met before testing can be concluded.

95. What is SDLC?


Software Development Life Cycle (SDLC) aims to produce a
high-quality system that meets or exceeds customer
expectations, works effectively and efficiently in the current
and planned information technology infrastructure, and is
inexpensive to maintain and cost-effective to enhance.


96. What are the different available models of SDLC?

1. Waterfall
2. Spiral
3. V Model
4. Prototype
5. Agile
97. What is STLC?

STLC (Software Testing Life Cycle) identifies what test activities to carry out and when to accomplish them. Even though testing differs between organizations, there is a testing life cycle.

98. What is RTM?

A Requirements Traceability Matrix (RTM) is used to trace the requirements to the tests that are needed to verify whether the requirements are fulfilled. It is also known as a Traceability Matrix or a Cross-Reference Matrix.

99. What is Test Metrics?

Software test metrics are used to monitor and control the testing process and the product. They help drive the project towards planned goals without deviation. Metrics answer different questions; it's important to decide which questions you want answered.

100. When to stop testing? (Or) How do you decide when you have tested enough?

There are many factors involved in real-time projects when deciding when to stop testing:

1. Testing deadlines or release deadlines are reached
2. The decided pass percentage of test cases is reached
3. The risk in the project is under the acceptable limit
4. All the high-priority bugs and blockers are fixed
5. The acceptance criteria are met
