
Software Verification and Validation

What is Verification?
Verification is a process that determines the quality of the software. Verification
includes all the activities associated with producing high-quality software, e.g.,
testing, inspection, design analysis, specification analysis, and so on. Verification is a
relatively objective process, in that if the various processes and documents are
expressed precisely enough, no subjective judgement should be needed in order to
verify software.
Advantages of Verification:

 Verification helps in lowering the number of defects that may be encountered in the later stages of development.
 Verifying the product at the starting phase of development helps in understanding the product in a more comprehensive way.
 Verification reduces the chances of failures in the software application or product.
 Verification helps in building the product as per the customer specifications and needs.

What is Validation?
Validation is the process of checking that the software functionality actually meets
the requirements of the customer. Validation is done at the end of the development
process and takes place after verification activities are completed.
Advantages of Validation:

 If some defects are missed during verification, they can be caught as failures during the validation process.
 If during verification some specification was misunderstood and development has already occurred, then during the validation process the difference between the actual result and the expected result can be identified and corrective action taken.
 Validation is done during testing such as feature testing, integration testing, system testing, load testing, compatibility testing, stress testing, etc.
 Validation helps in building the right product as per the customer’s requirements, which in turn will satisfy their business process needs.

How Do Verification and Validation Differ?


The distinction between the two terms is largely due to the role of specifications.
Validation is the process of checking whether the specification captures the
customer’s requirements, while verification is the process of checking that the
software meets specifications.
Verification includes all the activities associated with producing high-quality
software. It is a relatively objective process, in that no subjective judgement should be
needed in order to verify software.
In contrast, validation is an extremely subjective process. It involves making
subjective assessments of how well the (proposed) system addresses a real-world
need. Validation includes activities such as requirements modelling, prototyping, and
user evaluation.
What is Static Testing?
Static Testing is a type of software testing in which software application is tested
without code execution. Manual or automated reviews of code, requirement
documents and document design are done in order to find the errors. The main
objective of static testing is to improve the quality of software applications by finding
errors in early stages of software development process.
Static testing involves manual or automated reviews of the documents. This review is
done during an initial phase of testing to catch defects early in the STLC (Software
Testing Life Cycle). It examines work documents and provides review comments. It is
also called non-execution testing or verification testing.
Examples of work documents:

 Requirement specifications
 Design document
 Source Code
 Test Plans
 Test Cases
 Test Scripts
 Help or User document
 Web Page content           

What is Dynamic Testing?


Under Dynamic Testing, code is executed. It checks the functional behavior of the
software system, memory/CPU usage, and the overall performance of the system;
hence the name “Dynamic”.
The main objective of this testing is to confirm that the software product works in
conformance with the business requirements. This testing is also called an execution
technique or validation testing.
Dynamic testing executes the software and validates the output with the expected
outcome. Dynamic testing is performed at all levels of testing and it can be either
black or white box testing.

KEY DIFFERENCE
Static testing is done without executing the program, whereas Dynamic testing is
done by executing the program.
Static testing checks the code, requirement documents, and design documents to
find errors, whereas Dynamic testing checks the functional behavior of the software
system, memory/CPU usage, and overall performance of the system.
Static testing is about the prevention of defects, whereas Dynamic testing is about
finding and fixing defects.
Static testing performs the verification process, while Dynamic testing performs the
validation process.
Static testing is performed before compilation, whereas Dynamic testing is performed
after compilation.
Typical static testing techniques are informal reviews, walkthroughs, and inspections,
whereas typical dynamic testing techniques are Boundary Value Analysis and
Equivalence Partitioning.
Validation planning; documentation for validation
Validation is the documented process of demonstrating that a system or process
meets a defined set of requirements. There are a common set of validation
documents used to provide this evidence. A validation project usually follows this
process:
Validation Planning – The decision is made to validate the system. A project lead is
identified, and validation resources are gathered.
Requirement Gathering – System Requirements are identified. Requirements are
documented in the appropriate specifications. Specification documents are reviewed
and approved.
System Testing – Testing Protocols are written, reviewed, and approved. The
protocol is executed to document that the system meets all requirements.
System Release – The Summary Report is written and system is released to the
end-users for use.
Change Control – If changes need to be made after validation is complete, Change
Control ensures that the system changes do not affect the system in unexpected
ways.
Different Types Of Software Testing

Given below is a list of some common types of Software Testing:


Functional Testing types include:

 Unit Testing
 Integration Testing
 System Testing
 Sanity Testing
 Smoke Testing
 Interface Testing
 Regression Testing
 Beta/Acceptance Testing
Non-functional Testing types include:

 Performance Testing
 Load Testing
 Stress Testing
 Volume Testing
 Security Testing
 Compatibility Testing
 Install Testing
 Recovery Testing
 Reliability Testing
 Usability Testing
 Compliance Testing
 Localization Testing
Different Kinds of Testing and their definitions:
#1) Alpha Testing

It is the most commonly used testing in the Software industry. The objective of this
testing is to identify all possible issues or defects before releasing it into the market
or to the user.

Alpha Testing will be carried out at the end of the software development phase but
before the Beta Testing. Still, minor design changes may be made as a result of such
testing.

Alpha Testing will be conducted at the developer’s site. An in-house virtual user
environment can be created for this type of testing.

#2) Acceptance Testing

An Acceptance Test is performed by the client and verifies whether the end-to-end
flow of the system meets the business requirements and the needs of the end-user.

The client accepts the software only when all the features and functionalities work as
expected. This is the last phase of testing, after which the software goes into
production. This is also called User Acceptance Testing (UAT).

#3) Ad-hoc Testing

The name itself suggests that this testing is performed on an ad-hoc basis i.e., with
no reference to the test case and also without any plan or documentation in place for
this type of testing.

The objective of this testing is to find the defects and break the application by
executing any flow of the application or any random functionality.

Ad-hoc Testing is an informal way of finding defects and can be performed by
anyone in the project. It is difficult to identify defects without a test case, but
sometimes defects found during ad-hoc testing might not have been identified using
the existing test cases.

#4) Accessibility Testing

The aim of Accessibility Testing is to determine whether the software or application is
accessible to disabled people or not.

Here, disability includes deafness, color blindness, cognitive disability, blindness, old
age, and other disabled groups. Various checks are performed, such as font size for
the visually impaired and color and contrast for color blindness.

#5) Beta Testing


Beta Testing is a formal type of Software Testing which is carried out by the
customer. It is performed in the Real Environment before releasing the product to the
market for the actual end-users.

Beta Testing is carried out to ensure that there are no major failures in the software
or product and it satisfies the business requirements from an end-user perspective.
Beta Testing is successful when the customer accepts the software.

This testing is typically done by end-users or others. It is the final testing done before
releasing the application for commercial purposes. Usually, the Beta version of the
software or product released is limited to a certain number of users in a specific
area.

So the end-user actually uses the software and shares the feedback with the
company. The company then takes necessary action before releasing the software
worldwide.

#6) Back-end Testing

Whenever input or data is entered in the front-end application, it is stored in the
database, and the testing of such a database is known as Database Testing or
Back-end Testing.

There are different databases like SQL Server, MySQL, Oracle, etc. Database
Testing involves testing of table structure, schema, stored procedures, data structure,
and so on.

In Back-end Testing, the GUI is not involved; the testers connect directly to the
database with proper access, and they can easily verify data by running a few
queries on the database.

Issues like data loss, deadlock, and data corruption can be identified during back-end
testing, and these issues are critical to fix before the system goes live in the
production environment.
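
As a minimal sketch of this kind of back-end check in Python (the in-memory SQLite database and the orders table are invented for illustration), a tester can bypass the GUI and verify stored data directly with queries:

import sqlite3

# In-memory database standing in for the application's real back-end.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT NOT NULL,"
            " qty INTEGER CHECK (qty > 0))")

# Simulate the front-end writing a record.
cur.execute("INSERT INTO orders (item, qty) VALUES (?, ?)", ("keyboard", 2))
conn.commit()

# Back-end check: query the database directly and verify the stored data.
row = cur.execute("SELECT item, qty FROM orders WHERE id = 1").fetchone()
assert row == ("keyboard", 2), "Stored data does not match the entered data"
print("Back-end data verified:", row)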

#7) Browser Compatibility Testing

This is a sub-type of Compatibility Testing (which is explained below) and is
performed by the testing team.

Browser Compatibility Testing is performed for web applications and ensures that the
software can run with a combination of different browsers and operating systems.
This type of testing also validates whether a web application runs on all versions of
all browsers or not.

#8) Backward Compatibility Testing

It is a type of testing which validates whether the newly developed software or
updated software works well with the older version of the environment or not.

Backward Compatibility Testing checks whether the new version of the software
works properly with file formats created by an older version of the software, as well
as with data tables, data files, and data structures created by that older version.

If any software is updated, then it should work well on top of the previous version of
that software.

#9) Black Box Testing

Internal system design is not considered in this type of testing. Tests are based on
the requirements and functionality.

Black Box Testing and its techniques are covered in more detail later in this
document.

#10) Boundary Value Testing

This type of testing checks the behavior of the application at the boundary level.

Boundary Value Testing is performed to check whether defects exist at boundary
values. It is used for testing ranges of numbers: each range has an upper and a
lower boundary, and testing is performed on these boundary values.

If testing requires a test range of numbers from 1 to 500, then Boundary Value
Testing is performed on the values 0, 1, 2, 499, 500, and 501.

#11) Branch Testing

This is a type of White Box Testing and is carried out during Unit Testing. Branch
Testing, as the name suggests, tests the code thoroughly by traversing every
branch.

#12) Comparison Testing

Comparison of a product’s strengths and weaknesses with its previous versions or
other similar products is termed Comparison Testing.

#13) Compatibility Testing

This is a testing type which validates how the software behaves and runs in different
environments, web servers, hardware, and network environments.

Compatibility Testing ensures that software can run on different configurations,
different databases, different browsers, and different browser versions. Compatibility
Testing is performed by the testing team.

#14) Component Testing

This is mostly performed by developers after the completion of Unit Testing.
Component Testing involves testing multiple functionalities together as a single unit
of code, and its objective is to identify defects that appear after connecting those
multiple functionalities with each other.

#15) End-to-End Testing

Similar to System Testing, End-to-End Testing involves testing of a complete
application environment in a situation that mimics real-world use, such as interacting
with a database, using network communications, or interacting with other hardware,
applications, or systems if appropriate.

#16) Equivalence Partitioning

It is a testing technique and a type of Black Box Testing. In Equivalence Partitioning,
the input values are divided into groups, and a few values or numbers are picked
from each group for testing, on the understanding that all values from a group
generate the same output.

The aim of this testing is to remove redundant test cases within a specific group
which generate the same output but do not reveal any new defect.

Suppose the application accepts values between -10 and +10; then using
equivalence partitioning the values picked for testing are zero, one positive value,
and one negative value. So the equivalence partitions for this testing are -10 to -1,
0, and 1 to 10.

#17) Example Testing

This means real-time testing. Example Testing includes real-time scenarios; it also
involves scenarios based on the experience of the testers.

#18) Exploratory Testing

Exploratory Testing is informal testing performed by the testing team. The objective
of this testing is to explore the application and look for defects that exist in the
application.

Sometimes during this testing a major defect may be discovered that can even
cause a system failure. During Exploratory Testing, it is advisable to keep track of
what flow you have tested and what activity you did before the start of a specific flow.

Exploratory Testing techniques are performed without documentation or test cases.

#20) Functional Testing

This type of testing ignores the internal parts and focuses only on the output to check
if it is as per the requirement or not.

This is a black-box type of testing that is geared towards the functional requirements
of an application.
#21) Graphical User Interface (GUI) Testing

The objective of this GUI Testing is to validate the GUI as per the business
requirement. The expected GUI of the application is mentioned in the Detailed
Design Document and GUI mockup screens.

GUI Testing includes the size of the buttons and input fields present on the screen,
alignment of all text, tables, and content in the tables.

It also validates the menus of the application: after selecting different menus and
menu items, it verifies that the page does not fluctuate and that the alignment
remains the same after hovering the mouse over a menu or sub-menu.

#22) Gorilla Testing

Gorilla Testing is a testing type performed by a tester and sometimes by the
developer as well.
In Gorilla Testing, one module or the functionality in the module is tested thoroughly
and heavily. The objective of this testing is to check the robustness of the application.

#23) Happy Path Testing

The objective of Happy Path Testing is to test an application successfully on a
positive flow.

It does not look for negative or error conditions. The focus is only on valid and
positive inputs through which the application generates the expected output.

#24) Incremental Integration Testing

Incremental Integration Testing is a bottom-up approach for testing, i.e., continuous
testing of an application as new functionality is added.

Application functionality and modules should be independent enough to be tested
separately. This is done by programmers or by testers.

#25) Install/Uninstall Testing

Installation and Uninstallation Testing is done on full, partial, or upgraded
install/uninstall processes on different operating systems under different hardware or
software environments.

#26) Integration Testing

Testing of all integrated modules to verify the combined functionality after integration
is termed as Integration Testing.

Modules are typically code modules, individual applications, client and server
applications on a network, etc. This type of testing is especially relevant to the
client/server and distributed systems.
#27) Load Testing

It is a type of Non-Functional Testing, and the objective of Load Testing is to check
how much load or what maximum workload a system can handle without any
performance degradation.

Load Testing helps to find the maximum capacity of the system under specific load
and any issues that cause software performance degradation. Load testing is
performed using tools like JMeter, LoadRunner, WebLoad, Silk performer, etc.

#28) Monkey Testing

Monkey Testing is carried out by a tester who assumes the role of a monkey using
the application: random inputs and values are entered without any knowledge or
understanding of the application.

The objective of Monkey Testing is to check whether an application or system
crashes when random input values/data are provided. Monkey Testing is performed
randomly, no test cases are scripted, and it is not necessary to be aware of the full
functionality of the system.
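
A minimal sketch of the idea in Python (parse_quantity is a hypothetical function under test): feed random junk into the function and require that nothing other than the documented error ever escapes.

import random
import string

def parse_quantity(text):
    # Hypothetical function under test: parse a quantity like "3" or "12".
    value = int(text)
    if value < 0:
        raise ValueError("quantity cannot be negative")
    return value

# Monkey test: hammer the function with random input; rejecting junk with
# the documented ValueError is fine, any other exception fails the run.
random.seed(42)  # reproducible randomness
for _ in range(1000):
    junk = "".join(random.choices(string.printable, k=random.randint(0, 8)))
    try:
        parse_quantity(junk)
    except ValueError:
        pass  # acceptable behavior for invalid input
print("1000 random inputs survived without an unexpected crash")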

#29) Mutation Testing

Mutation Testing is a type of White Box Testing in which the source code of a
program is changed, and it is verified whether the existing test cases can identify
these defects in the system.

The change in the program source code is kept very minimal so that it does not
impact the entire application; only a specific area is impacted, and the related test
cases should be able to identify those errors in the system.
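
As an illustrative sketch (both functions are invented for this example), a mutant replaces the >= operator with >, and a test suite that exercises the boundary value "kills" the mutant:

def is_adult(age):
    return age >= 18      # original code

def is_adult_mutant(age):
    return age > 18       # mutant: '>=' replaced with '>'

def run_suite(fn):
    # The boundary case age == 18 is what distinguishes the mutant.
    assert fn(17) is False
    assert fn(18) is True   # kills the mutant: the mutant returns False here
    assert fn(19) is True

run_suite(is_adult)  # passes on the original code
try:
    run_suite(is_adult_mutant)
    print("Mutant survived: the test suite is too weak")
except AssertionError:
    print("Mutant killed: the test suite detects the seeded change")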

#30) Negative Testing

Testers approach the system with an “attitude to break it”, and using Negative
Testing they validate whether the system or application breaks.

The Negative Testing technique is performed using incorrect data, invalid data, or
invalid input. It validates that the system throws an error for invalid input and
behaves as expected.
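
A minimal sketch of a negative test using Python’s built-in unittest framework (the set_age function and its valid range are invented for illustration):

import unittest

def set_age(age):
    # Hypothetical function under test: reject out-of-range ages.
    if not 0 <= age <= 130:
        raise ValueError("age out of range")
    return age

class NegativeTests(unittest.TestCase):
    def test_rejects_negative_age(self):
        # Invalid input must raise the documented error, not be silently accepted.
        with self.assertRaises(ValueError):
            set_age(-1)

    def test_rejects_absurd_age(self):
        with self.assertRaises(ValueError):
            set_age(1000)

if __name__ == "__main__":
    unittest.main()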

#31) Non-Functional Testing

This is a type of testing for which many organizations have a separate team, usually
called the Non-Functional Test (NFT) team or Performance team.

Non-Functional Testing involves testing of non-functional requirements such as Load
Testing, Stress Testing, Security Testing, Volume Testing, Recovery Testing, etc. The
objective of NFT is to ensure that the response time of the software or application is
quick enough as per the business requirement.

It should not take much time to load any page or system, and the system should hold
up during peak load.
#32) Performance Testing

This term is often used interchangeably with ‘stress’ and ‘load’ testing.

Performance Testing is done to check whether the system meets the performance
requirements. Different performance and load tools are used to do this testing.

#33) Recovery Testing

It is a type of testing which validates how well the application or system recovers
from crashes or disasters.

Recovery Testing determines if the system is able to continue its operation after a
disaster. Assume that the application is receiving data through a network cable and
suddenly that network cable has been unplugged.

Sometime later, the network cable is plugged back in; the system should then
resume receiving data from the point where it lost the connection.

#34) Regression Testing

Testing an application as a whole after the modification of any module or functionality
is termed Regression Testing.

It is difficult to cover the entire system in Regression Testing, so Automation Testing
Tools are typically used for this type of testing.

#35) Risk-Based Testing (RBT)

For Risk-Based Testing, the functionalities or requirements are tested based on their
priority. Risk-Based Testing includes testing of highly critical functionality, which has
the highest impact on business and in which the probability of failure is very high.

Priority decisions are based on business needs, so once priority is set for all
functionalities, then high priority functionality or test cases are executed first followed
by medium and then low priority functionalities.

Low priority functionality may be tested or not tested based on the available time.
Risk-Based Testing is carried out if there is insufficient time available to test the
entire software and the software needs to be implemented on time without any delay.

This approach is followed only by the discussion and approval of the client and senior
management of the organization.

#36) Sanity Testing

Sanity Testing is done to determine whether a new software version is performing
well enough to accept it for a major testing effort.

If an application is crashing on initial use, then the system is not stable enough for
further testing, and the build is sent back to the development team to fix.
#37) Security Testing

It is a type of testing performed by a special team of testers, who check whether the
system can be penetrated by any hacking method.

Security Testing is done to check how secure the software, application, or website is
from internal and external threats. This testing checks how well the software is
protected from malicious programs and viruses, and how secure and strong the
authorization and authentication processes are.

It also checks how the software behaves under a hacker attack or malicious
program, and how data security is maintained after such an attack.

#38) Smoke Testing

Whenever a new build is provided by the development team, then the Software
Testing team validates the build and ensures that no major issue exists.

The testing team validates that the build is stable, so that a detailed level of testing
can be carried out further. Smoke Testing checks that no show-stopper defects exist
in the build which would prevent the testing team from testing the application in
detail.

If the testers find that major critical functionality is broken at the initial stage itself,
then the testing team can reject the build and inform the development team
accordingly. Smoke Testing is carried out prior to any detailed Functional or
Regression Testing.

#39) Static Testing

Static Testing is a type of testing which is carried out without executing any code. It is
performed on the documentation and other work products during the testing phase.

It involves reviews, walkthroughs, and inspection of the deliverables of the project.
Static Testing does not execute the code; instead, the code syntax and naming
conventions are checked.

Static Testing is also applicable to test cases, test plans, and design documents. We
need to perform static testing with the testing team, as the defects identified during
this type of testing are cost-effective to fix from a project perspective.

#40) Stress Testing

This testing is done when a system is stressed beyond its specifications in order to
check how and when it fails.

This is performed under heavy load, such as putting in data volumes beyond storage
capacity, running complex database queries, or giving continuous input to the
system or database.

#41) System Testing


Under System Testing technique, the entire system is tested as per the
requirements. It is a Black-box type Testing that is based on the overall requirement
specifications and covers all the combined parts of the system.

#42) Unit Testing

Testing of an individual software component or module is termed as Unit Testing.

It is typically done by the programmer and not by testers, as it requires detailed
knowledge of the internal program design and code. It may also require developing
test driver modules or test harnesses.
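
A minimal sketch of a unit test using Python’s built-in unittest framework (the apply_discount function under test is invented for illustration):

import unittest

def apply_discount(price, percent):
    # Unit under test: a single, isolated function.
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_returns_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

if __name__ == "__main__":
    unittest.main()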

#43) Usability Testing

Under Usability Testing, the User-Friendliness Check is done.

The application flow is tested to see if a new user can understand the application
easily or not. Proper help is documented if a user gets stuck at any point. Basically,
system navigation is checked in this testing.

#44) Vulnerability Testing

The testing which involves identifying weaknesses in the software, hardware, and the
network is known as Vulnerability Testing. If the system is vulnerable to such attacks,
viruses, or worms, a hacker can plant malicious programs and take control of it.

We need to check whether such systems undergo Vulnerability Testing before
production, as it may identify critical defects and flaws in security.

#45) Volume Testing

Volume Testing is a type of Non-Functional Testing performed by the Performance
Testing team.

In Volume Testing, the software or application is subjected to a huge amount of data,
and the system’s behavior and response time are checked when the system comes
across such a high volume of data. This high volume of data may impact the
system’s performance and processing speed.

#46) White Box Testing

White Box Testing is based on the knowledge about the internal logic of an
application’s code.

It is also known as Glass Box Testing. Knowledge of the internal software and how
the code works is needed to perform this type of testing. Under this, tests are based
on the coverage of code statements, branches, paths, conditions, etc.

Why Software Testing?

Software that does not work correctly can lead to many problems such as:

 Delay / Loss of time
 Futility / Loss of effort
 Wastage / Loss of money
 Shame / Loss of business reputation
 Injury or death

Testing helps in ensuring that software works correctly and reduces the risk of
software failure, thereby avoiding the problems mentioned above.

Software Testing Goals

The three main goals of Software Testing are:

 Defect Detection: Find defects / bugs in the software during all stages of its
development (the earlier, the better).
 Defect Prevention: As a consequence of defect detection, help anticipate and
prevent defects from occurring at later stages of development or from recurring
in the future.
 User Satisfaction: Ensure customers / users are satisfied that their requirements
(explicit or implicit) are met.

What is a Test Plan?


A Test Plan refers to a detailed document that catalogs the test strategy, objectives,
schedule, estimations, deadlines, and the resources required for completing that
particular project. Think of it as a blueprint for running the tests needed to ensure the
software is working properly – controlled by test managers.

Components of a Test Plan


 Scope: Details the objectives of the particular project. Also, it details user
scenarios to be used in tests. If necessary, the scope can specify what scenarios
or issues the project will not cover.
 Schedule: Details start dates and deadlines for testers to deliver results.
 Resource Allocation: Details which tester will work on which test.
 Environment: Details the nature, configuration, and availability of the test
environment.
 Tools: Details what tools are to be used for testing, bug reporting, and other
relevant activities.
 Defect Management: Details how bugs will be reported, to whom and what
each bug report needs to be accompanied by. For example, should bugs be
reported with screenshots, text logs, or videos of their occurrence in the code?
 Risk Management: Details what risks may occur during software testing, and
what risks the software itself may suffer if released without sufficient testing.
 Exit Parameters: Details when testing activities must stop. This part describes
the results that are expected from the QA operations, giving testers a benchmark
to compare actual results to.

How to create a Test Plan?


Creating a Test Plan involves the following steps:

1. Product Analysis
2. Designing Test Strategy
3. Defining Objectives
4. Establish Test Criteria
5. Planning Resource Allocation
6. Planning Setup of Test Environment
7. Determine Test Schedule and Estimation
8. Establish Test Deliverables

1. Product Analysis

Start with learning more about the product being tested, the client, and the end-users
of similar products. Ideally, this phase should focus on answering the following
questions:

 Who will use the product?
 What is the main purpose of this product?
 How does the product work?
 What are the software and hardware specifications?

In this stage, do the following:

 Interview clients, designers, and developers
 Review product and project documentation
 Perform a product walkthrough
2. Designing Test Strategy

The Test Strategy document is developed by the test manager and defines the
following:

 Project objectives and how to achieve them.
 The amount of effort and cost required for testing.

More specifically, the document must detail out:

 Scope of Testing: Contains the software components (hardware, software,
middleware) to be tested and also those that will not be tested.
 Type of Testing: Describes the types of tests to be used in the project. This is
necessary since each test identifies specific types of bugs.
 Risks and Issues: Describes all possible risks that may occur during testing –
tight deadlines, insufficient management, inadequate or erroneous budget
estimate – as well as the effect of these risks on the product or business.
 Test Logistics: Mentions the names of testers (or their skills) as well as the tests
to be run by them. This section also includes the tools and the schedule laid out
for testing.

3. Defining Objectives

This phase defines the goals and expected results of test execution. Since all testing
intends to identify as many defects as possible, the objectives must include:

 A list of all software features – functionality, GUI, performance standards – that
must be tested.
 The ideal result or benchmark for every aspect of the software that needs
testing. This is the benchmark to which all actual results will be compared.

4. Establish Test Criteria

Test Criteria refers to standards or rules governing all activities in a testing project.
The two main test criteria are:

 Suspension Criteria: Defines the benchmarks for suspending all tests. For
example, if QA team members find that 50% of all test cases have failed, then all
testing is suspended until the developers resolve all of the bugs that have been
identified so far.
 Exit Criteria: Defines the benchmarks that signify the successful completion of a
test phase or project. The exit criteria are the expected results of tests and must
be met before moving on to the next stage of development. For example, 80% of
all test cases must be marked successful before a particular feature or portion of
the software can be considered suitable for public use. Both criteria are sketched
in code after this list.
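
As an illustrative sketch (the thresholds simply reuse the 50% and 80% figures from the examples above), suspension and exit criteria can be expressed as simple checks:

def should_suspend(failed, executed, threshold=0.50):
    # Suspension criterion: stop testing if too many executed cases fail.
    return executed > 0 and failed / executed >= threshold

def exit_met(passed, total, threshold=0.80):
    # Exit criterion: the phase is complete once enough cases pass.
    return total > 0 and passed / total >= threshold

print(should_suspend(failed=55, executed=100))  # True: suspend, fix bugs first
print(exit_met(passed=85, total=100))           # True: 85% >= 80%, phase may exit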

5. Planning Resource Allocation

This phase creates a detailed breakdown of all resources required for project
completion. Resources include human effort, equipment, and all infrastructure
required for accurate and comprehensive testing.
This part of the test plan decides the measure of resources (number of testers and
equipment) the project requires. This also helps test managers formulate a correctly
calculated schedule and estimation for the project.

6. Planning Setup of Test Environment

The test environment refers to the software and hardware setup on which QAs run
their tests. Ideally, test environments should be real devices so that testers can
monitor software behavior in real user conditions. Whether it is manual testing or
automation testing, real devices installed with real browsers and operating systems
make the most reliable test environments; emulators or simulators can compromise
test results.


7. Determining Test Schedule and Estimation

For test estimation, break down the project into smaller tasks and allocate time and
effort required for each.

Then, create a schedule to complete these tasks in the designated time with the
specific amount of effort.
Creating the schedule, however, does require input from multiple perspectives:

 Employee availability, number of working days, project deadlines, daily resource
availability.
 Risks associated with the project which have been evaluated in an earlier stage.

8. Establish Test Deliverables

Test Deliverables refer to a list of documents, tools, and other equipment that must
be created, provided, and maintained to support testing activities in a project.

A different set of deliverables is required before, during, and after testing.

Deliverables required before testing

Documentation on

 Test Plan
 Test Design

Deliverables required during testing

Documentation on

 Test Scripts
 Simulators or Emulators (in early stages)
 Test Data
 Error and execution logs

Deliverables required after testing


Documentation on

 Test Results
 Defect Reports
 Release Notes

A test plan in software testing is the backbone on which the entire project is built.
Without a sufficiently extensive and well-crafted plan, QAs are bound to get confused
by vague, undefined goals and deadlines. This hinders fast and accurate testing,
slowing down results and delaying release cycles.

Black Box Testing and Techniques


Black Box Testing is also known as behavioral, opaque-box, closed-box,
specification-based, or eye-to-eye testing. It is a software testing method that
analyzes the functionality of a software/application without knowing much about the
internal structure/design of the item being tested, and compares the input value
with the output value.

The main focus of Black Box Testing is on the functionality of the system as a whole.
The term ‘Behavioral Testing’ is also used for Black Box Testing.

Types of Black Box Testing

Practically, several types of Black Box Testing are possible, but the two mentioned
below are the fundamental ones.

#1) Functional Testing

This testing type deals with the functional requirements or specifications of an
application. Here, different actions or functions of the system are tested by providing
input and comparing the actual output with the expected output.

For example, when we test a Dropdown list, we click on it and verify if it expands and
all the expected values are showing in the list.

A few major types of Functional Testing are:

 Smoke Testing
 Sanity Testing
 Integration Testing
 System Testing
 Regression Testing
 User Acceptance Testing


#2) Non-Functional Testing


Apart from the functionalities of the requirements, there are even several non-
functional aspects that are required to be tested to improve the quality and
performance of the application.

Few major types of Non-Functional Testing include:

 Usability Testing
 Load Testing
 Performance Testing
 Compatibility Testing
 Stress Testing
 Scalability Testing

Black Box Testing Techniques

In order to systematically test a set of functions, it is necessary to design test cases.


Testers can create test cases from the requirement specification document using the
following Black Box Testing techniques:

 Equivalence Partitioning
 Boundary Value Analysis
 Decision Table Testing
 State Transition Testing
 Error Guessing
 Graph-Based Testing Methods
 Comparison Testing
Let’s understand each technique in detail.

#1) Equivalence Partitioning

This technique is also known as Equivalence Class Partitioning (ECP). In this


technique, input values to the system or application are divided into different classes
or groups based on its similarity in the outcome.

Hence, instead of using each and every input value, we can now use any one value
from the group/class to test the outcome. This way, we can maintain test coverage
while we can reduce the amount of rework and most importantly the time spent.

For Example:

Suppose the “AGE” text field accepts only numbers from 18 to 60. There will then be
three sets of classes or groups.

Two invalid classes will be:

a) Less than or equal to 17.

b) Greater than or equal to 61.

A valid class will be anything between 18 and 60.

We have thus reduced the test cases to only 3 test cases based on the formed
classes, thereby covering all the possibilities. Testing with any one value from each
class is sufficient to test the above scenario.
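
A minimal sketch of this AGE check in Python (the validate_age function is invented for illustration), with one representative value per equivalence class:

def validate_age(age):
    # Hypothetical system under test: accept only ages 18..60.
    return 18 <= age <= 60

# One representative value per equivalence class is enough.
assert validate_age(10) is False   # invalid class: <= 17
assert validate_age(35) is True    # valid class: 18..60
assert validate_age(75) is False   # invalid class: >= 61
print("All three equivalence classes behave as expected")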


#2) Boundary Value Analysis

The name itself defines that in this technique, we focus on the values at boundaries
as it is found that many applications have a high amount of issues on the boundaries.

Boundary refers to values near the limit where the behavior of the system changes.
In boundary value analysis, both valid and invalid inputs are being tested to verify the
issues.

For Example:

If we want to test a field where values from 1 to 100 should be accepted, then we
choose the boundary values: 1-1, 1, 1+1, 100-1, 100, and 100+1. Instead of using all
the values from 1 to 100, we just use 0, 1, 2, 99, 100, and 101.
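
Continuing the 1-to-100 example, here is a sketch that derives the six boundary values programmatically and tests a hypothetical accepts function:

def accepts(value, low=1, high=100):
    # Hypothetical system under test: accept values in [low, high].
    return low <= value <= high

low, high = 1, 100
boundary_values = [low - 1, low, low + 1, high - 1, high, high + 1]  # 0,1,2,99,100,101
expected = [False, True, True, True, True, False]

for value, exp in zip(boundary_values, expected):
    assert accepts(value) is exp, f"boundary defect at {value}"
print("Boundary values tested:", boundary_values)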

#3) Decision Table Testing

As the name itself suggests, wherever there are logical relationships like:

IF (Condition = True)
    THEN action1;
ELSE
    action2;   /* (Condition = False) */

Then a tester will identify two outputs (action1 and action2) for two conditions (True
and False). So based on the probable scenarios a Decision table is carved to
prepare a set of test cases.

For Example:

Take the example of XYZ bank that provides an interest rate of 10% for male senior
citizens and 9% for the rest of the people.

In this example there are two conditions: C1 (the customer is male) and C2 (the
customer is a senior citizen). C1 has two values, true and false, and C2 also has two
values, true and false. The total number of possible combinations is therefore four.
This way we can derive test cases using a decision table.
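
A sketch of this bank example as code (the rates follow the text; the interest_rate function is invented), enumerating all four rules of the decision table:

def interest_rate(is_male, is_senior):
    # Hypothetical rule from the text: 10% for male senior citizens, else 9%.
    return 10.0 if (is_male and is_senior) else 9.0

# Decision table: every combination of C1 (male) and C2 (senior) is one rule.
rules = {
    (True,  True):  10.0,  # male senior citizen
    (True,  False):  9.0,
    (False, True):   9.0,
    (False, False):  9.0,
}
for (is_male, is_senior), expected in rules.items():
    assert interest_rate(is_male, is_senior) == expected
print("All four decision-table rules verified")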

#4) State Transition Testing


State Transition Testing is a technique that is used to test the different states of the
system under test. The state of the system changes depending upon the conditions
or events. The events trigger states which become scenarios and a tester needs to
test them.

A systematic state transition diagram gives a clear view of the state changes but it is
effective for simpler applications. More complex projects may lead to more complex
transition diagrams thereby making it less effective.

For Example: A classic case is a login screen that locks the account after three
consecutive failed attempts; the system moves through the states “first attempt”,
“second attempt”, “third attempt”, and “locked”, and each transition between these
states needs to be tested.
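
A minimal sketch of that login example (the state names and the three-attempt limit are illustrative):

class Login:
    # Hypothetical system under test with explicit states.
    def __init__(self):
        self.failed = 0
        self.state = "awaiting_login"

    def attempt(self, password):
        if self.state == "locked":
            return self.state
        if password == "secret":
            self.state = "logged_in"
        else:
            self.failed += 1
            self.state = "locked" if self.failed >= 3 else "awaiting_login"
        return self.state

# Test the transition path: three failures must end in the locked state.
login = Login()
assert login.attempt("bad") == "awaiting_login"
assert login.attempt("bad") == "awaiting_login"
assert login.attempt("bad") == "locked"
assert login.attempt("secret") == "locked"  # locked state absorbs further events
print("State transitions behave as expected")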

#5) Error Guessing

This is a classic example of Experience-Based Testing.

In this technique, the tester can use his/her experience of the application’s behavior
and functionality to guess the error-prone areas. Many defects can be found using
error guessing in the areas where developers usually make mistakes.

A few common mistakes that developers usually forget to handle:

 Divide by zero.
 Handling null values in text fields.
 Accepting the Submit button without any value.
 File upload without attachment.
 File upload with less than or more than the limit size.
#6) Graph-Based Testing Methods

Every application is built up from a set of objects. All such objects are identified and a
graph is prepared. From this object graph, each object relationship is identified, and
test cases are written accordingly to discover the errors.

#7) Comparison Testing

In this method, different independent versions of the same software are compared to
each other for testing.

White Box Testing and Techniques


If we go by the definition, “White box testing” (also known as clear, glass box or
structural testing) is a testing technique which evaluates the code and the internal
structure of a program.

White box testing involves looking at the structure of the code. When you know the
internal structure of a product, tests can be conducted to ensure that the internal
operations are performed according to the specification and that all internal
components have been adequately exercised.

White Box Testing is coverage of the specification in the code:


1. Code Coverage

2. Segment Coverage: Ensure that each code statement is executed at least once.

3. Branch Coverage or Node Testing: Coverage of each code branch from all
possible ways.

4. Compound Condition Coverage: For multiple conditions, test each condition with
multiple paths and combinations of the different paths to reach that condition.

5. Basis Path Testing: Each independent path in the code is taken for testing.

6. Data Flow Testing (DFT): In this approach you track the specific variables through
each possible calculation, thus defining the set of intermediate paths through the
code. DFT tends to reflect dependencies, but mainly through sequences of data
manipulation. In short, each data variable is tracked and its use is verified. This
approach tends to uncover bugs like variables used but not initialized, or declared
but not used, and so on.

7. Path Testing: Path testing is where all possible paths through the code are defined
and covered. It is a time-consuming task.

8. Loop Testing: These strategies relate to testing single loops, concatenated loops,
and nested loops. Independent and dependent code loops and values are tested by
this approach.

3 Main White Box Testing Techniques:


 Statement Coverage
 Branch Coverage
 Path Coverage

Note that statement, branch, or path coverage does not identify any bug or defect
that needs to be fixed. It only identifies those lines of code which are never executed
or remain untouched. Further testing can then be focused on those areas.

Let’s understand these techniques one by one with a simple example.

#1) Statement coverage:

In a programming language, a statement is nothing but a line of code or an
instruction for the computer to understand and act upon. A statement becomes an
executable statement when it is compiled into object code, and it performs its action
when the program is running.

Hence “Statement Coverage”, as the name suggests, is the method of validating
whether each and every line of the code is executed at least once.

#2) Branch Coverage:

A “branch” in a programming language is like an “IF statement”. An IF statement has
two branches: True and False.
So in Branch Coverage (also called Decision Coverage), we validate whether each
branch is executed at least once.

In the case of an “IF statement”, there will be two test conditions: one to validate the
true branch, and the other to validate the false branch.

Hence, in theory, Branch Coverage is a testing method which, when executed,
ensures that each and every branch from each decision point is executed.

#3) Path Coverage

Path Coverage tests all the paths of the program. This is a comprehensive technique
which ensures that all the paths of the program are traversed at least once. Path
Coverage is even more powerful than Branch Coverage. This technique is useful for
testing complex programs.
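
A small illustrative function (invented for this sketch) makes the difference between the three coverage levels concrete: one test can reach 100% statement coverage while still leaving branches, and paths, untested.

def classify(a, b):
    result = []
    if a > 0:
        result.append("a positive")
    if b > 0:
        result.append("b positive")
    return result

# Statement coverage: classify(1, 1) alone executes every line once.
# Branch coverage: that call misses the False branch of both ifs;
#   adding classify(-1, -1) covers all four branches.
# Path coverage: two ifs yield four paths (TT, TF, FT, FF), so full path
#   coverage also needs classify(1, -1) and classify(-1, 1).
assert classify(1, 1) == ["a positive", "b positive"]
assert classify(-1, -1) == []
assert classify(1, -1) == ["a positive"]
assert classify(-1, 1) == ["b positive"]
print("Statement, branch, and path coverage all achieved")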

Difference Between White-Box and Black-Box Testing


To put it in simple terms:

Under Black box testing, we test the software from a user’s point of view, but in White
box, we see and test the actual code.

In Black box testing, we perform testing without seeing the internal system code, but
in WBT we do see and test the internal code.

The White Box Testing technique is used by both developers and testers. It helps
them understand which lines of code are actually executed and which are not. This
may indicate that there is missing logic or a typo, which can eventually lead to
negative consequences.

Defect Seeding
Defect seeding is a practice in which defects are intentionally inserted into a
program by one group for detection by another group. The ratio of the number
of seeded defects detected to the total number of defects seeded provides a
rough idea of the total number of unseeded defects that have been detected.

Suppose on GigaTron 3.0 that you intentionally seeded the program with 50
errors. For best effect, the seeded errors should cover the full breadth of the
product’s functionality and the full range of severities—ranging from crashing
errors to cosmetic errors.

Suppose that at a point in the project when you believe testing to be almost
complete you look at the seeded defect report. You find that 31 seeded
defects and 600 indigenous defects have been reported. You can estimate
the total number of defects with the formula:
IndigenousDefectsTotal = (SeededDefectsPlanted / SeededDefectsFound) * IndigenousDefectsFound

This technique suggests that GigaTron 3.0 has approximately 50 / 31 * 600 = 967
total defects.
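
A one-function sketch of this estimate (the numbers are taken from the GigaTron example above):

def estimate_total_indigenous(seeded_planted, seeded_found, indigenous_found):
    # Defect-seeding estimate: scale the indigenous defects found by the
    # fraction of seeded defects the testing managed to find.
    return seeded_planted / seeded_found * indigenous_found

total = estimate_total_indigenous(50, 31, 600)
remaining = total - 600  # defects estimated to still be undiscovered
print(f"Estimated total: {total:.1f}, still undiscovered: {remaining:.1f}")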

Process
The idea behind Fault Seeding is to simply insert artificial faults into the
software and after testing count the number of artificial and real faults
discovered by the software tests. The complete process of Fault Seeding
consists of the following five steps:

 Fault Modeling
 Fault Injection
 Test Execution
 Calculation
 Interpretation

Fault Modeling

The first step is to model some artificial faults. You can e.g. replace arithmetic,
relational or logical operators, remove function calls or change the datatype of
variables. There are many types of faults, be creative.

Fault Injection

These artificial faults are then seeded into the software. The number of seeded
faults is self-determined and depends on the size of the software. For example, if
your software has 1,000 lines of code, five artificial faults might be insufficient. An
exact rule to determine the number of artificial faults does not exist, sorry folks.

The injection of artificial faults into the master branch should be avoided. It is
recommended to create a new branch and do the fault seeding activities in
this branch.

Test Execution

Now your manual or automated tests are executed. The aim of the tests is to
discover as many faults as possible.

Calculation
Your tests hopefully discover a proper number of real and artificial faults.
Now, the number of discovered seeded faults, the number of seeded faults
and the number of discovered real faults are known.

Interpretation

The main objective of Fault Seeding is to evaluate both the software and test
quality. On the basis of this information you can identify deficiencies of your
tests and define and execute improvement activities to reduce the identified
deficiencies.

References:

https://www.arbourgroup.com/blog/2015/verification-vs-validation-whats-the-difference/

https://www.guru99.com/static-dynamic-testing.html

https://softwaretestingfundamentals.com/

https://www.browserstack.com/guide/test-planning

https://www.softwaretestinghelp.com/black-box-testing/

https://www.softwaretestinghelp.com/white-box-testing-techniques-with-example/

https://stevemcconnell.com/articles/gauging-software-readiness-with-defect-tracking/

https://medium.com/@michael_altmann/fault-seeding-why-its-a-good-idea-to-insert-bugs-in-your-software-part-1-245827e840b3

Group 2 Presentation

Alfuerto, Keith Yancy

Guiterez, Dylan

Limbing, Abegail

Mendoza, Kathleen

Saraga, Edsyl Jhon
