What is Verification?
Verification is a process that determines the quality of the software. It includes all
the activities associated with producing high-quality software: testing, inspection,
design analysis, specification analysis, and so on. Verification is a relatively
objective process, in that if the various processes and documents are expressed
precisely enough, no subjective judgement should be needed to verify software.
What is Validation?
Validation is a process in which the requirements of the customer are actually met by
the software functionality. Validation is done at the end of the development process
and takes place after verifications are completed.
Advantages of Validation:
During verification if some defects are missed, then during the validation
process they can be caught as failures.
If during verification some specification is misunderstood and development
has already occurred then during the validation process the difference
between the actual result and expected result can be identified and corrective
action taken.
Validation is done during testing like feature testing, integration testing,
system testing, load testing, compatibility testing, stress testing, etc.
Validation helps in building the right product as per the customer’s
requirement which in turn will satisfy their business process needs.
Verification can be performed on work products such as:
Requirement specifications
Design document
Source Code
Test Plans
Test Cases
Test Scripts
Help or User document
Web Page content
KEY DIFFERENCE
Static testing is done without executing the program, whereas Dynamic testing is
done by executing the program.
Static testing checks the code, requirement documents, and design documents to
find errors whereas Dynamic testing checks the functional behavior of software
system, memory/CPU usage and overall performance of the system.
Static testing is about the prevention of defects whereas Dynamic testing is about
finding and fixing the defects.
Static testing does the verification process while Dynamic testing does the validation
process.
Static testing is performed before compilation whereas Dynamic testing is performed
after compilation.
Static testing techniques include informal reviews, walkthroughs, and inspections,
while common Dynamic testing techniques include Boundary Value Analysis and
Equivalence Partitioning.
Validation planning; documentation for validation
Validation is the documented process of demonstrating that a system or process
meets a defined set of requirements. There are a common set of validation
documents used to provide this evidence. A validation project usually follows this
process:
Validation Planning – The decision is made to validate the system. A project lead is
identified, and validation resources are gathered.
Requirement Gathering – System Requirements are identified. Requirements are
documented in the appropriate specifications. Specification documents are reviewed
and approved.
System Testing – Testing Protocols are written, reviewed, and approved. The
protocol is executed to document that the system meets all requirements.
System Release – The Summary Report is written and system is released to the
end-users for use.
Change Control – If changes need to be made after validation is complete, Change
Control ensures that the system changes do not affect the system in unexpected
ways.
Different Types Of Software Testing
Unit Testing
Integration Testing
System Testing
Sanity Testing
Smoke Testing
Interface Testing
Regression Testing
Beta/Acceptance Testing
Non-functional Testing types include:
Performance Testing
Load Testing
Stress Testing
Volume Testing
Security Testing
Compatibility Testing
Install Testing
Recovery Testing
Reliability Testing
Usability Testing
Compliance Testing
Localization Testing
Different Kinds of Testing and their definitions:
#1) Alpha Testing
It is the most commonly used testing in the Software industry. The objective of this
testing is to identify all possible issues or defects before releasing the product into
the market or to the user.
Alpha Testing will be carried out at the end of the software development phase but
before the Beta Testing. Still, minor design changes may be made as a result of such
testing.
An Acceptance Test is performed by the client; it verifies whether the end-to-end
flow of the system works as per the business requirements and meets the needs of
the end-user.
The client accepts the software only when all the features and functionalities work
as expected. This is the last phase of testing, after which the software goes into
production. It is also called User Acceptance Testing (UAT).
The name itself suggests that Ad-hoc Testing is performed on an ad-hoc basis, i.e.,
with no reference to a test case and without any plan or documentation in place for
this type of testing.
The objective of this testing is to find the defects and break the application by
executing any flow of the application or any random functionality.
Accessibility Testing checks whether the application is usable by people with
disabilities such as deafness, color blindness, cognitive disabilities, blindness, and
old age. Various checks are performed, such as font size for the visually impaired
and color and contrast for color blindness.
Beta Testing is carried out to ensure that there are no major failures in the software
or product and it satisfies the business requirements from an end-user perspective.
Beta Testing is successful when the customer accepts the software.
This testing is typically done by end-users. It is the final testing done before
releasing the application for commercial purposes. Usually, the Beta version of the
software or product is released to a limited number of users in a specific area.
So the end-user actually uses the software and shares the feedback with the
company. The company then takes necessary action before releasing the software
worldwide.
There are different databases like SQL Server, MySQL, and Oracle, etc. Database
Testing involves testing of table structure, schema, stored procedure, data structure
and so on.
In Back-end Testing, the GUI is not involved; testers connect directly to the
database with proper access and can easily verify data by running a few queries on
the database.
Issues such as data loss, deadlock, and data corruption can be identified during
back-end testing, and these issues are critical to fix before the system goes live in
the production environment.
Browser Compatibility Testing is performed for web applications and ensures that the
software can run with a combination of different browsers and operating systems.
This type of testing also validates whether a web application runs on all versions of
all browsers or not.
Backward Compatibility Testing checks whether the new version of the software
works properly with the file format created by an older version of the software; it also
works well with data tables, data files, and data structure created by the older version
of that software.
If any software is updated, it should still work well on top of the previous version of
that software.
Internal system design is not considered in this type of testing. Tests are based on
the requirements and functionality.
This type of testing checks the behavior of the application at boundary level.
If testing requires a test range of numbers from 1 to 500 then Boundary Value
Testing is performed on values at 0, 1, 2, 499, 500 and 501.
This is a type of White Box Testing carried out during Unit Testing. As the name
suggests, Branch Testing exercises the code thoroughly by traversing every
branch.
Compatibility Testing validates how the software behaves and runs in different
environments, including web servers, hardware, and networks.
The aim of this testing is to remove redundant test cases within a specific group
that generate the same output without revealing any new defect.
Suppose the application accepts values between -10 and +10. Using equivalence
partitioning, the values picked for testing are zero, one positive value, and one
negative value, since the equivalence partitions for this scenario are -10 to -1, 0,
and 1 to 10.
Example Testing means testing with real-time scenarios; it also involves scenarios
based on the experience of the testers.
Exploratory Testing is informal testing performed by the testing team. The objective
of this testing is to explore the application and look for defects that exist in the
application.
Sometimes during this testing a major defect is discovered that can even cause a
system failure. During Exploratory Testing, it is advisable to keep track of what flow
you have tested and what activity you did before the start of a specific flow.
This type of testing ignores the internal parts and focuses only on the output to check
if it is as per the requirement or not.
This is a black-box type of testing geared toward the functional requirements of an
application.
#21) Graphical User Interface (GUI) Testing
The objective of GUI Testing is to validate the GUI as per the business
requirement. The expected GUI of the application is described in the Detailed
Design Document and GUI mockup screens.
GUI Testing includes checking the size of the buttons and input fields present on
the screen, and the alignment of all text, tables, and content in the tables.
It also validates the menus of the application: after selecting different menus and
menu items, it validates that the page does not fluctuate and that the alignment
remains the same after hovering over a menu or sub-menu.
In Gorilla Testing, one module or the functionality in the module is tested thoroughly
and heavily. The objective of this testing is to check the robustness of the application.
Happy Path Testing does not look for negative or error conditions; its focus is only
on valid and positive inputs through which the application generates the expected
output.
Testing of all integrated modules to verify the combined functionality after integration
is termed as Integration Testing.
Modules are typically code modules, individual applications, client and server
applications on a network, etc. This type of testing is especially relevant to the
client/server and distributed systems.
#27) Load Testing
Load Testing helps to find the maximum capacity of the system under specific load
and any issues that cause software performance degradation. Load testing is
performed using tools like JMeter, LoadRunner, WebLoad, Silk performer, etc.
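At a much smaller scale than the tools above, the idea of load testing can be sketched in Python: fire a batch of concurrent requests and measure how long the batch takes. Here `handle_request` is a stand-in for the real operation; an actual load test would hit the deployed system.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    # Stand-in for the operation under load; a real test would call the live system.
    time.sleep(0.01)
    return True

def run_load(concurrency, total_requests):
    """Fire total_requests calls at the given concurrency; return elapsed seconds."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(lambda _: handle_request(), range(total_requests)))
    assert all(results)  # every simulated request must succeed under load
    return time.perf_counter() - start

elapsed = run_load(concurrency=10, total_requests=50)
print(f"50 requests at concurrency 10 took {elapsed:.2f}s")
```

Real tools add ramp-up profiles, latency percentiles, and distributed load generation on top of this basic loop.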
Monkey Testing is carried out by a tester who enters random inputs and values, as
a monkey would, without any knowledge or understanding of the application.
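A rough sketch of the idea in Python, feeding random printable strings to a hypothetical `parse_age` function; the only expectation is that the application never crashes, whatever the monkey types:

```python
import random
import string

def parse_age(text):
    # Hypothetical system under test: accepts only digit strings in the range 18-60.
    if text.isdigit() and 18 <= int(text) <= 60:
        return int(text)
    return None

def random_input(length=10):
    # The "monkey": random printable characters, no understanding of the app.
    return "".join(random.choice(string.printable) for _ in range(length))

random.seed(0)  # reproducible monkey
for _ in range(1000):
    parse_age(random_input())  # must never raise, no matter the input
```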
Mutation Testing is a type of White Box Testing in which the source code of a
program is changed slightly to verify whether the existing test cases can identify
the resulting defect.
The change to the source code is minimal, so it does not impact the entire
application; only a specific area is affected, and the related test cases should be
able to detect those errors in the system.
Testers approach the system with an "attitude to break" mindset and use Negative
Testing to validate how it handles failure.
The Negative Testing technique is performed using incorrect or invalid data or
input. It validates whether the system throws an error for invalid input and behaves
as expected.
This is the type of testing for which every organization has a separate team which is
usually called a Non-Functional Test (NFT) team or Performance team.
No page or system should take long to load, and performance should be sustained
during peak load.
#32) Performance Testing
This term is often used interchangeably with ‘stress’ and ‘load’ testing.
Performance Testing is done to check whether the system meets the performance
requirements. Different performance and load tools are used to do this testing.
It is a type of testing which validates how well the application or system recovers
from crashes or disasters.
Recovery Testing determines if the system is able to continue its operation after a
disaster. Assume that the application is receiving data through a network cable and
suddenly that network cable has been unplugged.
Sometime later, plug the network cable back in; the system should then resume
receiving data from where it lost the connection when the cable was unplugged.
Priority decisions are based on business needs, so once priority is set for all
functionalities, then high priority functionality or test cases are executed first followed
by medium and then low priority functionalities.
Low priority functionality may be tested or not tested based on the available time.
Risk-Based Testing is carried out if there is insufficient time available to test the
entire software and the software needs to be implemented on time without any delay.
This approach is followed only by the discussion and approval of the client and senior
management of the organization.
If an application crashes on initial use, the system is not stable enough for further
testing; hence the build is sent back to the development team to be fixed.
#37) Security Testing
Security Testing is done to check how secure the software, application, or website
is from internal and external threats. This testing checks how well the software is
protected from malicious programs and viruses, and how secure and strong the
authorization and authentication processes are.
It also checks how the software behaves under a hacker attack or malicious
programs, and how data security is maintained after such an attack.
Whenever a new build is provided by the development team, then the Software
Testing team validates the build and ensures that no major issue exists.
The testing team ensures that the build is stable before detailed testing is carried
out. Smoke Testing checks that no show-stopper defects exist in the build that
would prevent the testing team from testing the application in detail.
If the testers find that major critical functionality is broken at this initial stage, the
testing team can reject the build and inform the development team accordingly.
Smoke Testing is carried out prior to any detailed Functional or Regression
Testing.
Static Testing is a type of testing performed without executing any code; it is
carried out on the documentation during the testing phase.
Static Testing is also applicable to test cases, test plans, and design documents.
Performing static testing with the testing team is worthwhile, because defects
identified during this type of testing are cost-effective to fix from a project
perspective.
This testing is done by stressing a system beyond its specifications in order to
check how and when it fails.
It is performed under heavy load, such as inputting data beyond storage capacity,
running complex database queries, or feeding continuous input to the system or
database.
The application flow is tested to see if a new user can understand the application
easily or not. Proper help is documented if a user gets stuck at any point. Basically,
system navigation is checked in this testing.
The testing that involves identifying weaknesses in the software, hardware, and
network is known as Vulnerability Testing. If the system is vulnerable to such
attacks, viruses, or worms, a hacker can take control of it through malicious
programs.
In Volume Testing, the software or application is subjected to a huge amount of
data, and the behavior and response time of the system are checked when it
encounters such a high volume of data.
This high volume of data may impact the system's performance and processing
speed.
White Box Testing is based on the knowledge about the internal logic of an
application’s code.
It is also known as Glass box Testing. Internal software and code work should be
known to perform this type of testing. Under this, tests are based on the coverage of
code statements, branches, paths, conditions, etc.
Software that does not work correctly can lead to many problems, such as loss of
money, loss of time, or damage to business reputation. Testing helps ensure that
software works correctly and reduces the risk of software failure, thereby avoiding
such problems.
Defect Detection: Find defects / bugs in the software during all stages of its
development (earlier, the better).
Defect Prevention: As a consequence of defect detection, help anticipate and
prevent defects from occurring at later stages of development or from recurring
in the future.
User Satisfaction: Ensure customers / users are satisfied that their requirements
(explicit or implicit) are met.
1. Product Analysis
2. Designing Test Strategy
3. Defining Objectives
4. Establish Test Criteria
5. Planning Resource Allocation
6. Planning Setup of Test Environment
7. Determine Test Schedule and Estimation
8. Establish Test Deliverables
1. Product Analysis
Start with learning more about the product being tested, the client, and the
end-users of similar products. Ideally, this phase answers questions about what
the product does, who will use it, and how they will use it.
3. Defining Objectives
This phase defines the goals and expected results of test execution, since all
testing intends to identify as many defects as possible.
4. Establish Test Criteria
Test Criteria refers to standards or rules governing all activities in a testing project.
The two main test criteria are:
Suspension Criteria: Defines the benchmarks for suspending all tests. For
example, if QA team members find that 50% of all test cases have failed, then all
testing is suspended until the developers resolve all of the bugs that have been
identified so far.
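The suspension benchmark above can be expressed as a simple check; the 50% threshold here is an assumption taken from that example, and real projects set their own benchmark:

```python
def suspend_testing(failed, executed, threshold=0.5):
    # Suspend all testing once the failure ratio reaches the agreed benchmark.
    return failed / executed >= threshold

assert suspend_testing(50, 100) is True   # half the cases failed: suspend
assert suspend_testing(10, 100) is False  # well under the benchmark: keep testing
```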
Exit Criteria: Defines the benchmarks that signify the successful completion of a
test phase or project. The exit criteria are the expected results of tests and must
be met before moving on to the next stage of development. For example, 80% of
all test cases must be marked successful before a particular feature or portion of
the software can be considered suitable for public use.
5. Planning Resource Allocation
This phase creates a detailed breakdown of all resources required for project
completion. Resources include human effort, equipment, and all infrastructure
required for accurate and comprehensive testing.
This part of the test plan decides the amount of resources (number of testers and
equipment) the project requires. This also helps test managers formulate a
correctly calculated schedule and estimate for the project.
6. Planning Setup of Test Environment
The test environment refers to the software and hardware setup on which QAs run
their tests. Ideally, test environments should be real devices with real browsers
and operating systems, so that testers can monitor software behavior in real user
conditions; whether the testing is manual or automated, emulators and simulators
may not reproduce those conditions faithfully.
7. Determine Test Schedule and Estimation
For test estimation, break down the project into smaller tasks and allocate time and
effort required for each.
Then, create a schedule to complete these tasks in the designated time with the
specific amount of effort.
Creating the schedule, however, does require input from multiple perspectives.
8. Establish Test Deliverables
Test Deliverables refer to a list of documents, tools, and other equipment that must
be created, provided, and maintained to support testing activities in a project.
Test Plan documentation
Test Design documentation
Test Scripts
Simulators or Emulators (in early stages)
Test Data
Error and execution logs
Test Results
Defect Reports
Release Notes
A test plan in software testing is the backbone on which the entire project is built.
Without a sufficiently extensive and well-crafted plan, QAs are bound to get
confused by vague, undefined goals and deadlines. This hinders fast and accurate
testing, slows down results, and delays release cycles.
The main focus of Black Box Testing is on the functionality of the system as a whole.
The term ‘Behavioral Testing’ is also used for Black Box Testing.
In practice, several types of Black Box Testing are possible, but the two
fundamental variants are Functional Testing and Non-Functional Testing.
For example, when we test a dropdown list, we click on it and verify that it expands
and that all the expected values appear in the list.
Smoke Testing
Sanity Testing
Integration Testing
System Testing
Regression Testing
User Acceptance Testing
Usability Testing
Load Testing
Performance Testing
Compatibility Testing
Stress Testing
Scalability Testing
Equivalence Partitioning
Boundary Value Analysis
Decision Table Testing
State Transition Testing
Error Guessing
Graph-Based Testing Methods
Comparison Testing
Let’s understand each technique in detail.
Hence, instead of using each and every input value, we can use any one value
from the group/class to test the outcome. This way we maintain test coverage
while reducing rework and, most importantly, the time spent.
For Example:
For example, suppose an "AGE" text field accepts only numbers from 18 to 60.
There will be three classes: values below 18 (invalid), values from 18 to 60 (valid),
and values above 60 (invalid).
We have thus reduced the test cases to only three, covering all the possibilities;
testing with any one value from each class is sufficient to test the above scenario.
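A minimal sketch of the idea in Python; the `age_class` validator is hypothetical, standing in for the AGE field's logic, and one representative value per class covers all three partitions:

```python
def age_class(value):
    # Hypothetical validator for the AGE field: 18-60 is valid.
    if value < 18:
        return "invalid: below 18"
    elif value <= 60:
        return "valid: 18-60"
    else:
        return "invalid: above 60"

# One representative per equivalence class is enough to cover every partition.
representatives = {
    5: "invalid: below 18",
    30: "valid: 18-60",
    75: "invalid: above 60",
}
for value, expected in representatives.items():
    assert age_class(value) == expected
```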
The name itself indicates that in this technique we focus on the values at
boundaries, as it is found that many applications have a high number of issues at
the boundaries.
Boundary refers to values near the limit where the behavior of the system
changes. In boundary value analysis, both valid and invalid inputs are tested to
verify the issues.
For Example:
If we want to test a field where values from 1 to 100 should be accepted, then we
choose the boundary values: 1-1, 1, 1+1, 100-1, 100, and 100+1. Instead of using all
the values from 1 to 100, we just use 0, 1, 2, 99, 100, and 101.
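The six-value rule (limit - 1, limit, limit + 1 at each boundary) is easy to mechanize with a small helper; this is a sketch, not tied to any particular tool:

```python
def boundary_values(lower, upper):
    # Classic six boundary-value test inputs for an inclusive [lower, upper] range.
    return [lower - 1, lower, lower + 1, upper - 1, upper, upper + 1]

print(boundary_values(1, 100))  # [0, 1, 2, 99, 100, 101]
```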
As the name itself suggests, wherever there are logical relationships like:
if (Condition = True)
    then action1;
else
    action2;    /* Condition = False */
Then a tester identifies two outputs (action1 and action2) for the two conditions
(True and False). Based on the probable scenarios, a decision table is built to
prepare a set of test cases.
For Example:
Take the example of XYZ bank, which offers an interest rate of 10% to male senior
citizens and 9% to everyone else.
In this example, condition C1 (the customer is male) has two values, true and
false, and condition C2 (the customer is a senior citizen) also has two values, true
and false. The total number of possible combinations is then four. This way we can
derive test cases using a decision table.
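The four rule combinations can be written out directly as test cases; the `interest_rate` helper is hypothetical, encoding the 10%/9% rates from the example:

```python
def interest_rate(is_male, is_senior):
    # Decision table outcome: 10% for male senior citizens, 9% for everyone else.
    if is_male and is_senior:
        return 10
    return 9

# One test case per rule in the decision table (C1 = male, C2 = senior citizen).
cases = [
    (True,  True,  10),
    (True,  False, 9),
    (False, True,  9),
    (False, False, 9),
]
for c1, c2, expected in cases:
    assert interest_rate(c1, c2) == expected
```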
A systematic state transition diagram gives a clear view of state changes, but it is
effective only for simpler applications. More complex projects lead to more
complex transition diagrams, making the technique less effective.
In this technique, the tester uses his or her experience of the application's behavior
and functionality to guess the error-prone areas. Many defects can be found using
error guessing in areas where developers commonly make mistakes, such as:
Divide by zero.
Handling null values in text fields.
Accepting the Submit button without any value.
File upload without attachment.
File upload with less than or more than the limit size.
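A few of the guesses above translate directly into checks; the `submit` handler here is hypothetical, standing in for a form's submit logic:

```python
def submit(value):
    # Hypothetical form handler: rejects missing or empty input.
    if value is None or value == "":
        raise ValueError("value is required")
    return f"accepted: {value}"

# Error-guessing checks drawn from the list above.
for bad in (None, ""):  # submit without any value / null value in a text field
    try:
        submit(bad)
        raise AssertionError("expected ValueError for missing input")
    except ValueError:
        pass

try:
    1 / 0  # divide by zero: the classic guessed error
except ZeroDivisionError:
    pass

assert submit("report.pdf") == "accepted: report.pdf"
```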
#6) Graph-Based Testing Methods
Every application is built up of objects. All such objects are identified and a graph
is prepared. From this object graph, each object relationship is identified, and test
cases are written accordingly to discover the errors.
#7) Comparison Testing
In this method, different independent versions of the same software are compared
against each other for testing.
White Box Testing involves looking at the structure of the code. When you know
the internal structure of a product, tests can be conducted to ensure that the
internal operations are performed according to the specification and that all
internal components have been adequately exercised.
5. Basis Path Testing: Each independent path in the code is taken for testing.
6. Data Flow Testing (DFT): In this approach, you track specific variables through
each possible calculation, thus defining the set of intermediate paths through the
code. DFT tends to reflect dependencies, mainly through sequences of data
manipulation. In short, each data variable is tracked and its use is verified. This
approach tends to uncover bugs like variables used but not initialized, or declared
but never used, and so on.
7. Path Testing: Path testing is where all possible paths through the code are defined
and covered. It’s a time-consuming task.
Note that statement, branch, or path coverage does not identify any bug or defect
that needs to be fixed; it only identifies the lines of code that are never executed or
remain untouched. Based on this, further testing can be focused.
Path coverage tests all the paths of the program. This comprehensive technique
ensures that every path through the program is traversed at least once. Path
coverage is even more powerful than branch coverage and is useful for testing
complex programs.
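The difference from branch coverage shows up even on a toy function with two independent decisions (a sketch): two tests can cover all four branch outcomes, but path coverage requires all four combinations.

```python
def classify(x, y):
    # Two independent branches => 2 x 2 = 4 distinct execution paths.
    result = []
    result.append("x+" if x > 0 else "x-")
    result.append("y+" if y > 0 else "y-")
    return result

# Branch coverage is satisfied by just (1, 1) and (-1, -1),
# but path coverage needs every combination of branch outcomes:
paths = {tuple(classify(x, y)) for x, y in [(1, 1), (1, -1), (-1, 1), (-1, -1)]}
assert len(paths) == 4
```

With n independent decisions the path count grows as 2^n, which is why full path coverage is described above as time-consuming.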
Under Black Box Testing, we test the software from a user's point of view, without
seeing the internal system code; in White Box Testing, we see and test the actual
internal code.
White box testing technique is used by both the developers as well as testers. It
helps them to understand which line of code is actually executed and which is not.
This may indicate that there is either a missing logic or a typo, which eventually can
lead to some negative consequences.
Defect Seeding
Defect seeding is a practice in which defects are intentionally inserted into a
program by one group for detection by another group. The ratio of the number
of seeded defects detected to the total number of defects seeded provides a
rough idea of the total number of unseeded defects that have been detected.
Suppose on GigaTron 3.0 that you intentionally seeded the program with 50
errors. For best effect, the seeded errors should cover the full breadth of the
product’s functionality and the full range of severities—ranging from crashing
errors to cosmetic errors.
Suppose that at a point in the project when you believe testing to be almost
complete you look at the seeded defect report. You find that 31 seeded
defects and 600 indigenous defects have been reported. You can estimate
the total number of defects with the formula:
IndigenousDefectsTotal = (SeededDefectsPlanted / SeededDefectsFound) ×
IndigenousDefectsFound
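Plugging in the numbers from the example (50 seeded defects, 31 of them found, 600 indigenous defects reported):

```python
def estimate_indigenous_total(seeded_planted, seeded_found, indigenous_found):
    # McConnell's defect-seeding estimate of the total indigenous defect count.
    return seeded_planted / seeded_found * indigenous_found

estimate = estimate_indigenous_total(50, 31, 600)
print(round(estimate))  # roughly 968 indigenous defects estimated in total
```

Since 600 indigenous defects have already been reported, the estimate suggests roughly 368 remain undetected.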
Process
The idea behind Fault Seeding is to simply insert artificial faults into the
software and after testing count the number of artificial and real faults
discovered by the software tests. The complete process of Fault Seeding
consists of the following five steps:
Fault Modeling
Fault Injection
Test Execution
Calculation
Interpretation
Fault Modeling
The first step is to model some artificial faults. For example, you can replace
arithmetic, relational, or logical operators, remove function calls, or change the
datatype of variables. There are many types of faults; be creative.
Fault Injection
These artificial faults are then seeded into the software. The number of seeded
faults is self-determined and depends on the size of the software; for example, if
your software has 1,000 lines of code, five artificial faults might be insufficient. An
exact rule for determining the number of artificial faults does not exist, sorry folks.
The injection of artificial faults into the master branch should be avoided. It is
recommended to create a new branch and do the fault seeding activities in
this branch.
Test Execution
Now your manual or automated tests are executed. The aim of the tests is to
discover as many faults as possible.
Calculation
Your tests will hopefully discover a good number of real and artificial faults. Now
the number of discovered seeded faults, the total number of seeded faults, and the
number of discovered real faults are all known.
Interpretation
The main objective of Fault Seeding is to evaluate both the software and the test
quality. On the basis of this information, you can identify deficiencies in your tests
and define and execute improvement activities to reduce them.
References:
https://www.arbourgroup.com/blog/2015/verification-vs-validation-whats-the-difference/
https://www.guru99.com/static-dynamic-testing.html
https://softwaretestingfundamentals.com/
https://www.browserstack.com/guide/test-planning
https://www.softwaretestinghelp.com/black-box-testing/
https://www.softwaretestinghelp.com/white-box-testing-techniques-with-example/
https://stevemcconnell.com/articles/gauging-software-readiness-with-defect-tracking/
https://medium.com/@michael_altmann/fault-seeding-why-its-a-good-idea-to-insert-bugs-in-your-software-part-1-245827e840b3
Group 2 Presentation
Guiterez, Dylan
Limbing, Abegail
Mendoza, Kathleen