
Manual Testing

Table of Contents
• Software Development Life Cycle
• Software Testing
• SDLC Models
o Waterfall Model in SDLC
o Spiral Model in SDLC
o V Model in SDLC
o Agile Scrum Methodology
• Software Testing Life Cycle
• SDLC Vs STLC
• Principles of Software Testing
• Verification And Validation

Software Development Life Cycle
Software Testing
• Software testing is the process of evaluating the functionality of a software application to check whether the developed software meets the specified requirements, and of identifying defects so that the product is defect-free, in order to produce a quality product.

• Software testing is needed for:
1. Cost-effectiveness
2. Customer Satisfaction
3. Security
4. Product Quality
SDLC Models - Waterfall Model

SDLC Models - Spiral Model

SDLC Models - V Model
SDLC Models - Agile Scrum Methodology
Roles
• Product owner
• Scrum Master
• Scrum Development Team
Artifacts
• User Stories
• Product Backlog
• Sprint Backlog
• Product Burndown Chart
• Sprint Burndown Chart
• Defect Burndown Chart
• Release Burndown Chart
Meetings
• Sprint Planning Meeting
• Daily Scrum Meeting
• Sprint Review Meeting
• Sprint Retrospective Meeting
Software Testing Life Cycle

SDLC Vs STLC
Principles of Software Testing
1. Testing shows presence of defects
2. Exhaustive testing is impossible
3. Early testing
4. Defect clustering
5. Pesticide paradox
6. Testing is context dependent
7. Absence-of-errors fallacy
Verification Vs Validation
Types of Software Testing
• Functional Testing
• Non-Functional Testing
• Static Testing - Review, Walk Through, Inspection, Analysis
• Dynamic Testing
• Positive Testing
• Negative Testing
• End-To-End Testing
• Regression Testing
• Smoke Testing
• Sanity Testing
Functional Testing/Levels of Testing
• Unit Testing
• Integration Testing
• System Testing
• Acceptance Testing
Non-Functional Testing
• Performance Testing
• Compatibility Testing
• Usability Testing
• Security Testing
• Localization Testing
Static Testing

Dynamic Testing
Black Box Testing Techniques
• Equivalence Partitioning Testing Technique
• Boundary Value Analysis Testing Technique
• Decision Table Testing Technique
• State Transition Testing Technique
Equivalence Partitioning Testing Technique
• In equivalence partitioning, inputs to the software or system are divided into groups that are expected to exhibit similar behavior, so they are likely to be processed in the same way. Hence, one input from each group is selected to design the test cases.
• Examples: Test a field which accepts Age 18 - 56; test a field which accepts a Mobile Number of ten digits.
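A minimal Python sketch of this technique for the Age 18 - 56 example; the accepts_age function is a stand-in for the real application logic:

```python
# Equivalence partitioning for an Age field that accepts 18-56.
def accepts_age(age: int) -> bool:
    return 18 <= age <= 56

# One representative value per partition is enough to design a test case.
partitions = {
    "invalid: below 18": (10, False),
    "valid: 18-56":      (30, True),
    "invalid: above 56": (70, False),
}

for name, (value, expected) in partitions.items():
    assert accepts_age(value) == expected, name
print("All equivalence partition checks passed")
```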
Boundary Value Analysis
• Boundary value analysis (BVA) is based on testing the boundary values of valid and invalid partitions. The behavior at the edge of each equivalence partition is more likely to be incorrect than the behavior within the partition, so boundaries are an area where testing is likely to yield defects.
• Every partition has its maximum and minimum values, and these maximum and minimum values are the boundary values of the partition.
• A boundary value for a valid partition is a valid boundary value. Similarly, a boundary value for an invalid partition is an invalid boundary value.
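A minimal Python sketch for the same Age field, checking the values on and around each boundary:

```python
# Boundary value analysis for the valid partition 18-56.
def accepts_age(age: int) -> bool:
    return 18 <= age <= 56

#                  value, expected
boundary_cases = [(17, False), (18, True), (19, True),   # lower boundary
                  (55, True),  (56, True), (57, False)]  # upper boundary

for value, expected in boundary_cases:
    assert accepts_age(value) == expected, f"age={value}"
print("All boundary value checks passed")
```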
Decision Table Testing Technique
• Decision Table is also known as a Cause-Effect Table. This test technique is appropriate for functionalities which have logical relationships between inputs (if-else logic).
• In the decision table technique, we deal with combinations of inputs.
• To identify the test cases with a decision table, we consider conditions and actions. We take conditions as inputs and actions as outputs.
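A minimal Python sketch, assuming a simple hypothetical login rule where both the username and the password must be valid:

```python
# Decision table: conditions (username valid?, password valid?) are the
# inputs, the action (login allowed?) is the output. Each rule in the
# table becomes one test case.
def login_allowed(username_ok: bool, password_ok: bool) -> bool:
    return username_ok and password_ok  # assumed business rule

decision_table = {
    # (username_ok, password_ok): expected action
    (True,  True):  True,
    (True,  False): False,
    (False, True):  False,
    (False, False): False,
}

for conditions, expected_action in decision_table.items():
    assert login_allowed(*conditions) == expected_action, conditions
print("All decision table rules verified")
```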
State Transition Testing Technique
• Using state transition testing, we pick test cases from an application where we need to test different system transitions.
• We can apply this when an application gives a different output for the same input, depending on what has happened in an earlier state.
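A minimal Python sketch, assuming a hypothetical ATM PIN check that blocks the card after three wrong attempts; the same input ("wrong PIN") yields a different state depending on the history:

```python
# State transition testing: same input, different outcome per state.
class AtmPin:
    def __init__(self):
        self.state = "READY"   # READY -> RETRY -> RETRY -> BLOCKED
        self.attempts = 0

    def enter_pin(self, correct: bool) -> str:
        if self.state == "BLOCKED":
            return "BLOCKED"
        if correct:
            self.state = "AUTHENTICATED"
        else:
            self.attempts += 1
            self.state = "BLOCKED" if self.attempts >= 3 else "RETRY"
        return self.state

atm = AtmPin()
assert atm.enter_pin(False) == "RETRY"    # 1st wrong PIN
assert atm.enter_pin(False) == "RETRY"    # 2nd wrong PIN
assert atm.enter_pin(False) == "BLOCKED"  # 3rd wrong PIN blocks the card
print("Transition path READY -> RETRY -> RETRY -> BLOCKED verified")
```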
White Box Testing Techniques
• Statement Coverage
• Decision/Branch Coverage
• Path Coverage
Statement Coverage
• This technique is used to make sure that each line of source code has been executed and tested at least once. Covering all lines of code points out the buggy code.
• Statement Coverage = (Number of statements executed / Total number of executable statements) x 100%
Decision/Branch Coverage
• This technique checks every possible path (if-else and other conditional loops) of a software application.
• Branch Coverage = (Number of decision outcomes tested / Total number of decision outcomes) x 100%
Path Coverage
• Path coverage is where all possible paths through the code are defined and covered.
• It's a time-consuming task.
• Path Coverage = (Number of paths exercised / Total number of paths in the program) x 100%
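A small Python illustration of the difference between the statement and branch coverage formulas above, on a toy function:

```python
# The single test apply_discount(100, True) executes every statement
# (100% statement coverage) but exercises only one of the two decision
# outcomes (50% branch coverage); the is_member=False case is also
# needed to reach 100% branch coverage.
def apply_discount(price: float, is_member: bool) -> float:
    if is_member:            # decision with two outcomes: True / False
        price = price * 0.9  # only reached on the True outcome
    return price

assert apply_discount(100, True) == 90.0    # covers all statements
assert apply_discount(100, False) == 100.0  # covers the remaining branch
print("100% statement and branch coverage achieved")
```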
Regression Testing
• Repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the changes, either in the software being tested or in other related or unrelated software components.
Retesting
• Retesting is running the previously failed test cases again on the new software to verify whether the defects posted earlier are fixed or not.
• When do we do retesting?
o When there is a particular bug fix specified in the Release Note
o When a bug is rejected
o When a client calls for retesting
Acceptance Testing
• Alpha Testing: performed onsite, in the developer's test environment, by users outside the development organization.
• Beta Testing: performed at the client side by real users or customers outside the development organization.
Test Deliverables
Test Strategy
• Test Strategy is a high-level (static) document, usually developed by the project manager. It captures the approach on how we go about testing the product and achieving the goals.
• It is normally derived from the Business Requirement Specification (BRS).
• Documents like the Test Plan are prepared keeping this document as the base.
Contents of the Test Strategy document
• Scope and overview
• Test Approach
• Test Levels
• Test Types
• Roles and responsibilities
• Environment requirements
• Testing tools
• Industry standards to follow
• Test deliverables
• Testing metrics
• Requirement Traceability Matrix
• Risk and mitigation
• Reporting tool
• Test Summary
Test Plan
• The Test Plan document contains the plan for all the testing activities to be done to deliver a quality product.
• The Test Plan is derived from the Product Description, SRS, or Use Case documents for all future activities of the project. It is usually prepared by the Test Lead or Test Manager.

Contents of the Test Plan document
o Test Plan Identifier
o References
o Introduction
o Test Items
o Features To Be Tested
o Features Not To Be Tested
o Approach
o Pass/Fail Criteria
o Suspension Criteria
o Test Deliverables
o Testing Tasks
o Environmental Needs
o Responsibilities
o Staffing and Training Needs
o Schedule
o Risks and Contingencies
o Approvals
Entry and Exit Criteria
• Entry Criteria gives the prerequisite items that must be completed before testing can begin. Examples:
o Complete or partially testable code is available.
o Requirements are defined and approved.
o Availability of sufficient and desired test data.
o Test cases are developed and ready.
o Test environment has been set up and all other necessary resources such as tools and devices are available.
• Exit Criteria defines the items that must be completed before testing can be concluded. Examples:
o Deadlines met or budget depleted.
o Execution and updating of all test cases.
o Desired and sufficient coverage of the requirements and functionalities under test.
o All the identified defects are corrected and closed.
o No high priority, high severity, or critical bug has been left out.
o Defects are maintained with their current status.
Test Scenario and Test Cases
• Test scenarios are high-level descriptions of how a user will interact with an application during software testing.
• Test cases are detailed descriptions of how to test an application.

TEST CASE Vs TEST SCENARIO
• A test case consists of a test case name, precondition, test steps, expected result, and postcondition. A test scenario is a one-liner, but it is associated with multiple test cases.
• A test case guides a user on "how to test"; a test scenario guides a user on "what to test".
• The purpose of a test case is to validate the test scenario by executing a set of steps. The purpose of a test scenario is to test the end-to-end functionality of a software application.
• Creating test cases is important when working with testers off-site. Creating test scenarios helps in a time-sensitive situation (especially when working in Agile).
• Software applications change often, which leads to redesigning pages and adding new functionalities, making test cases hard to maintain. Test scenarios are easy to maintain due to their high-level design.
• Test cases consume more time than test scenarios. Test scenarios consume less time than test cases.
• More resources are required to create and execute test cases. Relatively fewer resources are enough to create and test using test scenarios.
• Test cases help in exhaustive testing of the application. Test scenarios help in an agile way of testing end-to-end functionality.
• Test cases are derived from test scenarios. Test scenarios are derived from use cases.
• Test cases are low-level actions. Test scenarios are high-level actions.
• Test cases are written by testers. Test scenarios are written by Test Leads, Business Analysts, and testers.
• A test case may or may not be associated with multiple test scenarios. Test scenarios have multiple test cases.
Login Page
Scenarios:
• Verify that user is able to login to the application successfully
• Verify that user can signup/create new account in the application
• Verify that user is navigated to home page upon login
• Verify that user can place an order
• Verify that user can search for products
• Verify that user can check order history
Testcases:
• Login:
o Verify that the user is able to login with valid credentials
o Verify that the user is not able to login with an invalid username and invalid password
o Verify that the user is not able to login with a valid username and invalid password
o Verify that the user is not able to login with an invalid username and valid password
o Verify that the user is not able to login with a blank username or password
o Verify that the user is not able to login with inactive credentials
• Order Placement:
o Verify that user can add products to cart
o Verify that total amount is displayed in cart for all products
o Verify that different payment methods are available for user
o Verify that user can add shipping address
Test Scenarios for GMail
• Verify that all the read and unread emails are displayed in the inbox.
• Verify that the recently received email or unread emails are highlighted in bold in the Inbox section.
• Verify that the recently received email has the correct sender's name or email id, subject of the email, its preview, and date or time.
• Verify that the recently received email's sender's name or email id, subject of the email, and date or time are in bold, and the preview text is not in bold.
Test Cases for ATM
• Verify the 'ATM Card Insertion Slot' is as per the specification
• Verify the ATM machine accepts card and PIN details
• Verify the error message by inserting a card incorrectly
• Verify the error message by inserting an invalid card (expired card)
• Verify the error message by entering an incorrect PIN
• Verify that the user is asked to enter the PIN after inserting a valid ATM card
Error/Defect/Bug/Failure
• A mistake in coding is called an error; an error found by a tester is called a defect; a defect accepted by the development team is called a bug; and when the build does not meet the requirements, it is a failure.
Defect Life Cycle
Defect Report Template
Severity and Priority Levels
Severity
• How the bug impacts the application: how critical the defect is and what impact it has on the whole system's functionality. Severity is a parameter set by the tester when opening a defect and is mainly in the tester's control.
• Types:
o Critical: This defect indicates complete shutdown of the process; nothing can proceed further.
o Major: A highly severe defect that collapses the system; however, certain parts of the system remain functional.
o Medium: It causes some undesirable behavior, but the system is still functional.
o Low: It won't cause any major breakdown of the system.
Priority
• It defines the order in which the defects should be resolved: if there are multiple defects, the priority decides which defect has to be fixed and verified immediately versus which defect can be fixed a bit later. It is usually set by the lead.
• Types:
o High: The defect must be resolved as soon as possible, as it affects the system severely and the system cannot be used until it is fixed.
o Medium: The defect should be resolved during the normal course of development activities; it can wait until a new version is created.
o Low: The defect is an irritant, but repair can be done once the more serious defects have been fixed.
Severity Vs Priority
• Severity is a parameter to denote the impact of a particular defect on the software. Priority is a parameter to decide the order in which defects should be fixed.
• Severity means how severely the defect affects the functionality. Priority means how fast the defect has to be fixed.
• Severity is related to the quality standard. Priority is related to scheduling to resolve the problem.
• The testing engineer decides the severity level of the defect. The product manager decides the priorities of defects.
• Severity's value is objective. Priority's value is subjective.
• Severity's value doesn't change from time to time. Priority's value changes from time to time.
• Severity is of 5 types: Critical, Major, Moderate, Minor, and Cosmetic. Priority is of 3 types: Low, Medium, and High.
Severity Vs Priority Examples
• High Priority, Low Severity: If the company name is misspelled on the home page of the website, the priority to fix it is high and the severity is low.
• High Severity, Low Priority: Web page not found when the user clicks on a link (users do not generally visit that page).
• Low Priority, Low Severity: Any cosmetic or spelling issue within a paragraph or in a report.
• High Priority, High Severity: An error which occurs in the basic functionality of the application and will not allow the user to use the system (e.g., user is not able to login to the application).
Test Metrics
• Software test metrics are used to monitor and control the process and the product. They help drive the project towards the planned goals without deviation.
• Types of Metrics:
o Process Metrics
o Product Metrics
Process Metrics
• Test Case Preparation Productivity = (No. of test cases) / (Effort spent on test case preparation)
• Test Design Coverage = ((Total number of requirements mapped to test cases) / (Total number of requirements)) x 100
• Test Execution Productivity = (No. of test cases executed) / (Effort spent on execution of test cases)
• Test Execution Coverage = (Total no. of test cases executed / Total no. of test cases planned to execute) x 100
• Test Cases Passed = ((Total no. of test cases passed) / (Total no. of test cases executed)) x 100
• Test Cases Failed = ((Total no. of test cases failed) / (Total no. of test cases executed)) x 100
• Test Cases Blocked = ((Total no. of test cases blocked) / (Total no. of test cases executed)) x 100
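A quick Python sketch of the execution metrics above; all counts are made up for illustration:

```python
# Hypothetical counts from one test cycle.
executed, planned, passed, failed, blocked = 180, 200, 150, 20, 10

test_execution_coverage = executed / planned * 100   # 90.0%
test_cases_passed       = passed / executed * 100    # ~83.3%
test_cases_failed       = failed / executed * 100    # ~11.1%
test_cases_blocked      = blocked / executed * 100   # ~5.6%

print(f"Execution coverage: {test_execution_coverage:.1f}%")
print(f"Passed: {test_cases_passed:.1f}%  Failed: {test_cases_failed:.1f}%  "
      f"Blocked: {test_cases_blocked:.1f}%")
```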
Product Metrics
• Error Discovery Rate = (Total number of defects found / Total no. of test cases executed) x 100
• Defect Fix Rate = ((Total no. of defects reported as fixed - Total no. of defects reopened) / (Total no. of defects reported as fixed + Total no. of new bugs due to fix)) x 100
• Defect Density = Total no. of defects identified / Actual size (requirements)
• Defect Leakage = ((Total no. of defects found in UAT) / (Total no. of defects found before UAT)) x 100
• Defect Removal Efficiency = ((Total no. of defects found pre-delivery) / ((Total no. of defects found pre-delivery) + (Total no. of defects found post-delivery))) x 100
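A quick Python sketch of two of the defect metrics above, again with made-up counts:

```python
# Hypothetical defect counts.
defects_pre_delivery, defects_post_delivery = 95, 5
defects_in_uat, defects_before_uat = 5, 95

defect_removal_efficiency = (defects_pre_delivery /
                             (defects_pre_delivery + defects_post_delivery)) * 100
defect_leakage = (defects_in_uat / defects_before_uat) * 100

print(f"Defect removal efficiency: {defect_removal_efficiency:.1f}%")  # 95.0%
print(f"Defect leakage: {defect_leakage:.1f}%")                        # ~5.3%
```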
Requirement Traceability Matrix
• Requirement Traceability Matrix (RTM) is used to trace the requirements to the tests that are needed to verify whether the requirements are fulfilled.
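A minimal Python sketch of an RTM as a simple mapping (all IDs are hypothetical), which also yields the Test Design Coverage metric listed earlier:

```python
# RTM: requirement -> test cases that verify it.
rtm = {
    "REQ-001 Login":         ["TC-01", "TC-02", "TC-03"],
    "REQ-002 Search":        ["TC-04"],
    "REQ-003 Order history": [],  # no test case mapped yet!
}

uncovered = [req for req, tcs in rtm.items() if not tcs]
coverage = (len(rtm) - len(uncovered)) / len(rtm) * 100
print(f"Test design coverage: {coverage:.0f}%; uncovered: {uncovered}")
```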
Non-Functional Testing
1. Performance Testing (e.g., Load Testing, Stress Testing)
2. Security Testing
3. Usability Testing
4. Accessibility Testing
5. API Testing
6. Database Testing
Performance Testing
• Performance testing (also called perf testing) determines or validates the speed, scalability, and/or stability characteristics of the system or application under test.
• Performance is concerned with achieving response times, throughput, and resource-utilization levels that meet the performance objectives for the project or product.
Types of Performance Testing
• Capacity Testing - determines how many users a system/application can handle successfully before the performance goals become unacceptable.
• Load Testing - verifies that a system/application can handle the expected number of transactions, and verifies the system/application behavior under both normal and peak load conditions (no. of users).
• Volume Testing - verifies whether a system/application can handle a large amount of data. This testing focuses on the database.
• Stress Testing - verifies the behavior of the system once the load increases beyond the system's design expectations.
• Soak/Endurance Testing - Soak Testing is also known as Endurance Testing: running a system at high load for a prolonged period of time to identify performance problems.
• Spike Testing - determines the behavior of the system under a sudden increase of load (a large number of users) on the system.
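A minimal load-test sketch in Python, assuming the requests library is installed; the URL is a placeholder, and real load tests would normally use a dedicated tool such as JMeter:

```python
# Fire N concurrent requests and report response times.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

TARGET_URL = "https://example.com/"  # placeholder for the app under test
CONCURRENT_USERS = 10

def one_request(_) -> tuple:
    start = time.perf_counter()
    response = requests.get(TARGET_URL, timeout=10)
    return response.status_code, time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    results = list(pool.map(one_request, range(CONCURRENT_USERS)))

times = [t for _, t in results]
print(f"max response time: {max(times):.2f}s, "
      f"avg: {sum(times) / len(times):.2f}s")
```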
Security Testing
• Security testing is a process to determine whether the system protects data and maintains functionality as intended.
• Top Security Vulnerabilities:
o SQL Injection
o Cross-Site Scripting (XSS)
o Session Management
o Broken Authentication
o Cross-Site Request Forgery (CSRF)
o Security Misconfiguration
o Failure to Restrict URL Access
o Sensitive Data Exposure
o Insecure Direct Object Reference
o Missing Function Level Access Control
o Using Components with Known Vulnerabilities
o Unvalidated Redirects and Forwards
• Open Source Security Testing Tools: Zed Attack Proxy, Wfuzz, Wapiti, etc.
• Commercial Security Testing Tools: GrammaTech, AppScan, Veracode, etc.
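A minimal Python sketch of one SQL injection probe against a hypothetical login endpoint; the URL and the "Welcome" success marker are placeholders, and real security testing would use tools like those listed above:

```python
# Send a classic injection payload and verify it is NOT accepted.
import requests

LOGIN_URL = "https://example.com/login"  # placeholder endpoint
payload = {"username": "admin' OR '1'='1", "password": "x"}

response = requests.post(LOGIN_URL, data=payload, timeout=10)
# The application must reject the injection attempt, not log the user in.
assert "Welcome" not in response.text, "Possible SQL injection vulnerability!"
print("Injection payload was rejected")
```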
Usability Testing
• Usability testing is a testing technique used to evaluate how easily the user can use the software.
• Parameters considered for usability testing:
o Accessibility
o Simple text
o Helpful error messages
o Easy navigation
• Usability Testing Test Cases - some sample test cases for a website:
o Is the navigation within the software clear and simple?
o Is the design clean and clutter-free?
o Is there too much white space?
o Does the logo link back to home?
o Are there visual clues for links?
o Do the look and feel of the application please the user's eye?
o Is the design consistent throughout the application?
o Does the tooltip help the user?
Accessibility Testing
• Software applications should be designed in such a way that they are accessible to every differently-abled customer, understanding their needs and behavior so that they can use the product with ease.
• Websites and applications should be accessible to users with diverse sight, hearing, movement, and cognitive abilities.
• Points to test during accessibility testing:
o The application should be able to meet the needs of users who are visually, physically, cognitively, and aurally challenged.
o Link texts should be expressive and reachable by the Tab key.
o Pictures, icons, or other visual indicators should be available wherever possible, since they can describe the content for users having literacy issues.
o The content should be designed such that users with learning disabilities can also understand it.
o Avoid pop-ups, as they create problems for users relying on screen readers.
o Split up large sentences into smaller ones so that they can be remembered easily by visually challenged users.
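One accessibility check that is easy to automate is verifying that every image has alt text for screen readers. A minimal Python sketch, assuming requests and beautifulsoup4 are installed and PAGE_URL is a placeholder:

```python
# Flag images without alt text on a page.
import requests
from bs4 import BeautifulSoup

PAGE_URL = "https://example.com/"  # placeholder page under test
html = requests.get(PAGE_URL, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

missing_alt = [img.get("src") for img in soup.find_all("img")
               if not img.get("alt")]
print(f"{len(missing_alt)} image(s) missing alt text: {missing_alt}")
```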
API Testing
• API stands for Application Programming Interface.
• An API acts as an interface between two software applications and allows them to communicate with each other.
• An API is a collection of software functions that can be executed by another software program.
API Testing Cont..
• What needs to be verified in API testing:
o Data accuracy
o HTTP status codes
o Response time
o Error codes in case the API returns any errors
o Authorization checks
o Non-functional testing such as performance testing and security testing
• Tools: Postman, SOAPUI, JMeter, Rest-Assured
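A minimal Python sketch of these checks with the requests library; the endpoint and response fields are hypothetical:

```python
# Check status code, response time, and data accuracy for one endpoint.
import requests

API_URL = "https://example.com/api/users/1"  # placeholder endpoint

response = requests.get(API_URL, timeout=10)

assert response.status_code == 200             # HTTP status code
assert response.elapsed.total_seconds() < 2.0  # response time budget
body = response.json()
assert body.get("id") == 1                     # data accuracy (assumed field)
print("API checks passed")
```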
Database Testing
• Database testing checks the integrity and consistency of data by verifying the schema, tables, triggers, etc., of the application's database.
• In database testing, we create complex queries to perform load or stress tests on the database and verify the database's responsiveness.
Structural Testing
• Structural database testing deals with testing components that are not accessible by the end user.
• You should possess a good amount of knowledge of SQL queries to execute this testing.
• Examples:
o Schema Testing
o Trigger Testing
o Stored Procedure and View Testing
o Table and Column Testing
o Database Server Validations
o Keys and Indexes Testing
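A minimal Python sketch of a keys/referential-integrity check, using the built-in sqlite3 module, an in-memory database, and a hypothetical schema:

```python
# Find orders whose customer_id matches no customer (orphaned rows).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
    INSERT INTO customers VALUES (1, 'Alice');
    INSERT INTO orders VALUES (10, 1), (11, 999);  -- 999 has no customer
""")

orphans = conn.execute("""
    SELECT o.id FROM orders o
    LEFT JOIN customers c ON c.id = o.customer_id
    WHERE c.id IS NULL
""").fetchall()
print(f"Orphaned orders (referential integrity violations): {orphans}")
```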
Functional/Non-Functional Database Testing
• Functional testing focuses on the functionalities, such as transactions and operations performed by the end user in the application, and makes sure that these functionalities are as per business requirements. Types: Black Box Testing, White Box Testing.
• Non-functional testing performs load testing and stress testing, checks minimum system requirements to meet the business specification, detects risks, and optimizes the performance of the database. Types: Load Testing, Stress Testing.
Database Testing Cont..
• The primary goal of database testing is to validate:
o data mapping
o data integrity
o accuracy of business rules
o transaction properties (ACID)
• Tools: DataFactory, MockupData, DTM Data Generator, MS SQL Server, tSQL, SQLite, SQL Test
Quality Assurance Vs Quality Control
• QA uses a static testing technique and falls under verification, which means making sure the product is being developed as per the requirements. QC uses a dynamic testing technique and falls under validation, which means verifying that all the user's expectations are met in the developed product.
• QA aims to prevent defects. QC aims to identify and fix defects.
• QA is a preventive technique. QC is a corrective technique.
• QA is a procedure-based methodology. QC is a product-based methodology.
• QA is done before Quality Control. QC is done only after Quality Assurance.
• QA is to manage the quality. QC is to verify the quality.
• QA is responsible for the full Software Development Life Cycle. QC is responsible for the Software Testing Life Cycle.
• All the team members are responsible for QA. Mostly, only the testing team is responsible for QC.
• QA doesn't involve executing the tests. QC involves executing the tests.
• QA is the process where weaknesses are identified early in the process. QC is the process where weaknesses are identified after the product is delivered, in other words in the production environment.
• QA works its way toward software development by improving the quality of the product under development. QC is the arrangement of strategies used to confirm the quality of the end product being delivered.
• QA is a more extensive activity in which long-term quality management systems are set up and assessed to ensure they keep meeting the client's requirements; in this manner, QA is process-focused. QC is more of a product-related procedure in which we make sure that the client's requirements are consistently met; in this way, QC is product-focused.
• QA focuses on implementing procedures in such a way that defects are prevented from arising. QC focuses on implementing procedures in order to find more defects in the currently running system and eventually fix them, hence improving quality.
• QA involves human auditing of findings and records, such as quality plans or test plans. QC is generally performed by operating the software or running scripts, and then determining whether the system works as per expectations.
• The statistical technique applied in QA is known as Statistical Process Control (SPC). The statistical technique applied in QC is known as Statistical Quality Control (SQC).
• QA is a low-level activity that is less time-consuming. QC is a high-level activity that is more time-consuming.
• With respect to software, QA becomes Software Quality Assurance (SQA). With respect to software, QC becomes Software Testing.
Manual Vs Automation Testing
• Automated testing is more reliable: it performs the same operation each time and eliminates the risk of human error. Manual testing is less reliable: due to human error, manual testing is not accurate all the time.
• The initial investment in automation testing is higher, since investment is required for testing tools; in the long run it is less expensive than manual testing, and ROI is higher. The initial investment in manual testing is lower, since investment is required only for human resources; ROI is lower in the long run compared to automation testing.
• Automation testing is a practical option for regression testing. Manual testing is a practical option where the test cases are not run repeatedly and only need to run once or twice.
• Automated execution is done through software tools, so it is faster than manual testing and needs fewer human resources. Manual execution of test cases is time-consuming and needs more human resources.
• Exploratory testing is not possible in automation. Exploratory testing is possible in manual testing.
• Performance testing, such as load testing and stress testing, is a practical option in automation testing. Performance testing is not a practical option in manual testing.
• Automated tests can be run in parallel, reducing test execution time. It is not easy to execute test cases in parallel in manual testing; doing so needs more human resources and becomes more expensive.
• Programming knowledge is a must in automation testing. Programming knowledge is not required for manual testing.
• Build verification testing (BVT) is highly recommended in automation. Build verification testing (BVT) is not recommended in manual testing.
• Automation involves little human intervention, so it is not effective for user interface testing. Manual testing involves human intervention, so it is highly effective for user interface testing.
Smoke Vs Sanity Testing
• A smoke test is done to make sure the build received from the development team is testable. A sanity test is done during the release phase to check the main functionalities of the application without going deeper.
• Smoke testing is performed by both developers and testers. Sanity testing is performed by testers alone.
• Smoke testing exercises the entire application from end to end. Sanity testing exercises only a particular component of the entire application.
• In smoke testing, the build may be either stable or unstable. In sanity testing, the build is relatively stable.
• Smoke testing is done on initial builds. Sanity testing is done on stable builds.
• Smoke testing is a part of basic testing. Sanity testing is a part of regression testing.
• Smoke testing is usually done every time there is a new build release. Sanity testing is planned when there is not enough time to do in-depth testing.
Few Manual Testing Interview Questions
• What is Software Testing?
• SDLC Vs STLC
• Functional Vs Non-Functional Testing
• Regression Vs Retesting
• Defect Life Cycle
• Agile Methodology
• What are Quality Assurance and Quality Control?
• Severity Vs Priority
• Verification Vs Validation
• Static Vs Dynamic Testing
• Blackbox Vs Whitebox Testing
• Test Strategy Vs Test Plan
• Levels of Testing
• Alpha Vs Beta Testing
• Performance Testing
• API Testing
• Database/SQL Queries
• RTM
• How many test cases can you execute in a day?
• How many test cases can you write in a day, or how much time is required to write a test case?
• How many defects did you detect in your last project?
• Write sample test cases for a given scenario
Sample Application for Testing - https://travel.testsigma.com/
