
SOFTWARE TESTING

BY: PROF. SUSHMA VANKHEDE


SOFTWARE TESTING

 Software Testing is the process of uncovering defects in a software application or website, in order to ensure
that user/client requirements are met to the expected quality standards.
 Testing is the measurement of software quality. Testers measure how closely quality has been achieved by testing
the relevant factors such as correctness, reliability, usability, maintainability, reusability and testability.
 In other words, testing is validating a system against an expected result set and verifying it against the actual
results. A tester also needs to provide meaningful observations and suggestions, which should help to improve
quality.
MISTAKE – DEFECT – FAILURE

 Humans make mistakes/errors because of working under pressure; this can be a result of tight deadlines, the
technical complexity of the system, changing requirements, and processes not being followed over time.
 As a result of these errors being left in the code or documentation, when the system is executed at run time the
user will face a defect, which can result from wrong implementation of a functionality, improper impact analysis of
the functionalities implemented, or an incorrect technical implementation.
 When these defects are not uncovered during the testing activity and the overall system does not behave the way
it is expected to, this leads to a system failure.
ERROR

 Error: A human action that produces an incorrect result.


 An error is an undesirable deviation from requirements. Any problem, or cause of many problems, which stops the
system from performing its functionality is referred to as an error.
FAILURE

 Failure: Deviation of the component or system from its expected delivery, service or result.
 Failure: When an expected action that is supposed to happen does not happen, it can be referred to as a failure;
in other words, the absence of the expected response to a request.
BUG

 Any missing functionality, or any action performed by the system which is not supposed to be performed, is a
bug. It is an error found BEFORE the application goes into production.
 Any of the following may be the reason for a bug:
1. Wrong functionality
2. Missing functionality
3. Extra or unwanted functionality
DEFECT

 A defect is a variance from a desired attribute of a system or application. It is an error found AFTER the
application goes into production.
 Defects are commonly categorized into two types:
1. Variance from the product specification
2. Variance from customer/user expectation.
OBJECTIVE

 Testing is the process of identifying defects in a software application.
 Testing also provides valid observations and suggestions to improve the quality of the software.
 Testing ensures quality is at an optimal level, by making sure the application behaves as per client/user requirements.
SEVEN TESTING PRINCIPLES

 Testing shows presence of bugs


 Exhaustive testing is impossible
 Early Testing
 Defect Clustering
 Pesticide Paradox
 Testing is context dependent
 Absence of Errors Fallacy
CLIENT’S REQUIREMENT & EXPECTATIONS ARE AS IMPORTANT AS
PRODUCT QUALITY
SOFTWARE QUALITY

 Quality governance is the set of regulations and requirements that provides a framework within which quality
assurance and testing should be conducted. It is the overall management of successful quality delivery.
 Quality software is reasonably bug free, delivered on time and within budget, meets user/client requirements and
expectations, and is maintainable.
CMMI LEVEL

 CMM stands for Capability Maturity Model, whereas CMMI is the Capability Maturity Model Integration. It was developed
by the SEI (Software Engineering Institute). The SEI was initiated by the US defence department to improve the software
development process.
 Level 1: Initial: It is characterized by chaos, periodic panics and heroic efforts required to make the project succeed.
Project success depends on luck, and successes may not be repeatable.
 Level 2: Repeatable: Successful practices can be repeated. Software project tracking, requirements management, realistic
planning and configuration management are in place.
 Level 3: Defined: Standard software development and maintenance processes are integrated throughout the SDLC. An SEPG
(Software Engineering Process Group) is in place to improve the entire process and its quality.
 Level 4: Managed: Metrics are used to track the productivity, processes and products. Project performance is predictable,
and quality is consistently high.
 Level 5: Optimized: The focus is on continuous process improvement. The impact of new technologies & processes is
predicted & effectively implemented when required.
VERIFICATION AND VALIDATION

 What is Verification?
 Verification is the process of checking, at each phase of the SDLC, whether we are going in the right direction to
build the application. At every phase we verify the static work-products, such as the BRD, SRS and Technical Design,
and also do code reviews.

 What is Validation?
 Validation is determining the correctness of the product with respect to the user needs and requirements. It can be
described as determining whether a fully developed system conforms to its SRS document. Validation is concerned
with the final product being error free.
SOFTWARE TEST LEVELS

 Testing helps to ensure that the work-products are being developed in the right way (Verification) and that the
product will meet the user needs (Validation).
 Unit (component) Testing
 Integration Testing
 System Testing
 Acceptance Testing

Each of these levels will include its own Tests designed to uncover problems specifically at that stage of
development.
UNIT TESTING:

 Generally, the code is written in parts or units. The units are usually constructed in isolation for integration at a
later stage.
 Units are also called programs, modules or components.
 Unit testing is intended to ensure that the code written for the unit meets its specification, prior to its integration
with other units.
 Unit testing is usually performed by the developer who wrote the code.
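A minimal sketch of a unit test using Python's built-in unittest module; the unit apply_discount is hypothetical,
invented for illustration and not from these slides:

import unittest

# Hypothetical unit under test: a small, self-contained function.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        # The unit is checked against its specification in isolation.
        self.assertEqual(apply_discount(200.0, 15), 170.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()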
ADVANTAGES OF UNIT TESTING:

 Issues are found at an early stage when unit testing is carried out by developers, who test their individual code
before integration. This enables the code to be fixed at the same moment, so that defects are not carried into the
next phases of software development.
 Unit testing helps in maintaining and changing the code. This is possible because making code unit-testable pushes
developers to make the units less interdependent.
LIMITATIONS OF UNIT TESTING

 Testing cannot catch each and every bug in an application; it is impossible to evaluate every execution path in
any non-trivial software application. The same is the case with unit testing.
 There is a limit to the number of scenarios and test data that the developer can use to verify the source code. After
the developer has exhausted all options, there is no choice but to stop unit testing and merge the code segment with
the other units.
INTEGRATION TESTING

 Testing of combined parts of an application to determine whether they function together correctly. The integrated
parts can be code modules, individual applications, client and server applications on a network, etc.
 As an example, an employee applies for leave and the leave is approved. As part of integration testing, the Leave
module and the Payroll module need to be tested together, in order to ensure the correct salary is paid at the end
of the month.
 Before integrating two units/modules/systems, it must be ensured that each of the units/modules/systems has been
independently tested and is working correctly, so that they can later be integrated and tested together.
INCREMENTAL INTEGRATION TESTING

 Incremental Integration Testing: Integration testing where system components are integrated into the system one
at a time until the entire system is integrated.
 It is continuous testing of an application as new functionality is added; it requires that the various aspects of the
application's functionality be independent enough to work separately before all parts of the program are completed,
or that test drivers be developed as needed.
TOP-DOWN INTEGRATION / BOTTOM-UP INTEGRATION

 Top-Down Integration: An approach to integration testing where the component at the top is tested first, with
lower-level components simulated by stubs. The process is repeated until the lowest-level components have been
tested.
 Bottom-Up Integration: An approach to integration testing where the lowest-level components are tested first and
then used to facilitate the testing of higher-level components. The process is repeated until the component at the
top of the hierarchy is tested.
BIG BANG INTEGRATION

 In big bang integration testing, all components or modules are integrated simultaneously, after which everything is
tested as a whole.
SYSTEM TESTING

 System testing is the type of testing that checks the behaviour of a complete and fully integrated software product
based on the software requirements specification (SRS) document. The main focus of this testing is to evaluate
business / functional / end-user requirements.
 This is a black box type of testing, where the external behaviour of the software is evaluated with the help of
requirement documents; it is totally based on the user's point of view. This type of testing does not require
knowledge of the internal design, structure or code.
 This testing is carried out only after system integration testing is completed; both functional and non-functional
requirements are verified here.
ACCEPTANCE TESTING

 Acceptance Testing is also called User Acceptance Testing (UAT).
 In UAT, end users check the system to ensure their requirements have been fulfilled successfully. They confirm
whether the system meets the requirements as per the requirement specification.
 UAT is performed after all other testing activities have been completed.
 It is done as a final stage before the system goes live. Acceptance testing is black box testing and doesn’t involve
negative scenarios as such.
 It is taken as confirmation from the end user that the system build is as per the user requirements.
TYPES OF ACCEPTANCE TESTING

 Alpha: It is carried out at the developer’s site/network, generally by developers, end users or organisation users.
The version of the release on which alpha testing is performed is called the ‘Alpha Release’.
 Beta: Beta testing is conducted at the end user’s site without the developer’s help. This ensures the application is
tested in an environment which is not similar to the developer’s. This helps in detecting defects which might not be
identified when the application runs in the developer’s environment. Any software or hardware dependencies will be
identified in beta testing.
Types of Testing
FUNCTIONAL TESTING:

 It checks whether all the documented user requirements have been implemented in the application correctly.
 The tests are designed based on objectives derived from the requirements.
 Requirements testing must verify that the system can perform its functions correctly and that this correctness can
be sustained over a continuous period of time. The system can be tested for correctness throughout the lifecycle,
but it is difficult to test reliability until the program becomes operational.
FUNCTIONAL TESTING….CONT

 Successfully implementing user requirements is only one aspect of requirements/functional testing.
 The five steps involved when testing an application for functionality are:
 Step 1: Determination of the functionality that the intended application is meant to perform.
 Step 2: Creation of test data based on the specifications of the application.
 Step 3: Determination of the expected output based on the test data and the specifications of the application.
 Step 4: Writing of test scenarios and execution of test cases.
 Step 5: Comparison of actual and expected results based on the executed test cases.
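As a minimal sketch of these five steps in Python (the requirement, the function is_valid_username and the
3-10 letter rule are all hypothetical, invented for illustration):

import re

def is_valid_username(name: str) -> bool:   # Step 1: the intended functionality
    # Hypothetical requirement: a username is 3 to 10 letters.
    return re.fullmatch(r"[A-Za-z]{3,10}", name) is not None

test_data = {                               # Step 2: test data from the spec
    "bob": True,                            # Step 3: expected output per input
    "ab": False,
    "averylongusername": False,
}

for value, expected in test_data.items():   # Step 4: execute the test cases
    actual = is_valid_username(value)
    # Step 5: compare actual and expected results
    assert actual == expected, f"{value!r}: expected {expected}, got {actual}"
print("all functional checks passed")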
REGRESSION TESTING

 Whenever there is a change in a software application or its code, it is quite possible that other areas within the
application have been affected. Verifying that a fixed bug hasn’t broken another functionality or business rule is
regression testing. The intention of regression testing is to ensure that a change, such as a bug fix, did not result
in another fault being uncovered in the application.
RETESTING:

 Retesting is a process to retest fixes to ensure that issues have been resolved before development can progress.
So, retesting is the act of repeating a test to verify that a found defect has been correctly fixed.
SMOKE TESTING

 Smoke testing is done in order to check whether the application under test is stable when the initial builds are
delivered. This is to ensure testers can perform the rest of the application testing without major blockers and can
execute the entire test pack smoothly. Smoke tests are generally automated, as they are repeated frequently.
 It is executed "before" any detailed functional or regression tests are executed on the software build. The purpose
is to reject a badly broken application, so that the QA team does not waste time installing and testing the
software application.
SANITY TESTING:

 Sanity testing again ensures the release is stable after a few rounds of regression testing have been carried out.
Once a new build is obtained with minor revisions, instead of doing a thorough regression, a sanity test is performed
to ascertain that the build has indeed rectified the issues and no further issues have been introduced by the fixes.
 It is generally a subset of regression testing, and a group of test cases related to the changes made to the app is
executed. Sanity testing exercises only the particular components of the entire system that were changed.
AD HOC

 This is very informal and unplanned testing. It doesn’t follow any specific design technique to create test cases; it
doesn’t follow any structured way and is done randomly on any part of the application.
 Error guessing can be used to help carry out ad hoc testing.
EXPLORATORY TESTING

 Compared to ad hoc testing, exploratory testing is a planned and formal way of testing. Test design and test
execution are done at the same time. Exploratory testing, as the name states, is all about discovery, investigation
and learning: as you think up scenarios, you execute them and check the functionality of the application. The
scenarios run in exploratory testing have to be different from the ones written and executed during functional
testing.

 A test charter needs to be created, in which the timeline, the functionality to be tested and the number of testers
to be involved are all recorded.
LOCALIZATION TESTING (L10N):

 Localization testing is done in order to check that the application works correctly for the given locale
(the locale of the nation it is built for).
 As an example, for a site meant to be seen only in Germany, the web pages will be designed with German fonts and
will depict the local currency and flag.
 At times, even the colour scheme is considered while designing the page.
Non-Functional Testing
USABILITY TESTING:

 Testing the ease with which users can learn and use a product.
 All aspects of user interfaces are tested:
 Display screen resolution, fonts and alignment
 Navigation and Selection problems
 Formatting issues, Alignment of web elements.
 Tab order
ENDURANCE TESTING:

 Endurance testing involves testing a system with a significant load extended over a significant period of time, to
discover how the system behaves under sustained use.
 For example, in software testing, a system may behave exactly as expected when tested for 1 hour but when the
same system is tested for 3 hours, problems such as memory leaks cause the system to fail or behave randomly.
COMPATIBILITY TESTING:

 It checks that an application works exactly the same in all the browsers and all the operating systems specified by
the client. With compatibility testing, we need to ensure the application functionality works fine on all the stated
browsers.
 In web applications, issues are mostly found in the JavaScript used to handle pop-ups and validation messages.
LOAD TESTING:

 A load test is conducted to understand the behaviour of the application under a specific expected load. It helps to
identify the maximum operating capacity of an application as well as any bottlenecks, and to determine which
element is causing degradation. E.g. if the number of users is increased, how much CPU and memory will be
consumed, and what are the network bandwidth and response times?
 The primary goal of load testing is to define the maximum amount of work a system can handle without
significant performance degradation.
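In practice dedicated tools such as JMeter or LoadRunner are used, but the idea can be sketched in a few lines of
Python; the URL and the figure of 50 concurrent users are hypothetical:

import time
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party HTTP client: pip install requests

URL = "https://example.com/api/health"  # hypothetical endpoint under test
USERS = 50                              # assumed expected load

def timed_request(_):
    start = time.perf_counter()
    response = requests.get(URL, timeout=10)
    return response.status_code, time.perf_counter() - start

# Fire USERS concurrent requests and collect (status, duration) pairs.
with ThreadPoolExecutor(max_workers=USERS) as pool:
    results = list(pool.map(timed_request, range(USERS)))

durations = sorted(duration for _, duration in results)
print("median response time:", durations[len(durations) // 2])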
STRESS TESTING:

 With stress testing, the robustness of the system is determined. Stress testing tries to break the system under test
by overwhelming its resources or by taking resources away from it.
 The load is increased gradually, and the saturation points are noted at which parameters such as memory, server
response time or bandwidth become compromised.
SECURITY TESTING

 Security testing is a testing technique to determine whether an information system protects data and maintains
functionality as intended. It also aims at verifying the 6 basic principles listed below:

 Confidentiality
 Integrity
 Authentication
 Authorization
 Availability
 Non-repudiation
SECURITY TESTING TECHNIQUE

 SQL Injection
 Broken Authentication and Session Management
 Cross-Site Scripting (XSS)
 Insecure Direct Object References
 Cross-Site Request Forgery (CSRF)
 Using Components with Known Vulnerabilities
 Unvalidated Redirects and Forwards
SQL INJECTION ATTACK
SQL injection (SQLi) is a web security vulnerability that allows an attacker to interfere with the queries that an
application makes to its database.

SQL injection attacks occur when:

 Unintended data enters a program from an untrusted source.
 The data is used to dynamically construct a SQL query.
 The main consequences are:
 Confidentiality: Since SQL databases generally hold sensitive data, loss of confidentiality is a frequent problem with SQL Injection
vulnerabilities.
 Authentication: If poor SQL commands are used to check user names and passwords, it may be possible to connect to a system
as another user with no previous knowledge of the password.
 Authorization: If authorization information is held in a SQL database, it may be possible to change this information through the
successful exploitation of a SQL Injection vulnerability.
 Integrity: Just as it may be possible to read sensitive information, it is also possible to make changes or even delete this information
with a SQL Injection attack.
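A minimal sketch of both the attack and the standard defence, using Python's built-in sqlite3 module; the table,
the credentials and the input are invented for illustration:

import sqlite3

# Toy schema and data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

name, password = "alice' --", "wrong"  # attacker-controlled input

# VULNERABLE: untrusted data is concatenated into the query, so the
# trailing "--" comments out the password check entirely.
query = f"SELECT * FROM users WHERE name = '{name}' AND password = '{password}'"
print(conn.execute(query).fetchall())  # returns alice's row: login bypassed

# SAFER: a parameterized query; the driver treats input as data, not SQL.
safe = "SELECT * FROM users WHERE name = ? AND password = ?"
print(conn.execute(safe, (name, password)).fetchall())  # returns no rows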
TEST DESIGN TECHNIQUE

 Static Testing: As the name states, in static testing the code is not executed. Instead, the code is manually
reviewed.

 Dynamic Testing: Dynamic testing is a more formal testing approach covering different testing activities such as
test case identification, test execution, coverage consideration and reporting.
ROLES AND RESPONSIBILITIES IN A REVIEW

1. The Moderator: The moderator (or review leader) leads the review process. His role is to determine the type of
review, approach and the composition of the review team. The moderator also schedules the meeting, disseminates
documents before the meeting, coaches other team members, paces the meeting, leads possible discussions and
stores the data that is collected.
2. The Author: As the writer of the ‘document under review’, the author’s basic goal should be to learn as much as
possible with regard to improving the quality of the document.
3. The Scribe/ Recorder: The scribe (or recorder) has to record each defect found and any suggestions or feedback
given in the meeting for process improvement.
4. The Reviewer: The role of the reviewers is to check for defects and further improvements in accordance with the
business specifications, standards and domain knowledge.
PHASES OF A FORMAL REVIEW

 A formal review consists of 6 main steps.


1) Planning
2) Kick-Off
3) Preparation
4) Review Meeting
5) Rework
6) Follow-Up
TYPES OF REVIEW:

 1. Peer Review / Informal Review


 2. Technical Review
 3. Walkthrough
 4. Inspection
PEER REVIEW

 In a peer review, the document is shared with a peer (colleague); suggestions and improvement areas are shared
and accordingly implemented in the respective document.
 Key characteristics:
 a) There is no formal process underpinning the review.
 b) The review may be implemented using pair programming (one programmer reviews the code of the other).
TECHNICAL REVIEW

 A technical person with good years of experience in a particular domain is given the technical design or
automation scripts to review. Based on their input, changes are implemented in the design document.
 Key characteristics:
a) Technical reviews are documented & use a well-defined defect detection process that includes peers and technical
experts.
b) In practice they may vary from quite informal to very formal and have a number of purposes, including: discussion,
decision making, evaluation of alternatives, finding defects, and checking conformance to standards & specifications.
WALKTHROUGH

 This is led by the author (who writes the document). It is not a formal process. The author shares the document
with the project team and guides them through it. The purpose is to share the document and make the team
understand how the requirements are laid out, with the functional aspects explained in detail.
 Key characteristics:
 a) Review sessions are open ended.
 b) The main purpose is to enable learning about the content of the document under review, to help team members
gain an understanding of the content and to find defects.
INSPECTION

 Inspection is the most formal review type. It is usually led by a trained moderator (certainly not by the author).
The document under inspection is prepared and checked thoroughly by the reviewers before the meeting,
comparing the work product with its sources and other referenced documents, and using rules and checklists. In
the inspection meeting the defects found are logged.
 Key characteristics:
a) Help the author to improve the quality of the document under inspection.
b) Improve product quality, by producing documents with a higher level of quality.
c) Create a common understanding by exchanging information among the inspection participants.
Dynamic Testing
BLACK BOX TESTING

 Black box testing is testing software based on output requirements and without any knowledge of the internal
structure or coding of the program. The user sends input data, the code gets processed and an output is generated.
BLACK BOX TESTING…CONT
 Black Box – Code Invisible
 Testing based on an analysis of the specification of a piece of software without reference to its internal workings.
The goal is to test how well the component conforms to the published requirements for the component.
 It attempts to find:
Incorrect or missing functions
Interface errors / UI Errors
Performance errors
Initialization and Termination errors

Black box testing is based solely on knowledge of the system requirements. It is usually described as focusing on
testing functional requirements. Specifically, this technique determines whether combinations of inputs and
operations produce the expected results.
Test case design Technique:
EQUIVALENCE CLASS PARTITIONING

 A test case design technique for a component which divides the input data of a software unit into partitions of
equivalent data from which test cases can be derived. In principle, test cases are designed to cover each partition
at least once. An advantage of this approach is a reduction in the time required for testing, due to the smaller
number of test cases.
 This may be best explained by the example of a function which takes a parameter "month". The valid range for the
month is 1 to 12, representing January to December. This valid range is called a partition. In this example there are
two further partitions of invalid ranges: the first invalid partition would be <= 0 and the second invalid partition
would be >= 13.
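A minimal sketch of the month example in Python; check_month is a hypothetical stand-in for the unit that consumes
the month value:

def check_month(month: int) -> bool:
    # Valid partition: 1..12 (January to December).
    return 1 <= month <= 12

# One representative value per partition is enough in principle:
assert check_month(-5) is False   # invalid partition: month <= 0
assert check_month(6) is True     # valid partition: 1 to 12
assert check_month(20) is False   # invalid partition: month >= 13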
BOUNDARY VALUE ANALYSIS

 For the most part, errors are observed at the extreme ends of the input values, so these extreme values, such as
start/end or lower/upper values, are called boundary values, and the analysis of these boundary values is called
“boundary value analysis”. It is also sometimes known as ‘range checking’.
 Boundary value analysis is another black box test design technique, and it is used to find errors at the boundaries
of the input domain rather than in the centre of the input.
 Let’s take the same example as above to understand the boundary value analysis concept:
One test case for the exact boundary values of the input domain: 1 and 12.
One test case for the values just below the boundaries: 0 and 11.
One test case for the values just above the boundaries: 2 and 13.
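Reusing the hypothetical check_month sketch from the previous section, the boundary cases and their expected
outcomes can be written down directly:

# month -> expected outcome at and around the boundaries 1 and 12
cases = {0: False, 1: True, 2: True, 11: True, 12: True, 13: False}
for month, expected in cases.items():
    assert check_month(month) is expected, f"failed for month={month}"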
DECISION TABLE

 A decision table is the best way to deal with combinations of inputs. It is also referred to as a cause-effect table.
Using a decision table:
 The first task is to identify a suitable function or subsystem which reacts according to a combination of inputs or
events. It is better to deal with large numbers of conditions by dividing them into subsets and dealing with the
subsets one at a time.
 For example, suppose you want to open a credit card account, and three conditions apply: first, if you are a new
customer you get a 15% discount on all your purchases today; second, if you are an existing customer and you hold
a loyalty card, you get a 10% discount; and third, if you have a coupon, you can get 20% off today (but it can’t be
used with the ‘new customer’ discount).
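A minimal sketch of these rules in Python, with one stated assumption: the coupon (20%) replaces the other
discounts, since it “can’t be used with the ‘new customer’ discount”. Enumerating every combination of conditions
produces the rows of the decision table:

from itertools import product

def discount(new_customer: bool, loyalty_card: bool, coupon: bool) -> int:
    # Assumed rule order: coupon (20%) wins, then new customer (15%),
    # then loyalty card (10%); otherwise no discount.
    if coupon:
        return 20
    if new_customer:
        return 15
    if loyalty_card:
        return 10
    return 0

# Each combination of the three conditions is one rule in the table:
for new, loyal, coup in product([False, True], repeat=3):
    print(f"new={new}, loyalty={loyal}, coupon={coup} -> {discount(new, loyal, coup)}% off")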
WHITE BOX TESTING

 In white box testing, the internal logic of the software needs to be understood. The code behind the application is
tested, and functional-level testing is carried out as well.
 Structural tests (also known as white-box tests and glass-box tests) find bugs in low-level structural elements
such as lines of code and database schemas. Structural testing involves a detailed technical knowledge of the
system. For software, testers create structural tests by looking at the code and the data structures themselves.
White box testing is also called white box analysis, clear box testing or clear box analysis. It is a strategy for
software debugging.
TYPES OF WHITE BOX TESTING

STATEMENT COVERAGE
 A test case design technique for a component in which test cases are designed to execute statements. Design test
cases so that every statement in the program is executed at least once; unless a statement is executed, we have no
way of knowing whether an error exists in that statement.
 Example: by choosing the test set {(x=5, y=5), (x=6, y=7), (x=8, y=4)}, all statements are executed at least once.
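The slides’ example program is not reproduced here; a Euclid-style GCD routine, a common textbook choice for this
kind of coverage example, is assumed below so that the quoted test set makes sense (the branch-coverage set quoted
in the next section fits the same routine):

def compute_gcd(x: int, y: int) -> int:
    while x != y:       # decision 1
        if x > y:       # decision 2
            x = x - y   # statement on the 'true' branch
        else:
            y = y - x   # statement on the 'false' branch
    return x

# Statement coverage: every statement runs at least once across the set.
for x, y in [(5, 5), (6, 7), (8, 4)]:
    compute_gcd(x, y)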
DECISION COVERAGE:

 Branch: A conditional transfer of control from any statement to any other statement in a component.
 Branch Testing: A test case design technique for a component in which test cases are designed to execute branch
outcomes.
 Branch testing guarantees statement coverage: 100% decision coverage always guarantees 100% statement coverage.

 Example: Test cases for branch coverage can be: {(x=5, y=5), (x=7, y=6), (x=6, y=7)}
CONDITION COVERAGE:

 Condition: A Boolean expression containing no Boolean operators.


 For instance, A < B is a condition but A and B is not.
 Condition coverage reports the true or false outcome of each condition.
 Test cases are designed such that each atomic condition of a composite conditional expression is given both true
and false values.
 Example: consider the conditional expression ((condt1 AND condt2) OR condt3). Each of condt1, condt2 and
condt3 is exercised at least once, i.e. given both true and false values. Condition testing is stronger than branch
testing, and branch testing is stronger than statement coverage testing.
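A minimal sketch: exhaustively enumerating the three atomic conditions trivially gives each of them both truth
values (in practice a much smaller test set is chosen):

from itertools import product

for condt1, condt2, condt3 in product([True, False], repeat=3):
    outcome = (condt1 and condt2) or condt3
    print(f"condt1={condt1} condt2={condt2} condt3={condt3} -> {outcome}")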
PATH COVERAGE:
 Path: A sequence of executable statements of a component, from an entry point to an exit point.
 Path testing: A test case design technique in which test cases are designed to execute paths of a component.
 Notation for representing control flow on a flow graph:
• Arrows called edges represent the flow of control
• Circles called nodes represent one or more actions
• Areas bounded by edges and nodes are called regions
• A predicate node is a node containing a condition
CYCLOMATIC COMPLEXITY:

 The cyclomatic complexity gives a quantitative measure of the logical complexity of a program. Introduced by
Thomas McCabe in 1976, it measures the number of linearly independent paths through a program module.
 This measure provides a single ordinal number that can be compared to the complexity of other programs. The
value gives the number of independent paths in the basis set, and an upper bound on the number of tests needed
to ensure that each statement is executed at least once.
 An independent path is any path through the program that introduces at least one new set of processing statements
or a new condition (i.e., a new edge).
 Cyclomatic complexity (CC) = E - N + 2P
Where,
E = the number of edges of the graph
N = the number of nodes of the graph
P = the number of connected components
CYCLOMATIC COMPLEXITY:

 Cyclomatic complexity provides upper bound


for number of tests required to guarantee
coverage of all program statements.

For the flow graph on this slide (the figure is not reproduced here), the cyclomatic complexity is 4.
Independent paths:
1. 1, 8
2. 1, 2, 3, 7b, 1, 8
3. 1, 2, 4, 5, 7a, 7b, 1, 8
4. 1, 2, 4, 6, 7a, 7b, 1, 8
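The missing graph can be reconstructed from the four paths above. Assuming each label is a node, the edges are
1→2, 1→8, 2→3, 2→4, 3→7b, 4→5, 4→6, 5→7a, 6→7a, 7a→7b and 7b→1. That gives E = 11, N = 9 and P = 1, so
CC = 11 - 9 + (2 × 1) = 4, in agreement with the four independent paths listed.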
GREY BOX TESTING:

 Grey box testing is a newer term, which evolved due to the different architectural usage of systems.
 It is simply a combination of both black box & white box testing.
 The tester should have knowledge of both the internals and the externals of the function.
EXPERIENCE BASED TESTING

 In experience-based techniques, people’s knowledge, skills and background are of prime importance to the
test conditions and test cases. Experience-based testing is where tests are derived from the tester’s skill and
intuition and their experience with similar applications and technologies.
ERROR GUESSING

 Generally testers anticipate defects based on experience. A structured approach to the error guessing technique
is to enumerate a list of possible errors and to design tests that attack these errors.
EXPLORATORY TESTING

 Exploratory testing is concurrent test design, test execution, test logging and learning, based on a test charter
containing test objectives, and carried out within time-boxes.
 It is an approach that is most useful where there are few or inadequate specifications and severe time pressure,
or in order to augment or complement other, more formal testing.
 It can serve as a check on the test process, to help ensure that the most serious defects are found.
SOFTWARE TESTING LIFE CYCLE
STLC Model Phases

1. Requirement Analysis
2. Test Planning
3. Test Case Development
4. Test Environment Setup
5. Test Execution
6. Test Cycle Closure

WHAT IS ENTRY AND EXIT CRITERIA IN STLC?

 Entry Criteria: Entry Criteria gives the prerequisite items that must be completed before testing can begin.
 Exit Criteria: Exit Criteria defines the items that must be completed before testing can be concluded
You have Entry and Exit Criteria for all levels in the Software Testing Life Cycle (STLC)
REQUIREMENT ANALYSIS

 Analyzing the client’s requirements in detail
 Deciding testing properties
 Checking automation feasibility
TEST PLANNING (TEST STRATEGY PHASE)

Activities
 Preparation of Test Plan
 Selection of Testing tool
 Test effort estimation
 Resource planning and determine roles and responsibilities
 Training requirements
 Staffing requirements
 Defining Scope and Approach

Test Deliverables
 STP Document
 Gantt Chart of testing
TEST CASE PREPARATION AND DEVELOPMENT

Activities
 Development of test cases
 Create Test Data for test cases
 Approval of Test case document
 Development of automation testing script
Deliverables
 Preparation of Test Case
 Preparation of Test data
 Test script development
TEST ENVIRONMENT SET UP

Test Environment Setup Activities


 Understand the required architecture, environment set-up and prepare hardware and software requirement list
for the Test Environment.
 Setup test Environment and test data
 Perform smoke test on the build
Deliverables of Test Environment Setup
 Environment ready with test data set up
 Smoke Test Results.
TEST EXECUTION AND BUG REPORTING

Test Execution Activities


 Execute tests as per plan
 Document test results, and log defects for failed cases
 Map defects to test cases in RTM
 Retest the Defect fixes
 Track the defects to closure
Deliverables of Test Execution
 Completed RTM (Requirement Traceability Matrix) with the execution status
 Test cases updated with results
 Defect reports
TEST CLOSURE

 Test closure activities are done when the software is delivered. They consist of:
a) Submitting the STR (Software Test Report)
b) Submitting the Test Summary report, Test Plan document, test cases/scripts, etc.
c) Sharing experiences with the team, including the challenges faced, corrective actions taken and risk mitigation
activities.
Test Case Design
TEST CASE

 A test case is a set of conditions or steps prepared to achieve and validate a specific scenario or functionality of
a given application under test (AUT).
 A test case is simply a list of steps/actions which need to be executed to verify a particular functionality or
feature of your application under test.
 It is a set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as
to exercise a particular program path or to verify compliance with a specific requirement.
TEST CASE TEMPLATE

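The original deck links to a template rather than reproducing it. The columns below are a common convention for
such templates, not necessarily the deck’s own:

Test Case ID | Module | Title/Summary | Preconditions | Test Steps | Test Data | Expected Result | Actual Result | Status (Pass/Fail) | Remarks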


WRITING GOOD TEST CASES

 As far as possible, write test cases in such a way that you test only one thing at a time. Do not overlap or
complicate test cases. Attempt to make your test cases ‘singular’.
 Ensure that all positive scenarios and negative scenarios are covered.
 Language:
 Write in simple and easy to understand language.
 Use active voice: Do this, do that.
 Use exact and consistent names (of forms, fields, etc.).
CHARACTERISTICS OF A GOOD TEST CASE:

 Accurate: exact in its purpose,
 Economical: no unnecessary steps or words,
 Traceable: capable of being traced to requirements,
 Repeatable: can be used to perform the test over and over,
 Reusable: can be reused if necessary.
DEFECT /BUG

Defect:
 A flaw in a component or system that can cause the component or system to fail to perform its required
function. A defect, if faced during execution, may cause a failure of the component or system.
When can a Defect be detected?
 We can detect defects through static testing, which can start as soon as we have a draft requirement specification.
We can detect failures, being the symptoms of defects, through dynamic testing, which can start as soon as we
have an executable unit.
 When we see a failure, we should not automatically assume that this indicates a defect in the system under test.
Defects can exist in tests too.
DEFECT LIFE CYCLE
SEVERITY

 Severity: The degree of impact that a defect has on the development or operation of a component or system.
 Severity can be grouped into several levels depending on the impact of the defect on functionality. The most used
severity types are:
1) Blocker: the application is not working / a major functionality is completely broken. The tester cannot do further
testing; the tester is blocked.
2) Critical: some part of the functionality is broken, the tester cannot test some part of the functionality, and there
is no workaround.
3) Major: these are logical defects which do not block any functionality. This type usually contains functional and
major UI defects.
4) Minor: this mostly contains UI defects and minor usability defects; defects which do no harm to the application
under test.
PRIORITY

 Priority: The level of (business) importance assigned to a defect. Priority is defined on the basis of business
impact, development effort and some other factors.
1) High: the defect has high business impact; the end user cannot work unless the defect gets fixed. In this case the
priority should be High, meaning an immediate fix of the defect.
2) Medium: the end user can work using a workaround, but some functionality cannot be used, and that functionality
is not regularly used by the user.
3) Low: no or very little impact on the end user.
EXAMPLES

High Priority and High Severity:
 Major functionality failures, such as login not working, or crashes in the basic workflow of the software, are the
best examples of high priority and high severity:
1) Application crashes while opening.
2) Website menu links landing on an error page.
High Priority and Low Severity:
1) Spelling mistake in the company logo, menu names, client names or any other important name which is highly
visible to the end user.
EXAMPLES:

 Low Priority and High Severity:
 1) A crash in the application when the end user performs some unusual or invalid steps.
 Low Priority and Low Severity:
 1) Spelling mistake in a paragraph which is rarely read.
 2) Incorrect tab ordering of form fields.
QUESTIONS:
1. What is the difference between Load and Stress testing?
2. What are the different types of Integration testing?
3. Who does Unit Testing and what is it?
4. What all types of testing forms part of Non-Functional Testing?
5. What is the difference between Regression and Retesting?
6. What is the difference between Smoke and Sanity Testing?
7. What is the difference between Adhoc and Exploratory Testing?
